Typical white knight capture routine

Discussion of chess software programming and technical issues.

Moderators: hgm, Rebel, chrisw

Mincho Georgiev
Posts: 454
Joined: Sat Apr 04, 2009 6:44 pm
Location: Bulgaria

Re: Typical white knight capture routine

Post by Mincho Georgiev »

bob wrote:
xcomponent wrote: It is a matter of taste. There is nothing wrong with assigning static scoring during move generation, as long as it is superseded properly by the dynamic factors later. I like that.
Efficiency. Never do now what you can postpone until later; with alpha/beta, later may never come.

But it is an OK thing to do, just not optimal.
Of course! Maybe I didn't express myself very clearly. I meant that it is not a decisive factor; of course it is not optimal.
Last edited by Mincho Georgiev on Sat Jan 23, 2010 7:25 pm, edited 1 time in total.
mcostalba
Posts: 2684
Joined: Sat Jun 14, 2008 9:17 pm

Re: Typical white knight capture routine

Post by mcostalba »

Don wrote:
BubbaTough wrote:
mcostalba wrote: I have already made the mistake with unified middlegame and endgame scores (two values in one integer), where I spent almost one week of development and testing, and we had very subtle bugs even after release (take a look at value.h), just for a very minimal speed increase.
This is a useful comment. I have been seriously considering adding this myself, but you have convinced me it's not worth the bother and potential bugs. My last engine was darn fast, but bug-ridden. The bugs are not worth it :oops:.

-Sam
I do the unified scores; it took a good day of work getting it all in place, and it turned out to be less than a 1 percent speedup. However, in my opinion I get cleaner code out of it, and when I add things now it's less likely to have bugs. I guess the devil is in the details.
Yes, that's the reason we didn't revert the patch: we saved 70 lines of code and it seems better now.

Regarding fewer bugs I am not sure; as a general rule any trick is bug prone, and this is a trick, because the correct way to handle it is to use two fields and let the compiler keep them separated.

For instance, you have to take care to avoid overflow, which can happen with additions/subtractions and easily happens with multiplications; also, division by an integer needs an overloaded operator/() to get it right.

Another open door for bugs is portability, because enums are compiled differently by different compilers (the standard does not mandate the underlying type), and you can end up with bugs that are really difficult to find.

Finally, the functions that extract the midgame and endgame values out of the unified score are not trivial to get correct and portable, because you are dealing with signed values.
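
As an illustration, here is a minimal sketch of this kind of packing. It is not the value.h code, the names are invented, and it only shows why extracting the two signed halves is the subtle part:

Code:

    #include <cassert>
    #include <cstdint>

    // Hypothetical packed score: mg in the low 16 bits, eg in the high 16 bits.
    using Score = int32_t;

    inline Score make_score(int mg, int eg) {
        // A negative mg "borrows" one from the eg half during this addition.
        return Score((uint32_t)eg << 16) + mg;
    }

    inline int16_t mg_value(Score s) {
        // Plain truncation returns the low half as a signed 16-bit value.
        return int16_t(uint16_t(uint32_t(s)));
    }

    inline int16_t eg_value(Score s) {
        // Add 0x8000 before shifting to undo the borrow caused by a negative mg;
        // forgetting that is exactly the kind of subtle signedness bug meant above.
        return int16_t(uint16_t((uint32_t(s) + 0x8000) >> 16));
    }

    int main() {
        Score a = make_score(120, -35);
        Score b = make_score(-50, 80);
        Score c = a + b;              // plain integer addition/subtraction works
        assert(mg_value(c) == 70);
        assert(eg_value(c) == 45);
        // Multiplying by a small scalar is fine as long as neither half overflows
        // 16 bits, but dividing the packed integer mixes the halves, which is why
        // a dedicated operator/() is needed.
    }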
mcostalba
Posts: 2684
Joined: Sat Jun 14, 2008 9:17 pm

Re: Typical white knight capture routine

Post by mcostalba »

bob wrote:
xcomponent wrote: It is a matter of taste. There is nothing wrong with assigning static scoring during move generation, as long as it is superseded properly by the dynamic factors later. I like that.
Efficiency. Never do now what you can postpone until later; with alpha/beta, later may never come.

But it is an OK thing to do, just not optimal.
No: if you generate non-captures (we are talking about staged generation here), then you also score and sort them; otherwise, if a TT move or a capture does the job, you never generate the non-captures at all.

You already came up with this objection, and I already warned you (about two months ago) that we are talking about staged generation.
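
To make the staged generation point concrete, here is a rough sketch (invented names, not the real move picker code): the quiet moves are only generated, and therefore only scored and sorted, when the TT move and the captures have failed to produce a cutoff, so scoring them at generation time costs nothing in the common case.

Code:

    #include <cstddef>
    #include <cstdint>
    #include <vector>

    using Move = std::uint16_t;
    constexpr Move MOVE_NONE = 0;

    struct MovePicker {
        enum Phase { TT, CAPTURES, QUIETS, DONE } phase = TT;
        Move ttMove;
        std::vector<Move> list;
        std::size_t cur = 0;

        explicit MovePicker(Move tt) : ttMove(tt) {}

        Move next() {
            switch (phase) {
            case TT:
                phase = CAPTURES;
                if (ttMove != MOVE_NONE)
                    return ttMove;               // often enough for a cutoff
                [[fallthrough]];
            case CAPTURES:
                if (list.empty() && cur == 0)
                    list = generate_captures();  // captures scored as they are generated
                if (cur < list.size())
                    return list[cur++];
                phase = QUIETS;
                cur = 0;
                list = generate_quiets();        // reached only if nothing above cut off,
                score_and_sort(list);            // so scoring/sorting here wastes nothing
                [[fallthrough]];
            case QUIETS:
                if (cur < list.size())
                    return list[cur++];
                phase = DONE;
                [[fallthrough]];
            case DONE:
                return MOVE_NONE;
            }
            return MOVE_NONE;
        }

        // Stubs standing in for the real generators and ordering code. A real
        // picker would also filter the TT move out of the generated lists.
        static std::vector<Move> generate_captures() { return {}; }
        static std::vector<Move> generate_quiets()   { return {}; }
        static void score_and_sort(std::vector<Move>&) {}
    };

    int main() {
        MovePicker mp(MOVE_NONE);
        while (mp.next() != MOVE_NONE) { /* search the move, stop on a cutoff */ }
    }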
Don
Posts: 5106
Joined: Tue Apr 29, 2008 4:27 pm

Re: Typical white knight capture routine

Post by Don »

mcostalba wrote:
Don wrote:
BubbaTough wrote:
mcostalba wrote: I have already made the mistake with unified middlegame and endgame scores (two values in one integer), where I spent almost one week of development and testing, and we had very subtle bugs even after release (take a look at value.h), just for a very minimal speed increase.
This is a useful comment. I have been seriously considering adding this myself, but you have convinced me it's not worth the bother and potential bugs. My last engine was darn fast, but bug-ridden. The bugs are not worth it :oops:.

-Sam
I do the unified scores; it took a good day of work getting it all in place, and it turned out to be less than a 1 percent speedup. However, in my opinion I get cleaner code out of it, and when I add things now it's less likely to have bugs. I guess the devil is in the details.
Yes, that's the reason we didn't revert the patch: we saved 70 lines of code and it seems better now.

Regarding fewer bugs I am not sure; as a general rule any trick is bug prone, and this is a trick, because the correct way to handle it is to use two fields and let the compiler keep them separated.

For instance, you have to take care to avoid overflow, which can happen with additions/subtractions and easily happens with multiplications; also, division by an integer needs an overloaded operator/() to get it right.

Another open door for bugs is portability, because enums are compiled differently by different compilers (the standard does not mandate the underlying type), and you can end up with bugs that are really difficult to find.

Finally, the functions that extract the midgame and endgame values out of the unified score are not trivial to get correct and portable, because you are dealing with signed values.
I don't know if I implemented it in the most optimal way, but I worked out the details myself. Essentially I use 31 bits instead of 32 for each value and pack the two of them into a 64-bit variable. Then you can do addition and subtraction the usual way, as long as you mask out bit 31, which is a meaningless overflow bit. All the multiplication is done by tables, so efficiency is no concern; I rarely have to multiply by anything greater than 8, so the tables are very tiny. A few macros hide the gory details.
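
In sketch form (invented names, not the actual macros), such a scheme could look like the code below. One wrinkle the sketch has to handle: for subtraction the spare bit has to be set before subtracting, so that a borrow out of the low field is absorbed there instead of reaching the high field.

Code:

    #include <cassert>
    #include <cstdint>

    // Two 31-bit signed fields packed into a uint64_t, with bit 31 kept clear
    // as a buffer between them: eg in bits 0..30, mg in bits 32..62.
    using Packed = uint64_t;
    constexpr Packed SPARE = 1ULL << 31;    // the "meaningless overflow bit"

    inline int32_t unbias(uint32_t field) {
        // Interpret a 31-bit field as signed two's complement.
        return field < 0x40000000 ? (int32_t)field
                                  : (int32_t)field - 0x7FFFFFFF - 1;
    }

    inline Packed pack(int32_t mg, int32_t eg) {
        return ((uint64_t)((uint32_t)mg & 0x7FFFFFFF) << 32)
             |  (uint64_t)((uint32_t)eg & 0x7FFFFFFF);
    }

    inline Packed padd(Packed a, Packed b) {
        // A carry out of the low field stops in the empty bit 31; mask it away.
        return (a + b) & ~SPARE;
    }

    inline Packed psub(Packed a, Packed b) {
        // Pre-set the buffer bit so a borrow out of the low field is absorbed
        // there instead of reaching the high field, then clear it again.
        return ((a | SPARE) - b) & ~SPARE;
    }

    inline int32_t eg_part(Packed p) { return unbias((uint32_t)(p & 0x7FFFFFFF)); }
    inline int32_t mg_part(Packed p) { return unbias((uint32_t)((p >> 32) & 0x7FFFFFFF)); }

    int main() {
        Packed s = pack(30, -120);
        s = padd(s, pack(-45, 200));
        s = psub(s, pack(10, 15));
        assert(mg_part(s) == -25 && eg_part(s) == 65);
        // Multiplication would go through small precomputed tables of packed
        // values, as described above, rather than multiplying the packed word.
    }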
Eelco de Groot
Posts: 4565
Joined: Sun Mar 12, 2006 2:40 am

Re: bugs in value.h Stockfish question

Post by Eelco de Groot »

mcostalba wrote:
This well-modularized and disjoint set of functions is for me more important than a bit of (theoretical) extra speed. I have already made the mistake with unified middlegame and endgame scores (two values in one integer), where I spent almost one week of development and testing, and we had very subtle bugs even after release (take a look at value.h), just for a very minimal speed increase.
Marco, I'm a bit vexed now. Are you saying there are bugs in value.h but we will not know about them? :( If you go back to the old scoring method now, will we ever know what the bugs are? I don't think I'm very good at figuring out what went wrong; I'm not so good with all the bit-twiddling tricks. Also, if you use larger scores anywhere than default Stockfish does, larger values for king attacks etc., will the bugs not affect these, and is the maximum value that does not cause an overflow documented anywhere? Or can we figure out any bugfixes if there is a new version?

I don't have any bugfixes to offer yet in return, sorry; apart from some changes that are also just experimental, of course, but no bugs found in the code. Here is a small wrinkle in the code, I think, that was already there a long time ago in Glaurung but did not matter because no IID was done at non-PV nodes. Now that UseIIDAtNonPVNodes is enabled in Stockfish 1.6, would it not be better to at least save the evaluate() score for later use?

Referring to these lines from search() in search.cpp

Code:


    // Go with internal iterative deepening if we don't have a TT move
    if (UseIIDAtNonPVNodes && ttMove == MOVE_NONE && depth >= 8*OnePly &&
            !isCheck && evaluate(pos, ei, threadID) >= beta - IIDMargin)
    {
        search(pos, ss, beta, Min(depth/2, depth-2*OnePly), ply, false, threadID);
        ttMove = ss[ply].pv[ply];
        tte = TT.retrieve(pos.get_key());
    }
I'm not sure yet, for the Rainbow Serpent code, whether it would not be better to just do an evaluate() somewhere at the beginning and be done with it. The problem is that at this point, where you want to do internal iterative deepening, you may already have done a verification search in null move, or a quiescence search in razoring; you already have a value from the hash table if there was a hash hit; and/or, depending on the code base, there is already an approximateEval from quick_evaluate(). Sadly these values are all different and not really interchangeable, and you never know beforehand which of them you have. So maybe it is better to just have the uniform evaluate() anyway, do that somewhere at the beginning, and for efficiency pass the value to the quiescence search. Everything I have tried so far using the other values instead messes up the search 8-)
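
In sketch form (placeholder names, not actual Stockfish or Rainbow Serpent code), the "evaluate once near the top and reuse it" idea looks something like this:

Code:

    #include <vector>

    struct Position { /* placeholder board state */ };

    constexpr int VALUE_NONE = 30000;    // marker for "no usable static eval"
    constexpr int IIDMargin  = 256;      // illustrative margin only

    int evaluate(const Position&) { return 0; }   // stand-in for the real evaluate()

    struct SearchStackEntry {
        int staticEval = VALUE_NONE;     // cached evaluate() result for this ply
    };

    int search(Position& pos, std::vector<SearchStackEntry>& ss,
               int beta, int depth, int ply, bool inCheck)
    {
        // Evaluate once, up front; when in check the static eval is meaningless.
        int staticEval = inCheck ? VALUE_NONE : evaluate(pos);
        ss[ply].staticEval = staticEval;

        // Razoring, the null-move condition and the IID guard from the quoted
        // snippet can now all test this one value instead of re-evaluating or
        // reusing approximateEval / hash values that mean slightly different things.
        bool iidOk = !inCheck && depth >= 8 && staticEval >= beta - IIDMargin;
        (void)iidOk;

        // ... null move, razoring, the move loop and the quiescence search would
        // follow here, with ss[ply].staticEval passed down so nothing has to
        // call evaluate() a second time.
        return staticEval;
    }

    int main() {
        Position pos;
        std::vector<SearchStackEntry> ss(100);
        search(pos, ss, 0, 8, 0, false);
    }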

Regards,
Eelco
Debugging is twice as hard as writing the code in the first
place. Therefore, if you write the code as cleverly as possible, you
are, by definition, not smart enough to debug it.
-- Brian W. Kernighan
bob
Posts: 20943
Joined: Mon Feb 27, 2006 7:30 pm
Location: Birmingham, AL

Re: Typical white knight capture routine

Post by bob »

mcostalba wrote:
bob wrote:
xcomponent wrote: It is a matter of taste. There is nothing wrong with assigning static scoring during move generation, as long as it is superseded properly by the dynamic factors later. I like that.
Efficiency. Never do now what you can postpone until later; with alpha/beta, later may never come.

But it is an OK thing to do, just not optimal.
No: if you generate non-captures (we are talking about staged generation here), then you also score and sort them; otherwise, if a TT move or a capture does the job, you never generate the non-captures at all.

You already came up with this objection, and I already warned you (about two months ago) that we are talking about staged generation.
I'm apparently missing what you are saying. I _never_ "sort" non-captures, as that is always bad. I used a "selection sort" when I chose to do this kind of thing, which I no longer do at all.

In any case, what he is doing will work correctly; it just won't be as efficient as the best solution, which is all I claimed.
Don
Posts: 5106
Joined: Tue Apr 29, 2008 4:27 pm

Re: Typical white knight capture routine

Post by Don »

mcostalba wrote:
bob wrote:
xcomponent wrote: It is a matter of taste. There is nothing wrong with assigning static scoring during move generation, as long as it is superseded properly by the dynamic factors later. I like that.
Efficiency. Never do now what you can postpone until later; with alpha/beta, later may never come.

But it is an OK thing to do, just not optimal.
No: if you generate non-captures (we are talking about staged generation here), then you also score and sort them; otherwise, if a TT move or a capture does the job, you never generate the non-captures at all.

You already came up with this objection, and I already warned you (about two months ago) that we are talking about staged generation.
I used to do the sorting on the fly, using a selection sort, because it's technically the laziest way to do it. However, tests indicate that this does not pay off. I think when you do the entire sort at once there must be huge advantages in pipelining or something, but it's actually faster not to do the sorting lazily, at least for me.
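
For illustration, here is a minimal sketch of the two approaches being compared (the move type and the scores are made up; a real engine scores with history counters, SEE and so on):

Code:

    #include <algorithm>
    #include <cstddef>
    #include <cstdint>
    #include <utility>
    #include <vector>

    struct ScoredMove {
        std::uint16_t move;
        int           score;
    };

    // Lazy ordering: before handing out the next move, swap the best remaining
    // entry to the front of the unsearched tail. If a cutoff comes early, most
    // of the list is never touched.
    ScoredMove pick_next(std::vector<ScoredMove>& moves, std::size_t searched)
    {
        std::size_t best = searched;
        for (std::size_t i = searched + 1; i < moves.size(); ++i)
            if (moves[i].score > moves[best].score)
                best = i;
        std::swap(moves[searched], moves[best]);
        return moves[searched];
    }

    // One-shot ordering: sort the whole list once, right after scoring it, and
    // then just walk it in order.
    void sort_all(std::vector<ScoredMove>& moves)
    {
        std::sort(moves.begin(), moves.end(),
                  [](const ScoredMove& a, const ScoredMove& b) { return a.score > b.score; });
    }

    int main() {
        std::vector<ScoredMove> moves{{1, 10}, {2, 50}, {3, 30}};
        ScoredMove first = pick_next(moves, 0);   // lazy pick: returns the score-50 move
        (void)first;
        sort_all(moves);                          // eager: the whole list ordered at once
    }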