A question about null move pruning in Crafty

Discussion of chess software programming and technical issues.

Moderators: hgm, Rebel, chrisw

sje
Posts: 4675
Joined: Mon Mar 13, 2006 7:43 pm

A question about null move pruning in Crafty

Post by sje »

Here's a question about null move pruning in Crafty. I mention this in the forum because others may have used the code for inspiration.

Consider the code in search.c:

Code: Select all

      /* null-move search with reduced depth: depth - null_depth - 1 */
      if (depth - null_depth - 1 > 0)
        value =
            -Search(tree, -beta, 1 - beta, Flip(wtm), depth - null_depth - 1,
            ply + 1, NO_NULL);
      else
        value = -QuiesceChecks(tree, -beta, 1 - beta, Flip(wtm), ply + 1);
      HashKey = save_hash_key;
      if (abort_search || tree->stop)
        return (0);
      if (value >= beta) {
        /* fail high: stored with draft "depth", not the reduced depth */
        HashStore(tree, ply, depth, wtm, LOWER, value, tree->curmv[ply]);
        return (value);
      }
The value taken from the null move search is stored in the transposition table as a lower bound with a draft of "depth". However, isn't it true that the null move search really only had a draft of "depth - null_depth - 1", a value with a bit less confidence? In fact, if recursive null move searching is in place, the actual draft may be even less.
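
For concreteness, here is the more conservative store I have in mind: a sketch only, reusing the names from the excerpt above, where the draft passed to HashStore is the depth the null-move search actually ran at.

Code: Select all

      if (value >= beta) {
        /* sketch: store with the draft the null-move search actually used */
        HashStore(tree, ply,
            (depth - null_depth - 1 > 0) ? depth - null_depth - 1 : 0,
            wtm, LOWER, value, tree->curmv[ply]);
        return (value);
      }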
bob
Posts: 20943
Joined: Mon Feb 27, 2006 7:30 pm
Location: Birmingham, AL

Re: A question about null move pruning in Crafty

Post by bob »

sje wrote:Here's a question about null move pruning in Crafty. I mention this in the forum because others may have used the code for inspiration.

Consider the code in search.c:

Code: Select all

      if (depth - null_depth - 1 > 0)
        value =
            -Search(tree, -beta, 1 - beta, Flip(wtm), depth - null_depth - 1,
            ply + 1, NO_NULL);
      else
        value = -QuiesceChecks(tree, -beta, 1 - beta, Flip(wtm), ply + 1);
      HashKey = save_hash_key;
      if (abort_search || tree->stop)
        return (0);
      if (value >= beta) {
        HashStore(tree, ply, depth, wtm, LOWER, value, tree->curmv[ply]);
        return (value);
The value taken from the null move search is stored in the transposition table as a lower bound with a draft of "depth". However, isn't it true that the null move search really only had a draft of "depth - null_depth - 1", a value with a bit less confidence? In fact, if recursive null move searching is in place, the actual draft may be even less.
Here's the way to think about it. You reach this position with remaining depth D. First thing you do is a hash probe. If you have previously reached this same position, with the same draft, and the null-move search failed high, don't you want to fail high now without even doing the null-move search?

Positions below the current ply will be stored with a very shallow depth, for sure, since R=3 in Crafty. But at the current ply, if you store with the reduced draft, then when you reach this position again you will do the null search again, it will fail high again, and you've just wasted that effort...

BTW, if you think about it, this is an optimization, not a new idea. If I didn't store the entry there, what would happen? I would go one ply deeper, and then get a hash hit from the reduced-depth entry stored there. But I went through the extra effort of another recursive search level...
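
Here's a minimal sketch of that probe-then-store flow, with illustrative names rather than Crafty's actual hashing code:

Code: Select all

      /* Illustrative transposition-table sketch, not Crafty's real code. */
      #define TT_SIZE 4096
      #define LOWER 1

      typedef struct {
        unsigned long long key;  /* position signature                    */
        int draft;               /* remaining depth when value was stored */
        int bound;               /* bound type (only LOWER used here)     */
        int value;
      } TTEntry;

      static TTEntry tt[TT_SIZE];

      /* After the null-move search fails high at remaining depth "depth",
         store the bound with draft = depth (the full remaining depth).   */
      static void StoreNullFailHigh(unsigned long long key, int depth,
          int value) {
        TTEntry *e = &tt[key % TT_SIZE];
        e->key = key;
        e->draft = depth;
        e->bound = LOWER;
        e->value = value;
      }

      /* At the top of Search(): if we stored such an entry the last time
         we were here, fail high immediately instead of repeating the
         reduced-depth null-move search.                                  */
      static int ProbeNullCutoff(unsigned long long key, int depth,
          int beta, int *value) {
        TTEntry *e = &tt[key % TT_SIZE];
        if (e->key == key && e->draft >= depth && e->bound == LOWER &&
            e->value >= beta) {
          *value = e->value;
          return 1;
        }
        return 0;
      }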
sje
Posts: 4675
Joined: Mon Mar 13, 2006 7:43 pm

Re: A question about null move pruning in Crafty

Post by sje »

bob wrote:Here's the way to think about it. You reach this position with remaining depth D. First thing you do is a hash probe. If you have previously reached this same position, with the same draft, and the null-move search failed high, don't you want to fail high now without even doing the null-move search?
Yes, your point is valid as long as the transposition table entry is probed only from nodes that are candidates for null-move pruning with compatible parameters.

But what if this isn't the case? Consider the possibility that a subsequent search will hit the same position and the old transposition table entry is still around. If this happens, the stored draft might be too big and a bogus cutoff may occur.
adams161
Posts: 626
Joined: Sun May 13, 2007 9:55 pm
Location: Bay Area, CA USA
Full name: Mike Adams

Re: A question about null move pruning in Crafty

Post by adams161 »

I"m clearing hash from move to move so i dont have this issue of an old entry staying around, I dont hash in null move itself, the hash of the postion when null move returns a fail high is the same in future postions, i.e. you can assume you'r usual null move conditions that you check are inherent in the postion, for example if you dont hash when in check, well its teh same position you must not be in check. I figure even the end game condition really doesnt matter much even if i wasnt clearing hash. Apparently it was thought before this wasnt an endgame. If some discrete jump happens where a flag is set to call it endgame, i figure allowing the exception wont kill anything, it got by the endgame flag once before, and when i set an endgame flag, which creates a discrete jump, in all scores, i clear hash anyway even if i dont have a policy of clearing hash. So basicly i can't see a reason not to do this, so i implemented it last night.

Mike
bob
Posts: 20943
Joined: Mon Feb 27, 2006 7:30 pm
Location: Birmingham, AL

Re: A question about null move pruning in Crafty

Post by bob »

sje wrote:
bob wrote:Here's the way to think about it. You reach this position with remaining depth D. First thing you do is a hash probe. If you have previously reached this same position, with the same draft, and the null-move search failed high, don't you want to fail high now without even doing the null-move search?
Yes, your point is valid as long as the transposition table entry is probed only from nodes that are candidates for null-move pruning with compatible parameters.
That is _always_ the case. Same piece locations, same depth (or the hash table depth is deeper, which is even better). Same potential moves with regard to EP, castling, and side to move.
But what if this isn't the case? Consider the possibility that a subsequent search will hit the same position and the old transposition table entry is still around. If this happens, the stored draft might be too big and a bogus cutoff may occur.
Why would we care? Would not a null-move search to depth 10 be better than a null-move search to depth 8? I'd rather do the 10, but it is too costly. But if I occasionally get one for free, I'll take it. We have exactly the same issue with normal searches with regard to excessive depth from the hash entry. If two positions are identical in every way, then how can one be "different"??? If you would do a null-move search at the top of position A, why would you not do a null-move search at the top of position B? I can't think of any reason. And if that is the case, why would you want to do the second null-move search and waste the time when you could look up the result in the hash table and get out of here quickly with little effort?
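
And the reason "identical in every way" holds is that the EP rights, castling rights, and side to move are all folded into the signature itself, so two nodes with the same key really are interchangeable. An illustrative Zobrist sketch, not Crafty's actual tables:

Code: Select all

      #include <stdint.h>

      /* Illustrative Zobrist hashing: every factor that affects the legal
         moves is XORed into the key.  The tables would be filled with
         random 64-bit values at startup. */
      static uint64_t zobrist_piece[12][64]; /* [piece][square] */
      static uint64_t zobrist_castle[16];    /* castling rights */
      static uint64_t zobrist_ep[8];         /* en passant file */
      static uint64_t zobrist_wtm;           /* side to move    */

      uint64_t HashPosition(const int piece_on[64], int castle_rights,
          int ep_file, int wtm) {
        uint64_t key = 0;
        int sq;
        for (sq = 0; sq < 64; sq++)
          if (piece_on[sq] >= 0)               /* -1 = empty square */
            key ^= zobrist_piece[piece_on[sq]][sq];
        key ^= zobrist_castle[castle_rights & 15];
        if (ep_file >= 0)                      /* -1 = no EP target */
          key ^= zobrist_ep[ep_file];
        if (wtm)
          key ^= zobrist_wtm;
        return key;
      }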
bob
Posts: 20943
Joined: Mon Feb 27, 2006 7:30 pm
Location: Birmingham, AL

Re: A question about null move pruning in Crafty

Post by bob »

adams161 wrote:I"m clearing hash from move to move so i dont have this issue of an old entry staying around, I dont hash in null move itself, the hash of the postion when null move returns a fail high is the same in future postions, i.e. you can assume you'r usual null move conditions that you check are inherent in the postion, for example if you dont hash when in check, well its teh same position you must not be in check. I figure even the end game condition really doesnt matter much even if i wasnt clearing hash. Apparently it was thought before this wasnt an endgame. If some discrete jump happens where a flag is set to call it endgame, i figure allowing the exception wont kill anything, it got by the endgame flag once before, and when i set an endgame flag, which creates a discrete jump, in all scores, i clear hash anyway even if i dont have a policy of clearing hash. So basicly i can't see a reason not to do this, so i implemented it last night.

Mike
I don't see any viable reason to clear hash between searches. Why throw away good data? I look at my last year's income tax return to help me save time on this year's. Why ignore that old data that is perfectly usable?
adams161
Posts: 626
Joined: Sun May 13, 2007 9:55 pm
Location: Bay Area, CA USA
Full name: Mike Adams

Re: A question about null move pruning in Crafty

Post by adams161 »

I didn't get good initial results from not clearing the hash, though I agree that conceptually I should work to make it work. I've made changes to my replacement policy and to my quiescence policies, so maybe my next tests will improve. If I don't clear the hash, I still need a replacement policy that gives preference to regular search nodes over qsearch nodes. Maybe they need to be split into two hash tables, so that a quiescence entry would never overwrite a regular search entry. Another possibility is adding another field to my hash structure that tells me things like the entry's age or whether it came from the qsearch; that benefit has to be weighed against the extra memory such a field costs.

Finally, I compute a start hash and an incremental hash for each search. It's a complex process, with the incremental hash drawing info from the search and the start hash drawing info (like en passant rights) from the main WinBoard loop. Though I don't think this is currently a problem, I need to lock down that there is no distortion in the hash from move to move. I'll probably investigate by inspection, simplification, and maybe moving the hash functions into a separate program and running test cases to confirm there is no distortion. For example, if the start hash ever handled an en passant case differently than the incremental hash, then within one search all positions originate from the start hash and the distortion would be evenly distributed, but going from search to search you'll get killed.
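
One cheap way to lock that down: in a debug build, after every make/unmake, recompute the key from scratch and assert that it matches the incrementally updated one. A sketch, where Position and ComputeHashFromScratch() are stand-ins for my own position type and the full recomputation the start hash already does:

Code: Select all

      #include <assert.h>
      #include <stdint.h>

      /* Hypothetical hooks: "Position" and ComputeHashFromScratch() stand
         in for the engine's own position type and full recomputation.    */
      typedef struct Position Position;
      uint64_t ComputeHashFromScratch(const Position *pos);

      /* Call after every MakeMove/UnmakeMove in a debug build; any
         divergence between the start hash and the incremental hash shows
         up at the first move where they disagree.                        */
      static void VerifyIncrementalHash(const Position *pos,
          uint64_t incremental_key) {
        assert(incremental_key == ComputeHashFromScratch(pos));
      }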

Mike
bob
Posts: 20943
Joined: Mon Feb 27, 2006 7:30 pm
Location: Birmingham, AL

Re: A question about null move pruning in Crafty

Post by bob »

adams161 wrote:I didn't get good initial results from not clearing the hash, though I agree that conceptually I should work to make it work. I've made changes to my replacement policy and to my quiescence policies, so maybe my next tests will improve. If I don't clear the hash, I still need a replacement policy that gives preference to regular search nodes over qsearch nodes. Maybe they need to be split into two hash tables, so that a quiescence entry would never overwrite a regular search entry. Another possibility is adding another field to my hash structure that tells me things like the entry's age or whether it came from the qsearch; that benefit has to be weighed against the extra memory such a field costs.

Finally, I compute a start hash and an incremental hash for each search. It's a complex process, with the incremental hash drawing info from the search and the start hash drawing info (like en passant rights) from the main WinBoard loop. Though I don't think this is currently a problem, I need to lock down that there is no distortion in the hash from move to move. I'll probably investigate by inspection, simplification, and maybe moving the hash functions into a separate program and running test cases to confirm there is no distortion. For example, if the start hash ever handled an en passant case differently than the incremental hash, then within one search all positions originate from the start hash and the distortion would be evenly distributed, but going from search to search you'll get killed.

Mike
You need an "age" field. Otherwise you will find it hard to store new stuff in the table because of the deep draft entries from previous searches. The idea is that you keep old stuff, but when you do a store, you _always_ overwrite something that is old regardless of the depth of the entry.
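
A sketch of what that looks like (illustrative names; the age is just a counter you bump once per search):

Code: Select all

      /* Illustrative age-aware replacement, not Crafty's exact scheme.  */
      typedef struct {
        unsigned long long key;
        int draft;
        int age;                /* value of search counter at store time */
        int bound, value;
      } HashEntry;

      static int search_age;    /* incremented once per new search       */

      static int ShouldReplace(const HashEntry *e, int new_draft) {
        if (e->age != search_age)
          return 1;             /* stale entry: always overwrite it,
                                   no matter how deep its draft is       */
        return new_draft >= e->draft; /* current search: prefer deeper   */
      }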
adams161
Posts: 626
Joined: Sun May 13, 2007 9:55 pm
Location: Bay Area, CA USA
Full name: Mike Adams

Re: A question about null move pruning in Crafty

Post by adams161 »

I'm beginning to see that I've had my replacement policy a bit mixed up.

It was: automatically replace if the current hash number didn't equal the hash number in the hash array, and if they were equal, only replace if the current depth was greater. It was pointed out that if the current depth was not greater than the depth in the hash array and the keys were the same, I actually want to replace anyway, because that means I could have used the hash move, so something must be wrong with the bounds, and more recent bounds are better.

But assuming I deal with the age issue, I should not be replacing entries from the current search that have greater depth just because the new position has a different hash key, i.e. is a different position. This solves the problem I had with qsearch hashing: it was always replacing whenever the hash number in the array was different.

Since I was clearing the hash after each turn, I had confined qsearch entries to replacing only when the slot was empty or its depth was already 0 or negative, so that qsearch entries would never replace regular search entries from the current search. But I need to extend that, because my best entries, i.e. my deepest regular search results, should not be replaced just because the new position has a different hash number.

This was my old code:

if (hasharray[modnum].depth <= depth || hasharray[modnum].hnumber != currenthash)

I think I just had it backwards.
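
Folding in the age idea, the corrected test would look something like this (just a sketch, using my variable names from above; the age field and the currentage counter are new):

Code: Select all

      if (hasharray[modnum].age != currentage)
        replace = 1;       /* entry from an older search: always overwrite */
      else if (hasharray[modnum].hnumber == currenthash)
        replace = (depth >= hasharray[modnum].depth);
                           /* same position: deeper result, or fresher
                              bounds at equal depth                        */
      else
        replace = (depth > hasharray[modnum].depth);
                           /* different position: don't evict a deeper
                              entry from the current search                */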

Mike