
adding an existing key to the hashtable

Posted: Thu Nov 22, 2018 11:04 pm
by sandermvdb
I was wondering what to do when I want to store a value in the hashtable and an entry with the same 64-bit key already exists. I can see it is quite common to always overwrite the existing value, but I really don't understand why you would want to overwrite the SAME position if the new depth is LOWER than the existing one. I can definitely see that always overwriting performs better in, for instance, the Fine #70 position.

An example of the difference (100x more nodes):

Code:

info depth 29 seldepth 31 time 209 score cp 327 nps 678813 nodes 141872 pv a1b1 a7b7 b1c1 b7c7 c1d1 c7d7 d1c2 d7c7 c2d3 c7b7 
info depth 29 seldepth 30 time 2721 score cp 239 nps 5002355 nodes 13611408 pv a1b1 a7a8 b1b2 a8a7 b2b3 a7a6 b3c2 a6b6 c2d2 b6a6 
Is this normal behaviour?
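
For illustration, a minimal sketch of the two store policies being compared, assuming a simple single-slot table (the TTEntry layout and the function names are made up, not taken from any particular engine):

Code:

#include <cstdint>

struct TTEntry {
    uint64_t key;     // full 64-bit Zobrist key
    int16_t  score;
    int8_t   depth;
    uint8_t  flag;    // bound type: exact / lower / upper
    uint16_t move;    // packed best move
};

// "Always overwrite": the new result replaces whatever occupies the slot.
void store_always(TTEntry& e, uint64_t key, int score, int depth,
                  uint8_t flag, uint16_t move) {
    e = { key, (int16_t)score, (int8_t)depth, flag, move };
}

// "Depth-preferred": for the same key, keep the deeper of the two results.
void store_depth_preferred(TTEntry& e, uint64_t key, int score, int depth,
                           uint8_t flag, uint16_t move) {
    if (e.key == key && depth < e.depth)
        return;                                   // keep the existing, deeper entry
    e = { key, (int16_t)score, (int8_t)depth, flag, move };
}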

Re: adding an existing key to the hashtable

Posted: Fri Nov 23, 2018 10:22 am
by Joost Buijs
sandermvdb wrote: Thu Nov 22, 2018 11:04 pm
In my engine I only overwrite positions with the same 64-bit key when the new depth >= the old depth. Indeed, it doesn't perform very well at Fine #70. My guess is that Fine #70 is very sensitive to the GHI (graph history interaction) problem and that this has a very big influence on performance. Just do what gives you the highest Elo when playing gauntlets against other engines. I've also been thinking about always overwriting positions with pawns only and using depth >= old depth for all others, but I simply never tried it.
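
A sketch of that replacement test, including the untested pawn-only idea as a flag (the function, the Entry struct and the pawns_only parameter are hypothetical):

Code:

#include <cstdint>

struct Entry { uint64_t key; int depth; };

// Hypothetical replacement test for a same-key hit: keep the deeper entry,
// except (the untried idea above) pawn-only positions, which are always overwritten.
bool should_replace(const Entry& old_e, uint64_t key, int new_depth, bool pawns_only) {
    if (old_e.key != key) return true;   // different position: normal bucket policy applies
    if (pawns_only)       return true;   // always overwrite pawn-only positions (untested)
    return new_depth >= old_e.depth;     // otherwise require depth >= old depth
}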

Re: adding an existing key to the hashtable

Posted: Fri Nov 23, 2018 4:04 pm
by hgm
The point is that you can never get a result for the same position that has lower depth if the window bounds in the hash table were usable: you would have taken the hash cutoff. So the situation can only occur when the bounds in the hash table were useless. And why would you want to keep precious hash-table slots occupied by useless info, and discard somewhat useful info to make that possible?
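
To make the argument concrete, here is the generic probe logic it rests on (a sketch, not any specific engine's code): if the stored entry is deep enough and its bound is usable for the current window, the search cuts off and never re-searches this position at a lower depth, so a lower-depth store can only happen when the stored bounds were useless here.

Code:

#include <cstdint>

enum Bound : uint8_t { EXACT, LOWER, UPPER };

struct Probe { int depth; int score; Bound bound; };

// Returns true (and sets score) when the stored result is usable for the
// current (alpha, beta) window at the requested depth.
bool hash_cutoff(const Probe& e, int depth, int alpha, int beta, int& score) {
    if (e.depth < depth) return false;                                    // not deep enough
    if (e.bound == EXACT)                    { score = e.score; return true; }
    if (e.bound == LOWER && e.score >= beta) { score = e.score; return true; }
    if (e.bound == UPPER && e.score <= alpha){ score = e.score; return true; }
    return false;                                                         // bounds useless for this window
}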

Re: adding an existing key to the hashtable

Posted: Fri Nov 23, 2018 7:50 pm
by Joost Buijs
I don't want to argue with that. Maybe for a very plain and simple search this holds, but especially in an SMP search with the many re-searches at different bounds that usually happen in a modern search, it sometimes performs better when you keep the entries with the highest depth, and entries with an exact bound are (at least in my search) always usable. That is why I said: just do what gives you the best performance in practice according to your tests.

Re: adding an existing key to the hashtable

Posted: Sat Nov 24, 2018 4:04 am
by bob
Here's the reason for overwriting: apparently whatever is there was not good enough to terminate the search at the start of this position's search, so what's the point of saving something that didn't work when you now have something that is correct and useful?

I.e. exactly what HGM said, but with fewer words... :)

Re: adding an existing key to the hashtable

Posted: Sat Nov 24, 2018 7:37 am
by Joost Buijs
I remember that we had this discussion several years ago. I just do what gives my engine the best performance Elo-wise, and I couldn't care less whether it is theoretically sound or not.

Re: adding an existing key to the hashtable

Posted: Sat Nov 24, 2018 11:17 am
by hgm
Well, the danger of that approach is that it makes it easy to 'paint yourself into a corner'. If something that is theoretically sound does not work for you, it usually means something else is wrong, which invalidates some of the assumptions the theory was based on. By empirically tuning your engine around the underlying error, you basically freeze in that error, so that fixing that problem might not give you a measurable improvement because it doesn't agree with the tuning anymore.

E.g. if what you say is true and your SMP search has many searches going on with different windows, it suggests that your search is poorly synchronized, and that branches that have already been cut off by a tightening of the window closer to the root are still allowed to search on. In a sequential PVS search almost all nodes will have the same null window. Adapting the hash replacement to this situation means there is less to gain from better communication of the window bounds.

Re: adding an existing key to the hashtable

Posted: Sat Nov 24, 2018 12:41 pm
by Joost Buijs
That could be the case; I really don't know.

Last night I ran 10,000 games (single threaded, without YBWC) of depth >= old depth vs. always overwrite, and depth >= old depth scored 6 Elo higher. 10,000 games may not be enough to determine the true difference, but it certainly doesn't score worse. That is all I can say. Maybe there is something wrong with the implementation of my transposition table, but to be honest I don't think that is the case.
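
As a rough sanity check on that sample size (my own back-of-the-envelope numbers, assuming a roughly even score and about 50% draws):

Code:

#include <cmath>
#include <cstdio>

// Approximate 2-sigma Elo error bar for an N-game match with a near-50% score.
// The draw ratio is an assumption; more draws means a tighter error bar.
int main() {
    const double N = 10000.0;
    const double draw_ratio = 0.5;                    // assumed
    const double s = 0.5;                             // score fraction near 50%
    const double var = 0.25 * (1.0 - draw_ratio);     // per-game variance of a 0 / 0.5 / 1 result
    const double se = std::sqrt(var / N);             // standard error of the score fraction
    const double slope = 400.0 / (std::log(10.0) * s * (1.0 - s)); // d(Elo)/d(score) at 50%
    std::printf("2-sigma error: +/- %.1f Elo\n", 2.0 * se * slope);
    // Prints roughly +/- 4.9 Elo, so a 6 Elo difference over 10,000 games is
    // only marginally outside the noise.
    return 0;
}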

What I mean by different window bounds is that I often call quiescence with several different offsets from the main window. Since I use the transposition table in quiescence, this could have an influence that is very difficult to predict, especially in an SMP environment.

Re: adding an existing key to the hashtable

Posted: Sat Nov 24, 2018 11:33 pm
by hgm
Joost Buijs wrote: Sat Nov 24, 2018 12:41 pm
That explains a lot; under those conditions it would obviously be detrimental to overwrite results of deeper searches that are useful w.r.t. the true window with QS results for the offsetted window. One could wonder if changing the replacement algorithm would be the best way to solve this, however. E.g. if you only use offsetted windows with QS, you might just refrain from overwriting deeper results with QS results, but still allow (say) d=1 to overwrite d=5. Or offset the hash key for offsetted searches, so that the offsetted QS results would never collide with a deeper result for the same position. Or reserve fields in the hash entry for the score of an offsetted search as well as the true window. (If these only occur in QS, you would not have to store the depth with those.) Or store all d=0 entries in separate entries anyway (in the same bucket), even if a deeper result with that same key is already in the table.
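
A minimal sketch of the key-offset idea from the paragraph above (the constant and the helper are made up; any fixed non-zero constant would do):

Code:

#include <cstdint>

// Give offsetted-window QS searches their own key space by XOR-ing a fixed tag
// into the Zobrist key, so their entries can never collide with the regular
// entry for the same position. The tag value here is arbitrary.
constexpr uint64_t OFFSET_QS_TAG = 0x9E3779B97F4A7C15ULL;

inline uint64_t tt_key(uint64_t zobrist_key, bool offsetted_window_qs) {
    return offsetted_window_qs ? (zobrist_key ^ OFFSET_QS_TAG) : zobrist_key;
}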

Re: adding an existing key to the hashtable

Posted: Sun Nov 25, 2018 8:54 am
by Joost Buijs
hgm wrote: Sat Nov 24, 2018 11:33 pm
Modifying the hash key for offsetted searches seems like a good idea; certainly something I want to try. Adding extra fields to an entry is a possibility, but at the moment a single entry fits exactly in 16 bytes (4 entries per cache line), and that is something I would like to keep.
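
For illustration, one way such a 16-byte layout could look; the field choices and widths here are mine, not Joost's:

Code:

#include <cstdint>

// Illustrative 16-byte entry: four of these fit exactly in a 64-byte cache line.
struct TTEntry16 {
    uint64_t key;      // 8 bytes: full Zobrist key
    int16_t  score;    // 2 bytes
    int16_t  eval;     // 2 bytes: static evaluation
    uint16_t move;     // 2 bytes: packed best move
    uint8_t  depth;    // 1 byte
    uint8_t  gen_flag; // 1 byte: generation + bound type
};
static_assert(sizeof(TTEntry16) == 16, "entry must stay 16 bytes");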

In the past I tried many different replacement schemes; most of them gave hardly any improvement over depth >= old depth, which is why I kept it this way. My TT code is rather old; I haven't changed anything in 8 years, so maybe it is time to revise it a little.

Of course another option is to use a separate table in quiescence. That always seemed like a waste of memory to me, but with current memory sizes it should be no problem at all.