I noticed that my engine suddenly needs a lot of time to finish an iteration in which the score of the PV move changed a lot. This can be understood from the fact that the refutation trees stored in the hash table for the other moves are now all wrong: the cut moves in them were chosen to get the score above A, but what is needed in the new iteration is to get the score above B > A, and many of those hash moves are just not good enough for that.
My move ordering tries the hash move directly after the null move, and only does Internal Iterative Deepening (IID) when there is no hash move. So when it does find a hash move from the previous iteration, with a draft that is just one short of what is needed now, it happily starts searching all moves at the full requested depth.
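In outline, that scheme is something like the sketch below (purely an illustration; all helper names are hypothetical stand-ins, not my actual code):

```c
/* Rough sketch of the ordering described above; the helpers are
 * hypothetical stand-ins for whatever the real engine provides. */
typedef int Move;
#define NO_MOVE 0

extern int  TryNullMove(int alpha, int beta, int depth, int *score);
extern Move HashMove(void);            /* move from the hash probe, or NO_MOVE */
extern Move InternalIterativeDeepening(int alpha, int beta, int depth);
extern int  SearchMove(Move m, int alpha, int beta, int depth);
extern int  SearchRemainingMoves(int alpha, int beta, int depth);

int Search(int alpha, int beta, int depth)
{
    int score;

    /* 1. null move first */
    if (TryNullMove(alpha, beta, depth, &score))
        return score;

    /* 2. hash move if available; IID only when there is none */
    Move hashMove = HashMove();
    if (hashMove == NO_MOVE)
        hashMove = InternalIterativeDeepening(alpha, beta, depth);

    /* 3. search the (hash or IID) move at the full requested depth */
    if (hashMove != NO_MOVE) {
        score = SearchMove(hashMove, alpha, beta, depth);
        if (score >= beta)
            return score;              /* cutoff, as in the previous iteration */
    }

    /* 4. fall back on the normal ordering: MVV/LVA captures, killers, ... */
    return SearchRemainingMoves(alpha, beta, depth);
}
```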
But if the hash move was not good enough to get above B, it has to look for other moves, without really having any idea which moves are good. In fact, when the hash move was a non-capture and it fizzles, it will fall back on the normal move ordering, starting with MVV/LVA captures. This seems a bit silly: the non-capture could only have become the hash move if, at some lower depth, all the moves ordered before it (so certainly all captures) had already failed low, even against the less challenging window limit A. So it would be something of a miracle if they failed high against the larger B now.
It seems a smarter move ordering in this case, when the hash move does not cause a cutoff against the raised window, might save a lot of time that is otherwise wasted on searching moves at full depth that are very unlikely to be any good. It might also be beneficial to first search all other moves at a lower depth (i.e. do IID) to weed out those that are no good, before focusing on a move that might be able to beat B.
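For concreteness, something along these lines (purely a sketch, reusing the hypothetical helpers declared in the sketch above; the two-ply reduction for the weeding-out pass is an arbitrary choice):

```c
/* Sketch only: if the hash move exists but fails low against the raised
 * bound, first run a reduced-depth pass over the remaining moves to find
 * a more promising candidate, before committing to full-depth searches.
 * PickBestAtReducedDepth is hypothetical: a cheap search over the
 * remaining moves at reduced depth that returns the best one found. */
extern Move PickBestAtReducedDepth(int alpha, int beta, int depth);

int SearchWithFallbackIID(int alpha, int beta, int depth)
{
    int score;
    Move hashMove = HashMove();

    if (hashMove != NO_MOVE) {
        score = SearchMove(hashMove, alpha, beta, depth);
        if (score >= beta)
            return score;                  /* hash move still refutes */

        /* Hash move was not good enough for the raised bound: weed out
         * the hopeless moves cheaply before trusting the normal ordering. */
        Move candidate = PickBestAtReducedDepth(alpha, beta, depth - 2);
        if (candidate != NO_MOVE && candidate != hashMove) {
            score = SearchMove(candidate, alpha, beta, depth);
            if (score >= beta)
                return score;
        }
    }
    /* otherwise proceed with the remaining moves as before */
    return SearchRemainingMoves(alpha, beta, depth);
}
```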
Does anyone use such a search strategy?
Inadequate hash moves
by hgm (H G Muller)

Re: Inadequate hash moves
by petero2 (Peter Osterlund)
I have thought about this in the past and have tried to improve the situation, but so far all attempts to fix this have made my program weaker. To understand what is going on I added code to collect statistics for nodes where this situation occurs.

hgm wrote:
I noticed that my engine suddenly needs a lot of time to finish an iteration in which the score of the PV move changed a lot. This can be understood from the fact that the refutation trees stored in the hash table for the other moves are now all wrong: the cut moves in them were chosen to get the score above A, but what is needed in the new iteration is to get the score above B > A, and many of those hash moves are just not good enough for that.
...
It seems a smarter move ordering in this case, when the hash move does not cause a cutoff against the raised window, might save a lot of time that is otherwise wasted on searching moves at full depth that are very unlikely to be any good. It might also be beneficial to first search all other moves at a lower depth (i.e. do IID) to weed out those that are no good, before focusing on a move that might be able to beat B.
Does anyone use such a search strategy?

In nodes where beta == alpha+1, where the hash probe returned a lower-bound result, and where that bound is at least 15 cp below beta (15 cp is also the size of my aspiration window), I collect statistics: if the hash move fails high I increment one counter, and if some other move fails high I increment another counter.
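The bookkeeping amounts to no more than something like this (a sketch of the idea only, not the actual code from my engine; all names and constants here are made up):

```c
/* Sketch of the instrumentation described above, not the engine's real code.
 * RecordResult() would be called from the move loop once the node resolves. */
enum { BOUND_LOWER, BOUND_UPPER, BOUND_EXACT };
#define ASPIRATION 15                 /* 15 cp, same as the aspiration window */

static long hashMoveFailHigh  = 0;    /* the hash move itself caused the cutoff */
static long otherMoveFailHigh = 0;    /* some later move caused the cutoff      */

void RecordResult(int alpha, int beta, int hashBound, int hashScore,
                  int failedHigh, int cutoffWasHashMove)
{
    /* Only count nodes matching the conditions above: null-window node,
     * lower-bound hash entry with a score at least 15 cp below beta. */
    if (beta != alpha + 1 || hashBound != BOUND_LOWER
        || hashScore > beta - ASPIRATION)
        return;
    if (!failedHigh)
        return;                       /* only fail highs are of interest */
    if (cutoffWasHashMove)
        hashMoveFailHigh++;
    else
        otherMoveFailHigh++;
}
```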
Single-core tests on some tactical positions (WAC 2, 71 and 100) showed that when there is a fail high in such a node, it is the hash move that fails high about 94-97% of the time. So my conclusion is that there is no problem to fix, at least in my engine.
I have not tried to analyze search trees to come up with a high-level explanation for why this happens, but one guess is that the "real" score of the hash move is often higher than the score that was calculated for it, because alpha-beta stops trying to improve the score once it is >= beta. My engine uses fail-soft, but I don't think that makes a difference in this case.
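For reference, the mechanism I mean is just the ordinary beta cutoff in the move loop; in generic fail-soft form it looks roughly like the sketch below (an illustration only, not my engine's code). The early return is why the stored score can understate how strong the hash move really is:

```c
/* Generic fail-soft negamax skeleton (illustration only). As soon as a move
 * reaches beta the node returns, so the value stored in the hash table is
 * only a lower bound; the move's true score may be considerably higher. */
typedef int Move;
#define INFINITE 30000

extern int  Evaluate(void);
extern int  GenerateMoves(Move *list);
extern void MakeMove(Move m);
extern void UnmakeMove(Move m);

int NegaMax(int alpha, int beta, int depth)
{
    if (depth <= 0)
        return Evaluate();

    Move list[256];
    int nMoves = GenerateMoves(list);
    int bestScore = -INFINITE;        /* mate/stalemate handling omitted */

    for (int i = 0; i < nMoves; i++) {
        MakeMove(list[i]);
        int score = -NegaMax(-beta, -alpha, depth - 1);
        UnmakeMove(list[i]);

        if (score > bestScore) {
            bestScore = score;
            if (score > alpha)
                alpha = score;
        }
        if (score >= beta)
            return bestScore;         /* beta cutoff: no attempt to improve further */
    }
    return bestScore;
}
```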