Searching fail highs shallower..

Discussion of chess software programming and technical issues.

Moderators: hgm, Rebel, chrisw

bob
Posts: 20943
Joined: Mon Feb 27, 2006 7:30 pm
Location: Birmingham, AL

Re: Searching fail highs shallower..

Post by bob »

Back when this was a topic on r.g.c.c, someone mentioned that on a root fail low, they set the iteration counter back to one. After thinking about it, the idea seemed to be that the fail low entry at the root will hang around with a bound lower than at all previous iterations. Which means it will continue to fail low, letting the search find a better move using iterative deepening.

I played around with it, but encountered at least two problems. First, if that key entry gets overwritten, you quickly have a problem: you will repeat the process and cycle back around to depth=1 again. Not so good. Second, if there are several moves that look bad at some big depth, you rinse and repeat several times. After seeing both of these issues, I decided against it. Guess I should test again as it is a very simple change... I can easily choose the lower alpha bound to trigger this. I.e., don't do it on the first or second fail low, only when it is apparent this move really sucks.

And that reminds me of the third issue I found. Positions where your opponent makes a sacrifice that you accept. At early iterations, it looks like you are winning material. But as you go deeper, you sense the real outcome when you fail low back to a normal score. But THAT can trigger this nonsense, and there is no possible move that will help...
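A minimal sketch of the restart-with-a-threshold idea described above (the threshold value and function names are illustrative assumptions, not Crafty's actual code):

```c
#include <assert.h>

/* Restart iterative deepening at depth 1 only after repeated root fail
   lows, so a single mild fail low (e.g. a refuted sacrifice resolving
   back to a normal score) does not trigger a full restart.  The
   threshold of 3 is an illustrative assumption. */
#define FAIL_LOW_RESTART_THRESHOLD 3

int next_iteration_depth(int current_depth, int root_fail_low_count) {
  if (root_fail_low_count >= FAIL_LOW_RESTART_THRESHOLD)
    return 1;             /* move really sucks: start over shallow */
  return current_depth;   /* otherwise just widen the window and re-search */
}
```

The point of the counter is exactly the third issue: one fail low that merely corrects an optimistic sacrifice score should not throw away the whole iteration history.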
User avatar
Ovyron
Posts: 4556
Joined: Tue Jul 03, 2007 4:30 am

Re: Searching fail highs shallower..

Post by Ovyron »

Does Stockfish search fail highs/fail lows shallower? If I recall correctly, Houdart removed this from his engine (Houdini 6 no longer does it), and it has great behavior on fail lows, so I'd presume searching failing moves shallower is bad (i.e., if you implement it in your engine and it improves its play, that means the engine had bad fail high/fail low behavior to begin with).
bob
Posts: 20943
Joined: Mon Feb 27, 2006 7:30 pm
Location: Birmingham, AL

Re: Searching fail highs shallower..

Post by bob »

Can't say what he did/does. I don't do it either. I have a somewhat complex algorithm for allocating time now. It has to do with fail highs / fail lows versus the normal search behavior of sticking with the first move for an iteration (or more). As I mentioned, I ALWAYS resolve the score of the first move before moving on. I never got any positive results with trying other things, although that doesn't mean there are not better approaches possible.
User avatar
Ovyron
Posts: 4556
Joined: Tue Jul 03, 2007 4:30 am

Re: Searching fail highs shallower..

Post by Ovyron »

Rybka 4 had the most terrible behavior I've seen: she'd spend so much time resolving a fail low that it didn't matter what move she played, because so much of her clock had been burned that the time remaining wasn't enough to save the game. An example of "just play anything, but do it faster." The surprise was that Rybka 4.1 fixed this and yet the engine played at mostly the same elo (CCRL reports a 3 elo loss :shock: )

Whatever you do, it better be worth the time you spend on it.
User avatar
xr_a_y
Posts: 1871
Joined: Sat Nov 25, 2017 2:28 pm
Location: France

Re: Searching fail highs shallower..

Post by xr_a_y »

How about combining this with a window size that depends on ID depth? Like:

Code: Select all

delta = d>4?(6+max(0,20-d)):MATE;
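Spelled out as a self-contained helper, that formula gives a full-width window below depth 5, then a half-window narrowing from 21 at d=5 down to 6 at d>=20 (the MATE value and helper name are illustrative assumptions):

```c
#include <assert.h>

#define MATE 32000  /* illustrative mate score bound */

static int max_int(int a, int b) { return a > b ? a : b; }

/* Depth-dependent aspiration half-window: wide open at shallow depths,
   tightening as iterative-deepening depth d grows. */
int window_delta(int d) {
  return d > 4 ? (6 + max_int(0, 20 - d)) : MATE;
}
```

The search would then use [score - delta, score + delta] as the aspiration window for the next iteration, widening it on a fail high or fail low.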
User avatar
Ovyron
Posts: 4556
Joined: Tue Jul 03, 2007 4:30 am

Re: Searching fail highs shallower..

Post by Ovyron »

Stockfish does modify the depth to make it shallower on failing moves. It does it silently (it is not shown on the output; you have to keep an eye on the current-depth part of the GUI, where it'll show increasingly shallow depths compared to the current iteration).
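A generic sketch of that depth-reduction idea, where each widened re-search after a root fail high runs one ply shallower so it completes faster (this is not Stockfish's actual source; the names are assumptions):

```c
#include <assert.h>

static int max_int(int a, int b) { return a > b ? a : b; }

/* Each consecutive root fail high reduces the re-search depth by one
   ply, never below 1; the counter would reset once the score falls
   back inside the aspiration window. */
int adjusted_depth(int root_depth, int failed_high_count) {
  return max_int(1, root_depth - failed_high_count);
}
```

This is why the GUI's current-depth display can lag behind the iteration number while a fail high is being resolved.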
User avatar
hgm
Posts: 27809
Joined: Fri Mar 10, 2006 10:06 am
Location: Amsterdam
Full name: H G Muller

Re: Searching fail highs shallower..

Post by hgm »

bob wrote: Wed Mar 04, 2020 10:19 pm
First, if that key entry gets overwritten, then you quickly have a problem in that you will repeat the process.
This problem could be circumvented by keeping the depth/score-bound info in the move list of the node. Then you are no longer dependent on hash hits in the daughters to know what you have done before.
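A minimal sketch of that bookkeeping, storing the bound with each move in the node's own move list so it survives hash-table overwrites (struct and field names are illustrative assumptions):

```c
#include <assert.h>

/* Per-move search history kept in the node's move list rather than
   the transposition table, so it cannot be overwritten by unrelated
   entries. */
typedef struct {
  int move;        /* encoded move */
  int depth;       /* depth at which this move was last searched */
  int score;       /* score bound obtained at that depth */
  int bound_type;  /* 0 = exact, 1 = lower bound, 2 = upper bound */
} RootMoveInfo;

/* After a root fail low at 'depth', record the upper bound on the
   move itself instead of relying on a hash entry. */
void record_fail_low(RootMoveInfo *m, int depth, int alpha) {
  m->depth = depth;
  m->score = alpha;    /* true score is <= alpha */
  m->bound_type = 2;   /* upper bound */
}
```

With this in place, deciding whether a move has already failed low at sufficient depth becomes a lookup in the node's own list, independent of hash hits in the daughters.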