time usage


zamar
Posts: 613
Joined: Sun Jan 18, 2009 7:03 am

Re: time usage

Post by zamar »

bob wrote: There are two cases:

(1) you fail high. There is no need to resolve the fail high unless a second move also fails high. If that happens, we relax beta, search the first move to get a real score, then search the second move (the one that failed high) again with a null window, but now based on the new best score. If it fails low, the first move is best. If it fails high, we do not need to re-search it unless yet another move fails high. This has a problem: by not re-searching and getting a score, the hash table is not complete with respect to the search of this move, which means the next iteration starts off with missing data, since every other ply won't have a best (PV) move.
Here is what Stockfish currently does when failing high: it immediately stops the current iteration, allocates a lot of extra time, moves on to the next iteration, and starts searching it with a wider window. (The logic is that we want to be sure that the move which failed high really is as good as it looks; nothing else matters.)

Sounds crazy, I know.
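
In code, that policy looks roughly like this. A minimal sketch, with invented stand-ins (searchRoot(), outOfTime(), allocateExtraTime()); it is not Stockfish's actual source:

Code:

enum class Bound { Exact, FailHigh, FailLow };
struct IterResult { int score; Bound bound; };

// Stand-ins for the real engine plumbing.
IterResult searchRoot(int alpha, int beta, int depth);
bool outOfTime();
void allocateExtraTime();

void iterativeDeepening() {
    int delta = 16;   // aspiration half-width, in centipawns say
    int prev  = 0;    // center of the window: last known score
    for (int depth = 1; !outOfTime(); ++depth) {
        IterResult r = searchRoot(prev - delta, prev + delta, depth);
        if (r.bound == Bound::FailHigh) {
            // Do not resolve the fail high at this depth: take the
            // extra time, widen the window, and go straight to depth+1.
            allocateExtraTime();
            delta *= 2;
            prev = r.score;   // re-center on the fail-high bound
            continue;
        }
        prev  = r.score;
        delta = 16;           // reset the window after an exact score
    }
}

The initial half-width and the widening factor are tunable details; the point is only that the fail high is never resolved at the current depth.
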
(2) you fail low. Here you have several choices:

(a) re-search immediately to determine how bad the move really is.

(b) just continue searching with original alpha/beta window and hope one of the remaining moves will fall inside and produce a PV. If not, you just wasted a ton of time and still have to go back and re-search the first move again.

(c) start the search over at iteration 1. Since the best move through the first N iterations failed low at N+1, that hash entry will continue to make it fail low, and you find a new best move, but you find it iteratively rather than having to start at depth=N with no move-ordering info in the hash table. The drawback is that the hash entry for the fail-low can be overwritten. Or the other moves look worse at iterations 1-N, so the original best move is still best, and you just wasted a ton of time.
Here is choice (d) that Stockfish currently uses:
When facing an aspiration fail low, it sets a flag to not stop the search until the fail low is resolved (or we are really short of time). Then it starts looking for alternatives to the first move. If it doesn't find one, it moves on to the next iteration and starts searching it with a wider window. (The logic is that because we seem to be in very bad trouble, we should use much extra time here and try to see deep enough to find the way out of the trouble [and we do not want to waste valuable time on a costly re-search as in choice (b)].)

Sounds even more crazy, I know. But it works surprisingly often. And when we are facing multiple fail lows in a row and can't find a way out, we are very likely losing anyway, no matter what we play. But sometimes this behaviour really sucks and causes score drops from +4 to +1 (luckily this seems to be very rare).
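
A rough sketch of choice (d), again with invented names rather than the real Stockfish code:

Code:

#include <vector>
#include <cstddef>

struct RootMove { int move; };

int  search(RootMove& m, int alpha, int beta, int depth);  // stand-in
bool reallyShortOfTime();                                  // stand-in

bool stopAllowed = true;   // the "do not stop" flag from the post

// After moves[0] fails low against alpha, look for another root move
// that comes back above it. Returns true if one rescues the score.
bool lookForAlternatives(std::vector<RootMove>& moves,
                         int alpha, int beta, int depth) {
    stopAllowed = false;                    // fail low is unresolved
    for (std::size_t i = 1; i < moves.size(); ++i) {
        if (reallyShortOfTime()) break;     // the one escape hatch
        if (search(moves[i], alpha, beta, depth) > alpha) {
            stopAllowed = true;             // resolved: a new best move
            return true;
        }
    }
    // No alternative found: leave the flag down and move on to the
    // next iteration with a wider window, using the extra time.
    return false;
}
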
Joona Kiiski
frankp
Posts: 233
Joined: Sun Mar 12, 2006 3:11 pm

Re: time usage

Post by frankp »

bob wrote:
That's the easy part (testing). A more difficult question is "How do you determine that things are turning south in the game and more time is justified?" We could use the usual fail-high-on-first-move percentage, or the time for the current iteration divided by the time for the previous iteration to catch a sudden jump in complexity. Etc. That's the part that needs thought, and perhaps new ideas.

I can quite easily tell whether an idea is good or bad, since testing is easy. It is coming up with workable ideas that is interesting.
OK, I misinterpreted your post. I thought Hsu had already defined a metric for the idea. Of course, if he had, you would have already tested it - doh!
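
The second metric bob suggests above is straightforward to make concrete. A hedged sketch: track each iteration's time divided by the previous iteration's time, and flag the case where the latest ratio jumps well above the running average. The 2x threshold here is invented, not tested:

Code:

#include <vector>
#include <cstddef>

// iterTimes[i] = wall-clock time of iteration i+1, in milliseconds.
bool suddenComplexityJump(const std::vector<double>& iterTimes) {
    if (iterTimes.size() < 3) return false;  // need history to compare
    double sum = 0.0;
    for (std::size_t i = 1; i + 1 < iterTimes.size(); ++i)
        sum += iterTimes[i] / iterTimes[i - 1];
    double avgRatio  = sum / double(iterTimes.size() - 2);
    double lastRatio = iterTimes.back() / iterTimes[iterTimes.size() - 2];
    return lastRatio > 2.0 * avgRatio;   // iteration blew up: more time
}
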
jwes
Posts: 778
Joined: Sat Jul 01, 2006 7:11 am

Re: time usage

Post by jwes »

bob wrote:
zamar wrote:
One example from the mid-70s in my program is the idea of a fail low forcing you to use more time to avoid a tactical problem. And it works well.
Stockfish also uses this idea, but instead of re-searching with a bigger window, it goes directly to searching for alternatives to the first move. If the aspiration window is small enough, this behaviour should somewhat prevent the EBF explosion, and is a good indicator that we should allocate more time.
bob wrote: Thoughts???
The problem is that if we know that we have a very bad EBF, an extra iteration can be very costly. It could take ages just to complete the first move. I'm not saying that this couldn't work, but it's very tricky to decide how much extra time you are willing to spend before giving up.
I am going to address this specific issue once and for all before long. I have tried lots of approaches over the years. In the late 70's we used to use the approach you are describing. There are two cases:

(1) you fail high. There is no need to resolve the fail high unless a second move also fails high. If that happens, we relax beta, search the first move to get a real score, then search the second move (the one that failed high) again with a null window, but now based on the new best score. If it fails low, the first move is best. If it fails high, we do not need to re-search it unless yet another move fails high. This has a problem: by not re-searching and getting a score, the hash table is not complete with respect to the search of this move, which means the next iteration starts off with missing data, since every other ply won't have a best (PV) move.

(2) you fail low. Here you have several choices:

(a) re-search immediately to determine how bad the move really is.

(b) just continue searching with original alpha/beta window and hope one of the remaining moves will fall inside and produce a PV. If not, you just wasted a ton of time and still have to go back and re-search the first move again.

(c) start the search over at iteration 1. Since the best move through the first N iterations failed low at N+1, that hash entry will continue to make it fail low, and you find a new best move, but you find it iteratively rather than having to start at depth=N with no move-ordering info in the hash table. The drawback is that the hash entry for the fail-low can be overwritten.

This should virtually never happen. It requires another position with the same hash table address to be searched to a deeper depth. If this is actually occurring, you may have a hash table bug. A simple way to avoid it is to remove the move that failed low from the root move list for the first N iterations after the fail low. Another question is what window to use for the re-search.
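
A tiny sketch of that exclusion, names invented: restart at depth 1, but leave the failed move out of the root list for the first N iterations so a stale hash bound cannot distort the shallow re-search:

Code:

#include <vector>

struct RootMove { int move; };

void searchRootMove(RootMove& m, int depth);   // stand-in

void restartAfterFailLow(std::vector<RootMove>& rootMoves,
                         int failedLowMove, int N, int maxDepth) {
    for (int depth = 1; depth <= maxDepth; ++depth)
        for (RootMove& m : rootMoves) {
            if (depth <= N && m.move == failedLowMove)
                continue;              // excluded for the early iterations
            searchRootMove(m, depth);  // search everything else normally
        }
}
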
bob wrote: Or the other moves look worse at iterations 1-N, so the original best move is still best, and you just wasted a ton of time.
If you search with a zero window at the fail-low value, it is almost the same search you would have done after resolving the fail low. I did some experiments a while ago, and it appeared the best strategy for Crafty was re-searching the fail low on the first fail low ("-1"), but searching the other moves after the second fail low ("-3").
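
One possible reading of the zero-window remark, as a sketch with invented names: after the first move fails low with bound v, probe the remaining root moves with the null window (v-1, v) instead of the original aspiration window, so any move that can beat v shows up at near-minimal cost:

Code:

#include <vector>
#include <cstddef>

struct RootMove { int move; };

int search(RootMove& m, int alpha, int beta, int depth);   // stand-in

// Returns the index of a root move scoring at least v, or 0 if the
// original (failed-low) first move stands.
std::size_t probeAlternatives(std::vector<RootMove>& moves,
                              int v, int depth) {
    for (std::size_t i = 1; i < moves.size(); ++i)
        if (search(moves[i], v - 1, v, depth) >= v)
            return i;          // moves[i] is at least as good as v
    return 0;
}
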
Another idea is to split at the root after a fail low.
bob wrote: Which is best? It's really time to pose that question to the cluster and find out exactly.