Time management and estimating search times

Jakob Progsch
Posts: 40
Joined: Fri Apr 16, 2021 4:44 pm
Full name: Jakob Progsch

Time management and estimating search times

Post by Jakob Progsch »

The wiki page says:
Iterative deepening in conjunction with its predictable effective branching factor allows a flexible time management either to terminate the current iteration and to fall back on best move and PV of the previous iteration, or to decide about termination prior to start a new iteration or to search a next root-move.
Which makes sense... You have some heuristic that allocates time to a search, and probably also a hard upper bound so you don't lose on time. Then the iterative deepening estimates at every iteration how expensive going one deeper would be, and stops if that exceeds your target time. In the worst case the hard "abort time" kicks in, the search is truncated, and you return the previous iteration's results.
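For concreteness, a minimal sketch of that scheme might look like the following (purely illustrative; `search_root`, `elapsed_seconds` and the EBF-based prediction are made-up placeholders, not any particular engine's code):

Code: Select all

/* Illustrative sketch: the soft limit decides whether to start another
   iteration, the hard limit ("abort time") truncates the running search. */
typedef struct { int move; int score; } SearchResult;

extern SearchResult search_root(int depth, double hard_limit); /* assumed: aborts itself at hard_limit */
extern double elapsed_seconds(void);                           /* assumed: time since the search started */

SearchResult iterate(double soft_limit, double hard_limit, int max_depth)
{
    SearchResult best = {0, 0};
    double last_cost = 0.0;   /* wall time of the previous iteration */
    double ebf = 2.0;         /* rough guess at the time growth per extra ply */

    for (int depth = 1; depth <= max_depth; depth++) {
        double start = elapsed_seconds();

        /* don't start an iteration we don't expect to finish in time */
        if (depth > 1 && start + last_cost * ebf > soft_limit)
            break;

        SearchResult r = search_root(depth, hard_limit);
        if (elapsed_seconds() >= hard_limit)
            break;            /* truncated: fall back on the previous iteration */
        best = r;

        double cost = elapsed_seconds() - start;
        if (last_cost > 0.0)
            ebf = cost / last_cost;   /* the "predictable branching factor" estimate */
        last_cost = cost;
    }
    return best;
}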

However I struggle with the estimation part. The quote from the wiki talks about the "predictable branching factor", but that doesn't really work out in practice unless I'm doing something wrong. If on the previous move I searched to a depth of, say, 12, then the transposition table is preloaded to a depth of at least 10 when I search the next move. So if my opponent played a PV move, the iterative deepening will blitz through the first 10 depths in almost no time, then go through a highly accelerated depth-11 search, and hit a relative brick wall of cost/nodes somewhere in the next two iterations that dive into "unexplored territory", which isn't well predicted by the previous iterations. So I find the behavior not very predictable at all, and just extrapolating from previous iterations leads to wildly oscillating estimates.

Are there some "tricks" to make this as predictable as the wiki alludes to?
hgm
Posts: 27796
Joined: Fri Mar 10, 2006 10:06 am
Location: Amsterdam
Full name: H G Muller

Re: Time management and estimating search times

Post by hgm »

Even without TT the duration of iterations is quite unpredictable. In particular, iterations that change PV move will take far longer than iterations that don't. And finishing those usually is the most important of all, strengthwise. Otherwise it is tantamount to: "I discovered that the move I had in mind is no good, but it takes time to find an alternative, so let's play it anyway".

The way I usually do it is this (a rough code sketch follows the list):
1) You calculate how much time per move you still have (the 'nominal' thinking time), depending on what is left on your clock and the TC type.
2) When you have already spent more than a certain fraction of that (say 60%), you won't start a new iteration. This fraction is determined experimentally, such that time usage is reasonably distributed over the game.
3) You won't search new moves in the root if you exceed another factor (2-3) times the nominal time, and your score has not dropped too much compared to the previous iteration.
4) Finally, there is a 'cold-turkey timeout' that aborts the search no matter what. I usually set that such that a move can take 10 times as much time as any other remaining move. So with T on the clock and still N moves to go this would be 10*T/(N+9).
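Put together, a rough sketch of those four rules could look like this (illustrative only; the helpers and the exact constants are assumptions, not Fairy-Max code):

Code: Select all

/* Illustrative sketch of rules 1-4 above; names and constants are made up. */
extern double clock_left(void);       /* T: time remaining on our clock */
extern int    moves_to_go(void);      /* N: moves left until the next time control */
extern double elapsed_seconds(void);  /* time spent on the current move so far */

double nominal_time(void)             /* rule 1 (very simplified: ignores increment and TC type) */
{
    return clock_left() / moves_to_go();
}

int may_start_new_iteration(void)     /* rule 2: e.g. 60% of the nominal time */
{
    return elapsed_seconds() < 0.6 * nominal_time();
}

int may_start_new_root_move(int score_dropped)   /* rule 3: factor 2-3, unless the score dropped */
{
    return score_dropped || elapsed_seconds() < 2.5 * nominal_time();
}

double hard_timeout(void)             /* rule 4: one move may use 10 "shares", the other N-1 moves
                                         one share each, so (N+9) shares = T and the cap is 10*T/(N+9) */
{
    return 10.0 * clock_left() / (moves_to_go() + 9);
}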
Henk
Posts: 7218
Joined: Mon May 27, 2013 10:31 am

Re: Time management and estimating search times

Post by Henk »

I remember that in the past Fairy-max was sometimes losing on time, and a bit too often. So maybe in short time controls give higher priority to the cold-turkey timeout.

By the way, I found out that skipper loses on time when it performs a legal move incorrectly. The next time around it thinks one of the opponent's moves is invalid and stops moving. So it loses on time even though there was nothing wrong with the time management.
lithander
Posts: 881
Joined: Sun Dec 27, 2020 2:40 am
Location: Bremen, Germany
Full name: Thomas Jahn

Re: Time management and estimating search times

Post by lithander »

Henk wrote: Fri May 28, 2021 10:50 am I remember that in the past Fairy-max was sometimes losing on time, and a bit too often. So maybe in short time controls give higher priority to the cold-turkey timeout.
Yeah, I really wanted to include Fairy-max in my gauntlet but it couldn't deal with the fast time controls I usually do. :( (5s + 0.5s increments)
Minimal Chess (simple, open source, C#) - Youtube & Github
Leorik (competitive, in active development, C#) - Github & Lichess
hgm
Posts: 27796
Joined: Fri Mar 10, 2006 10:06 am
Location: Amsterdam
Full name: H G Muller

Re: Time management and estimating search times

Post by hgm »

Fairy-Max had inherited the minimalistic time management of micro-Max, which only checked the time after every iteration; methods (3) and (4) were not implemented. The latest version, btw, also implements the cold-turkey time-out (4), so it should not be able to lose on time. I am not sure whether I ever distributed a binary of that, though; the latest commit in the on-line source repository does have it.
Ronald
Posts: 160
Joined: Tue Jan 23, 2018 10:18 am
Location: Rotterdam
Full name: Ronald Friederich

Re: Time management and estimating search times

Post by Ronald »

Do you always return the value found in the hash table (if the depth is sufficient and the bound is correct), or do you only return the hash value for non-PV nodes?

A long time ago I experienced the same problem and, if I remember correctly, a "side effect" of not returning the hash value at PV nodes was that the search iterations became more stable.

So something like:

Code: Select all

if (!pvNode && hashEntry.depth >= depth) {
	// lower or exact bound that already reaches beta: fail-high cutoff
	if (hashEntry.nodeType != HASHUPPER && hashEntry.score >= beta)
		return hashEntry.score;

	// etc. (analogous check against alpha for upper/exact bounds)
}
Jakob Progsch
Posts: 40
Joined: Fri Apr 16, 2021 4:44 pm
Full name: Jakob Progsch

Re: Time management and estimating search times

Post by Jakob Progsch »

Ronald wrote: Fri May 28, 2021 2:51 pm Do you always return the value found in the hash table (if the depth is sufficient and the bound is correct), or do you only return the hash value for non-PV nodes?
I do directly return hits from the TT. Additionally I probably make this worse by not using any LMR or similar pruning. I'm trying to be systematic and do most of my current testing with fairly vanilla PVS (null move plus some delta pruning in QS, both of which I frequently disable for testing), since that is easier to reason about when I see something suspicious. Which probably also means this "brick wall effect" is more pronounced, since context-based extensions/reductions would smooth it out a bit. I noticed, for example, that if you let Stockfish search the same position to the same depth twice in a row, the second search is only about 2x faster and will also produce different results, while my current code will just instantly return the previous PV again.

So this may be something that just goes away once I get more into selectivity.
hgm wrote: Fri May 28, 2021 10:39 am 1) You calculate how much time per move you still have (the 'nominal' thinking time), depending on what is left on your clock and the TC type.
2) When you have already spent more than a certain fraction of that (say 60%), you won't start a new iteration. This fraction is determined experimentally, such that time usage is reasonably distributed over the game.
Right, I might just have to tune that a bit more. I tried keeping a table of "cost to depth" per depth that I would only overwrite if the search was over some time threshold; if the previous deepening iteration didn't provide a plausible prediction, I'd use that table instead. That had issues when searching to depth 10, say, was expensive at some point but subsequently wasn't: it would overestimate the cost and start playing unreasonably fast for a couple of moves.
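In code, what I mean is roughly something like this (just a sketch of the table idea; the names are made up):

Code: Select all

/* Sketch of the "cost to depth" table: only overwrite an entry when the
   measured cost is above some trust threshold, and fall back on the table
   when the current search's own prediction looks implausibly cheap. */
#define MAX_DEPTH 64
static double cost_table[MAX_DEPTH];  /* persists across moves */

void record_iteration(int depth, double cost, double min_trusted)
{
    if (depth < MAX_DEPTH && cost >= min_trusted)
        cost_table[depth] = cost;
}

double predict_next_iteration(int depth, double last_cost, double min_trusted)
{
    if (last_cost >= min_trusted || cost_table[depth] == 0.0)
        return last_cost;             /* trust the current search's own estimate */
    return cost_table[depth];         /* otherwise use the remembered cost */
}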

What I don't like about this approach of fudge factors (60% etc.) is that it drifts from game to game. I guess one option would be to keep some "overflow buffer" of time. Essentially, whenever I find myself at, say, 80% of the allotted time, I don't start a new iteration but add those 20% to the next move. That way the time consumption would be more steady. Intuitively that already happens implicitly, since on the next move w/btime will be larger by whatever amount you haven't used. However, if you calculate times by dividing the leftover time by some move estimate, you under-predict this and are likely to keep making "fast" moves for a while.
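A sketch of that overflow-buffer idea (names made up):

Code: Select all

/* Sketch of the "overflow buffer": time not used on this move is explicitly
   credited to the next one instead of just flowing back into the clock. */
static double carry = 0.0;            /* unused time carried over from earlier moves */

double move_budget(double base_budget)    /* base_budget: e.g. clock_left / moves_to_go */
{
    return base_budget + carry;
}

void on_move_finished(double budget, double spent)
{
    carry = budget - spent;           /* e.g. the unused 20% mentioned above */
    if (carry < 0.0)
        carry = 0.0;
}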
hgm wrote: Fri May 28, 2021 10:39 am 3) You won't search new moves in the root if you exceed another factor (2-3) times the nominal time, and your score has not dropped too much compared to the previous iteration.
4) Finally, there is a 'cold-turkey timeout' that aborts the search no matter what. I usually set that such that a move can take 10 times as much time as any other remaining move. So with T on the clock and still N moves to go this would be 10*T/(N+9).
I hadn't really considered 3. I guess you gain some granularity by doing that, as in: always go one deeper and then decide whether to continue after you have observed whether the PV still looks good or not.

There is also some temptation to just always use the "cold-turkey" approach and argue that the partial work between the last complete iteration and the truncation isn't wasted, since it still populates the TT for the next search, where you "reclaim" some of that effort.
hgm
Posts: 27796
Joined: Fri Mar 10, 2006 10:06 am
Location: Amsterdam
Full name: H G Muller

Re: Time management and estimating search times

Post by hgm »

The problem with the latter approach is that you can never give extra time to any move. So when you have already found out that the PV move turns out to be losing, but did not have time to finish the search for an alternative (which, scoring above alpha, would take at least as long), you would play the losing move, based on the idea that on the next turn you would benefit from finding out very fast, through a TT hit, that you are now in a lost position, and can search it deeper.