Treating a zero (or near zero) time allocation problem
Moderator: Ras
-
sje
- Posts: 4675
- Joined: Mon Mar 13, 2006 7:43 pm
Consider a program under extreme time pressure: the time allocation for a move is very small, the hardware is not very fast, and the position is not simple. It's possible for the search's time control monitor to detect an out-of-time condition before any move has been selected.
I've seen this happen in more than one open source program when attempting fast time controls (game in five seconds or less). The same situation may occur when the program is given a very small node count limit. As might be expected, the result is usually a crash.
My solution to this problem is to disable the limit check until the very first predicted variation has been produced. The idea here is that it's better to lose by time forfeit than to lose by crashing.
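A minimal sketch of that idea in C++ (names such as SearchState, check_limits and have_first_pv are illustrative assumptions, not Symbolic's code): the periodic limit check simply refuses to raise the stop flag until the first predicted variation has been stored.

```cpp
#include <chrono>

struct SearchLimits {
    std::chrono::steady_clock::time_point deadline;  // hard time budget for this move
    long long node_cap = 0;                          // optional node-count limit
};

struct SearchState {
    bool have_first_pv = false;  // becomes true once iteration one completes
    bool stop = false;           // polled by the recursive search
    long long nodes = 0;
};

// Called every few thousand nodes from the search (hypothetical hook).
void check_limits(SearchState& st, const SearchLimits& lim)
{
    // Never request an abort before a move exists: losing on time is
    // preferable to returning no move at all (or crashing).
    if (!st.have_first_pv)
        return;

    if ((lim.node_cap > 0 && st.nodes >= lim.node_cap) ||
        std::chrono::steady_clock::now() >= lim.deadline)
        st.stop = true;
}
```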
-
Daniel Shawul
- Posts: 4186
- Joined: Tue Mar 14, 2006 11:34 am
- Location: Ethiopia
Re: Treating a zero (or near zero) time allocation problem
Why not just save the first root move? When the root moves are scored and sorted before iteration 0, I always have the first move to return no matter what. So there shouldn't be a crash.
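A sketch of this fallback (all names are hypothetical, not taken from Daniel's engine): the root moves are scored and sorted up front, so the first entry is always available to return, whatever happens afterwards.

```cpp
#include <algorithm>
#include <vector>

struct RootMove {
    int move  = 0;   // stand-in for a real move encoding
    int score = 0;   // ordering score (static eval, qsearch, or last iteration)
};

int pick_move(std::vector<RootMove>& root_moves)
{
    // Order best-first before any iteration starts.
    std::sort(root_moves.begin(), root_moves.end(),
              [](const RootMove& a, const RootMove& b) { return a.score > b.score; });

    int best = root_moves.front().move;  // already a legal fallback

    // ... iterative deepening would run here, updating 'best' whenever an
    // iteration completes (or a new best root move is confirmed) ...

    return best;  // never undefined, even with a near-zero time allocation
}
```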
-
sje
- Posts: 4675
- Joined: Mon Mar 13, 2006 7:43 pm
Re: Treating a zero (or near zero) time allocation problem
I suppose that saving the first root move is the same as saving the first predicted variation if not too much is expected of the term "predicted variation".
Note that some programs, like Symbolic, use a bare-bones capture search with A/B pruning deactivated at the root to help with sorting the ply zero moves before any iterations begin. For goofy positions with 18 queens and the like, this preliminary analysis can take a relatively long time.
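A rough illustration of that pre-ordering step (engine-specific details are stubbed out behind a callback; this is not Symbolic's actual code): each ply-zero move is given a captures-only score with no alpha-beta window at the root, and the moves are then sorted on those scores.

```cpp
#include <algorithm>
#include <functional>
#include <vector>

struct RootMove {
    int move  = 0;  // placeholder move encoding
    int score = 0;
};

// score_after_move: assumed callback that makes the move and returns the
// captures-only (quiescence) score of the resulting position.
void presort_root_moves(std::vector<RootMove>& root_moves,
                        const std::function<int(int)>& score_after_move)
{
    // No alpha-beta window at the root: every move receives a full score so
    // the ordering is meaningful, at the cost of extra time in wild positions.
    for (RootMove& rm : root_moves)
        rm.score = -score_after_move(rm.move);  // negamax sign flip

    std::sort(root_moves.begin(), root_moves.end(),
              [](const RootMove& a, const RootMove& b) { return a.score > b.score; });
}
```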
-
hgm
- Posts: 28440
- Joined: Fri Mar 10, 2006 10:06 am
- Location: Amsterdam
- Full name: H G Muller
Re: Treating a zero (or near zero) time allocation problem
Well, so before you do all that, you just store the first move you generate as root best move...
-
sje
- Posts: 4675
- Joined: Mon Mar 13, 2006 7:43 pm
Re: Treating a zero (or near zero) time allocation problem
Returning the first move generated is certainly better than crashing.
But consider the case where the opponent is not guaranteed to declare a win on time. It would be better to spend a second or so to get a somewhat decent move instead of playing a random move. Also, there would be less need to explain why a ridiculous move escaped from the program.
-
Daniel Shawul
- Posts: 4186
- Joined: Tue Mar 14, 2006 11:34 am
- Location: Ethiopia
Re: Treating a zero (or near zero) time allocation problem
You are just making hypothetical stuff up. First it was crashing, then it was the 80 queens problem, now it is the quality of moves??
For a time-controlled search, we have a minimum and a maximum time to spend on the search. The maximum is allocated so that your program never forfeits on time. If you prefer quality of moves (as it seems you do), increase the maximum time limit and decide whether to extend the minimum time based on some criterion. Usually a fail low is an indicator for extending time, or, in your case, the first 'predicted variation', which I don't understand. Bear in mind that one advantage of iterative deepening is time management: you always have a variation from the previous iteration. So start from the first root move, then the root moves sorted by eval, then sorted by qsearch, then the depth 1 iteration, depth 2, and so on.
If your requirement is to finish the qsearch no matter what, or to finish the current depth that was started when time ran out, that is fine, but know that it has disadvantages too.
Some even use the _opposite_ of what you are proposing: don't start an iteration that you probably won't finish! That way you save more time to spend on the next moves.
It is a compromise, so there is no right or wrong choice here. Just do what works for you, but there is nothing wrong with other people's choices.
You shouldn't cheat on hard limits such as fixed nodes. They are there to evaluate the performance of algorithms regardless of CPU or program speed.
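A sketch of the minimum/maximum scheme described above (the thresholds and names are illustrative assumptions, not any particular engine's values): the soft budget can be stretched toward the hard cap on a root fail low, and a separate check implements the opposite policy of not starting an iteration that will probably not finish.

```cpp
#include <algorithm>
#include <chrono>

using Clock = std::chrono::steady_clock;
using Ms    = std::chrono::milliseconds;

struct TimeManager {
    Clock::time_point start = Clock::now();
    Ms min_time{0};         // normal ("soft") budget for this move
    Ms max_time{0};         // hard cap, sized so the engine cannot forfeit on time
    bool fail_low = false;  // set when the root score drops at the current depth

    Ms elapsed() const {
        return std::chrono::duration_cast<Ms>(Clock::now() - start);
    }

    // Checked between iterations: stop once the (possibly extended) soft budget is spent.
    bool should_stop_iterating() const {
        Ms budget = fail_low ? std::min<Ms>(2 * min_time, max_time) : min_time;
        return elapsed() >= budget;
    }

    // Checked inside the search: only the hard maximum aborts mid-iteration.
    bool hard_limit_hit() const { return elapsed() >= max_time; }

    // The opposite policy mentioned above: skip an iteration that probably
    // will not finish, crudely assuming the next one costs about twice the
    // time already spent.
    bool worth_starting_next_iteration() const {
        return elapsed() * 3 < min_time;
    }
};
```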