Proportion of time for fixed depth vs. quiescent search

Discussion of chess software programming and technical issues.

Moderators: hgm, Rebel, chrisw

lojic
Posts: 71
Joined: Thu Jan 28, 2021 9:50 pm
Full name: Brian Adkins

Proportion of time for fixed depth vs. quiescent search

Post by lojic »

I haven't implemented progressive deepening yet (I am sold on the idea), but I do have a simple quiescent search that searches capture moves ordered using MVV-LVA. My current approach is to search to a fixed depth and then return the result of the quiescent search, which also has a max number of plies so it'll finish in a "reasonable" amount of time. I've fooled around with changing my fixed depth for the alpha/beta and the fixed depth for quiescence, but I expect there is a better way than that.

I'm not too concerned about my current approach, but when I implement progressive deepening, it seems like I will still need a mechanism for managing how much time is given to the fixed depth alpha/beta vs. how much to allow for the quiescent search.

I've looked through a number of CPW pages, and I haven't found anything specific.
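For what it's worth, the MVV-LVA ordering mentioned above can be sketched as follows (the piece values and the (attacker, victim) move representation are illustrative choices, not from this thread):

```python
# Piece values used only for ordering captures, not for evaluation.
# The exact numbers are an illustrative choice.
PIECE_VALUE = {'P': 100, 'N': 320, 'B': 330, 'R': 500, 'Q': 900, 'K': 20000}

def mvv_lva_key(move):
    """Score a capture for ordering: Most Valuable Victim first,
    Least Valuable Attacker as the tie-break.  A move is modelled
    here as an (attacker, victim) pair of piece letters."""
    attacker, victim = move
    return PIECE_VALUE[victim] * 10 - PIECE_VALUE[attacker]

def order_captures(captures):
    """Sort captures so that e.g. PxQ is tried before QxP."""
    return sorted(captures, key=mvv_lva_key, reverse=True)
```

With this ordering, pawn-takes-queen is searched before queen-takes-pawn, which is usually enough to get early beta cutoffs in quiescence.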
bob
Posts: 20943
Joined: Mon Feb 27, 2006 7:30 pm
Location: Birmingham, AL

Re: Proportion of time for fixed depth vs. quiescent search

Post by bob »

easy enough. Don't "give" it an amount of time to use and fix the depth based on that. Instead, start the search and stop when time runs out...
eligolf
Posts: 114
Joined: Sat Nov 14, 2020 12:49 pm
Full name: Elias Nilsson

Re: Proportion of time for fixed depth vs. quiescent search

Post by eligolf »

As Bob said, you want to stop the search if your given time runs out. You need to add some logic to both your alpha-beta and your quiescence search, something like:

Code: Select all

if nodes % 1000 == 0:
    if search_time >= max_allowed_time:
        return None

So every 1000 nodes you check whether your time has run out, and if so return None. Then in your iterative deepening you check whether the returned result is a real value; if not, you use the previously calculated best score and move.

Your engine can also get a specific depth to calculate to. If a depth is given, you don't set a max_allowed_time and instead stop the iterative deepening at that given depth.
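A driver loop that falls back on the last completed iteration might look like this (a minimal sketch; `search` is a stand-in for your alpha-beta entry point, assumed to return a (score, move) pair or None when aborted on time):

```python
import time

def iterative_deepening(search, max_allowed_time, max_depth=64):
    """Deepen one ply at a time; keep the result of the last iteration
    that completed.  'search(depth, deadline)' is assumed to return a
    (score, move) pair, or None if it was aborted on time."""
    deadline = time.time() + max_allowed_time
    best = None
    for depth in range(1, max_depth + 1):
        result = search(depth, deadline)
        if result is None:       # iteration aborted: discard its partial work
            break
        best = result            # iteration finished: remember score and move
        if time.time() >= deadline:
            break
    return best
```

For the fixed-depth mode described above, you can pass a very large time allowance and set max_depth to the requested depth.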
User avatar
hgm
Posts: 27787
Joined: Fri Mar 10, 2006 10:06 am
Location: Amsterdam
Full name: H G Muller

Re: Proportion of time for fixed depth vs. quiescent search

Post by hgm »

Limiting the depth of QS is usually a bad idea; in positions with many captures you would get exactly the same problems as with a fixed-depth search, where your QS hits the depth limit and the search starts to return nonsense (grabbing valuable but protected pieces on the last move, and scoring the resulting bad trade as a gain). And when you don't hit the depth limit, you might as well not have had it. It is of course always possible to return nonsense much faster than to return an accurate result, but it will usually not lead to stronger play. And there is a good chance that it actually hurts you speed-wise, once you use iterative deepening to gather information for move ordering on the next depth. The tree for the next depth will look totally different, since it will correct most of the nonsense by now having the recapture within the horizon, but generate its own, independent nonsense one ply deeper. So the moves suggested to it by the previous iteration as good will have turned bad, and searching them first only wastes time.

A minimum requirement is that you always search recaptures to the to-square of the previous ply, no matter how far you have already exceeded your target depth. What is possible is to first do a fixed maximum number of all-capture plies, and then switch to recapture-only. The latter can never take very many nodes.

Some engines limit the depth of QS to some fixed fraction of the depth of the full-width search. This is OK, since it will effectively lift the restriction as the search depth increases, and only the first few iterations (which take negligible time anyway) can be counterproductive. This helps against choking on pathological positions (such as those with 16 Queens), where you would otherwise not even manage to finish a single QS. Of course it still returns nonsense in that case, but at least it doesn't hang.
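The "fixed number of all-capture plies, then recapture-only" scheme could be sketched like this (a sketch only: the pos object with its evaluate/generate/make/unmake methods is an assumed interface, not code from any real engine):

```python
def quiescence(pos, alpha, beta, qs_ply=0, all_capture_plies=4):
    """Quiescence search that tries every capture for the first
    'all_capture_plies' plies, then only recaptures to the to-square
    of the previous move.  'pos' is an assumed interface exposing
    evaluate(), generate_captures(), generate_recaptures(), make()
    and unmake(); scores are negamax (from the side to move)."""
    stand_pat = pos.evaluate()       # the side to move may decline all captures
    if stand_pat >= beta:
        return stand_pat
    alpha = max(alpha, stand_pat)

    if qs_ply < all_capture_plies:
        moves = pos.generate_captures()     # full capture search
    else:
        moves = pos.generate_recaptures()   # cheap recapture-only tail
    for move in moves:
        pos.make(move)
        score = -quiescence(pos, -beta, -alpha, qs_ply + 1, all_capture_plies)
        pos.unmake(move)
        if score >= beta:
            return score
        alpha = max(alpha, score)
    return alpha
```

Because the recapture-only tail barely branches, the cap on the all-capture phase no longer cuts off the tactically critical recapture.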
lojic
Posts: 71
Joined: Thu Jan 28, 2021 9:50 pm
Full name: Brian Adkins

Re: Proportion of time for fixed depth vs. quiescent search

Post by lojic »

hgm wrote: Mon Feb 01, 2021 12:04 pm Limiting the depth of QS is usually a bad idea; in positions with many captures you would get exactly the same problems as with a fixed-depth search, where your QS hits the depth limit and the search starts to return nonsense (grabbing valuable but protected pieces on the last move, and scoring the resulting bad trade as a gain). And when you don't hit the depth limit, you might as well not have had it. It is of course always possible to return nonsense much faster than to return an accurate result, but it will usually not lead to stronger play. And there is a good chance that it actually hurts you speed-wise, once you use iterative deepening to gather information for move ordering on the next depth. The tree for the next depth will look totally different, since it will correct most of the nonsense by now having the recapture within the horizon, but generate its own, independent nonsense one ply deeper. So the moves suggested to it by the previous iteration as good will have turned bad, and searching them first only wastes time.

A minimum requirement is that you always search recaptures to the to-square of the previous ply, no matter how far you have already exceeded your target depth. What is possible is to first do a fixed maximum number of all-capture plies, and then switch to recapture-only. The latter can never take very many nodes.

Some engines limit the depth of QS to some fixed fraction of the depth of the full-width search. This is OK, since it will effectively lift the restriction as the search depth increases, and only the first few iterations (which take negligible time anyway) can be counterproductive. This helps against choking on pathological positions (such as those with 16 Queens), where you would otherwise not even manage to finish a single QS. Of course it still returns nonsense in that case, but at least it doesn't hang.
Intuitively, making the QS depth proportionate to the full-width depth makes sense to me - maybe combined with the "always search recaptures..." rule. When testing this out, I had a position where the queens were pretty free to roam and there were a lot of captures, but I *think* that might have been before I used some simple MVV-LVA move ordering. Maybe with excellent move ordering, my fear of the QS sucking up all the time is unfounded.
lojic
Posts: 71
Joined: Thu Jan 28, 2021 9:50 pm
Full name: Brian Adkins

Re: Proportion of time for fixed depth vs. quiescent search

Post by lojic »

bob wrote: Mon Feb 01, 2021 2:47 am easy enough. Don't "give" it an amount of time to use and fix the depth based on that. Instead, start the search and stop when time runs out...
I probably didn't articulate my question clearly enough. Let's say I've implemented iterative deepening, and for a certain position I've used up all the time at a depth of 3 because the QS went wild with captures down one path. Is that better than restricting the QS a little and getting deeper in the same amount of time, to consider more breadth?

I expect the answer may be, "it depends", and I can probably just test it out by having my engine play itself with two settings. Also, as I mentioned in my other post, maybe once I have iterative deepening and good move ordering, the QS never sucks up an unreasonable amount of time.
User avatar
hgm
Posts: 27787
Joined: Fri Mar 10, 2006 10:06 am
Location: Amsterdam
Full name: H G Muller

Re: Proportion of time for fixed depth vs. quiescent search

Post by hgm »

The problem is that with a restricted QS you will get lines with unknown score, as what the QS returns is likely to be nonsense. But when you combine that with a search that takes the scores seriously, this will lead to strange play. It could be that one of the players can force the line, and will do so because he thinks it is good for him, while in fact it was losing. Or, when it doesn't end up on the PV, it might have unjustly served as a refutation of an earlier move that was in fact good, which could scare you away from the best move in the root, or make you play something unsound there because you imagine you can deal with the opponent's strongest defense.

So yes, you would search a lot of the other lines deeper, but the result that search comes up with will still be unreliable, as the wrong score from the truncated line will corrupt the propagation of the accurate scores from the deep lines towards the root, in the nodes where these branches meet. Of course in some cases it might turn out that the problematic line would have been alpha-beta prunable if you had searched it last, in which case the faulty score would not spoil anything, and all time spent on it was wasted.

In Chess, MVV/LVA sorting should be enough to prevent 'search explosion'. With random ordering of captures, search explosion is almost unavoidable during some moves in a normal Chess game. Then you suddenly see the depth drop from 10-11 ply to 2-3 ply, and the move that comes out will often be a blunder. With pieces more powerful than a Queen, MVV/LVA might not be enough. Especially pieces that can do rifle capture are problematic, as they can take protected pieces without having to fear retaliation. In Chess pieces tend to be protected, so a rampaging Queen will meet her doom rather quickly. The goal of MVV sorting is to remove pieces that could make more captures later as quickly as possible, and to hope their value is large enough to cause a beta cutoff, so that you don't have to search the alternatives where they survive.

I suppose a good way to handle lines that are too complex to resolve would be to score them as -200 cP for the player who has the move in the root. (E.g. allow a QS to use 100 nodes, abort it when it doesn't return before that, and use the fake score.) Then the engine would not take the risk of having to play that line, unless it is so far behind that it would lose anyway. Then the only surprise that can have any effect is that the line turns out better than expected.
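The abort-and-fake-score idea could be sketched as follows (the node budget of 100 and the -200 cP penalty are the illustrative numbers from the post; everything else, including the run_qs callable, is a made-up stand-in for the real QS entry point):

```python
class QSAborted(Exception):
    """Raised when a single quiescence search exceeds its node budget."""

class NodeBudget:
    """Counts QS nodes; raises once the allowance is spent."""
    def __init__(self, limit):
        self.limit = limit
        self.used = 0
    def tick(self):
        self.used += 1
        if self.used > self.limit:
            raise QSAborted()

def qs_score_or_fake(run_qs, node_limit=100, fake_score=-200):
    """Run one quiescence search under a node budget.  If the line is
    too complex to resolve within the budget, substitute a pessimistic
    fake score (for the root mover) so the engine avoids committing to
    lines it cannot resolve.  'run_qs(budget)' is assumed to call
    budget.tick() once per node."""
    budget = NodeBudget(node_limit)
    try:
        return run_qs(budget)
    except QSAborted:
        return fake_score
```

A tame line returns its real QS score; an exploding one comes back as -200 cP, so it can only be chosen when the engine is losing badly anyway.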