Easy Moves

Discussion of chess software programming and technical issues.

Moderators: hgm, Rebel, chrisw

Edmund
Posts: 670
Joined: Mon Dec 03, 2007 3:01 pm
Location: Barcelona, Spain

Easy Moves

Post by Edmund »

Yesterday I tried to implement detection of easy moves in the Glass Chess Engine, for which the search time should be reduced. I wanted to estimate the complexity of the position from the ratio of the best move's subtree node count to the highest subtree node count among the other root moves.

I found that in quiet positions, or positions where two or more moves are viable, this ratio was around 0-30 for Glass, while in cases where for example a queen recapture was forced I saw ratios of around 500-1500. Forced knight/bishop/rook recaptures fell in between.

So I then tried a scaled reduction of the search time (ratio range: fraction of normal time):
50 - 100: 2/3
100 - 200: 1/2
200+: 1/4
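For reference, that ratio-to-time mapping could be sketched like this (a minimal illustration with hypothetical names, not Glass's actual code):

```cpp
#include <algorithm>
#include <cstdint>
#include <vector>

// Map the best-move / second-best subtree node-count ratio at the root
// to a fraction of the normal search time, following the buckets above.
double easyMoveTimeFactor(const std::vector<uint64_t>& rootNodeCounts,
                          int bestMoveIndex)
{
    uint64_t best = rootNodeCounts[bestMoveIndex];
    uint64_t second = 0;
    for (int i = 0; i < (int)rootNodeCounts.size(); ++i)
        if (i != bestMoveIndex)
            second = std::max(second, rootNodeCounts[i]);
    if (second == 0) return 0.25;      // only one legal move: maximally easy
    double ratio = (double)best / (double)second;
    if (ratio >= 200.0) return 0.25;   // forced, e.g. a queen recapture
    if (ratio >= 100.0) return 0.5;
    if (ratio >= 50.0)  return 2.0 / 3.0;
    return 1.0;                        // not an easy move: full time
}
```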

However, the test failed in a quick tournament (20s/game; 1000 games): a 45% score.

Can you see any flaws in my reasoning? Are there any cases where this node ratio could be high and still no time reduction should be done?

Thanks in advance,
Edmund
hgm
Posts: 27808
Joined: Fri Mar 10, 2006 10:06 am
Location: Amsterdam
Full name: H G Muller

Re: Easy Moves

Post by hgm »

It sounds like a very unreliable method, going by node counts. I would never consider implementing easy-move detection on anything other than hard score bounds. E.g. in the root, reduce by 4 ply all moves that fail low against PVscore - 300cP, and by two ply those whose re-search fails low against PVscore - 100cP. And reduce the search time by a factor of 10 if no move exceeds the threshold.
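A minimal sketch of such a scheme (the constants, names, and search callback are illustrative, not from any particular engine): each non-PV root move is probed with a hefty reduction against a hard bound below the PV score, and only moves that survive both probes earn the unreduced search.

```cpp
// Sketch: 'search' is a caller-supplied callback (depth, alpha, beta) -> score.
struct RootProbe {
    int pvScore;  // score of the PV move, in centipawns

    template <typename SearchFn>
    int searchNonPvMove(SearchFn search, int depth) {
        // Probe 1: reduced by 4 ply, null window against PVscore - 300 cP.
        int s = search(depth - 4, pvScore - 301, pvScore - 300);
        if (s < pvScore - 300) return s;   // clearly worse: leave it reduced
        // Probe 2: reduced by 2 ply, null window against PVscore - 100 cP.
        s = search(depth - 2, pvScore - 101, pvScore - 100);
        if (s < pvScore - 100) return s;   // still well below the PV move
        // Survived both bounds: a real alternative exists, so the position
        // is not "easy"; search this move at full depth.
        return search(depth, pvScore - 100, 30000);
    }
};
```

If every non-PV move fails the first probe, the position qualifies as easy and the time budget can be cut (by a factor of 10 in the scheme above).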
Edmund
Posts: 670
Joined: Mon Dec 03, 2007 3:01 pm
Location: Barcelona, Spain

Re: Easy Moves

Post by Edmund »

hgm wrote:It sounds like a very unreliable method, going by node counts. I would never consider implementing easy-move detection on anything other than hard score bounds. E.g. in the root, reduce by 4 ply all moves that fail low against PVscore - 300cP, and by two ply those whose re-search fails low against PVscore - 100cP. And reduce the search time by a factor of 10 if no move exceeds the threshold.
The downside of score bounds is that they require additional calculations in an alpha-beta framework, and this solution requires more source code. If that could be avoided, I would rather go with the simpler option. Could you tell me why node counts are unreliable in this context?
My reasoning was that a low node count indicates a large score difference, as many beta-cutoffs and null-move prunings succeed.
Bo Persson
Posts: 243
Joined: Sat Mar 11, 2006 8:31 am
Location: Malmö, Sweden
Full name: Bo Persson

Re: Easy Moves

Post by Bo Persson »

Edmund wrote:Yesterday I tried to implement detection of easy moves in the Glass Chess Engine, for which the search time should be reduced. I wanted to estimate the complexity of the position from the ratio of the best move's subtree node count to the highest subtree node count among the other root moves.

I found that in quiet positions, or positions where two or more moves are viable, this ratio was around 0-30 for Glass, while in cases where for example a queen recapture was forced I saw ratios of around 500-1500. Forced knight/bishop/rook recaptures fell in between.

So I then tried a scaled reduction of the search time (ratio range: fraction of normal time):
50 - 100: 2/3
100 - 200: 1/2
200+: 1/4

However, the test failed in a quick tournament (20s/game; 1000 games): a 45% score.

Can you see any flaws in my reasoning? Are there any cases where this node ratio could be high and still no time reduction should be done?
The most obvious case is where the move SEEMS easy, until you search it deep enough. Reductions wouldn't help you there!
jdart
Posts: 4367
Joined: Fri Mar 10, 2006 5:23 am
Location: http://www.arasanchess.org

Re: Easy Moves

Post by jdart »

I used to do this when there was a winning capture move, it appeared much better than the other moves, and it was consistently played for several iterations. (Deciding that the capture is "much better" takes some extra code, because with a PVS search you don't get accurate scores for moves besides the PV move, just bounds.)

But I took this code out. It might be OK for human games, since weak players commonly drop material without compensation, but it's not good for engine-engine play, because taking the material isn't always good: a deep search may show a problem.
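The kind of check described here could look roughly like this (a hypothetical sketch; the names and the margin are illustrative, not Arasan's actual code):

```cpp
// Declare a root move "easy" if it is a winning capture, it has stayed best
// for several consecutive iterations, and its score sits well above the
// upper bound of the second-best move (PVS gives only bounds for non-PV moves).
struct EasyMoveTracker {
    int sameBestCount = 0;
    int lastBestMove  = -1;

    void onIterationDone(int bestMove) {
        sameBestCount = (bestMove == lastBestMove) ? sameBestCount + 1 : 1;
        lastBestMove  = bestMove;
    }

    // 'margin' is in centipawns; 200 here is an arbitrary illustrative value.
    bool isEasy(bool bestIsWinningCapture, int pvScore,
                int secondUpperBound, int margin = 200) const {
        return bestIsWinningCapture
            && sameBestCount >= 3
            && pvScore - secondUpperBound >= margin;
    }
};
```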
jwes
Posts: 778
Joined: Sat Jul 01, 2006 7:11 am

Re: Easy Moves

Post by jwes »

Edmund wrote:Yesterday I tried to implement detection of easy moves in the Glass Chess Engine, for which the search time should be reduced. I wanted to estimate the complexity of the position from the ratio of the best move's subtree node count to the highest subtree node count among the other root moves.

I found that in quiet positions, or positions where two or more moves are viable, this ratio was around 0-30 for Glass, while in cases where for example a queen recapture was forced I saw ratios of around 500-1500. Forced knight/bishop/rook recaptures fell in between.

So I then tried a scaled reduction of the search time (ratio range: fraction of normal time):
50 - 100: 2/3
100 - 200: 1/2
200+: 1/4

However, the test failed in a quick tournament (20s/game; 1000 games): a 45% score.

Can you see any flaws in my reasoning? Are there any cases where this node ratio could be high and still no time reduction should be done?
The obvious case would be a sacrifice you need to decline.
hgm
Posts: 27808
Joined: Fri Mar 10, 2006 10:06 am
Location: Amsterdam
Full name: H G Muller

Re: Easy Moves

Post by hgm »

Edmund wrote:The downside of score bounds is that they require additional calculations in an alpha-beta framework, and this solution requires more source code. If that could be avoided, I would rather go with the simpler option. Could you tell me why node counts are unreliable in this context?
My reasoning was that a low node count indicates a large score difference, as many beta-cutoffs and null-move prunings succeed.
It is circumstantial evidence. It could also be due to luck in picking cut moves.

And I don't think it takes extra calculations to first search moves with a hefty reduction and a more stringent bound, and to leave it at that when the search fails low. It saves calculations: if you did not do it, you would search these moves without reduction, and that would be an expensive waste of time.
hgm
Posts: 27808
Joined: Fri Mar 10, 2006 10:06 am
Location: Amsterdam
Full name: H G Muller

Re: Easy Moves

Post by hgm »

jdart wrote:But I took this code out. It might be OK for human games, since weak players commonly drop material without compensation, but it's not good for engine-engine play, because taking the material isn't always good: a deep search may show a problem.
This seems totally flawed reasoning. Even the strongest engines trade material now and then. That the capturer is hanging afterwards, and can be taken 'without compensation', is no deterrent at all when the compensation is already 'in the pocket'. In 99% of these cases the only reply that does not immediately lose is the recapture...

And yes, in the other 1% a deeper search might lead you to play another move after all, like happens on any non-easy move with about 15% probability. And because you now spend the time on the easy move, with a low probability of paying off, you will not reach the higher depth for the non-easy moves, where it would have had a much higher probability of paying off.
yoshiharu
Posts: 56
Joined: Sat Nov 11, 2006 11:14 pm

Re: Easy Moves

Post by yoshiharu »

Edmund wrote:My reasoning was that a low node count indicates a large score difference, as many beta-cutoffs and null-move prunings succeed.
So why don't you count fail-highs instead?
Yet, I think that some sort of additional search is needed to detect easy moves...

Cheers, mr
Daniel Shawul
Posts: 4185
Joined: Tue Mar 14, 2006 11:34 am
Location: Ethiopia

Re: Easy Moves

Post by Daniel Shawul »

Have you tried comparing the two methods to figure out where you get significant differences? You could log the node counts and easy-move info for a couple of games.

Code:

So I then tried a scaled reduction of the search time (ratio range: fraction of normal time):
50 - 100: 2/3
100 - 200: 1/2
200+: 1/4 
If the root-move qsearch scoring signals an easy move, do you start searching by allocating 1/4 of the time you would normally use? And then, at the end of that time, check whether the ratio is > 200, or between 100 and 200, etc., to decide whether to use more time?
An easy move should also be _really_ easy. The group of moves closest to them are singular moves; then come the normal moves. Does the ratio you mention (0 - 30) include the singular moves as well? If not, maybe you should increase the time factors to exclude the singular moves (say, by a delta of 50) from the easy moves?
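That three-way split could be sketched as follows (the thresholds here are purely illustrative):

```cpp
enum class MoveClass { Easy, Singular, Normal };

// Hypothetical classification of a root position by the best/second-best
// node-count ratio: singular moves (one clearly best move) are separated
// from truly easy ones (e.g. a forced recapture) by a delta of ~50.
MoveClass classify(double ratio) {
    if (ratio >= 100.0) return MoveClass::Easy;
    if (ratio >= 50.0)  return MoveClass::Singular;
    return MoveClass::Normal;
}
```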
cheers
Daniel