Futility reductions

Discussion of chess software programming and technical issues.

Moderator: Ras

j.t.
Posts: 268
Joined: Wed Jun 16, 2021 2:08 am
Location: Berlin
Full name: Jost Triller

Futility reductions

Post by j.t. »

The idea is to generalize futility pruning to all nodes (not only depth == 1, ..., depth == constant). The amount of reduction depends on how much staticEval + see(move) differs from alpha (SEE is applied to quiet as well as tactical moves).
In Nalwald I calculate the reductions like this (value is alpha - staticEval - see(move)):

Code:

func futilityReduction(value: Value): Ply =
    ## value = alpha - staticEval - see(move); returns the depth reduction in plies
    if value < 150: return 0.Ply
    if value < 200: return 1.Ply
    if value < 300: return 2.Ply
    if value < 500: return 3.Ply
    if value < 750: return 4.Ply
    if value < 1050: return 5.Ply
    if value < 1400: return 6.Ply
    7.Ply
If, after applying the reduction, the new depth hits 0, I just skip the move and don't do quiesce (like with normal futility pruning).
Conditions for this are similar to conditions necessary for futility pruning (no check, ...).
Seems to give about 20 Elo (maybe even more) compared to an equivalent futility pruning version.
I didn't find this described anywhere, so maybe this helps anyone who's looking for a few Elo points.
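Schematically, it plugs into the move loop like this (a simplified sketch, not the exact Nalwald code; okToReduce, doMove, and legalMoves are placeholder names):

Code:

for move in position.legalMoves:
    if okToReduce(position, move):  # same conditions as for futility pruning: not in check, ...
        let newDepth = depth - futilityReduction(alpha - staticEval - see(move))
        if newDepth <= 0.Ply:
            continue  # skip the move entirely, don't even drop into quiesce
        value = -search(position.doMove(move), alpha = -beta, beta = -alpha, depth = newDepth - 1.Ply)
    else:
        value = -search(position.doMove(move), alpha = -beta, beta = -alpha, depth = depth - 1.Ply)
    # ... usual alpha-beta bookkeeping with `value` follows ...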
amanjpro
Posts: 883
Joined: Sat Mar 13, 2021 1:47 am
Full name: Amanj Sherwany

Re: Futility reductions

Post by amanjpro »

This probably makes a lot of sense, will try it out... I never feel comfortable with too much pruning based on some static eval
hgm
Posts: 28381
Joined: Fri Mar 10, 2006 10:06 am
Location: Amsterdam
Full name: H G Muller

Re: Futility reductions

Post by hgm »

I suppose you do these reductions in an LMR-like way, i.e. revoke them when the move turns out to score above alpha after all?

It does make some sense to do this, but I don't expect it to be optimal. Moves that leave you a Rook down can still be 'obviously good' moves, e.g. when a Knight forks King and Queen. By reducing those 4 ply, you would make the engine blind against such tactics in the last 5 ply of nominal depth.

To guard against that you could apply the reduction in the null-move reply instead, i.e. reduce the null move 4 extra ply if the static eval is more than 500 above beta. If the null move then fails low, the move was not as futile as the static eval after it suggested, as it apparently delivered a threat worth more than 500 cP, which gives the lagging side an extra tempo to get even. So there seems little cause for reducing it, and indeed it wouldn't be reduced: only the null move was reduced, and the alternatives you will be searching after the null move failed low do not suffer any reduction.
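A rough sketch of what I mean (placeholder names such as baseNullReduction and doNullMove; reusing the futilityReduction table from the first post would, e.g., give 4 extra ply for a margin between 500 and 750):

Code:

if nullMoveAllowed and staticEval >= beta:
    # reduce the null move more the further the static eval is above beta
    let extraReduction = futilityReduction(staticEval - beta)
    let nullScore = -search(position.doNullMove(),
                            alpha = -beta, beta = -beta + 1,
                            depth = depth - 1.Ply - baseNullReduction - extraReduction)
    if nullScore >= beta:
        return nullScore
    # a fail low despite the big margin means there is a real threat;
    # the regular moves are then searched without any extra reduction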
Rebel
Posts: 7381
Joined: Thu Aug 18, 2011 12:04 pm
Full name: Ed Schröder

Re: Futility reductions

Post by Rebel »

j.t. wrote: Tue Jul 06, 2021 1:00 am The idea is to generalize futility pruning to all nodes (not only depth == 1, ..., depth == constant). […]
Consider what HGM said.

Also, do you do this recursively?

In a tactical position, in a 9 ply search many root moves may end up being reduced by 2 plies 3 times.
90% of coding is debugging, the other 10% is writing bugs.
j.t.
Posts: 268
Joined: Wed Jun 16, 2021 2:08 am
Location: Berlin
Full name: Jost Triller

Re: Futility reductions

Post by j.t. »

hgm wrote: Tue Jul 06, 2021 7:33 am I suppose you do these reductions in an LMR-like way, i.e. revoke them when the move turns out to score above alpha after all?
Yes, I think this makes sense for all kinds of reductions.
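Concretely, that is the same re-search pattern as for LMR (a sketch; newPosition and newDepth as in the first post):

Code:

# search with the reduction first; if the move unexpectedly beats alpha,
# re-search at the unreduced depth before trusting the score
var value = -search(newPosition, alpha = -beta, beta = -alpha, depth = newDepth - 1.Ply)
if value > alpha and newDepth < depth:
    value = -search(newPosition, alpha = -beta, beta = -alpha, depth = depth - 1.Ply)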
Rebel wrote: Tue Jul 06, 2021 12:22 pm Also, do you do this recursively?
Yes, and to be honest, I haven't thought about this. This may be something I can tinker with, as well as the null move idea from hgm.
Maybe this could even be done independently: Adjust the null move reduction based on static eval.
Pio
Posts: 338
Joined: Sat Feb 25, 2012 10:42 pm
Location: Stockholm

Re: Futility reductions

Post by Pio »

j.t. wrote: Tue Jul 06, 2021 1:00 am The idea is to generalize futility pruning to all nodes (not only depth == 1, ..., depth == constant). […]
This looks interesting! Even though it is very aggressive, it cuts the search tree a lot, so maybe it can reach those last six plies very quickly.

It would be very interesting to see what results you would get if you changed your futility thresholds to:

200
283
346
400
447
490
529

The reason behind these numbers is that I assume the evaluation drifts ±delta at each ply, so it behaves like a Brownian motion whose expected deviation grows like sqrt(depth).
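Concretely, the values are 200·sqrt(n) rounded to the nearest centipawn:

Code:

import std/math

# 200 * sqrt(n) for n = 1..7, matching the sqrt(depth) drift assumption
for n in 1 .. 7:
    echo int(round(200.0 * sqrt(n.float)))  # 200, 283, 346, 400, 447, 490, 529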

Many thanks!
/Pio
j.t.
Posts: 268
Joined: Wed Jun 16, 2021 2:08 am
Location: Berlin
Full name: Jost Triller

Re: Futility reductions

Post by j.t. »

Pio wrote: Tue Jul 06, 2021 1:28 pm It would be very interesting to see what results you would get if you changed your futility thresholds to: […]
I tried these, unfortunately they don't seem to work in Nalwald. Before, it won 56.6% against an earlier version; with these values, only 54.5%. Though I am surprised that it isn't worse, to be honest, considering how aggressive this variant is (e.g. reduce depth by 6 plies if the difference between alpha and staticEval + see(move) is only ~500 centipawns).
j.t.
Posts: 268
Joined: Wed Jun 16, 2021 2:08 am
Location: Berlin
Full name: Jost Triller

Re: Futility reductions

Post by j.t. »

Stopping recursive application of futility reductions doesn't seem to work.
I tried strictly stopping recursive application, and also a variant with an exception when the resulting newDepth is 0 (which would be the equivalent of normal futility pruning).
Pio
Posts: 338
Joined: Sat Feb 25, 2012 10:42 pm
Location: Stockholm

Re: Futility reductions

Post by Pio »

j.t. wrote: Tue Jul 06, 2021 4:42 pm I tried these, unfortunately they don't seem to work in Nalwald. Before, it won 56.6% against an earlier version; with these values, only 54.5%. […]
Thank you for the test! Yes, I am also surprised my values were not worse than they were, since they are so aggressive. But it was an interesting test. What would happen if you did the same as you do for ply 0, but for higher depths did reduced-depth searches? What I had in mind was, for those really bad positions, with the same margins as you have (a sketch follows the list):

Depth 0 do the same as you do
Depth 1 return QS
Depth 2 and 3 return depth 1 search plus QS
Depth 4 and 5 return depth 2 search plus QS
Depth 6 and 7 return depth 3 search plus QS
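Read literally, that maps the remaining depth to about half of it; a minimal sketch (reducedDepth is a hypothetical helper, not engine code):

Code:

func reducedDepth(depth: int): int =
    # 1 -> 0 (QS only), 2..3 -> 1, 4..5 -> 2, 6..7 -> 3
    depth div 2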
j.t.
Posts: 268
Joined: Wed Jun 16, 2021 2:08 am
Location: Berlin
Full name: Jost Triller

Re: Futility reductions

Post by j.t. »

Pio wrote: Tue Jul 06, 2021 6:32 pm What would happen if you do the same as you do for ply 0 but for higher depths do reduced depth searches? […]
I am not sure I understand you correctly. If the newDepth (newDepth = depth - futilityReduction(alpha - staticEval - see(move))) is equal to one, then this move already jumps directly into quiesce, because I call the search with value = -search(position = newPosition, alpha = -beta, beta = -alpha, depth = newDepth - 1), so the next node has depth == 0.
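For example, at depth == 5, a move with alpha - staticEval - see(move) == 800 gets a reduction of 5 plies, so newDepth == 0 and the move is skipped; with a value of 600 the reduction is 4 plies, newDepth == 1, and the child node is searched with depth == 0, i.e. it goes straight into quiesce.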