How much to reduce ?

Discussion of chess software programming and technical issues.

Moderator: Ras

Henk
Posts: 7251
Joined: Mon May 27, 2013 10:31 am

How much to reduce ?

Post by Henk »

What are currently normal or good values for the reduction amount in null move pruning (R) and in LMR? Or do they need to be complicated formulas, with logarithms and so on, depending on node type, position in the move list, and depth?
bob
Posts: 20943
Joined: Mon Feb 27, 2006 7:30 pm
Location: Birmingham, AL

Re: How much to reduce ?

Post by bob »

Henk wrote:What are currently normal or good values for the reduction amount in null move pruning (R) and in LMR? Or do they need to be complicated formulas, with logarithms and so on, depending on node type, position in the move list, and depth?
I use R=3 for null move, period. For LMR reductions, I use 1 or 2, period. I've not found any values > 2 that worked for me with reductions.
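A minimal sketch of the fixed scheme described above. The null-move R of 3 is as stated; the move-number thresholds deciding between a 1-ply and a 2-ply LMR reduction are illustrative assumptions, not Crafty's actual values:

```cpp
#include <algorithm>

// Fixed null-move reduction, as described: R = 3, always.
constexpr int NULL_MOVE_R = 3;

// Depth of the subtree searched after making a null move:
// depth - 1 - R, clamped at zero (i.e. drop into quiescence).
int nullSearchDepth(int depth) {
    return std::max(0, depth - 1 - NULL_MOVE_R);
}

// Fixed LMR amounts of 1 or 2 plies. The thresholds (4 and 12)
// are placeholder guesses for where 1 ply ends and 2 begins.
int simpleLmrReduction(int moveNumber) {
    if (moveNumber < 4) return 0;      // early moves searched at full depth
    return moveNumber < 12 ? 1 : 2;    // later moves reduced more
}
```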
jdart
Posts: 4428
Joined: Fri Mar 10, 2006 5:23 am
Location: http://www.arasanchess.org

Re: How much to reduce ?

Post by jdart »

I am reducing with R>3 if the static score is high and depth > 6 ply. Dann Corbit termed this "smooth scaling" (http://talkchess.com/forum/viewtopic.php?t=31361). I can reduce by fractional depth increments and typically do in this case.
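The "smooth scaling" idea above, with a reduction that grows with both remaining depth and the static-eval margin, might look something like the following sketch. The constants, the fractional-ply unit, and the function name are all illustrative assumptions, not Arasan's actual code:

```cpp
#include <algorithm>

// Fractional depth: 4 units per ply, so reductions can change in
// quarter-ply steps rather than whole plies.
constexpr int PLY = 4;

// Null-move reduction in depth units: base R = 3 plies, growing
// beyond that when the remaining depth exceeds 6 plies and when the
// static eval sits well above beta. All constants are illustrative.
int nullMoveReduction(int depthUnits, int staticEval, int beta) {
    int r = 3 * PLY;                              // base R = 3 plies
    if (depthUnits > 6 * PLY)
        r += (depthUnits - 6 * PLY) / 4;          // scale with remaining depth
    int margin = std::max(0, staticEval - beta);
    r += std::min(margin / 128, 3) * (PLY / 2);   // scale with eval margin, capped
    return r;
}
```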

Similarly for LMR I reduce by amounts that vary by depth and by move count. I don't remember the max reduction but it is >2 ply for sure. PV nodes reduce less + there are various criteria that disable reduction. I have experimented with various formulae but not found one that works better than what I currently have. See https://github.com/jdart1/arasan-chess/ ... search.cpp. Other programs though may reduce more.
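A depth- and move-count-dependent LMR formula of the kind described above is often written with logarithms, as several open-source engines do. This is a generic sketch of that shape, not Arasan's formula; the constants and the flat 1-ply PV discount are assumptions:

```cpp
#include <algorithm>
#include <cmath>

// Log-based LMR: the reduction grows slowly with both remaining depth
// and the move's position in the move list, and PV nodes reduce less.
// The 2.25 divisor, 0.5 offset, and PV discount are illustrative.
int lmrReduction(int depth, int moveCount, bool pvNode) {
    if (depth < 3 || moveCount < 2) return 0;     // too shallow / too early to reduce
    double r = 0.5 + std::log(depth) * std::log(moveCount) / 2.25;
    if (pvNode) r -= 1.0;                         // PV nodes reduce less
    return std::max(0, static_cast<int>(r));
}
```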

--Jon
lucasart
Posts: 3243
Joined: Mon May 31, 2010 1:29 pm
Full name: lucasart

Re: How much to reduce ?

Post by lucasart »

Stop trolling and do your homework.

It's not just how much to reduce but which moves to reduce; move ordering is crucial here.

You must experiment for yourself; no one can tell you what the right formula is for *your* engine.
Theory and practice sometimes clash. And when that happens, theory loses. Every single time.
bob
Posts: 20943
Joined: Mon Feb 27, 2006 7:30 pm
Location: Birmingham, AL

Re: How much to reduce ?

Post by bob »

jdart wrote:I am reducing with R>3 if the static score is high and depth > 6 ply. Dann Corbit termed this "smooth scaling" (http://talkchess.com/forum/viewtopic.php?t=31361). I can reduce by fractional depth increments and typically do in this case.

Similarly for LMR I reduce by amounts that vary by depth and by move count. I don't remember the max reduction but it is >2 ply for sure. PV nodes reduce less + there are various criteria that disable reduction. I have experimented with various formulae but not found one that works better than what I currently have. See https://github.com/jdart1/arasan-chess/ ... search.cpp. Other programs though may reduce more.

--Jon
I've tested it at length, but any value other than 3 does not help Crafty. And to answer your question before it is asked: I did try various depth limits to see whether higher reductions work better near the root as opposed to near the tips. Everything I tried actually hurt Elo in my cluster testing, and the same was true for different reduction amounts. But I am going to re-test this at 5'+5" time controls and try a few options; each test takes something like 24 hours, so it is very slow if you want to try 5 different limits.
Henk
Posts: 7251
Joined: Mon May 27, 2013 10:31 am

Re: How much to reduce ?

Post by Henk »

lucasart wrote:Stop trolling and do your homework.

It's not just how much to reduce but which moves to reduce; move ordering is crucial here.

You must experiment for yourself; no one can tell you what the right formula is for *your* engine.
I use the standard move ordering plus ratings for non-capturing moves that count the number of times they failed high. But these move ratings are too weak a signal to justify extra reductions. The only things I can rely on are the first move (the so-called best move, which comes from the depth-1 iteration or from the hash table) and the fact that after 25 moves have been tried, the chance of a fail high or a better move is close to zero.
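That last observation, that fail highs become vanishingly rare after 25 moves, translates into a simple late-move cutoff. This is a sketch, not Henk's code: the 25-move threshold comes from the post, while the low-depth guard and the exemptions for tactical moves are my assumptions:

```cpp
// Skip the remaining quiet moves once enough have been tried and the
// remaining depth is low. Checks and captures are never skipped.
bool skipLateMove(int moveNumber, int depth, bool inCheck, bool isCapture) {
    if (inCheck || isCapture) return false;   // tactical moves stay in
    return depth <= 4 && moveNumber >= 25;    // assumed depth guard, 25 from the post
}
```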
Henk
Posts: 7251
Joined: Mon May 27, 2013 10:31 am

Re: How much to reduce ?

Post by Henk »

jdart wrote:I am reducing with R>3 if the static score is high and depth > 6 ply. Dann Corbit termed this "smooth scaling" (http://talkchess.com/forum/viewtopic.php?t=31361). I can reduce by fractional depth increments and typically do in this case.

Similarly for LMR I reduce by amounts that vary by depth and by move count. I don't remember the max reduction but it is >2 ply for sure. PV nodes reduce less + there are various criteria that disable reduction. I have experimented with various formulae but not found one that works better than what I currently have. See https://github.com/jdart1/arasan-chess/ ... search.cpp. Other programs though may reduce more.

--Jon
If you reduce less at PV nodes, don't you get into trouble with hash-table lookups when the same position is encountered at a PV node but was stored and computed at a non-PV node? Or do you only consult the hash table outside the PV?
jdart
Posts: 4428
Joined: Fri Mar 10, 2006 5:23 am
Location: http://www.arasanchess.org

Re: How much to reduce ?

Post by jdart »

If you reduce less at PV nodes, don't you get into trouble with hash-table lookups when the same position is encountered at a PV node but was stored and computed at a non-PV node? Or do you only consult the hash table outside the PV?
I have a flag in the hash table that indicates whether the entry came from a PV or a non-PV node. On a hash hit, if a non-PV entry is fetched at a PV node, that entry's score is not used; only its hash move (if any) and check status (which I also store) are used.
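A sketch of the scheme described above, with an illustrative entry layout; the field names and sizes are assumptions, not Arasan's actual table format:

```cpp
#include <cstdint>

// One transposition-table entry, flagged with the node type it came from.
struct TTEntry {
    uint64_t key;         // position hash
    int16_t  score;
    uint16_t move;        // hash move
    uint8_t  depth;
    bool     fromPV;      // was this entry produced at a PV node?
    bool     givesCheck;  // check status, also stored per the post
};

// What a probe hands back to the search.
struct TTProbe {
    bool     useScore;    // may the stored score be trusted here?
    uint16_t move;        // the hash move is always usable
    bool     givesCheck;
};

// At a PV node, a non-PV entry's score is ignored; its move and
// check status are still used.
TTProbe probe(const TTEntry& e, bool pvNode) {
    TTProbe r;
    r.useScore   = !(pvNode && !e.fromPV);
    r.move       = e.move;
    r.givesCheck = e.givesCheck;
    return r;
}
```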

--Jon
lucasart
Posts: 3243
Joined: Mon May 31, 2010 1:29 pm
Full name: lucasart

Re: How much to reduce ?

Post by lucasart »

I never understood this business of treating PV and non-PV nodes differently. It makes no sense from a theoretical standpoint, and it never worked in my testing (except for things like razoring, futility pruning, and null move search, because PV nodes are typically not expected to fail low/high).

Even in Stockfish, I've experimented quite a bit in removing PV conditions one by one. None of these PV conditions proved to be useful in testing.

In particular, reducing PV and non-PV nodes differently never worked in my testing. The distinction that makes sense here is not PV vs. non-PV but Cut nodes vs. PV/All nodes. You can reduce Cut nodes a little more and squeeze a tiny bit of Elo out of it (don't expect too much, and I'm still suspicious about how well such hacks scale).
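The Cut-node suggestion can be sketched as a small adjustment on top of whatever base LMR value the engine already computes; the node-type enum and the exact +1 tweak are assumptions for illustration:

```cpp
// Expected-node classification in the usual alpha-beta sense.
enum NodeType { PV_NODE, CUT_NODE, ALL_NODE };

// Reduce expected Cut nodes one ply more than PV/All nodes,
// but never turn a non-reduction into a reduction.
int adjustedReduction(int baseReduction, NodeType nt) {
    if (baseReduction == 0) return 0;
    return nt == CUT_NODE ? baseReduction + 1 : baseReduction;
}
```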
Theory and practice sometimes clash. And when that happens, theory loses. Every single time.
Henk
Posts: 7251
Joined: Mon May 27, 2013 10:31 am

Re: How much to reduce ?

Post by Henk »

I removed killer moves and history statistics (move ratings counting the number of fail highs per (move, depth)) from my chess program. I realized they weren't helpful for deciding LMR reductions, and it looks like my program plays better without them. Perhaps only the first move and winning captures make any difference for move ordering in my program.

Less is more.