Null move pruning = lottery?

Discussion of chess software programming and technical issues.

Moderators: hgm, Rebel, chrisw

OliverBr
Posts: 725
Joined: Tue Dec 18, 2007 9:38 pm
Location: Munich, Germany
Full name: Dr. Oliver Brausch

Re: Null move pruning = lottery?

Post by OliverBr »

mhouppin wrote: Sat Jul 18, 2020 11:11 pm
Two conclusions from here: Null Move Pruning is not the factor at fault here (it was actually added in version 12!); it's risky, but an engine without it will struggle to break the 2000 Elo bar. And one should never test new search/eval features solely on position solving, but rather make the engine play a lot of matches with/without the feature and draw conclusions from that. Test positions are good for sanity checking, but not for measuring engine progress.
Of course the concept of "Null Move Pruning" is not the issue; it's actually OliThink's "special" Null Move Pruning. It's very aggressive (R = 8 is very common), and I found out that this causes those problems.

Now comes the funny part: if I reduce R, this very position becomes much more stable, but the overall performance does not improve.
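
(For reference, here is roughly where that R enters a null-move search. This is a minimal sketch with a hypothetical interface and the fixed R = 8 mentioned above; it is not OliThink's actual code.)

Code:

// Minimal null-move pruning skeleton (hypothetical interface, illustration only).
struct Position;                                // engine-specific board type (assumed)
int  qsearch(Position&, int alpha, int beta);   // quiescence search (assumed)
bool in_check(const Position&);                 // assumed helper
void make_null(Position&);                      // play a "pass" move (assumed)
void undo_null(Position&);

int search(Position& pos, int alpha, int beta, int depth)
{
    if (depth <= 0)
        return qsearch(pos, alpha, beta);

    const int R = 8;     // static, very aggressive reduction, as described above
    if (!in_check(pos)) {            // real engines also require non-pawn material (zugzwang)
        make_null(pos);              // give the opponent a free move
        // Search the reply at a depth reduced by R; with R = 8 this often
        // drops straight into quiescence and can overlook short tactics.
        int score = -search(pos, -beta, -beta + 1, depth - 1 - R);
        undo_null(pos);
        if (score >= beta)
            return beta;             // "doing nothing" already refutes the node
    }

    // ... normal move loop would follow here ...
    return alpha;
}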
Chess Engine OliThink: http://brausch.org/home/chess
OliThink GitHub: https://github.com/olithink
Dann Corbit
Posts: 12538
Joined: Wed Mar 08, 2006 8:57 pm
Location: Redmond, WA USA

Re: Null move pruning = lottery?

Post by Dann Corbit »

Most of the engines with really deep R pruning use a function to determine R, most of the time based on the difference in score and the current depth in plies. I like to add mobility, so that in extremely cramped positions (e.g. you have only three possible move choices) R stays tiny, and in extremely mobile positions (e.g. you have 100 candidate moves) you can prune a bit more freely. I use an S-curve so that most positions are not affected. It does not improve Elo (or hurt it), but the effect it has is that certain rare mate positions are found that are not found with standard pruning. They are rare enough not to change Elo, but it is important to me.
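
(As an illustration of that idea: a sketch only, not the actual code, and all constants and the function name are invented. A base reduction from depth and eval margin, adjusted by a logistic S-curve over mobility, could look like this.)

Code:

#include <algorithm>
#include <cmath>

// Dynamic null-move reduction: a base term from depth and the eval margin,
// adjusted by a logistic S-curve over mobility.  All constants are invented
// for illustration.
int null_move_reduction(int depth, int evalMinusBeta, int mobility)
{
    // Base reduction grows slowly with depth and with how far eval is above beta.
    int base = 3 + depth / 6 + std::min(evalMinusBeta, 400) / 200;

    // S-curve mapped to roughly [-1, +1]: about -1 when the side to move is
    // cramped (e.g. 3 legal moves), about +1 when it is very mobile
    // (e.g. 100 moves), and ~0 for ordinary positions, so most nodes keep
    // the base reduction unchanged.
    double s = 2.0 / (1.0 + std::exp(-(mobility - 40) / 12.0)) - 1.0;

    int r = base + static_cast<int>(std::lround(s));
    return std::max(1, std::min(r, depth - 1));  // keep the reduced depth sane
}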

There are often a bunch of qualifications thrown at it before pruning (more than just wood count). Here are the tests SF applies before doing it:

Code:

    // Step 9. Null move search with verification search (~40 Elo)
    if (   !PvNode
        && (ss-1)->currentMove != MOVE_NULL
        && (ss-1)->statScore < 23824
        &&  eval >= beta
        &&  eval >= ss->staticEval
        &&  ss->staticEval >= beta - 33 * depth - 33 * improving + 112 * ttPv + 311
        && !excludedMove
        &&  pos.non_pawn_material(us)
        && (ss->ply >= thisThread->nmpMinPly || us != thisThread->nmpColor))
    {
Taking ideas is not a vice, it is a virtue. We have another word for this. It is called learning.
But sharing ideas is an even greater virtue. We have another word for this. It is called teaching.
mhouppin
Posts: 115
Joined: Wed Feb 12, 2020 5:00 pm
Full name: Morgan Houppin

Re: Null move pruning = lottery?

Post by mhouppin »

OliverBr wrote: Sat Jul 18, 2020 11:30 pm
mhouppin wrote: Sat Jul 18, 2020 11:11 pm
Two conclusions from here: Null Move Pruning is not the factor at fault here (it was actually added in version 12!); it's risky, but an engine without it will struggle to break the 2000 Elo bar. And one should never test new search/eval features solely on position solving, but rather make the engine play a lot of matches with/without the feature and draw conclusions from that. Test positions are good for sanity checking, but not for measuring engine progress.
Of course the concept of "Null Move Pruning" is not the issue; it's actually OliThink's "special" Null Move Pruning. It's very aggressive (R = 8 is very common), and I found out that this causes those problems.

Now comes the funny part: if I reduce R, this very position becomes much more stable, but the overall performance does not improve.
Oh, then the static reduction value is probably to blame: it will work fine at high depths, but may lead to disastrous results at low depths (imagine a forced mate in 2 being pruned at depth 9...). It would probably be better to have a formula like R = 3 + (depth / 4), so that the reduction stays proportional to the current search depth of your program.
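
(For instance, and just as a sketch of that formula with a function name of my own choosing, the fixed R could be replaced by something like the following.)

Code:

// Depth-proportional null-move reduction, as suggested above (illustration only).
int null_move_R(int depth)
{
    return 3 + depth / 4;   // e.g. R = 4 at depth 4, R = 5 at depth 9, R = 8 at depth 20
}

// Used in place of a fixed R when searching the null-move reply:
//   int score = -search(pos, -beta, -beta + 1, depth - 1 - null_move_R(depth));
// At depth 9 this still leaves a depth-3 reduced search, which is enough to
// spot a forced mate in 2 that a fixed R = 8 would have pruned away.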