Null-moves and zugzwang

Karlo Bala
Re: Null-moves and zugzwang

bob wrote:
Based on testing I have done, and am right now repeating, you can throw this idea out. It does not help one bit.

Tord Romstad wrote:
That depends on what you try to achieve: I am sure you are right that it does not help one bit from the perspective of practical playing strength. It does help in the sense that many simple positions which are never solved without zugzwang verification are solved reasonably fast with it. This is the whole point. I don't like having a program which is unable to find simple wins even when given an infinite amount of time. Given infinite time, a chess program should be able to play perfectly from any position, with no other code changes than increasing a few arrays.

Zugzwang verification is very cheap, both in terms of code complexity and playing strength. I wouldn't want to remove it even if it turned out to cost 5-10 Elo points, and in practice I believe the cost is much smaller. If it does turn out to be more expensive than I think, I would increase the depth reduction and/or the depth limit for zugzwang verification rather than removing it altogether.

Karlo Bala wrote:
I found that null move works better if I try verification first (as a condition for trying null move at all), because the verification search is cheaper than the null-move search. It is also useful as IID.

Dann Corbit wrote:
Doesn't the verification search give you the same data as the null-move search? IOW, if you perform a verification search, do you even need to perform the null-move search, or could you just use the verification search's return value instead?

Karlo Bala wrote:
No, I think not. First I try verification, which is basically IID with depth-6. If the score is >= beta, then and only then do I make a null move and try a null-move search with depth-4.

Karlo Bala wrote:
It works "almost" like double null move, but without funny tricks. And it does not have the problem at certain depths that double null move has. The drawbacks of double null move are described in Christophe Theron's post somewhere in the CCC archives.

Dann Corbit wrote:
Do you do null move for shallow searches also?

Yes. It is a recursive search (again like double null move), with the exception that the first ply of the shallow search is a simple search (like the NoNull search from Fruit). On the second ply it is again the usual search (verification IID + null move). I didn't test the idea too much because of a lack of resources, but I found that it eats fewer nodes than plain NMP (and it detects some zugzwang positions).

Best Regards,
Karlo Balla Jr.
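To make the verification-gated null move described above concrete, here is a minimal C sketch. It is not code from Karlo's engine: the function names, the fail-hard negamax framing, and the exact depth accounting are assumptions; only the depth-6 verification reduction and the depth-4 null-move reduction come from the posts.

Code:
/* Minimal sketch (not Karlo's actual code) of verification-gated
 * null-move pruning in a fail-hard negamax search.  quiesce(),
 * make_null_move() and unmake_null_move() are assumed to exist
 * in the host engine. */
typedef struct Position Position;

int  quiesce(Position *pos, int alpha, int beta);
void make_null_move(Position *pos);
void unmake_null_move(Position *pos);

#define VERIFY_REDUCTION 6   /* "verification ... basically IID with depth-6" */
#define NULL_REDUCTION   4   /* "NM search with depth-4"                      */

int search(Position *pos, int depth, int alpha, int beta)
{
    if (depth <= 0)
        return quiesce(pos, alpha, beta);

    if (depth > VERIFY_REDUCTION) {
        /* Verification first: a normal reduced-depth search of the
         * current position.  It searches real moves, so it doubles
         * as IID, filling the hash table with good moves. */
        int verify = search(pos, depth - VERIFY_REDUCTION, beta - 1, beta);

        /* Only if the verification search fails high do we spend
         * nodes on the null move.  In a zugzwang position the real
         * moves tend to fail low, so the unsound null-move cutoff
         * is usually never even attempted there. */
        if (verify >= beta) {
            make_null_move(pos);
            int null_score = -search(pos, depth - NULL_REDUCTION,
                                     -beta, -beta + 1);
            unmake_null_move(pos);
            if (null_score >= beta)
                return null_score;            /* null-move cutoff */
        }
    }

    /* ... normal move loop elided ... */
    return alpha;
}

The "recursive ... like double null" remark suggests the reduced searches apply the same scheme recursively, except that the first ply of the shallow search runs without null move (like Fruit's NoNull search); that detail is omitted from the sketch.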
adieguez
Re: Null-moves and zugzwang
Hi, thanks for bringing up this issue. I had forgotten about testing this value, and when I hadn't, I underestimated it. It looks like it is worth an overnight run or two.

But how does the time overflow work in Crafty during an iteration? If you can stop searching after examining only one or two moves, then I expect your gain will of course be bigger. I remember from another thread that you stop searching right away, but I am not sure.
bob wrote:
Note, I did find a nice +10 Elo change to Crafty's timing that I will explain. I used to have the "time overflow mode" that the Deep Blue guys referred to as "panic time". This allowed a variable amount of extra time depending on whether the score dropped and by how far. I was using a 0.25-pawn threshold to trigger extra time in Crafty: if the current score is 0.25 pawns or more worse than the score from the previous iteration, I would allow 2x longer in an effort to find a better move. Often there is none, but the iteration ends very quickly in those cases, and I do not start a new iteration once I pass the target time anyway. The change was simply setting that threshold to zero. If the current score is worse than the score from the previous iteration, by _any_ amount, I allow extra time.

Here's why it works, after I studied it a bit. If you use that 0.25 cutoff, you are basically saying "I consider any score within 0.25 of the last score to be acceptable and equivalent." I carefully went through a few games, and one example that stood out was a series of moves where the score dropped by 0.2, then by 0.1, then by 0.2 again, and now I am a half-pawn down and still using the normal target time.

I made this simple change (setting the threshold to zero) and ran cluster tests to see what happened. It was a clear +10 Elo benefit whether the games had zero increment or not, and whether the games were longer or shorter. I will try to look at the code, but I believe that I now allow 5x the time to improve the score if it drops by _any_ amount.

I know what you are thinking: "Wait, won't this burn a lot of time?" The answer is "no", because if the score drops and you can't prevent it, then once you have searched the first move to get that score, the rest go by in a flash and you use almost no extra time anyway. The current version (23.1) is now at +25 over 23.0; this change represents +10, and some forward pruning and other changes account for the rest...

And remember, of course, this +25 is not a guess. It is based on millions of games. Ditto for the +10 for the time overflow change.
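For concreteness, the threshold change bob describes can be captured in a few lines. This is only a sketch, not Crafty's actual code: the function and variable names are invented, and only the 0.25 -> 0 threshold change and the roughly 5x extension come from the post.

Code:
/* Sketch (not Crafty's code) of the "panic time" rule described
 * above.  Scores are in centipawns; times are in milliseconds. */

#define DROP_THRESHOLD    0   /* was 25 (0.25 pawns) before the change */
#define PANIC_TIME_FACTOR 5   /* "5x the time" from the post           */

/* Time budget for the current iteration, given the best score from
 * the previous iteration and the best score found so far. */
long iteration_time_limit(long target_time,
                          int prev_iteration_score,
                          int current_score)
{
    int drop = prev_iteration_score - current_score;

    /* With the threshold at zero, any drop at all triggers the
     * extended budget; with the old value of 25, drops of up to a
     * quarter pawn were silently accepted. */
    if (drop > DROP_THRESHOLD)
        return target_time * PANIC_TIME_FACTOR;
    return target_time;
}

As the post explains, this rarely burns extra time in practice: if the drop is unavoidable, the remaining root moves fail low almost instantly once the first move has been searched, and a new iteration is never started once the unextended target time has passed.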