Verification search
Moderators: hgm, Rebel, chrisw

lucasart wrote: Just tried it, and it's proven completely useless. Result after 500 games in 6"+0.1" (average NPS on my hardware around 1M):

Code: Select all
Score of test vs DiscoCheck 3.5.1: 152 - 149 - 199 [0.50] 500

I wonder if it's normal, and if others have come to the same conclusion, or if I'm doing something wrong. Basically I'm doing a verification search at high depths (depth >= 7) when the null move fails high.

If you avoid null move when the hash table shows a fail low, you have frequently eliminated the cases where verification helps.
Re: Verification search (lucasart)
lucasart wrote: * the piece precondition: only do a null move when you have at least a piece on the board. Well, even that is almost useless. Adding that condition was unmeasurably better (better, but below the error bar). I kept it anyway, but perhaps it could be further improved, like at least one piece, or not in a king opposition (i.e. king dist != 2). Even in K+P endgames, zugzwang is more an exception than a general problem. And not doing null move misses a lot of good moves that would be found deeper in the search, which has to be weighed against the zugzwang improvement.

After some testing, I've come to the conclusion that:

1/ null move precondition
1.1 doing null move all the time is fine
1.2 not doing it when we have no pieces left is very slightly better
1.3 doing it all the time, except when we have no pieces left and the kings are in direct opposition, is very slightly better than 1.2
So 1.3 is what I do now.

2/ verification search
2.1 doing it all the time hurts
2.2 doing it in all positions but only at high depths still improves nothing; it probably hurts, but as Robert Hyatt said, the overhead is just reduced
2.3 doing it only when the side to move has no pieces (assuming 1.3 as the precondition) doesn't show an improvement either

So I am very skeptical about this verification-search hype. And I tested at 12"+0.2", reaching average depths of 11 in the middle game, so I doubt it becomes useful at long time controls...
Re: Verification search (diep)
lucasart wrote: I think this whole zugzwang problem is vastly exaggerated. People seem to attach far too much attention to this post-null-move hype.
bob wrote: I personally tested this several times, with 30K-game matches, and I found absolutely no case where it helped, and lots of cases where it hurt (you can limit the overhead by limiting the depth where you do it, of course)...
After testing I discovered the following:
* verification search is completely useless. Elo-wise, of course; meaning that there are perhaps some positions in which it helps, but overall it doesn't help.
* the piece precondition (only do a null move when you have at least a piece on the board): even that is almost useless. Adding the condition was unmeasurably better (better, but below the error bar). [...]
* when null move fails low, do a mate-threat extension (i.e. search for a mate at null-reduced depth from the opponent's perspective): I saw that in the wiki, and found it to be not just useless but actually harmful. There is however the connected-move trick by Tord Romstad, which is quite a smart idea. I haven't experimented with it, and wonder how much Elo it adds to SF.

If you expect to lose nearly all games somewhere in the first few moves of the opening already, then obviously zugzwang detection won't help you much. But if you expect to survive against the strongest engines, and intend not to lose in the endgame somewhere because of a silly zugzwang, then using double nullmove or verification search is quite a clever idea.

I'm using double nullmove. Cheaper than verification search.
Re: Verification search (Don)
lucasart wrote: Just tried it, and it's proven completely useless. Result after 500 games in 6"+0.1" (average NPS on my hardware around 1M). [...] Basically I'm doing a verification search at high depths (depth >= 7) when the null move fails high.

It's very program specific. It MAY work, it may not. Komodo has used it off and on.

Also to be considered is how you set it up: at which depths you start using it, and how much you reduce. How it's set up might determine whether it helps or not. We do it in Komodo and it appears to be a small gain for us, but nothing to get excited about.

Capital punishment would be more effective as a preventive measure if it were administered prior to the crime.
Re: Verification search (lucasart)
Don wrote: Also to be considered is how you set it up: at which depths you start using it, and how much you reduce. How it's set up might determine whether it helps or not. We do it in Komodo and it appears to be a small gain for us, but nothing to get excited about.

DiscoCheck has some rather unusual preconditions for null move now.

1/ I do not have a minimum depth condition (I used to, and removed it because testing proved it to be harmful):
Code: Select all
// Null move pruning
if (UseNull && !is_pv
&& !in_check && !is_mate_score(beta)
&& (current_eval >= beta || depth <= NullReduction(depth))
&& B->st->piece_psq[B->turn])
{
const int new_depth = depth - NullReduction(depth);
play(B, NoMove);
int score = -search(B, -beta, -alpha, new_depth, ply+1, false, si+1);
undo(B);
if (score >= beta) {
// null search fails high
return score < mate_in(MAX_PLY)
? score // fail soft
: beta; // *but* do not return an unproven mate
}
}
2/ Another peculiarity is the following condition:

Code: Select all
&& (current_eval >= beta || depth <= NullReduction(depth))

The idea of doing a null move only when eval >= beta is of course fairly standard these days, and came from (or was popularized by?) Fruit. One of its aims is to avoid two consecutive null moves; I thought that if depth <= reduction, the reduced depth brings us into the QS, so there's no risk of that.

As a result of 1/ and 2/, I do null moves quite aggressively at shallow depths. As for the reduction function, it is defined as follows:
Code: Select all
static inline int NullReduction(int depth) { return max(4, depth/2); }
Re: Verification search (Don)
lucasart wrote: The idea of doing a null move only when eval >= beta is of course fairly standard these days, and came from (or was popularized by?) Fruit.

My programs have always had this condition for as long as I have done null move pruning. In fact, I never considered doing it any other way until I heard that Crafty (many years ago) always did null move. So I tried it, and it was clear that it didn't work for my programs. But if I used that condition, I am sure other programs did too.
Re: Verification search (Miguel)
lucasart wrote: The idea of doing a null move only when eval >= beta is of course fairly standard these days, and came from (or was popularized by?) Fruit.

Every time someone says "this came from Fruit" I try to check Phalanx. I just did, and the concept is there.

lucasart wrote: One of the aims of it being to avoid two consecutive null moves, I thought that if depth <= reduction, the reduced depth brings us into the QS, so there's no risk of that.

It does not work that way if you have a "side to move" bonus. It is actually useful to avoid running a nullmove search when it will most likely fail.

Miguel
Re: Verification search (diep)
lucasart wrote: 2/ verification search
2.1 doing it all the time hurts
2.2 doing it in all positions but only at high depths still improves nothing [...]
So I am very skeptical about this verification-search hype. And I tested at 12"+0.2", reaching average depths of 11 in the middle game, so I doubt it becomes useful at long time controls...

If verification search hurts, may I suggest trying double nullmove?
It's less overhead and still detects a few zugzwangs, especially the simple ones; meanwhile the hash tables favour you and usually give a cutoff when it isn't a zugzwang anyway.

BTW, a reduction factor of 4 for nullmove sounds a tad huge for the last few plies. Is that excluding the additional ply of depth? Normal nullmove is defined as:

nullscore = -search(-beta, -beta+1, depth - R - 1);

So there is an additional '-1' on top of R. Is the R=4 posted here for the last few plies including that -1 or excluding it? In short: is it the total reduction, or the reduction factor?

The condition eval >= beta is not new. Most were already using it in the mid-90s.

In Diep I'm doing nothing of that. I always nullmove; I just forbid the 3rd consecutive nullmove, which is what defines double nullmove.

I'm using R=3. Realize that this already gets R=4 in too many positions if you use reductions. Just reducing a search by 5 plies is really a lot.

You typically see that the modern engines which get immense search depths pick up most things very quickly in the first 20 plies. Moving from 20 to 40 plies is really thin search and hardly adds more understanding to the engine. They still pay a branching factor of just under 2.0 per iteration, meanwhile not getting the Elo win one would classically expect from winning another ply.

So tactics that a 21-ply nominal search (just normal alpha-beta with a check-evasion extension and nothing else) would pick up, they need 30-40 plies to find, while Diep, Hiarcs and DeepJunior find them at 17-21 plies or so. That's a few minutes for Diep, twenty minutes or so for Hiarcs, and twenty-one or so for DeepJunior; but if you need 40 plies with a super-thin-searching engine, you need 24 hours or so. You still pay some branching factor every iteration to get deeper.

Saying one tested it at bullet levels doesn't impress there, as you don't reach those depths at bullet. So one should carefully test at slower time controls whether the huge reduction factors used at the depths one reaches there really work.

If you would, for example, move R up from 3 to 5 at depthleft 20, I can understand it; but if you do depth/2, that means a reduction factor of 9, or even 10 if the +1 is counted. It's not gonna add Elo, you know.
You see typically Houdini picks up massive elo by not needing a too extreme search depth there either to find it. It needs a minute or 90 for such positions. Basically what Houdini doesn't find within 3 minutes, it's going to find slower than Diep,Hiarcs,DeepJunior it seems.
Now the evaluation of all those rybka type clones is similar if not the same, so that ain't the difference. If you compare it with Ivanhoe, ivanhoe is 100% the same, yet needs factor 4 more in time to find the same best moves.
The thing where the thin mainline checkers of course really do well is 'avoid moves'. Like the classical Qxb2?? and Qxg2?? in some positions. I do notice that above 20 ply this advantage has gone as well. It's just a blitz advantage there.
Re: Verification search (lkaufman)
diep wrote: If you compare it with Ivanhoe: Ivanhoe is 100% the same, yet needs a factor of 4 more time to find the same best moves.

How is it possible that Ivanhoe and Houdini are "100% the same, yet (Ivanhoe) needs factor 4 more in time..."? Can you offer any theory that could explain this paradox? I think on average the factor is about 2, not 4; but still, the paradox remains.
Re: Verification search (diep)
lkaufman wrote: How is it possible that Ivanhoe and Houdini are "100% the same, yet (Ivanhoe) needs factor 4 more in time..."? Can you offer any theory that could explain this paradox?

Evaluation is 100% the same; search is slightly different. It's not a paradox: just a small change in search can totally cripple you, or make you better.

Ivanhoe searches 2 plies deeper than Houdini 1.5a does here, yet it needs a factor of 4 more TIME to find, on average, deep and difficult-to-find moves. Usually deep tactics. That factor of 4 has nothing to do with branching factor: Ivanhoe actually requires, easily, 4-6 plies more than Houdini on average.

Ivanhoe versus Houdini is the perfect example showing that just forward-pruning everything in order to get another 2 plies deeper isn't the holy grail of computer chess.