bob
Re: The many faces of Null Move Pruning
I don't disagree. And since I am one of those that tries null-move _everywhere_ first, that has a substantial cost, particularly when tried near the root of the tree. Best advice is to test it yourself to see what it does to your program. It might even work. But it certainly didn't for me...
hgm wrote:Well, I was just a bit surprised by this sudden relaxing of standards, as you were so emphatically arguing you never assumed anything without testing it. If there were a theoretical expectation that R=0 avoids some problems that R>0 might encounter, it could be an incentive. But let's discuss that below.
bob wrote:I tried R=3. R=2 was worse. R=1 was much worse. I suppose one needs to try R=0 and R=-1 and R=-2 to be thorough? Or one can logically infer that 2 is worse than 3, 1 is worse than 2, so there is no point in continuing? I touch something and burn my hand. I go back the next day and now this something has a slight red glow. I touch it and I get burned worse. I try tomorrow and now it is bright red. I get burned worse. Next day it is a dull white. I get burned worse. Next day it is a brilliant white glow. I get burned worse. Exactly when do I stop? Something tells me there is no point at which this thing gets hotter still but it suddenly won't burn me.
hgm wrote:Could you give me an example of a non-Zugzwang position where a null-move would fail high, but a double null move with R=0 on the second null move, would fail low?
No. The reason they were worse as R was reduced was that the size of the tree increased. Again, double-null absolutely does detect most zugzwangs. It also detects non-zugzwangs and calls them zugzwangs, which causes the important null-move search to fail low, and both the original null-move search and the second end up as wasted effort. And now I have to do a real (expensive) search rather than the cheaper null-search that really should have worked.
hgm wrote:My guess would be that R=1 etc. would perform poorly because you run into the horizon-effect problem that you pointed out before. R=0 should not have that, if the first null move would not already do it (in which case it would make little difference if you did the second one or not, as the other moves will suffer from this horizon effect even worse). So if I understand you correctly, this size increase was mainly due to the need for doing extra real searches when the first null move failed low, not because of the size of the second null-move searches themselves?
My testing was based on the size of the tree, and it went up in almost every case. Significantly. Which is the same as slowing the thing down. Significantly. I couldn't play as many games back then as I can today, but testing both on positions, and in real games produced worse results for me. Vincent swears by it. Perhaps for him it works well. But then again, I have seen this happen many times over the years. Programs are different. So they sometimes respond differently to a new idea.
I don't have any such FEN that I can find. But the basic idea is this: The first null-move will fail high because I am winning quite easily. I am not in zugzwang, but the "win" is very deep although my search sees it easily enough. But after I try the first null-move, my depth is now 3 plies shallower than it would be were I doing just a normal search in response to the null-move. And now I first try another null-move which further reduces the remaining depth enough that now I can't see any way for my opponent to beat me in this null-move search. I have "hidden" the forced loss from my search because I now don't have enough depth left to see it, so this null-move search returns "good" and the preceding null-move search now fails low. And I do a normal search and again find that this is a fail-high node because I can see deeply enough to find the tactical key to winning. Had the first null-move been allowed to run normally, it would have also failed high, but chopping off so many plies on the second null-move search changed things.
that was what I didn't like about the idea. When I tested, I found the following:
(a) I could try null-move searches much more aggressively, even in simple king and pawn endings, without significant risk of zugzwang (try that with KP vs K to see how badly null-move breaks that search). So that was a win, in that now I searched these endings far more deeply in most cases.
(b) my normal search slowed down, because most of the time the extra (second) null-move fails low, so we complete the normal null-move search, but now we have searched extra nodes that slows us down.
I did not spend a lot of time trying to "phase in" the double-null search as zugzwangs became more likely (that is the case where they help) because I could not find any reasonable way to determine when zugzwang was more likely. And if I could, I would just use the same rule and not do nulls there at all...
It was both. Logic says that the first null failing low hurts more because the resulting real search is far more expensive. The second (of the double pair) failing low (as expected) is also just unnecessary work that is done _everywhere_ as well. So when the second search is unnecessary (which is much more common, but I am not certain it adds up to as many nodes as are lost when the real null fails low) it does add up. I don't remember the numbers today, but I do remember that they were not the 1% or 5% type values, they were _much_ larger.
hgm wrote:Well, this is inherent in verification. Most of the time it is a waste of time. It must be justified from the rare cases when it comes up with something.
The problem is that the second R=0 search _must_ fail low most of the time, else the original NM search is worthless. That means you are just doing work for no gain, unless null-move is a bad idea everywhere.
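For readers following along, the trick being debated works like this: a second, consecutive null move is allowed, so a null-null pair brings the search back to the original position at reduced depth; if that re-search can no longer reach beta, the first null move fails low and a normal search is done instead, which is how real zugzwangs get caught. A minimal sketch in C of that recursion, assuming a fail-hard negamax; the helper names and the R=3 / R=0 reductions discussed above are placeholders, not any particular engine's code:
[code]
typedef struct position POSITION;           /* engine-specific, opaque here */

/* assumed helpers -- placeholders, not a real engine's API */
int  quiesce(POSITION *pos, int alpha, int beta);
int  in_check(const POSITION *pos);
void make_null(POSITION *pos);
void unmake_null(POSITION *pos);
int  search_moves(POSITION *pos, int alpha, int beta, int depth);

/* Negamax with null-move pruning that permits one null reply to a null
 * move ("double null move").  nulls = number of consecutive null moves
 * made so far; the normal move loop would call search(..., 0) again
 * after every real move.                                                */
int search(POSITION *pos, int alpha, int beta, int depth, int nulls)
{
    if (depth <= 0)
        return quiesce(pos, alpha, beta);

    if (nulls < 2 && !in_check(pos)) {
        /* First null gets the normal reduction, the second gets R=0, so
         * it only costs the one ply.  If the opponent's null reply fails
         * high, it means that back in the original position (at reduced
         * depth, with us to move again) we can no longer reach beta --
         * the zugzwang signature -- and the first null then fails low.  */
        int R = (nulls == 0) ? 3 : 0;

        make_null(pos);
        int score = -search(pos, -beta, -beta + 1, depth - 1 - R, nulls + 1);
        unmake_null(pos);

        if (score >= beta)
            return beta;                    /* fail-hard null-move cutoff  */
    }

    /* fall through to the ordinary move loop */
    return search_moves(pos, alpha, beta, depth);
}
[/code]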
hgm
Re: The many faces of Null Move Pruning
"Normal" here meaning "unreduced", I assume. But even with this first reduction, you still see the win.bob wrote:I don't have any such FEN that I can find. But the basic idea is this: The first null-move will fail high because I am winning quite easily. I am not in zugzwang, but the "win" is very deep although my search sees it easily enough. But after I try the first null-move, my depth is now 3 plies shallower than it would be were I doing just a normal search in response to the null-move.
Further reduce??? With R=0? What kind of zero is that?
bob wrote:And now I first try another null-move which further reduces the remaining depth enough ...
Yes, I can understand that last part. But the assumption is wrong. You have hidden nothing, as the second null move causes no extra reduction. This was my point all along. The only thing you did was to replace one of the moves of the opponent by a null move. I don't see how that could ever push back and hide your original win.
bob wrote:...that now I can't see any way for my opponent to beat me in this null-move search. I have "hidden" the forced loss from my search because I now don't have enough depth left to see it, so this null-move search returns "good" and the preceding null-move search now fails low. And I do a normal search and again find that this is a fail-high node because I can see deeply enough to find the tactical key to winning.
Well, zero is not "so many". In fact it is "nothing".
bob wrote:Had the first null-move been allowed to run normally, it would have also failed high, but chopping off so many plies on the second null-move search changed things.
Well, I am not surprised that those were your results with the second R>0. But it seems that reducing here is just a form of "wrong thriftiness". You are trying to save on something that already has nearly negligible cost, with the risk of having to pay very dearly.
bob wrote:that was what I didn't like about the idea. When I tested, I found the following:
(a) I could try null-move searches much more aggressively, even in simple king and pawn endings, without significant risk of zugzwang (try that with KP vs K to see how badly null-move breaks that search). So that was a win, in that now I searched these endings far more deeply in most cases.
(b) my normal search slowed down, because most of the time the extra (second) null-move fails low, so we complete the normal null-move search, but now we have searched extra nodes that slows us down.
I did not spend a lot of time trying to "phase in" the double-null search as zugzwangs became more likely (that is the case where they help) because I could not find any reasonable way to determine when zugzwang was more likely. And if I could, I would just use the same rule and not do nulls there at all...
Well, it still seems to me that the problem was R>0. With R=0 things like you describe cannot happen, so it is a pity you did not test that.
bob wrote:My testing was based on the size of the tree, and it went up in almost every case. Significantly. Which is the same as slowing the thing down. Significantly. I couldn't play as many games back then as I can today, but testing both on positions, and in real games produced worse results for me. Vincent swears by it. Perhaps for him it works well. But then again, I have seen this happen many times over the years. Programs are different. So they sometimes respond differently to a new idea.
You are talking now about total node increase, or contribution of nodes from second-null-move searches?
bob wrote:It was both. Logic says that the first null failing low hurts more because the resulting real search is far more expensive. The second (of the double pair) failing low (as expected) is also just unnecessary work that is done _everywhere_ as well. So when the second search is unnecessary (which is much more common, but I am not certain it adds up to as many nodes as are lost when the real null fails low) it does add up. I don't remember the numbers today, but I do remember that they were not the 1% or 5% type values, they were _much_ larger.
It seems hard to believe that the latter would ever add up to such a number if you reduce them, as you cannot have a second null-move search without having a first one. And in every case that first search will be much deeper.
Could this be related to you trying a null-move search always, also when CurEval < Beta? In such situations the first null move will usually fail low, but you do them in the hope to learn something from them (killers, history). But if you allow the opponent to refute them by a second null move you might learn nothing from it, as you are faced with the original position.
bob
Re: The many faces of Null Move Pruning
Probably not very clear writing there. But yes, that's what I meant. After the first null + reduction I would be able to see the win, but after the second with its reduction, the win is now hidden and this one fails high, so the important null move search fails low and now I am left to do a regular search to normal depth, and it too is going to fail high, but with a _lot_ more effort expended than if the original NM search had failed high as it should have.
hgm wrote:"Normal" here meaning "unreduced", I assume. But even with this first reduction, you still see the win.
bob wrote:I don't have any such FEN that I can find. But the basic idea is this: The first null-move will fail high because I am winning quite easily. I am not in zugzwang, but the "win" is very deep although my search sees it easily enough. But after I try the first null-move, my depth is now 3 plies shallower than it would be were I doing just a normal search in response to the null-move.
I never mentioned using R=0. That was your idea. In my example above but using R=0, the R=0 search would presumably fail low, at a high effort level, so that the original null-move would fail high. I'm not sure what I gained as a normal search (which would also be R=0) is also going to have to be completed, unless this is a true zugzwang position, and I am really increasing the search effort.
hgm wrote:Further reduce??? With R=0? What kind of zero is that?
bob wrote:And now I first try another null-move which further reduces the remaining depth enough ...
So, there are two cases:
I reach ply=N and do a null-move search reduced by 3 plies. I go to ply=N+1 and do a null-move search reduced by 0. It fails low. So now I have to try the normal moves at this same ply and they all fail low. I return to the previous ply, the null move search fails high and we are off and running. Except that I did that extra branch at N+1 first, and it gave me nothing, which is what happens most of the time. So in this approach, I have searched one tree to normal depth (after the second null) that had some fixed cost and told me nothing.
I reach ply=N and do a null-move search reduced by 3 plies. I go to ply=N+1 and do a null-move search reduced by 0. Since this is a real zugzwang position, this search also fails high, and I return to the previous ply and the null-move search fails low and we search normally avoiding the zugzwang.
So the question is, does that extra null-move R=0 search hurt or help? In the case of a zugzwang it obviously helps. But since I try a null-move search at _every_ node in the tree, before trying anything else, I am going to be trying that second NM search with R=0 at every node in the tree, and most are going to fail low and simply be extra work done. Intuition says that if R=1 and R=2 failed in this test, R=0 is worse except that it doesn't hide the deep tactics as badly. But the cost is so high it might well cost me a ply overall and that could be worse.
The R=0 search has an extreme cost when the normal R=3 search is done at _every_ node in the tree already. Now we do one of these for every normal R=3 null-move search we do. And we quickly add up extra search overhead in a program with a normal branching factor of 2.0 or so. Double my nodes cuts me by one ply, which is a heavy cost.
hgm wrote:Yes, I can understand that last part. But the assumption is wrong. You have hidden nothing, as the second null move causes no extra reduction. This was my point all along. The only thing you did was to replace one of the moves of the opponent by a null move. I don't see how that could ever push back and hide your original win.
bob wrote:...that now I can't see any way for my opponent to beat me in this null-move search. I have "hidden" the forced loss from my search because I now don't have enough depth left to see it, so this null-move search returns "good" and the preceding null-move search now fails low. And I do a normal search and again find that this is a fail-high node because I can see deeply enough to find the tactical key to winning.
hgm wrote:Well, zero is not "so many". In fact it is "nothing".
bob wrote:Had the first null-move been allowed to run normally, it would have also failed high, but chopping off so many plies on the second null-move search changed things.
hgm wrote:Well, I am not surprised that those were your results with the second R>0. But it seems that reducing here is just a form of "wrong thriftiness". You are trying to save on something that already has nearly negligible cost, with the risk of having to pay very dearly.
bob wrote:that was what I didn't like about the idea. When I tested, I found the following:
(a) I could try null-move searches much more aggressively, even in simple king and pawn endings, without significant risk of zugzwang (try that with KP vs K to see how badly null-move breaks that search). So that was a win, in that now I searched these endings far more deeply in most cases.
(b) my normal search slowed down, because most of the time the extra (second) null-move fails low, so we complete the normal null-move search, but now we have searched extra nodes that slows us down.
I did not spend a lot of time trying to "phase in" the double-null search as zugzwangs became more likely (that is the case where they help) because I could not find any reasonable way to determine when zugzwang was more likely. And if I could, I would just use the same rule and not do nulls there at all...
I can certainly run the test to see what happens, it is not that hard to handle the double-null-move case...
The problem was that R=2 was more expensive than R=3. R=1 was more expensive than R=2. R=0 is even more expensive than R=1. The only edge is that the R=0 search doesn't hide any tactics. But it does have huge cost.
hgm wrote:Well, it still seems to me that the problem was R>0. With R=0 things like you describe cannot happen, so it is a pity you did not test that.
bob wrote:My testing was based on the size of the tree, and it went up in almost every case. Significantly. Which is the same as slowing the thing down. Significantly. I couldn't play as many games back then as I can today, but testing both on positions, and in real games produced worse results for me. Vincent swears by it. Perhaps for him it works well. But then again, I have seen this happen many times over the years. Programs are different. So they sometimes respond differently to a new idea.
I almost always test these things using total node counts; that prevents me from having to make long compiler runs with profiling to make sure I compare the two fastest executables possible. Just counting nodes lets me compile without doing PGO stuff at all, yet still be able to decide which is better.
hgm wrote:You are talking now about total node increase, or contribution of nodes from second-null-move searches?
bob wrote:It was both. Logic says that the first null failing low hurts more because the resulting real search is far more expensive. The second (of the double pair) failing low (as expected) is also just unnecessary work that is done _everywhere_ as well. So when the second search is unnecessary (which is much more common, but I am not certain it adds up to as many nodes as are lost when the real null fails low) it does add up. I don't remember the numbers today, but I do remember that they were not the 1% or 5% type values, they were _much_ larger.
In the above, I am talking about total nodes searched in a position.
Again, I do null move searches at _every_ node in the tree, never skipping one at all except for the q-search. So every R=0 search adds nodes. And most don't provide any useful information because most fail low as the position doesn't have any zugzwang involved.
hgm wrote:It seems hard to believe that the latter would ever add up to such a number if you reduce them, as you cannot have a second null-move search without having a first one. And in every case that first search will be much deeper.
I don't think that limit on null-move is effective. I have tried, over the years, many ways to limit where it is applied. And to date, not a single one has ever been better overall than just trying them everywhere. Several of us with different programs did this test back in the middle 90's... Bruce Moreland with Ferret, John Stanback with Zarkov, Dave Kittinger with wchessx, and others I am probably forgetting. Just because the current eval is < beta does not mean a thing about what happens near the tips. You can win or lose material, and earn large positional bonuses or penalties. So for me trying that never has worked. I've also applied the same idea on when to apply extensions, and never got that to work better than just applying them when they are triggered without regard to alpha/beta and current values...
hgm wrote:Could this be related to you trying a null-move search always, also when CurEval < Beta? In such situations the first null move will usually fail low, but you do them in the hope to learn something from them (killers, history). But if you allow the opponent to refute them by a second null move you might learn nothing from it, as you are faced with the original position.
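For context, the "limit" being rejected here is the common gate of trying the null move only when the static evaluation already reaches beta. A small sketch of the two pre-conditions being contrasted; evaluate(), in_check() and has_pieces() are assumed helper names, not any specific engine's API:
[code]
typedef struct position POSITION;

/* assumed helpers */
int evaluate(const POSITION *pos);          /* static evaluation           */
int in_check(const POSITION *pos);
int has_pieces(const POSITION *pos);        /* side to move has non-pawns  */

/* The gated variant many programs use: only try the null move when the
 * static score already suggests it can fail high.                        */
int try_null_gated(const POSITION *pos, int beta)
{
    return !in_check(pos) && has_pieces(pos) && evaluate(pos) >= beta;
}

/* The "everywhere" variant argued for above: no eval >= beta test, so a
 * null search is also tried at nodes where the static score is below
 * beta, in the hope of learning something (threats, killers, history).   */
int try_null_everywhere(const POSITION *pos)
{
    return !in_check(pos);
}
[/code]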
hgm
Re: The many faces of Null Move Pruning
Yes, I specifically asked for R=0, because I think what you explain here (which is the same as you explained before, and which I then said was due to R>0) should not be able to happen there. As you seem to confirm below now:
bob wrote: I never mentioned using R=0. That was your idea. In my example above but using R=0, the R=0 search would presumably fail low, at a high effort level, so that the original null-move would fail high. I'm not sure what I gained as a normal search (which would also be R=0) is also going to have to be completed, unless this is a true zugzwang position, and I am really increasing the search effort.
Isn't this a gross exaggeration? How expensive can it possibly be? It is as expensive as any other null-move reply (with R=0), but there will typically be some 20-40 null-move replies. So it increases the cost of the (first) null-move searches by 3-5%. But not every search is a null-move search, and non-null-move searches are typically way more expensive. So I would be surprised if it would cost more than 1%. Surely a slow-down of 1% will not cost you a full ply?
bob wrote:So, there are two cases:
I reach ply=N and do a null-move search reduced by 3 plies. I go to ply=N+1 and do a null-move search reduced by 0. It fails low. So now I have to try the normal moves at this same ply and they all fail low. I return to the previous ply, the null move search fails high and we are off and running. Except that I did that extra branch at N+1 first, and it gave me nothing, which is what happens most of the time. So in this approach, I have searched one tree to normal depth (after the second null) that had some fixed cost and told me nothing.
I reach ply=N and do a null-move search reduced by 3 plies. I go to ply=N+1 and do a null-move search reduced by 0. Since this is a real zugzwang position, this search also fails high, and I return to the previous ply and the null-move search fails low and we search normally avoiding the zugzwang.
So the question is, does that extra null-move R=0 search hurt or help? In the case of a zugzwang it obviously helps. But since I try a null-move search at _every_ node in the tree, before trying anything else, I am going to be trying that second NM search with R=0 at every node in the tree, and most are going to fail low and simply be extra work done. Intuition says that if R=1 and R=2 failed in this test, R=0 is worse except that it doesn't hide the deep tactics as badly. But the cost is so high it might well cost me a ply overall and that could be worse.
Yes, it adds up. But, as calculated above, it should only be ~5% of the null-move search effort, which again is only a fraction of all search effort.
bob wrote:Again, I do null move searches at _every_ node in the tree, never skipping one at all except for the q-search. So every R=0 search adds nodes. And most don't provide any useful information because most fail low as the position doesn't have any zugzwang involved.
If you experienced an increase in node count of way more than 5%, it cannot be due to these second null-move searches, which you even reduced. It must be because of the way they were subverting (first) null-move fail highs. And less reduction should decrease that, not increase it.
bob wrote:I don't think that limit on null-move is effective. I have tried, over the years, many ways to limit where it is applied. And to date, not a single one has ever been better overall than just trying them everywhere. Several of us with different programs did this test back in the middle 90's... Bruce Moreland with Ferret, John Stanback with Zarkov, Dave Kittinger with wchessx, and others I am probably forgetting. Just because the current eval is < beta does not mean a thing about what happens near the tips. You can win or lose material, and earn large positional bonuses or penalties. So for me trying that never has worked. I've also applied the same idea on when to apply extensions, and never got that to work better than just applying them when they are triggered without regard to alpha/beta and current values...
Could this be related to you trying a null-move search always, also when CurEval < Beta? In such situations the first null move will usually fail low, but you do them in the hope to learn something from them (killers, history). But if you allow the opponent to refute them by a second null move you might learn nothing from it, as you are faced with the original position.
Well, that was the point. You must win material, and if you and the opponent just throw in a null move, you haven't made much progress in that respect. If he is forced to do a normal move because second null-move is forbidden, he is likely to try a good move (determined by static move ordering, based on SEE, or history). So you learn something. If he makes you fail low by a second null move, you have nothing of that.
So there might be an important interaction here. Doing null-move when eval < beta might be good with single null move, but bad in combination with double null move, as the double null-move becomes just a very expensive way to test if eval < beta.
bob
Re: The many faces of Null Move Pruning
OK, I try null move at _every_ tree node before I try anything else. I go down one ply and try the R=0 NULL which does nothing (as hoped) so that my normal null-move search fails high on every CUT node as it should. On every ALL node, it fails low, because most likely the R=0 NULL search one ply later failed high.
hgm wrote:Yes, I specifically asked for R=0, because I think what you explain here (which is the same as you explained before, and which I then said was due to R>0) should not be able to happen there. As you seem to confirm below now:
bob wrote: I never mentioned using R=0. That was your idea. In my example above but using R=0, the R=0 search would presumably fail low, at a high effort level, so that the original null-move would fail high. I'm not sure what I gained as a normal search (which would also be R=0) is also going to have to be completed, unless this is a true zugzwang position, and I am really increasing the search effort.
hgm wrote:Isn't this a gross exaggeration? How expensive can it possibly be? It is as expensive as any other null-move reply (with R=0), but there will typically be some 20-40 null-move replies. So it increases the cost of the (first) null-move searches by 3-5%. But not every search is a null-move search, and non-null-move searches are typically way more expensive. So I would be surprised if it would cost more than 1%. Surely a slow-down of 1% will not cost you a full ply?
bob wrote:So, there are two cases:
I reach ply=N and do a null-move search reduced by 3 plies. I go to ply=N+1 and do a null-move search reduced by 0. It fails low. So now I have to try the normal moves at this same ply and they all fail low. I return to the previous ply, the null move search fails high and we are off and running. Except that I did that extra branch at N+1 first, and it gave me nothing, which is what happens most of the time. So in this approach, I have searched one tree to normal depth (after the second null) that had some fixed cost and told me nothing.
I reach ply=N and do a null-move search reduced by 3 plies. I go to ply=N+1 and do a null-move search reduced by 0. Since this is a real zugzwang position, this search also fails high, and I return to the previous ply and the null-move search fails low and we search normally avoiding the zugzwang.
So the question is, does that extra null-move R=0 search hurt or help? In the case of a zugzwang it obviously helps. But since I try a null-move search at _every_ node in the tree, before trying anything else, I am going to be trying that second NM search with R=0 at every node in the tree, and most are going to fail low and simply be extra work done. Intuition says that if R=1 and R=2 failed in this test, R=0 is worse except that it doesn't hide the deep tactics as badly. But the cost is so high it might well cost me a ply overall and that could be worse.
So the overhead is that for every CUT node NULL, I do a wasted ALL node NULL to significant depth, which is totally wasted, and then I do the remainder of the searches. As to exactly how much overhead that turns into, I really don't remember and would have to go back and re-run the experiment to see. All I can say for certain (and my losing a ply was just a guess not based on anything factual at the moment) is that this second null is mostly wasted effort, which makes the tree size larger than it is without the extra search. Those R=0 searches at ply=3 are pretty expensive, where the R=3 and R=2 searches are much less so, but all of 'em introduce extra work.
Once we get our cluster back up, I'll run some tests and give some numbers for R=0, 1, 2 and 3...
hgm wrote:Yes, it adds up. But, as calculated above, it should only be ~5% of the null-move search effort, which again is only a fraction of all search effort.
bob wrote:Again, I do null move searches at _every_ node in the tree, never skipping one at all except for the q-search. So every R=0 search adds nodes. And most don't provide any useful information because most fail low as the position doesn't have any zugzwang involved.
hgm wrote:If you experienced an increase in node count of way more than 5%, it cannot be due to these second null-move searches, which you even reduced. It must be because of the way they were subverting (first) null-move fail highs. And less reduction should decrease that, not increase it.
There are also potential hash table interactions, where these extra searches overwrite entries with data that is not very useful. Again, I never tried to precisely quantify where the overhead came from, just that it was there.
I don't think that limit on null-move is effective. I have tried, over the years, many ways to limit where it is applied. And to date, not a single one has ever been better overall than just trying them everywhere. Several of us with different programs did this test back in the middle 90's... Bruce Moreland with Ferret, John Stanback with Zarkov, Dave Kittinger with wchessx, and others I am probably forgetting. Just because the current eval is < beta does not mean a thing about what happens near the tips. you can win or lose material, and earn large positional bonuses or penalties. So for me trying that never has worked. I've also applied the same idea on when to apply extensions, and never got that to work better than just applying them when they are triggered without regard to alpha/beta and current values...
Could this be related to you trying a null-move search always, also when CurEval < Beta? In such situations the first null move will usually fail low, but you do them in the hope to learn something from them (killers, history). But if you allow the opponent to refute them by a second null move you might learn nothing from it, as you are faced with the original position.
Not necessarily. I have plenty of positional scores that can add up to something greater than a minor piece. Particularly king safety, but also passed pawns in near-endgames...
Well, that was the point. You must win material, and if you and the opponent just throw in a null move, you haven't made much progress in that respect. If he is forced to do a normal move because second null-move is forbidden, he is likely to try a good move (determined by static move ordering, based on SEE, or history). So you learn something. If he makes you fail low by a second null move, you have nothing of that.
So there might be an important interaction here. Doing null-move when eval < beta might be good with single null move, but bad in combination with double null move, as the double null-move becomes just a very expensive way to test if eval < beta.
hgm
Re: The many faces of Null Move Pruning
OK, I shouldn't have said "material" here. The point was that you have to up your eval somehow. And a pair of null moves will not really do much in that direction. Be it material or position.
bob wrote:Not necessarily. I have plenty of positional scores that can add up to something greater than a minor piece. Particularly king safety, but also passed pawns in near-endgames...
Well, that was the point. You must win material, and if you and the opponent just throw in a null move, you haven't made much progress in that respect. If he is forced to do a normal move because second null-move is forbidden, he is likely to try a good move (determined by static move ordering, based on SEE, or history). So you learn something. If he makes you fail low by a second null move, you have nothing of that.
Searching a null-move when eval < beta becomes equivalent to IID in steps of 2+R1+R2. Your null-move search will be exactly the same sub-tree as this particular IID iteration. So in combination with IID it tells you nothing you wouldn't have known anyway.
Depending how you implemented IID it might even confuse the search: the node after NULL-NULL searches the same position as the current node, with depth N-2-R1-R2. It will store its result for this node in the hash. But, unless you re-probe the hash after the null-move search, your current node would never get the result. It just sees a null-move that failed low, and remembers its initial hash probe (which might have been a miss). So you might start IID at d=1, while N-2-R1-R2 = 10. Of course that gives you hash hits of sufficient depth on all the daughters, for quite a few iterations to come. But you probably do not skip iterations in that case. So a lot of unnecessary work is done in the node itself.
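The bookkeeping issue in the last paragraph can be made concrete: after the null-move search fails low, the NULL-NULL grandchild has already searched this very position (to depth N-2-R1-R2) and stored its result under the current node's own hash key, so only a second probe will see it. A sketch of such a re-probe before starting IID; hash_probe() and the HASHE struct are invented names, not a real engine's interface:
[code]
typedef struct position POSITION;
typedef struct { int depth; int score; int move; } HASHE;   /* invented */

int hash_probe(const POSITION *pos, HASHE *entry);          /* 1 = hit  */

/* Pick the depth at which internal iterative deepening should start.
 * Called after a null-move search failed low: the NULL-NULL grandchild
 * searched this same position and stored its result, so re-probing lets
 * IID resume from that depth instead of starting over at depth 1.       */
int iid_start_depth(const POSITION *pos, int requested_depth)
{
    HASHE e;
    int start = 1;                      /* default: iterate from scratch  */

    if (hash_probe(pos, &e) && e.depth > 0)
        start = e.depth + 1;            /* continue past the stored depth */

    if (start > requested_depth)
        start = requested_depth;
    return start;
}
[/code]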
Uri Blass
Re: The many faces of Null Move Pruning
I think that the whole subject of null-move verification is unimportant at the playing strength of your program.
Rybka beta does not use null-move verification, and as long as my program is significantly weaker than Rybka beta there are other subjects from which I can probably get more improvement.
Practically, Movei does the following:
1) null-move pruning with no verification in the opening.
2) null-move pruning with verification as Omid David suggested in the endgame (not only simple endgames, but whenever the piece values total no more than 24 pawns).
3) no null-move pruning in pawn endgames, or in situations when the side to move has at most 9 legal moves.
4) no null-move pruning when the remaining depth is very small, because the last time I checked I found that it did not help to reach the same depth faster.
Often nodes that would be pruned after a null move because of no threat are pruned faster based on other factors, without doing the null move; and if they are not pruned, then making the null move and doing an expensive evaluation only to discover that I cannot prune is simply a waste of time.
Uri
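Read literally, the four rules above amount to a gate of roughly the following shape. This is only an illustrative reading of the post, not Movei's actual code: the depth threshold, the helper names, and the interpretation of "24 pawns" as the total piece (non-pawn) material are guesses:
[code]
typedef struct position POSITION;

/* assumed helpers */
int is_pawn_endgame(const POSITION *pos);
int count_legal_moves(const POSITION *pos);
int piece_material_in_pawns(const POSITION *pos);  /* both sides, pawn units */

enum null_mode { NULL_OFF, NULL_PLAIN, NULL_VERIFIED };

enum null_mode null_move_mode(const POSITION *pos, int depth)
{
    if (depth <= 2)                        /* rule 4: too shallow to pay off */
        return NULL_OFF;
    if (is_pawn_endgame(pos) || count_legal_moves(pos) <= 9)
        return NULL_OFF;                   /* rule 3: zugzwang danger        */
    if (piece_material_in_pawns(pos) <= 24)
        return NULL_VERIFIED;              /* rule 2: verified null move     */
    return NULL_PLAIN;                     /* rule 1: opening / middlegame   */
}
[/code]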
Tony
Re: The many faces of Null Move Pruning
My experience is that with a lot of checkmates, double nullmove worked better.
Since XiniX has static mate detection, it finds a lot of them. With normal nullmove, I would not find the checkmate move, but take the 2nd best (i.e. the nullmove).
By adding 2nd nullmove, I do find it AND after that, jump back 2 plies, the point being that if I nullmove, you nullmove, I checkmate, I'd better replace the 1st nullmove with the checkmate move.
This might only be relevant if one uses fail soft though.
Tony
bob
Re: The many faces of Null Move Pruning
Here's the thing however. If you play 2 nulls in successive plies, you are back to where you started, and additional searching can still result in large eval swings... up or down...
hgm
Re: The many faces of Null Move Pruning
Sure, but you would have seen those same score swings in normal IID. And then at least the node doing the IID would be directly aware of what was going on, use it to order moves, set killers at the proper level, etc.
What happens after the two null moves is hidden. Best move will not be passed on, but has to be deduced from probing hash for all moves. Killers will be stored two ply deeper, etc.
Double null move is simply an (elegant) implementation trick. There is nothing it can do that could not be done by explicit programming in the original node, in combination with single null move (e.g. IID, verification).
So if you say that it is good to do a null-move search when eval < beta in a double-null-move context, you are in fact saying that it is good to _precede_ the null-move search by a (reduced) verification search, and then only do the null-move search (single null move only) if that verification search fails high. For that is exactly the same thing, tree-walk-wise. But it would have the advantage that your main search, which might follow after the verification search or null-move search fails low, can profit better from the information dug up by the verification search (e.g. use its killers).
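Spelled out in code, the equivalence looks like this: run a reduced null-window search of the current position first (the "verification"), and only if it fails high make the single null move. Tree-walk-wise this is the same as letting a second null move answer the first, but the current node now sees the result directly. search(), make_null() and the R1/R2 parameters are assumed names, not a concrete engine's interface:
[code]
typedef struct position POSITION;

enum { NO_NULL = 0, ALLOW_NULL = 1 };

/* assumed negamax: search(pos, alpha, beta, depth, allow_null) */
int  search(POSITION *pos, int alpha, int beta, int depth, int allow_null);
void make_null(POSITION *pos);
void unmake_null(POSITION *pos);

/* Returns a score >= beta if the null move produces a cutoff, otherwise a
 * score < beta, meaning: fall through to the normal search.  R1 is the
 * reduction of the null-move search, R2 the one a second null would have
 * used; depth - 2 - R1 - R2 is exactly the depth to which the node after
 * NULL-NULL would have re-searched this same position.                   */
int null_with_verification(POSITION *pos, int beta, int depth, int R1, int R2)
{
    int vdepth = depth - 2 - R1 - R2;
    int v = search(pos, beta - 1, beta, vdepth, NO_NULL);
    if (v < beta)
        return v;           /* reduced re-search of this position already
                               fails low: skip the null move entirely      */

    make_null(pos);
    int score = -search(pos, -beta, -beta + 1, depth - 1 - R1, NO_NULL);
    unmake_null(pos);
    return score;           /* >= beta gives the usual null-move cutoff    */
}
[/code]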