Don wrote: I have tried many different variations of this. Doing null move only when the static score >= beta, doing it always, doing it when it's close.
For my program it turns out that doing it only when the score is above beta works best. I don't really understand why many other programmers find it best to do it always but it must depend on many different details about your program.
I came across this thread from 2009 while cleaning up my IE favorites. I have never done much with nullmove but I am trying it now. Two questions:
1. Reading this thread, it seems the norm for deciding on a nullmove is to check against beta, with or without a margin. Why is a beta-check better than an alpha-check? I am using alpha, and the following formula seems to do well:
if (alpha < eval_score + expected_material_gain) { /* no nullmove */ }
else { /* do nullmove */ }
Here expected_material_gain is the highest SEE value. The logic is that in 99% (or so) of the cases you will have to re-search that node at full depth anyway, so why waste time on a nullmove?
And so my question: is the use of beta better than alpha?
2. For the moment I am using R=3. I am not quite sure what to do in cases where R >= depth. It makes little sense to do a nullmove there: either you prune or you search these last plies to the horizon. It's not clear from the CPW. To my surprise I noticed that pruning works better than searching them. Just tell me if I have reinvented the wheel or not.
When you do the null move, it will only have any effect if it scores >= beta. So one could argue that when it scores < beta, the effort was wasted. (Not entirely true, because it fills the hash, gives you killers, etc.)
If you want to avoid 'wasting' this time by making search of the null move subject to some simple test, the aim of the test is thus predicting whether there is a realistic chance to score above beta. Whether you expect score above alpha or not is irrelevant. (In fact, scoring above alpha is likely to make the null-move search more expensive.)
In particular it seems completely irrelevant what your best SEE is. Because you know you are not going to make that capture, but play a null move instead. You could be in a node where curEval = 100, alpha = 200, beta = 500, and SEE = 950 because you can capture a hanging Queen. Most likely an easy fail high. IF you indeed capture that Queen. Trying a null move will sink you like a stone, however; it will be refuted by evasion of the Queen, it is not clear if you can gain anything else at all, and you are nearly a Rook short of beta.
The most logical test for predicting if a null move will score above beta is to test if curEval > beta. Because it is quite rare that you will gain anything by passing your turn. The one who has the move is in a better position to improve on the current position than his opponent. Nevertheless, some people already do null move if curEval > beta - MARGIN. You would need a separate test to prevent two consecutive null moves in that case (which the curEval > beta test enforces automatically).
If the remaining depth is not larger than the reduction, the null move drops you into QS immediately. With R=3 and depth=2 a null move with a QS reply is still cheaper than a 2-ply search with a QS reply, though. So the only real issue is whether you should do null move at depth=1. I don't think it would hurt; it saves you generating moves in that node.
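For what it's worth, here is a minimal sketch in C of the kind of gating test described above. The names (static_eval, last_move_was_null) and the zero margin are my own placeholders, not taken from any particular engine:

#include <stdbool.h>

/* Sketch only: decide whether to try a null move at this node. A real engine
   would also check remaining depth, sufficient material, etc. */
#define NULL_MOVE_MARGIN 0   /* > 0 gives the "curEval > beta - MARGIN" variant */

static bool try_null_move(int static_eval, int beta, bool in_check, bool last_move_was_null)
{
    if (in_check || last_move_was_null)   /* never null-move in check, never two in a row */
        return false;
    return static_eval >= beta - NULL_MOVE_MARGIN;
}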
Quote: 1. Why is a beta-check better than an alpha-check? [...] And so my question, is the use of beta better than alpha?
It shouldn't make any difference if you only do null move in non-pv nodes with PVS search because alpha will always be beta-1.
We did experiment with doing null move pruning at PV nodes but that is a clear loss. The point of testing against beta is that in principle you should get a cutoff if the score is >= beta, so there is a strong likelihood that the null move will fail if it is lower. With alpha there is of course a much stronger likelihood of failure.
Quote: 2. For the moment I am using R=3. I am not quite sure what to do in cases where R >= depth. [...] To my surprise I noticed that pruning works better than searching them.
In Komodo we do not even do null move on the last 2 ply - instead we use a static threat test routine to see what is being attacked at those depths. Otherwise, we just do depth - 3 - 1 even if that becomes a quies search.
I think Stockfish, at least an older version I looked at, just uses margins on the last ply (or maybe it's the last 2 ply?). If you experiment with margins you might want to at the very least check for a pawn on the 7th and not use a margin in those cases. We are looking of course at threats from the other side, not the side to move.
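A compact sketch of that depth handling, assuming R = 3; this is my own illustration of the scheme described above, not actual Komodo code:

/* Depth of the search that answers the null move. Returns -1 within the last
   two plies (where a static threat test would be used instead), and 0 when the
   reply should simply be a quiescence search. */
static int null_reply_depth(int depth)
{
    if (depth <= 2)
        return -1;              /* no null move here */
    int d = depth - 3 - 1;      /* depth - R - 1 with R = 3 */
    return d > 0 ? d : 0;       /* 0 => drop into quiescence search */
}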
Quote: 1. Is the use of beta better than alpha? 2. I am not quite sure what to do in cases where R >= depth.
alpha/beta probably don't matter a whit. I don't even use them, with the exception of not doing a null if alpha != beta-1 (I only do 'em on null-window searches).
For your last question, it is not quite pruning, you just collapse the search into the q-search. If you don't do checks in the first ply of q-search, this will miss a LOT of threats. That's why many of us used an adaptive null-move where R varied from 3 down to 2 near the leaves. Once I added checks, it tested better to use R=3 everywhere, which I have done ever since.
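The adaptive scheme mentioned here is usually just a depth-dependent R; a sketch, with an illustrative threshold rather than anyone's tuned value:

/* Adaptive null-move reduction: R = 3 deep in the tree, shrinking to 2 near
   the leaves. The cut-off depth of 6 is illustrative only. */
static int adaptive_null_R(int depth)
{
    return depth > 6 ? 3 : 2;
}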
Quote: If you don't do checks in the first ply of q-search, this will miss a LOT of threats. That's why many of us used an adaptive null-move where R varied from 3 down to 2 near the leaves. Once I added checks, it tested better to use R=3 everywhere.
Exactly, checks in quies are very important with null move pruning, and probably not just for this reason (although it may be the primary one) but simply because of their extra threat-resolution power when using high R values.
1. I was also wondering why I should waste a nullmove search in obvious cases, and my solution is a bit more expensive than yours. I don't estimate the likely success of a possible nullmove with the biggest SEE value; instead I do a nullmove search with an R+2 reduction. If it fails low, fine, I accept that as the outcome of the nullmove test, but if it fails high then I go for the normal nullmove search.
2. I do nullmove even at depth == 0 with a quiescence search, and it is better for me than not doing it. I use a qsearch as the nullmove search even at depth == 3. I also suggest using a remaining-depth-based table to determine the R value, to make it more adaptive.
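A remaining-depth-based table like the one suggested here could look something like this; the values are placeholders, not tuned:

/* R indexed by remaining depth (assumed >= 0). Placeholder values only. */
static int null_R_from_table(int depth)
{
    static const int table[8] = { 2, 2, 2, 2, 3, 3, 3, 3 };
    return depth < 8 ? table[depth] : 4;
}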
hgm wrote: [...] In particular it seems completely irrelevant what your best SEE is.
I am afraid I did not explain myself clearly enough.
[d] r2qk2r/ppp2ppp/2np1n2/2b1p1B1/2B1P1b1/2NP1N2/1PP2PPP/R2QK2R w KQkq -
Consider the diagram. In this position, somewhere in the tree, white is a pawn down. Now the only moves that don't need a nullmove are the captures Rxa7, Nxe5 and Bxf6, but also h3, attacking the bishop on g4.
My eval returns that h3 attacks Bg4 with a SEE value of 2 pawns, which is what I called expected_material_gain in the formula below:
if (alpha < eval_score + expected_material_gain) { /* no nullmove */ }
else { /* do nullmove */ }
Hope it's more clear now.
I've tried things like that in DiscoCheck, but it never worked. What I did was similar, but somewhat simpler to implement: if the side to move has hanging pieces (i.e. attacked by lower-valued enemy pieces), then don't do a null move search.
No matter how hard I tried (several variations of the idea), it was always a regression in testing. It's really frustrating, but very often ideas that make sense from a chess point of view, simply do not work. And this is one of them, I'm afraid.
Quote: What I did was similar, but somewhat simpler to implement: if the side to move has hanging pieces (i.e. attacked by lower-valued enemy pieces), then don't do a null move search.
This is not dynamic enough. It makes sense near the leaf nodes of the tree but when there is 20 ply left you need something that deals with long term threats such as null move.
Yes, it's exactly as you say. Can only work well near the leaves.
Where I did manage to get the hanging pieces idea to work is in eval pruning (also called "static null move pruning" or "reverse futility pruning"). The idea is that you are near the leaves and your eval is much better than beta, but before predicting a beta cutoff you make sure you don't have two or more hanging pieces. One is fine (it's your turn to play) but 2 or more is often difficult to get out of, at least in a simple combination (near the leaves). The Elo gain was small, but it was measurable.
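A sketch of that guard; eval_margin and hanging_count stand for whatever the engine already computes, and the shape of the test is mine, not DiscoCheck's actual code:

#include <stdbool.h>

/* Eval pruning (static null move / reverse futility pruning) with the
   hanging-pieces restriction described above: only predict a beta cutoff when
   the static eval beats beta by a margin AND the side to move has fewer than
   two hanging pieces. */
static bool eval_prune(int static_eval, int beta, int eval_margin, int hanging_count)
{
    return static_eval - eval_margin >= beta && hanging_count < 2;
}

(hanging_count here would come from whatever attack information the evaluation already maintains.)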