for ( all moves ) {
    make
    if ( bSearchPv ) {
        score = -pvSearch(-beta, -alpha, depth - 1);
    } else {
        score = -pvSearch(-alpha-1, -alpha, depth - 1);
        if ( score > alpha ) // in fail-soft ... && score < beta ) is common
            score = -pvSearch(-beta, -alpha, depth - 1); // re-search
    }
    unmake
    if ( score >= beta )
        return beta;   // fail-hard beta cutoff
    if ( score > alpha ) {
        alpha = score; // alpha acts like max in MiniMax
        bSearchPv = false;
    }
}
So this in fact searches full-window (in pv nodes) as long as alpha is not raised. Now I just encountered a position with a fail low at root level, and found that the algorithm just presented was the reason the engine took extremely long to reach the desired result.
Another implementation I tried was to just do the full-window search at the first move. This solved the problem. So my question now is: is there any overall advantage to the implementation as shown in the CPW? Or is it wise to use different algorithms for root-level and internal pv nodes?
Another implementation I tried was to just do the full-window search at the first move.
Is this the same thing as not using aspiration at all?
Sorry, I didn't mean full window, but the aspiration window from the previous search.
The main difference at the root would be that, in case of a fail low, the CPW algorithm searches all moves with this aspiration window and starts a full-width re-search afterwards, while my second option searches only the first move with the aspiration window, searches the rest with null windows, and uses a full window for the re-search.
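A sketch of that second option at the root, in the same pseudocode style as above (prevAlpha/prevBeta are my own names for the aspiration window carried over from the previous iteration):

for ( all root moves ) {
    make
    if ( first move ) {
        score = -pvSearch(-prevBeta, -prevAlpha, depth - 1); // aspiration window
    } else {
        score = -pvSearch(-alpha-1, -alpha, depth - 1);      // null window
        if ( score > alpha )
            score = -pvSearch(-INFINITE, -alpha, depth - 1); // full-window re-search
    }
    unmake
    ...
}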
Codeman wrote:
So this in fact searches full-window (in pv nodes) as long as alpha is not raised. [...] Is there any overall advantage in the implementation as shown in the CPW? Or is it wise to use different algorithms for root-level and internal pv nodes?
You aren't interpreting the code correctly. Let's suppose that when we enter, alpha=0 and beta=100. We search the first move with alpha=0 and beta=100. If we don't fail high on that single move search, we then search the next move with a null window (alpha, alpha+1), which in negamax form is the call -pvSearch(-alpha-1, -alpha, depth - 1). If, on any of those searches, we get a result that is > original alpha and < original beta, we need to do a re-search. The reason for this test is that it is possible that we enter Search() with a null window to start with, so that the original alpha = X and the original beta = X+1; in that case a re-search is impossible (and would be pointless anyway). The variable bSearchPv is only true for the _first_ move searched at any node. 99.999% of the time beta = alpha+1 anyway, and a re-search would be futile.
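A worked trace of that explanation, with made-up scores, in the same pseudocode style:

// enter with alpha = 0, beta = 100
score = -pvSearch(-100, 0, depth - 1);   // first move, full window; say score = 5
alpha = 5;
score = -pvSearch(-6, -5, depth - 1);    // later move, null window (5, 6)
// suppose score = 40: 40 > alpha and 40 < beta, so
score = -pvSearch(-100, -5, depth - 1);  // re-search with window (5, 100)
// had we been called with beta = alpha + 1 (a null window from the parent),
// the re-search window would be identical to the null window, so it is skipped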
Codeman wrote:
So this in fact searches full-window (in pv nodes) as long as alpha is not raised.
The code you posted is simply wrong, for the reason you stated.
// in fail-soft ... && score < beta ) is common
This also looks wrong to me. Why would you re-search if score >= beta? Just cut; it doesn't matter whether we're fail-hard or fail-soft!
It is correct, it just reads funny. See my explanation in the other post in this thread. The if (val > alpha && val < beta) is simply a trick to avoid the re-search if this call to Search() (from the previous ply) started out with a null window.
When I looked again, I realized that the bSearchPv flag is in the wrong place. You are right: it needs to be moved outside the if-test for score > alpha.
for ( all moves ) {
    make
    if ( bSearchPv ) {
        score = -pvSearch(-beta, -alpha, depth - 1);
    } else {
        score = -pvSearch(-alpha-1, -alpha, depth - 1);
        if ( score > alpha && score < beta )
            score = -pvSearch(-beta, -alpha, depth - 1); // re-search
    }
    unmake
    if ( score >= beta )
        return score;
    if ( score > alpha )
        alpha = score; // alpha acts like max in MiniMax
    bSearchPv = false;
}
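To sanity-check the corrected loop, here is a minimal runnable sketch in C on a toy 8-leaf game tree. The tree, the leaf scores, and the function signatures are my own invention for illustration, and both routines are written fail-hard so their results are directly comparable:

```c
#include <assert.h>
#include <stdbool.h>

/* Toy game tree (made up for illustration): a perfect binary tree of
   depth 3. Node i has children 2*i+1 and 2*i+2; nodes 7..14 are the
   leaves, scored from the side to move at the leaf (negamax convention). */
static const int leaf_score[8] = { 3, 17, 2, 12, 15, 25, 0, 4 };

static int eval(int node) { return leaf_score[node - 7]; }

/* Plain fail-hard alpha-beta as the reference. */
static int alphaBeta(int node, int depth, int alpha, int beta) {
    if (depth == 0) return eval(node);
    for (int c = 0; c < 2; c++) {
        int score = -alphaBeta(2 * node + 1 + c, depth - 1, -beta, -alpha);
        if (score >= beta) return beta;
        if (score > alpha) alpha = score;
    }
    return alpha;
}

/* PVS with the corrected bSearchPv handling: full window only for the
   first move at each node, null window plus conditional re-search for
   the rest, and bSearchPv cleared unconditionally after the first move. */
static int pvSearch(int node, int depth, int alpha, int beta) {
    if (depth == 0) return eval(node);
    bool bSearchPv = true;
    for (int c = 0; c < 2; c++) {
        int child = 2 * node + 1 + c;
        int score;
        if (bSearchPv) {
            score = -pvSearch(child, depth - 1, -beta, -alpha);
        } else {
            score = -pvSearch(child, depth - 1, -alpha - 1, -alpha);
            if (score > alpha && score < beta)
                score = -pvSearch(child, depth - 1, -beta, -alpha); /* re-search */
        }
        if (score >= beta) return beta;
        if (score > alpha) alpha = score;
        bSearchPv = false;
    }
    return alpha;
}
```

On this tree both routines return the same value from the root, which is the point of the null-window trick: it saves work without changing the result.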
I thought this was the difference between PVS and NegaScout
PVS waits until a move is found that improves alpha, and then searches every move after that with a zero window around alpha. NegaScout just searches the first move, and then searches every move after that with the same zero window.
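In terms of the loop posted above, the difference is only where bSearchPv is cleared (a sketch using the thread's own variable names):

// PVS: keep full-window searches until some move raises alpha
if ( score > alpha ) {
    alpha = score;
    bSearchPv = false;
}

// NegaScout: null window for every move after the first
if ( score > alpha )
    alpha = score;
bSearchPv = false;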
If no PV move has been found, "AlphaBeta()" is recursed normally. If one has been found, everything changes. Rather than searching with a normal (alpha, beta) window, we search with (alpha, alpha + 1). The premise is that all of the searches are going to come back with a score less than or equal to alpha, and if this is going to happen, the elimination of the top end of the window results in more cutoffs. Of course, if the premise is wrong, the score comes back at alpha + 1 or higher, and the search must be re-done with the wider window.
Gerd Isenberg wrote:
I thought this was the difference between PVS and NegaScout
PVS waits until a move is found that improves alpha, and then searches every move after that with a zero window around alpha. NegaScout just searches the first move, and then searches every move after that with the same zero window.