SEE is too slow

Discussion of chess software programming and technical issues.

Moderators: hgm, Rebel, chrisw

cms271828
Posts: 316
Joined: Wed Apr 12, 2006 10:47 pm

Re: SEE is too slow

Post by cms271828 »

I'm not clear on how to sort SEE > 0 moves using MVV/LVA.

E.g., if pawn takes rook and queen takes pawn, that's 400 points.
So why not just use that SEE score to order moves using bubblesort?

Thanks
Colin
micron
Posts: 155
Joined: Mon Feb 15, 2010 9:33 am
Location: New Zealand

Re: SEE is too slow

Post by micron »

MVVLVA:

Code:

score_for_sort[j] = ValOf( capturedPiece ) - (ValOf( capturingPiece ) >> 3);
This works out to be as effective, for ordering 'good' captures, as

Code:

score_for_sort[j] = SEE( move );
Just use SEE() for classification of doubtful moves. An elegant optimization allows many SEE evaluations to be skipped altogether, without affecting the classification:

Code:

if ( (ValOf( capturedPiece ) < ValOf( capturingPiece )) && (SEE( move ) < 0) )
cms271828
Posts: 316
Joined: Wed Apr 12, 2006 10:47 pm

Re: SEE is too slow

Post by cms271828 »

Excellent,

I was thinking something like that before; e.g. P x R is always going to be good.

In this case, we get the 500 for the rook, and may or may not lose the 100.
So just to be clear, in the case above, you're saying:

score_for_sort[j] = 500 - (100 >> 3);

I think >> 3 is the same as dividing by 8, which would give you 500 - 12 = 488
Is that right?

I'm assuming good captures are when capturer <= captured, so that you can't lose on the exchange.
And bad captures are when capturer > captured; then we can use the code you gave and test them with SEE.

Thanks
Colin
Sven
Posts: 4052
Joined: Thu May 15, 2008 9:57 pm
Location: Berlin, Germany
Full name: Sven Schüle

Re: SEE is too slow

Post by Sven »

cms271828 wrote:Excellent,

I was thinking something like that before; e.g. P x R is always going to be good.

In this case, we get the 500 for the rook, and may or may not lose the 100.
So just to be clear, in the case above, you're saying:

score_for_sort[j] = 500 - (100 >> 3);

I think >> 3 is the same as dividing by 8, which would give you 500 - 12 = 488
Is that right?

I'm assuming good captures are when capturer <= captured, so that you can't lose on the exchange.
And bad captures are when capturer > captured; then we can use the code you gave and test them with SEE.

Thanks

Thanks
What many people do is something like this:
1) Within QS, order all moves based on MVV/LVA (not on SEE).
2) Before actually making a move in QS, check whether it is a potentially losing capture (pieceValue[movingPiece] > pieceValue[capturedPiece]), and only then apply SEE to such a move; if SEE(potentiallyLosingCaptureMove) < 0 then skip that move, since you have now verified it is (presumably) a losing capture.

That scheme restricts the additional effort of calculating SEE to very few moves. You can expect a small improvement in tree size from that by cutting off subtrees of bad captures but you do not improve move ordering itself that way.

It is of course also possible to use SEE for ordering of all captures in QS, like you reported. Skipping losing captures should still be part of that solution, though. Whether this leads to a smaller tree size than the MVV/LVA based ordering described above may depend on your implementation.

I don't know if anyone has tried to use a combination of MVV/LVA and SEE for move ordering. Personally I doubt it will give an improvement.


Quite unexpected to me is the measurable performance drop you reported from using SEE. I think it is very likely that there is some SEE implementation problem. Have you analyzed it by profiling?


Here are some more remarks on your QS implementation that you showed in one of your previous posts:

1) I do not understand why you have two QS functions for white and black, although both seem to rely on the negamax principle.

2) Your QS is lacking the usual "stand-pat" checking code in the beginning of the "NOT check" part. If "val >= beta" you can immediately return with an early beta cutoff (return beta, assuming you are doing fail-hard alpha-beta), and otherwise, if "val > alpha" you should set alpha to val. Especially the latter is essential for a correct QS while the former is a trivial improvement: cutoff as soon as possible.

3) Calling the evaluation might be redundant in case of being in check, so it might be a small optimization to move "int val = getEval();" into the "NOT check" part, unless you also use "val" within the "check" part.

Sven
JuLieN
Posts: 2949
Joined: Mon May 05, 2008 12:16 pm
Location: Bordeaux (France)
Full name: Julien Marcel

Re: SEE is too slow

Post by JuLieN »

Hello Colin,

the figures you gave are, sorry to say, quite horrible, so I'm left with the feeling that you're trying to speed up an Austin Mini by installing a jumbo jet engine in it.

My suggestion is that you first try to fix your move ordering with classic solutions that are an order of magnitude more efficient and fast, and only then, when you can't squeeze any more juice out of them, try to add SEE.

For instance, did you try to sort your move lists using some history tables? My own crappy engine has no SEE but reaches your "ply 10 after a3" test in 0.787 seconds and 803Kn. Also, more than 90% of the time the best move will come directly out of the hash table.
"The only good bug is a dead bug." (Don Dailey)
[Blog: http://tinyurl.com/predateur ] [Facebook: http://tinyurl.com/fbpredateur ] [MacEngines: http://tinyurl.com/macengines ]
bob
Posts: 20943
Joined: Mon Feb 27, 2006 7:30 pm
Location: Birmingham, AL

Re: SEE is too slow

Post by bob »

cms271828 wrote:So I just put SEE into my engine (for white captures)...
It takes about 3 times longer to complete a 9-ply search than using MVV/LVA.

I always assumed SEE was faster if done correctly, but I clearly have poor results.
So now I'm stuck on what to do; is it likely I have an error?

Thanks
Obvious question. What are you using it for? It definitely has a cost. But if you use it for better move ordering, or to exclude SEE<0 capture moves in the q-search, that cost is more than offset by the smaller tree you search. You can also use it to avoid extending losing checks, or for reduction decisions such as reducing losing captures and such.
cms271828
Posts: 316
Joined: Wed Apr 12, 2006 10:47 pm

Re: SEE is too slow

Post by cms271828 »

I was doing it just for move ordering; I assumed that was what it was meant for.

I think my QS is messed up. I need to go over alpha-beta pruning again so I can refamiliarise myself with how it works. I get the principle of cut-offs, but I get a bit confused with the alpha-beta window, so it's probably best I do this, then QS, then get move ordering working better.

Thanks
Colin
cms271828
Posts: 316
Joined: Wed Apr 12, 2006 10:47 pm

Re: SEE is too slow

Post by cms271828 »

Thanks, do you mean the history heuristic?

I've heard of it but not implemented it; I currently have a hash table, iterative deepening, and also null move.

Your time for a3 is excellent; I will be very happy if I can get a time like that one day before I die (using the same hardware)
Colin
JuLieN
Posts: 2949
Joined: Mon May 05, 2008 12:16 pm
Location: Bordeaux (France)
Full name: Julien Marcel

Re: SEE is too slow

Post by JuLieN »

cms271828 wrote:Thanks, do you mean the history heuristic?

I've heard of it but not implemented it; I currently have a hash table, iterative deepening, and also null move.

Your time for a3 is excellent; I will be very happy if I can get a time like that one day before I die (using the same hardware)
Maybe other people do it differently, but here's how I do it: each time a beta cut-off happens, I increment (+1) a History(ply, piece_type, arrival_square) table entry.

Then during move ordering, I add the corresponding value to the move's score. For instance, if I have a bishop going from b2 to g7, I'll add History(ply, bishop, g7) to this move's lazy eval.

I know that some people use a History(ply, from_square, to_square) table instead, but my tests show it works a bit less well (maybe it's implementation dependent).
Sven
Posts: 4052
Joined: Thu May 15, 2008 9:57 pm
Location: Berlin, Germany
Full name: Sven Schüle

Re: SEE is too slow

Post by Sven »

cms271828 wrote:I was doing it just for move ordering; I assumed that was what it was meant for.

I think my QS is messed up. I need to go over alpha-beta pruning again so I can refamiliarise myself with how it works. I get the principle of cut-offs, but I get a bit confused with the alpha-beta window, so it's probably best I do this, then QS, then get move ordering working better.

Thanks
This is how an alpha-beta window works in the standard case, while not yet considering special features like "fail-soft", "null move pruning" or "PVS":

Assuming a recursive negamax framework where scores have the viewpoint of the moving side S, "alpha" denotes the best value that was found for S within the tree search performed so far, with best possible play of the opponent O.

"Beta" denotes the negated best value for O found so far, i.e. it is the negated form of O's "alpha" at the parent node. (Note that the recursive search call in negamax is done in the form "score = -search(-beta, -alpha, ...)".)

If S has a move M1 for which you find a score >= beta, then M1 refutes the previous move M0 of O by proving that M0 can't become the new best move for O. This follows from the negamax principle: since beta is the negation of O's alpha, a score >= beta for S means that, from the viewpoint of O, searching his move M0 returns a score <= alpha, while a new best move must have a score > alpha.

Finding a new best move means (again: in the most simple form of alpha-beta search) to raise alpha and therefore to tighten the window. Note that both bounds, alpha as well as beta, are "outside" the window. You can say that:
- a score <= alpha fails low, the corresponding move is not the best move;
- a score > alpha and < beta improves alpha (new best move);
- a score >= beta fails high and leads to a cutoff, there is no need to continue searching other moves at this node since it is not required to find the best refutation of a move, just the first refutation is enough and proves that its score will have zero influence on the score of the root node.

When looking at all alpha-beta windows on the current move path from the root node to the current node, the window gets tighter from both sides (at least: not wider). S continuously improves alpha while O improves "his" alpha, which lowers the beta of S.

Techniques like fail-soft alpha-beta (the above is called fail-hard) or PVS can be understood on top of that.

Whenever you reach the condition alpha >= beta you can stop the search with a cutoff. In normal full-width search this will usually not occur without searching at least one move. In contrast to that, in quiescence search there is the additional "stand-pat" principle saying that the moving side is not forced to play any capture move if the current evaluation is good enough. Therefore you call eval() in the beginning to initialize alpha. But if eval() is already >= beta there is no need for any searching at all, you can immediately do the cutoff (alpha >= beta).

Sven