Hey guys,
I implemented alpha-beta pruning, but sometimes I get weird results. When, for example, Black is winning in two moves and the evaluation says it is winning, it doesn't actually make the winning move for some reason.
public int findBestMove(int depth) {
    ArrayList<Move> successors = MGenerator.getMoves(board);
    int max = Integer.MIN_VALUE;
    for (Move m : successors) {
        board.makeMove(m);
        int value = -alphabeta(board, -1000000, 1000000, depth);
        board.undoMove(m);
        if (value > max) {
            max = value;
            bestMove = m;
        }
    }
    board.makeMove(bestMove);
    return max;
}
Is there anything wrong with my implementation? I am puzzled by the weird results I am getting.
Peng1993 wrote:I implemented alpha-beta pruning, but sometimes I get weird results. When, for example, Black is winning in two moves and the evaluation says it is winning, it doesn't actually make the winning move for some reason.
I can't see what is happening, but maybe there are many winning moves and the search picks one that is winning but does not bring the win closer?
For example, in KQvK any move that does not lose the queen (or cause stalemate) is winning, but randomly playing winning moves will likely not win the game.
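The usual remedy for that wandering behaviour (a sketch, not something the posted code does; `MATE` and `ply` are assumed names) is to fold the distance to mate into the score, so a mate found closer to the root outscores a mate found further away:

```java
public class MateDistance {
    // Large constant standing in for "checkmate"; the exact value does not
    // matter as long as it dwarfs every heuristic evaluation.
    static final int MATE = 1000000;

    // Score for delivering mate at the given ply from the root. A mate
    // found closer to the root scores higher, so the search converges on
    // the win instead of shuffling between equally "winning" moves.
    static int mateScore(int ply) {
        return MATE - ply;
    }

    public static void main(String[] args) {
        // Mate in 1 ply must outscore mate in 5 plies.
        System.out.println(mateScore(1) > mateScore(5)); // prints "true"
    }
}
```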
Peng1993 wrote:I implemented alpha-beta pruning, but sometimes I get weird results. When, for example, Black is winning in two moves and the evaluation says it is winning, it doesn't actually make the winning move for some reason.
I can't see what is happening, but maybe there are many winning moves and the search picks one that is winning but does not bring the win closer?
For example, in KQvK any move that does not lose the queen (or cause stalemate) is winning, but randomly playing winning moves will likely not win the game.
Maybe. But is there anything specifically wrong with my implementation itself?
Does `GameEvaluation.evaluate(board)` return the correct values for terminal states? The way I would normally write the code, it would look something like this:
// ...
int final_result;
if (board.isTerminalState(final_result))
    return board.color * final_result;
if (depth <= 0)
    return board.color * GameEvaluation.evaluate(board);
// ...
If `GameEvaluation.evaluate(board)` is returning some heuristic value even for terminal positions, your code may prefer a larger material advantage over actually winning, e.g. holding on to extra material instead of delivering mate.
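To illustrate that failure mode (hypothetical numbers and names, not the poster's evaluator): if checkmate positions are given their own score band instead of falling through to the heuristic, a mere material edge can never outrank an actual mate:

```java
public class TerminalEval {
    static final int MATE = 1000000;

    // Leaf-evaluation sketch: terminal (checkmate) positions get a true
    // mate score; everything else falls back to a heuristic value such
    // as a material count.
    static int evaluateLeaf(boolean opponentIsMated, int heuristic, int ply) {
        if (opponentIsMated) {
            return MATE - ply;   // terminal state handled explicitly
        }
        return heuristic;        // e.g. material balance in centipawns
    }

    public static void main(String[] args) {
        // Winning a queen (+900) must never outrank delivering mate.
        System.out.println(evaluateLeaf(true, 0, 4) > evaluateLeaf(false, 900, 4)); // prints "true"
    }
}
```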
Peng1993 wrote:Hey guys,
I implemented alpha-beta pruning, but sometimes I get weird results. When, for example, Black is winning in two moves and the evaluation says it is winning, it doesn't actually make the winning move for some reason.
public int findBestMove(int depth) {
    ArrayList<Move> successors = MGenerator.getMoves(board);
    int max = Integer.MIN_VALUE;
    for (Move m : successors) {
        board.makeMove(m);
        int value = -alphabeta(board, -1000000, 1000000, depth);
        board.undoMove(m);
        if (value > max) {
            max = value;
            bestMove = m;
        }
    }
    board.makeMove(bestMove);
    return max;
}
Is there anything wrong with my implementation? I am puzzled by the weird results I am getting.
I don't see all the details. Do you return a mate score that is something like "MATE - ply", so that deeper mates get worse scores than shallower mates?
That code is problematic in combination with a return value of Integer.MIN_VALUE when a mate is found deeper in the tree. Let's say the side to move is going to be mated on the next ply: Integer.MIN_VALUE gets propagated up the tree, and the comparison at the root becomes:
if (Integer.MIN_VALUE > Integer.MIN_VALUE)
which is never true, so bestMove will not be updated.
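A minimal sketch of one way around that trap (assumed names, reduced to just the score-selection loop, not the original program): initialize `max` strictly below any score the search can legally return, rather than Integer.MIN_VALUE, so a position where every move loses still selects a move:

```java
import java.util.List;

public class RootPick {
    static final int MATE = 1000000;

    // Returns the index of the best score. With Integer.MIN_VALUE as the
    // start value, a move scored Integer.MIN_VALUE could never pass the
    // > test; -MATE - 1 is strictly below any legal score, so the first
    // move always becomes the current best.
    static int pickBest(List<Integer> scores) {
        int max = -MATE - 1;   // strictly below any reachable score
        int best = -1;
        for (int i = 0; i < scores.size(); i++) {
            if (scores.get(i) > max) {
                max = scores.get(i);
                best = i;
            }
        }
        return best;
    }

    public static void main(String[] args) {
        // All moves lose, but the longest defence (index 1) is still chosen.
        System.out.println(pickBest(List.of(-MATE + 1, -MATE + 3, -MATE + 2))); // prints "1"
    }
}
```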
Peng1993 wrote:When, for example, Black is winning in two moves and the evaluation says it is winning, it doesn't actually make the winning move for some reason.
It would be more enlightening if you could show an example of this.
Henk wrote:What's the value the search returns when there is a win in zero moves?
Lol
There is no win in zero moves in the search tree.
A win in zero moves would mean the user has already played his move and checkmated the engine.
In the search tree it may have a win in one move, and the value returned is MATE - 1.