I understand what you are saying, and I agree with you.

hgm wrote: ↑ Tue Jun 15, 2021 9:06 am
Well, this 'scenario' is how alpha-beta with iterative deepening works in general. You start with the move that was best in the previous iteration, and when you find a better one you switch to that. Of course in the root you cannot have a beta cutoff, as beta is +infinite there. Unless you would use aspiration, and then you would re-search after enlarging the window.

amanjpro wrote: ↑ Tue Jun 15, 2021 1:44 am
So, I tried going back a few moves and couldn't reproduce the issue.
What I thought was happening was:
- In the root node, the first move is the hash move, and it finds the best move.
- Other moves are tried, and one of them causes a beta cutoff (in this case, Qe5). And since the code above updates the PV when the score is higher than bestscore, before the beta cutoff is checked, it updates the PV to that move...
Not sure if this is the scenario, but I don't have a better explanation than that, gotta think about it harder!
Qe5 had a poor score. It can only have superseded Qxg8 if that had an even lower score. And this is obviously wrong, as this line is about a Queen better.
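Just to make sure I follow the aspiration part: below is roughly how I picture the re-search you describe. This is only a minimal sketch, not my engine's code; alphaBeta is a stub, and the window sizes and scores are made-up numbers.

Code:
package main

import "fmt"

// alphaBeta is a stub standing in for the real search: it returns a
// hypothetical "true" score of 130, clamped into the (alpha, beta) window
// so the caller can observe fail-lows and fail-highs like a real search.
func alphaBeta(depth, alpha, beta int) int {
	const trueScore = 130
	if trueScore <= alpha {
		return alpha // fail low
	}
	if trueScore >= beta {
		return beta // fail high
	}
	return trueScore
}

// searchWithAspiration starts with a narrow window around the previous
// iteration's score and, on a fail-high or fail-low, widens the window and
// searches again before trusting the result of this iteration.
func searchWithAspiration(depth, prevScore int) int {
	window := 25 // made-up initial half-width, in centipawns
	alpha, beta := prevScore-window, prevScore+window
	for {
		score := alphaBeta(depth, alpha, beta)
		switch {
		case score <= alpha:
			// Fail low: widen downwards and re-search.
			alpha -= window
			window *= 2
		case score >= beta:
			// Fail high: widen upwards and re-search; the move that failed
			// high is not a trustworthy PV move until the re-search is done.
			beta += window
			window *= 2
		default:
			return score // inside the window: the score (and PV) can be trusted
		}
	}
}

func main() {
	// The previous iteration said +50; the "true" score of 130 forces two
	// fail-highs and re-searches before the score settles.
	fmt.Println(searchWithAspiration(8, 50))
}

So a score that fails high against the narrow window only becomes trustworthy after the widened re-search.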
But imagine: when Qe5 was searched, a beta cutoff happened (I have an aspiration window), given the code below:
Code:
if score > bestscore {
	// Potential PV move, let's copy it to the current pv-line
	e.innerLines[searchHeight].AddFirst(move)
	e.innerLines[searchHeight].ReplaceLine(e.innerLines[searchHeight+1])
	if score >= beta {
		// Fail high: store a lower bound in the TT, update history, and cut off
		e.TranspositionTable.Set(hash, move, score, depthLeft, LowerBound, e.Ply)
		// e.AddKillerMove(move, searchHeight)
		e.AddHistory(move, move.MovingPiece(), move.Destination(), depthLeft)
		return score
	}
	bestscore = score
	hashmove = move
}
Or maybe I am missing something?
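And if the scenario above is what happened, one possible fix (just a rough sketch; Move, Line, and the helper below are simplified stand-ins, not my actual types) would be to handle the beta cutoff before touching the PV line, so a fail-high move can never replace the current PV:

Code:
package main

import "fmt"

// Move and Line are simplified, made-up stand-ins for the engine's real move
// and PV-line types; they exist only to show the ordering of the checks.
type Move string

type Line struct{ moves []Move }

// set replaces the line with the given move followed by the child line.
func (l *Line) set(move Move, child *Line) {
	l.moves = append([]Move{move}, child.moves...)
}

// onRootMoveSearched handles one searched move: the beta-cutoff check comes
// first, and the PV line is only touched once the score is known to be
// inside the window, so a fail-high move cannot overwrite the current PV.
func onRootMoveSearched(move Move, score, beta int, bestscore *int,
	hashmove *Move, pv, childPV *Line) (cutoff bool) {
	if score >= beta {
		// Fail high: store the lower bound in the TT, update history, etc.
		// (omitted here), but leave the PV line alone.
		return true
	}
	if score > *bestscore {
		*bestscore = score
		*hashmove = move
		pv.set(move, childPV)
	}
	return false
}

func main() {
	bestscore, hashmove := -30000, Move("")
	pv := &Line{}

	// A move that scores inside the window becomes the PV move.
	onRootMoveSearched("moveA", 90, 120, &bestscore, &hashmove,
		pv, &Line{moves: []Move{"replyA"}})
	// A later move fails high against the (aspiration) beta: no PV update.
	onRootMoveSearched("moveB", 130, 120, &bestscore, &hashmove,
		pv, &Line{moves: []Move{"replyB"}})

	fmt.Println(pv.moves, hashmove, bestscore) // [moveA replyA] moveA 90
}

With that ordering, a move that merely fails high against the aspiration window triggers the re-search without overwriting the PV collected from the earlier best move.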