One possibility is this (another idea from Tony): use small windows around the previous score for each move. As an advantage, you get multi-PV for free. (A rough sketch of this is below, after the quote.)

Gian-Carlo Pascutto wrote:
I think I should clarify this. I do believe it's useful to search alternates more deeply, and I do not have any problems comparing moves from different depths. But the question is what to do with the bounds.
I mean, if you don't have an alpha value yet from your first move, what are you going to do with the others? If you don't kick off searches for alternates until you have an alpha, your speedup will obviously suck donkey balls.
If you don't have an alpha score, I can imagine 2 things:
a) You use the previous alpha. This sucks, because the score does usually change from iteration to iteration, and the score you get for an alternate might be useless.
b) You use a small window around the previous alpha. This sucks, because non-zero window searches suck.
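To make the small-window idea from above concrete, here is a rough sketch of what "a small window around the previous score for each move" could look like at the root. Everything in it (the fail-soft negamax search(), the Position plumbing, the 25-centipawn half-window) is made up for illustration; the point is only that each root move gets its own window, so no move has to wait on the first move's alpha, and every move comes back with a real score, i.e. multi-PV for free.

// Sketch only: search(), Position, make_move()/unmake_move() and the
// 25-centipawn half-window are invented names/values, not real engine code.

#include <vector>

struct Position;                                            // engine-specific
int search(Position& pos, int depth, int alpha, int beta);  // fail-soft negamax
void make_move(Position& pos, int move);
void unmake_move(Position& pos, int move);

struct RootMove {
    int move;
    int prevScore;   // this move's score from the previous iteration
    int score;       // score found at the current iteration
};

constexpr int INF    = 30000;
constexpr int MARGIN = 25;   // half-window width in centipawns, pure guesswork

void search_root(Position& pos, std::vector<RootMove>& moves, int depth) {
    // Every root move gets its own window around its own previous score, so
    // the loop iterations are independent: they could be handed to different
    // threads without anyone waiting for the first move's alpha, and every
    // move comes back with a real score (multi-PV for free).
    for (RootMove& rm : moves) {
        int alpha = rm.prevScore - MARGIN;
        int beta  = rm.prevScore + MARGIN;

        make_move(pos, rm.move);
        int score = -search(pos, depth - 1, -beta, -alpha);

        // Fell outside its little window: re-search with a full window.
        if (score <= alpha || score >= beta)
            score = -search(pos, depth - 1, -INF, INF);

        unmake_move(pos, rm.move);
        rm.score = score;
    }
}

The catch is exactly point b) above: those non-zero windows cost more than scout searches, and any move whose score drifted since the last iteration triggers an expensive re-search.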
Another possibility is a mutation of MTD(f). I'm just making this up here, so bear with me. Each move uses its own previous alpha as a starting bound, then does a binary-type search to narrow the score. This part is like Tony's idea, except that the aspiration window is replaced with a series of scout searches. But you also maintain a global lower bound on the real score that you can pass around to each node. If for move 7 you know from a scout search that its score is less than +1.00, and then move 4 comes back at +1.50, you get a cutoff and can abandon move 7.
Of course, you don't actually get the "previous alpha" with this. You'd just have to use whatever information you have on the move, or use the real alpha of the previous iteration. Also, you have to decide what to do if you are doing an initial search around 0.00 and another move comes back at >1.00. Do you abandon the search and restart it around 1.01, or hope for a <=0.00 score?
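For what it's worth, here is a rough sketch of that MTD(f)-mutation, just to pin the idea down. The fail-soft negamax search(), the atomic shared bound and the midpoint probing are all my own guesses at an implementation, not anybody's tested code. Each root move narrows its own score with zero-window scout searches starting from its previous score, and a shared lower bound gives the cutoff from the +1.50 example: once some move has proven a score at or above another move's upper bound, that other move is abandoned.

// Sketch only: search() is an assumed fail-soft negamax; the atomic bound,
// the probing scheme and all names are invented for illustration.

#include <atomic>

struct Position;
int search(Position& pos, int depth, int alpha, int beta);
void make_move(Position& pos, int move);
void unmake_move(Position& pos, int move);

constexpr int INF = 30000;

std::atomic<int> g_lowerBound{-INF};   // best score proven by any root move so far

// Narrow one root move's score with scout searches; returns a proven lower
// bound on it (its exact score unless the move was abandoned as inferior).
int search_root_move(Position& pos, int move, int depth, int guess) {
    int lo = -INF, hi = INF;   // current bounds on this move's score
    int test = guess;          // first probe at the move's previous score

    make_move(pos, move);
    while (lo < hi) {
        int proven = g_lowerBound.load();
        if (proven >= hi)      // another move already scores at least this
            break;             // move's upper bound: cutoff, abandon it
        if (proven > test)     // and if another move came back above our probe,
            test = proven;     // jump the probe up to that bound

        // Zero-window scout search at the test value: window (test, test+1).
        int score = -search(pos, depth - 1, -(test + 1), -test);

        if (score > test) lo = score;   // fail high: real score >= score
        else              hi = score;   // fail low:  real score <= score

        test = lo + (hi - lo) / 2;      // binary-type narrowing for next probe
    }
    unmake_move(pos, move);

    // Publish our proven lower bound so other moves can be cut off by it.
    int cur = g_lowerBound.load();
    while (lo > cur && !g_lowerBound.compare_exchange_weak(cur, lo)) {}

    return lo;
}

This sketch also answers the 0.00-vs-1.01 question in one particular way: instead of restarting from scratch, the next probe jumps up to the published bound, so the very next scout either proves the move can beat it or lets the cutoff abandon the move. Whether that is really better than hoping for the <=0.00 score is exactly the open question.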