There are ways to handle some of that, but I don't think minimaxing is the solution. We did that in Cray Blitz, and in early Crafty versions. The problem is that you will _always_ accept gambits, since that move wins material and the rest do not.

Don wrote:
I think mini-max of the book is a very good way to customize a book for a given program. In general you want your program to come out of book with a position that it thinks is good. A significant percentage of Komodo's losses in tournament games happens when we come out of book with what Komodo considers a losing (or close to losing) position.

Dann Corbit wrote:
Why not mini-max the book with alpha-beta upon update? Doesn't that remove the problem of discovered refutations converging slowly?

hgm wrote:
Well, the example is very extreme, of course. But it could very well happen that a variation believed to be good for the opponent was played a lot, and that the common defense against it was move A, which thus got many plays but a rather poor score. Then a novelty B was found in this position, which basically refuted the previous move. It was played a few times, much less often than A, but with a very good score, which convinced people that it indeed was a refutation. So they stopped playing the previous move, and B never got the chance to get anywhere near the number of plays that A has. This makes scenarios where A has 40% out of 1000 plays while B has 60% out of 100 plays not unrealistic. Polyglot would continue to prefer the poor move by 400:60. And even worse, it would not discourage the preceding move from being played, as all the 600 points scored against it (because it was met by the inferior reply A) continue to count, while they are in fact no longer relevant, since good players will meet it with move B for sure.
Now I know, of course, that serious book builders would select the games from which they build the book by date, pay attention to the ratings, etc., exactly to cleanse the input data of such misleading cues. But this is a case that could very well be handled automatically, by proper application of minimax. If the 40% vs 60% scores for moves A and B would lead you to favor B over A in play by 90% vs 10%, the average score of this node should be set to 58% (90% x 60% + 10% x 40%), i.e. 42% for the opponent's preceding move, based on 1/(0.81/100 + 0.01/1000) ≈ 123 'virtual' games, rather than on 640 out of 1100 games (58%).
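The weighting described above can be sketched in a few lines of Python. The function name and input format here are purely illustrative (nothing from Polyglot or any real book format); the "virtual" game count follows the variance-style weighting implied by the formula in the text:

```python
def blended_score(moves):
    """moves: list of (score, games, play_prob) tuples for one position.
    Returns (probability-weighted score, 'virtual' game count)."""
    # Blend each move's score by how often we would actually play it.
    score = sum(p * s for s, n, p in moves)
    # A move played with probability p, backed by n real games,
    # contributes p^2 / n to the inverse of the virtual game count.
    inv_virtual = sum(p * p / n for s, n, p in moves)
    return score, 1.0 / inv_virtual

# The example from the text: A scores 40% over 1000 games but is played
# only 10% of the time; B scores 60% over 100 games and is played 90%.
s, v = blended_score([(0.40, 1000, 0.10), (0.60, 100, 0.90)])
# s ≈ 0.58, v ≈ 123 'virtual' games
```

The node's score then rests on roughly 123 effective games rather than the raw 1100, so the rarely-played but well-scoring novelty B dominates as intended.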
However, I believe that for this to work, the program being tuned must search every book position to ensure there is a valid choice (at least from the program's point of view). For example:
In position P the book gives Ne5 and d4, but both moves are inferior to c3, which is not in the book. In this case, any mini-max that passes through position P will be in error. The line might only be good if c3 is allowed, so the computer will wrongly avoid it when playing white, or as black will wrongly think it's a great position to force white into.
So my idea for mini-maxing the book is to evaluate every position and insert the computer's choice, if different, into the book as well. In the cases where the computer has its own TN, you might also consider verifying the new choice with additional thinking time, to avoid gratuitously adding moves to the book (which might also get recursively expanded upon).
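A minimal sketch of that verification step, assuming a hypothetical `search` callable as the engine hook (none of these names come from Crafty, Komodo, or any real book format):

```python
from dataclasses import dataclass

@dataclass
class SearchResult:
    move: str
    score: int  # centipawns, from the side to move's point of view

def verify_book_position(book_moves, search, normal_time, verify_time):
    """book_moves: set of moves the book offers in this position.
    search: callable(time_budget) -> SearchResult (assumed engine hook).
    Returns a move worth adding to the book, or None."""
    best = search(normal_time)
    if best.move in book_moves:
        return None  # book already contains a valid choice
    # The engine has its own TN: re-verify with extra thinking time
    # before gratuitously growing the book.
    confirmed = search(verify_time)
    return confirmed.move if confirmed.move == best.move else None

# Position P from the text: book offers Ne5 and d4, engine prefers c3.
added = verify_book_position({"Ne5", "d4"}, lambda t: SearchResult("c3", 25), 10, 60)
# added == "c3"
```

If the longer verification search changes its mind, nothing is added, which keeps the book from filling up with shallow-search artifacts.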
I have never seen a book that didn't have some kind of omission like this. Cilkchess lost a game to "King" one year because the book did not consider an obvious choice. In general I have not paid a lot of attention to the book in our programs, and it has cost us dearly. There are a few programs that do not test very well or rank very high on the rating lists but get much better results in tournaments because of exceptionally good book preparation. Once you get a terrible position, it's difficult to recover even against a program 200 or 300 Elo weaker.
You can do this minimaxing in at least two different ways. The simplest is to follow each book line to the end, perhaps do a search there, and then minimax the scores back to the root. The only catch is that you have to change things for black, since when you play black you now want the lowest score, not the highest, if you use the same book for both sides. The other is to walk down the book lines, but do a real search at each node as well, to see if there is a better move than the book alternative(s).
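The first approach can be sketched as a negamax over the book tree; negating the score at each ply is one standard way to get the black/white flip right with a single shared book. Here `children` and `leaf_search` are assumed hooks, not any engine's real API:

```python
def book_minimax(pos, children, leaf_search):
    """children(pos) -> list of successor positions reachable by book moves.
    leaf_search(pos) -> score from the side to move's point of view.
    Returns the negamaxed score of pos for the side to move."""
    succ = children(pos)
    if not succ:
        # End of the book line: fall back to a real engine search here.
        return leaf_search(pos)
    # Negamax: a child's score is from the opponent's point of view,
    # so negate it before taking the maximum.
    return max(-book_minimax(c, children, leaf_search) for c in succ)

# Tiny two-leaf book: from the root, lines reach positions "a" and "b",
# which the (stubbed) leaf search scores for the side then to move.
tree = {"root": ["a", "b"], "a": [], "b": []}
scores = {"a": 30, "b": -50}
root_score = book_minimax("root", lambda p: tree[p], lambda p: scores[p])
# root_score == 50: line "b" is bad for the opponent, so good for us
```

The second approach would simply call the engine search at every interior node as well and compare it against `max` over the book children, which is where the "insert the computer's choice" idea above plugs in.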
We finally ended up with a manual system in Cray Blitz. We did the search, but only printed things out when we "thought" we had a better move than the book move, or when the search said that a book move flagged as good really appeared to be bad. That was a pain...