Henk wrote: Actually I do not understand LMR. LMR depends on good move ordering, but good move ordering costs a lot of CPU time. If move ordering is not perfect, then the search depth of at least one move will be reduced while it shouldn't.
This should be:
If move ordering is not perfect and the alpha-beta search algorithm is used, then the search depth of at least one move will be reduced while it shouldn't be, if no fail-high is encountered before that move is reached.
So how big is the chance that a fail-high is encountered before a move is reduced by LMR? At the root level there is never a fail-high when you start with alpha = -infinity, beta = +infinity.
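For context, here is roughly how LMR sits inside a plain alpha-beta (negamax) search. This is a minimal sketch with hypothetical helpers (Position, Move, generate_moves, make_move, unmake_move, is_quiet, evaluate), not any particular engine's code. The part that addresses the "reduced while it shouldn't be" worry is the re-search: a reduced move that still manages to beat alpha gets searched again at full depth, so the cost of reducing the wrong move is an extra search, not a missed move.

[code]
/* Minimal LMR sketch inside a fail-soft negamax search.
   Position, Move, generate_moves, make_move, unmake_move, is_quiet
   and evaluate are hypothetical helpers, not any engine's real API. */

#define INF 30000

int search(Position *pos, int depth, int alpha, int beta)
{
    if (depth <= 0)
        return evaluate(pos);

    Move moves[256];
    int n = generate_moves(pos, moves);   /* assumed sorted best-first */
    int best = -INF;

    for (int i = 0; i < n; i++) {
        make_move(pos, moves[i]);

        /* Reduce late, quiet moves: trust the move ordering. */
        int reduced = (i >= 3 && depth >= 3 && is_quiet(moves[i]));
        int score = -search(pos, depth - 1 - reduced, -beta, -alpha);

        /* If a reduced move unexpectedly beats alpha, re-search it
           at full depth before believing the result. */
        if (reduced && score > alpha)
            score = -search(pos, depth - 1, -beta, -alpha);

        unmake_move(pos, moves[i]);

        if (score > best)
            best = score;
        if (score > alpha)
            alpha = score;
        if (alpha >= beta)
            break;                        /* fail-high: stop searching */
    }
    return best;
}
[/code]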
I did a study on this once, where I basically turned off LMR and simply noted the move number of the BEST move, whether it was a cutoff or not, and kept statistics on it.
The numbers were impressive. The first move was the best move a surprisingly high number of times, but if you combined the first N moves it was ridiculously high. And there was a nice progression downward, so that what the first move didn't catch, the second move caught "most" of the time, then the 3rd move, and so on.
I don't have the numbers but they are buried in my email to Larry, and this was a few years ago. But it is trivial to reproduce this study - why don't you give it a try?
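The instrumentation needed is tiny: at each node, record the 1-based index of the move that turned out best (or that produced the cutoff) and build a histogram. A rough sketch - the counter array and print routine are made-up helpers, not part of any engine:

[code]
/* Histogram of "which move number turned out best" per searched node. */

#include <stdio.h>

#define MAX_TRACKED 64

static unsigned long long best_index_hist[MAX_TRACKED + 1];

/* Call once per node after the move loop, with the 1-based ordinal of
   the move that was best or that caused the cutoff. */
void record_best_index(int best_move_index)
{
    if (best_move_index >= 1) {
        int i = best_move_index > MAX_TRACKED ? MAX_TRACKED : best_move_index;
        best_index_hist[i]++;
    }
}

/* Print cumulative percentages: how often the best move was found
   among the first N moves tried. */
void print_best_index_stats(void)
{
    unsigned long long total = 0, running = 0;
    for (int i = 1; i <= MAX_TRACKED; i++)
        total += best_index_hist[i];
    if (total == 0)
        return;
    for (int i = 1; i <= MAX_TRACKED; i++) {
        running += best_index_hist[i];
        printf("first %2d moves: %6.2f%%\n", i, 100.0 * running / total);
    }
}
[/code]

Printing the cumulative percentages gives exactly the "combined the first N" figures described above.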
I get the impression that you want this to be perfect and that you think it's a complete failure if you reduce the wrong move. If you are thinking that way, then forget about building a strong program, because EVERYTHING in computer chess is about amortized risk. If you extend a move, you are wasting time - how do you know you are extending the right move? Check extensions slow a chess program down and could prevent you from seeing tactics that don't involve any checks - what do you do about that? What about null move pruning? It's amortized risk.
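To make the null move example concrete, the whole trick is a fragment like the one below, placed near the top of the search function before the move loop (R, in_check, make_null_move and the allow_null flag are assumptions, not any specific engine's code). You give the opponent a free move and, if you still fail high at reduced depth, you cut off without searching a single real move - pure amortized risk.

[code]
/* Null move pruning sketch, placed before the move loop in search().
   make_null_move, unmake_null_move, in_check and allow_null are
   hypothetical helpers; R (typically 2 or 3) is the null move reduction. */
if (depth >= 3 && allow_null && !in_check(pos)) {
    make_null_move(pos);
    int score = -search(pos, depth - 1 - R, -beta, -beta + 1);
    unmake_null_move(pos);
    if (score >= beta)
        return score;   /* fail high without trying any real move */
}
[/code]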
What about your evaluation function itself? Do you have mobility? What if mobility is completely irrelevant for some piece? Once we played against Joel Benjamin in a hotel room (years ago, with a different program) and my program placed the rook on the 7th rank when it was useless. Should I remove the rook-on-the-7th bonus? (These days it's a much better rule, but it's still broken, as are all positional heuristics in chess engines.)
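For anyone wondering what such a heuristic looks like in code, it's usually nothing more than a fixed bonus when a rook reaches the opponent's second rank - the value and helper below are illustrative assumptions, not taken from any real program:

[code]
/* Illustrative rook-on-7th evaluation term; the bonus value and the
   rank convention are assumptions, not from any particular program. */
#define ROOK_ON_7TH_BONUS 20   /* centipawns, made-up number */

int rook_on_seventh(int rook_rank, int white_to_evaluate)
{
    /* 0-based ranks: White's 7th rank is index 6, Black's is index 1. */
    int seventh = white_to_evaluate ? 6 : 1;
    return (rook_rank == seventh) ? ROOK_ON_7TH_BONUS : 0;
}
[/code]

The point isn't the code, which is trivial - it's that the condition says nothing about whether the rook is actually doing anything useful there.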
The only thing you can reasonably do is to simply TRY things and see if they work. Take an idea that seems to make sense and TRY it. If it doesn't work, then think about what its weaknesses are and how it can be improved. But the number one consideration is that everything in computer chess is a calculated risk, especially when it comes to pruning and reductions.
Incidentally, it's that way with people too. They miss things because their search is highly directed and highly selective. Do you think they should stop using a selective search?
Don
Capital punishment would be more effective as a preventive measure if it were administered prior to the crime.