jwes wrote:
Don wrote:
jwes wrote:
How about this? In a Q+pawns vs pawns endgame, if the side with only pawns has no pawn past the 5th rank, a 3 ply search is enough to determine if the position is completely won or only probably won.
The key phrase you used here is "probably won" — and if you want a scalable program, you must be 100% certain before you prune absolutely.
Joona said it better than I have been saying it: basically, programs already handle this in a scalable way. All modern programs will tend to adjust their depth according to the certainty in the position. After a6 in the reference position, modern programs will in general look far less deeply than normal, but the depth they do look will progressively increase as they search more deeply. But the point is that only a tiny fraction of the total search time will be spent on such moves.
It's not a waste of time to spend 0.001 percent of your time on a move that you cannot determine with 100% certainty is unplayable. But if you can, then you can save 0.001 percent of your search effort for a huge win.
If you are doing a 100 ply search, 0.001 percent might translate to several ply.
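For what that arithmetic looks like, with an effective branching factor b (a number I'm assuming for illustration; the post doesn't give one): a move whose subtree consumes a fraction f of the total nodes is only about log_b(1/f) plies "behind" the full search in equivalent depth.

```cpp
#include <cmath>

// With effective branching factor b, a subtree costing fraction f of the
// whole search is equivalent in size to a search about log_b(1/f) plies
// shallower than the full one.  Both f and b here are assumptions.
int plyEquivalentGap(double f, double b) {
    return (int)std::lround(std::log(1.0 / f) / std::log(b));
}
```

With b = 6 and f = 0.00001 (0.001 percent), the gap is about 6 plies: on a 100-ply search, even that sliver of effort is comparable in size to a roughly 94-ply search of the move in question.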
The point I was trying to make is:
If you have a massive advantage and you have previously searched this position without the advantage decreasing significantly then you can afford to reduce the search depth much more than you otherwise would.
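One way that rule could be written down (the names, margins, and scaling are mine, not from any particular engine):

```cpp
#include <algorithm>

// Hypothetical sketch: grant extra depth reduction when we hold a massive
// advantage that a previous search of this position already confirmed.
const int BIG_ADVANTAGE = 500;  // centipawns; an assumed threshold

// prevScore: score for this position from an earlier iteration.
// currEval:  static evaluation of the position now.
// Returns extra plies of reduction on top of the normal scheme.
int extraReduction(int prevScore, int currEval, int depth) {
    // Both conditions from the post: the advantage is massive, and it
    // has not decreased significantly since we last searched.
    if (prevScore >= BIG_ADVANTAGE && currEval >= prevScore - 100) {
        // Scale with depth, but always leave at least one ply: the move
        // is reduced, never dropped, so scalability is preserved.
        return std::min(depth / 4, depth - 1);
    }
    return 0;
}
```

The point of the `depth - 1` cap is the reduce-versus-prune distinction: however confident we are, the move still gets searched, so a deeper overall search can overturn a mistaken assessment.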
Yes, then we completely agree.
The issue where there seems to be disagreement is whether it's safe not merely to reduce but to hard prune outright (without consideration for depth or anything else.)
My answer is that if you are not 100% sure that a move is inferior, you should not "hard" prune it. But if you are 99.999% certain, you can probably reduce severely.
I think bishop under-promotion is a good case study. You can prune those out completely, but if you do you weaken the program by some small fraction of an ELO. The primary argument for throwing these out anyway is the overhead issue: is it worth looking at a move that is only best 1 out of a million times? For a practical chess playing program the answer is probably no - my argument is more theoretical than practical.
But I think ultimately, it does weaken the program. One theoretically desirable property of a chess program is that given enough time it should be able to play perfect chess. So having a rule like this means that you have limited the scalability of your program. What this means is that you might have a program that is stronger than some other program today, but on much faster hardware your program might be weaker because you just put a hard limit on how strong it can get (it will always misplay positions where promotion to a bishop is the best move.)
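As a concrete sketch of where such a rule lives (the names are hypothetical, and the queen-and-knight choice is a common convention rather than any particular engine's): the promotion loop of a move generator either emits all four pieces or only the two that are almost always sufficient. Whatever it drops can never be found at any depth - exactly the hard limit on strength described above.

```cpp
#include <vector>

enum Piece { KNIGHT, BISHOP, ROOK, QUEEN };

// Which pieces a pawn may promote to.  A queen covers nearly every case
// and a knight gives checks and forks a queen cannot; rook and bishop
// underpromotions matter only in rare positions (e.g. avoiding
// stalemate), so some generators drop them for speed.
std::vector<Piece> promotionChoices(bool includeRarePromotions) {
    std::vector<Piece> choices = {QUEEN, KNIGHT};
    if (includeRarePromotions) {
        choices.push_back(ROOK);
        choices.push_back(BISHOP);
    }
    return choices;
}
```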
Of course I have no illusions that this is actually a practical consideration - but I think the general principle applies to less clear cut cases.