I only read the last post of this whole thread, so I have probably missed what exactly is being discussed, but I was wondering whether, in the case of Stockfish, this is true. It is true that with every additional ply there is less futility pruning, and, especially at ALL nodes, more moves leaving the node will be searched, unless one of the moves searched leads to a beta cutoff.

mcostalba wrote:IMHO the intellectual effort needed to understand this matter is the following:
Let's say I search at depth 20, then I search at depth 21. What have been changed between the two searches ?
Intuitively, and also assuming very low LMR, the answer is: "in the second case I search at higher depth".
IMHO, with today's very aggressive LMR and pruning the more realistic answer is: "in the second case I search a wider tree" (not necessarily a deeper one).
The "depth" parameter in the iterative deepening loop is actually misleading; it should be called "wideness".
Once we grasp this concept, everything becomes easier. But I understand that this concept requires some intellectual effort, especially for traditionally minded people.
But to my knowledge there is very little code that can say: "We have searched this node to N-1 plies and now we should search it to depth N, but the chances that a new search in this node will change anything at the root are so infinitesimally small that it is safe to stop now and accept the N-1 result from the transposition table. Back up this N-1 result to the parent node and proceed with the next move from the parent node."
The only thing that comes close to this is the reduction matrix in Stockfish, which determines the reduced search depth for moves without any extension: they are searched at reduced depth only, unless they fail high in this reduced search, and the reduction is based solely on depth and move count. In theory these moves only need a positional evaluation plus the tactical depth already reached, and there is no need for iterative deepening any more. But even here the reduction does not halt iterative deepening: a reduced search to depth N will always be deeper than a reduced search to depth N - 1.
Code: Select all
// Step 14. Reduced search
// Reduction lookup tables (initialized at startup) and their getter function
int8_t ReductionMatrix[2][64][64]; // [pv][depth][moveNumber]

template <NodeType PV>
inline Depth reduction(Depth d, int mn) {
    return (Depth) ReductionMatrix[PV][Min(d / 2, 63)][Min(mn, 63)];
}
Code: Select all
void init_search() {

  int d;  // depth (ONE_PLY == 2)
  int hd; // half depth (ONE_PLY == 1)
  int mc; // moveCount

  // Init reductions array
  for (hd = 1; hd < 64; hd++) for (mc = 1; mc < 64; mc++)
  {
      double    pvRed = log(double(hd)) * log(double(mc)) / 3.0;
      double nonPVRed = 0.33 + log(double(hd)) * log(double(mc)) / 2.25;
      ReductionMatrix[PV][hd][mc]    = (int8_t) (   pvRed >= 1.0 ? floor(   pvRed * int(ONE_PLY)) : 0);
      ReductionMatrix[NonPV][hd][mc] = (int8_t) (nonPVRed >= 1.0 ? floor(nonPVRed * int(ONE_PLY)) : 0);
  }
}
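To make the numbers concrete, here is a minimal standalone sketch (not engine code; the helper name nonPVReduction is mine) that evaluates the same non-PV formula from the snippet above for a few half-depths, assuming ONE_PLY == 2 as in the comments:

Code: Select all
#include <cmath>
#include <cstdio>
#include <algorithm>

// Sketch of the non-PV reduction formula above, in half-plies.
// halfDepth and moveCount are clamped to 63 as in the lookup table.
int nonPVReduction(int halfDepth, int moveCount) {
    const int ONE_PLY = 2;
    int hd = std::min(halfDepth, 63);
    int mc = std::min(moveCount, 63);
    double r = 0.33 + std::log(double(hd)) * std::log(double(mc)) / 2.25;
    return r >= 1.0 ? int(std::floor(r * ONE_PLY)) : 0;
}

int main() {
    // Reductions grow only logarithmically in both depth and move count.
    for (int hd : {4, 8, 16, 32})
        std::printf("halfDepth %2d, moveCount 10 -> reduction %d half-plies\n",
                    hd, nonPVReduction(hd, 10));
    return 0;
}

Because the reduction grows only logarithmically, doubling the half-depth adds barely one half-ply of reduction; the effective depth (depth minus reduction) still rises with every iteration, which is why a reduced search to depth N stays deeper than one to depth N - 1.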
Regards, Eelco
