Should reduction depend on depth?
Moderator: Ras
-
bob
- Posts: 20943
- Joined: Mon Feb 27, 2006 7:30 pm
- Location: Birmingham, AL
Re: Should reduction depend on depth?
lkaufman wrote: Should the amount of LMR reduction be an increasing function of depth, as in Stockfish, or should it be depth-independent, as in Rybka and Ippo and Critter? In Komodo we have gone both ways on this issue, and still we are not sure. What say you all? Does anyone have solid data on this issue?
Not yet for me. I have also tried both. The problem is, I am not sure that testing this with short games is safe, because the reductions become a major fraction of the search space in terms of depth removed...
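Concretely, the two approaches under discussion look something like this. This is a minimal sketch; the function names, thresholds, and constants are illustrative, not taken from any of the engines mentioned.
Code: Select all
#include <cmath>

// Depth-independent (Rybka/Ippo style): a fixed reduction once a move
// is late enough, no matter how much depth remains.
int reduction_flat(int moveCount) {
    return moveCount >= 4 ? 1 : 0;   // e.g. always one ply for late moves
}

// Depth-dependent (Stockfish style): the reduction grows with both the
// remaining depth and the move number.
int reduction_scaled(int depth, int moveCount) {
    double r = std::log(double(depth)) * std::log(double(moveCount)) / 2.25;
    return r >= 1.0 ? int(r) : 0;    // no reduction until the formula reaches a full ply
}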
-
Eelco de Groot
- Posts: 4694
- Joined: Sun Mar 12, 2006 2:40 am
- Full name: Eelco de Groot
Re: Should reduction depend on depth?
zamar wrote: There is a very logical reason to do this:
When remaining_depth = 5 and we are considering how many plies to reduce, there is an enormous impact on tactical strength. But when remaining_depth = 15 the effect is much milder, and we might want to be just a little bit more aggressive.
It's a known fact that the Elo gap between a 3-ply and a 4-ply fixed-depth search is much bigger than that between a 13-ply and a 14-ply search.
Now, reducing at the end of the move list is nothing more than making the program weaker at less important moves, so that it can search deeper and thus become stronger at more important moves. You may also see this as a kind of Elo-strength transfer from the likely less important lines to the likely more important lines. If you want to keep this Elo-strength transfer close to constant at each search tree level, it automatically means reducing more heavily at higher depths.
But again, we have to deal with the fact that LMR interacts heavily with null move, LMP (= late move pruning), and maybe even futility pruning, so what works best depends a lot on how other things are done.
Elo-wise I do not expect it to be a big issue (~10 Elo) whether you decide to go with depth-dependent or depth-independent reductions.
mcostalba wrote: Joona has written a very high quality post: I suggest newbie engine developers give it a very good read, because it is not common to find such a clear and deep description of how things work.
Marco, as far as I can follow it (of course, I should almost say), I agree with what Joona was writing. The only trouble is that I can't seem to find much code in Stockfish that actually does these Late Move Reductions; the only reductions I see come after the futility pruning. This is from the older Stockfish code, before 2.2.1, but in the more recent Stockfish 2.2.1 code I don't think the reduced search is fundamentally different. The reduction calculations are not so easy to decipher, so maybe I got it wrong:
Code: Select all
// Step 15. Reduced depth search
// If the move fails high it will be re-searched at full depth.
bool doFullDepthSearch = true;
if ( depth > 3 * ONE_PLY
&& !captureOrPromotion
&& !dangerous
&& !is_castle(move)
&& ss->killers[0] != move
&& ss->killers[1] != move
&& (ss->reduction = reduction<PvNode>(depth, moveCount)) != DEPTH_ZERO)
{
Depth d = newDepth - ss->reduction;
alpha = SpNode ? sp->alpha : alpha;
value = d < ONE_PLY ? -qsearch<NonPV>(pos, ss+1, -(alpha+1), -alpha, DEPTH_ZERO)
: - search<NonPV>(pos, ss+1, -(alpha+1), -alpha, d);
ss->reduction = DEPTH_ZERO;
doFullDepthSearch = (value > alpha);
}
and
Code: Select all
// Step 14. Reduced search
// Reduction lookup tables (initialized at startup) and their access function
int8_t Reductions[2][64][64]; // [pv][depth][moveNumber]
template <bool PvNode> inline Depth reduction(Depth d, int mn) {
return (Depth) Reductions[PvNode][Min(d / ONE_PLY, 63)][Min(mn, 63)];
}
Code: Select all
int d; // depth (ONE_PLY == 2)
int hd; // half depth (ONE_PLY == 1)
int mc; // moveCount
// Init reductions array
for (hd = 1; hd < 64; hd++) for (mc = 1; mc < 64; mc++)
{
double pvRed = log(double(hd)) * log(double(mc)) / 3.0;
double nonPVRed = 0.33 + log(double(hd)) * log(double(mc)) / 2.25;
Reductions[1][hd][mc] = (int8_t) ( pvRed >= 1.0 ? floor( pvRed * int(ONE_PLY)) : 0);
Reductions[0][hd][mc] = (int8_t) (nonPVRed >= 1.0 ? floor(nonPVRed * int(ONE_PLY)) : 0);
}
These are not very aggressive reductions; moreover, they apply only to a special class of moves that, for instance, had to pass the futility pruning first. And if the reduced search fails high, the reduction is even undone entirely: the move is re-searched at full depth!
I have not calculated these numbers (I believe somebody once posted a table of this function in this forum), but if log is the base-10 log, the maximum reduction possible seems much smaller than the depth-reducing effect of the futility pruning in Step 13, even before considering that Step 13 is full (forward) pruning of moves, not just a reduction. I do agree you can look at the futility pruning (Step 13) as a form of depth reduction too, but Joona specifically calls this LMP and Futility Pruning. The actual Late Move Reductions are only in Step 14 and, compared to Step 13, much less important IMO! Or am I overlooking something, in the magnitude of the reduction numbers in the Reductions array for instance...
Code: Select all
// Step 13. Futility pruning (is omitted in PV nodes)
if ( !PvNode
&& !captureOrPromotion
&& !inCheck
&& !dangerous
&& move != ttMove
&& !is_castle(move))
{
// Move count based pruning
if ( moveCount >= futility_move_count(depth)
&& (!threatMove || !connected_threat(pos, move, threatMove))
&& bestValue > VALUE_MATED_IN_PLY_MAX) // FIXME bestValue is racy
{
if (SpNode)
lock_grab(&(sp->lock));
continue;
}
// Value based pruning
// We illogically ignore reduction condition depth >= 3*ONE_PLY for predicted depth,
// but fixing this made program slightly weaker.
Depth predictedDepth = newDepth - reduction<PvNode>(depth, moveCount);
futilityValue = futilityBase + futility_margin(predictedDepth, moveCount)
+ H.gain(pos.piece_on(move_from(move)), move_to(move));
if (futilityValue < beta)
{
if (SpNode)
{
lock_grab(&(sp->lock));
if (futilityValue > sp->bestValue)
sp->bestValue = bestValue = futilityValue;
}
else if (futilityValue > bestValue)
bestValue = futilityValue;
continue;
}
// Prune moves with negative SEE at low depths
if ( predictedDepth < 2 * ONE_PLY
&& bestValue > VALUE_MATED_IN_PLY_MAX
&& pos.see_sign(move) < 0)
{
if (SpNode)
lock_grab(&(sp->lock));
continue;
}
}
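To put rough numbers on the Reductions table initialized above, here is a throwaway sketch (not engine code) that re-evaluates the non-PV formula at a few points, assuming the natural log that <cmath> actually provides:
Code: Select all
#include <cmath>
#include <cstdio>

int main() {
    // Non-PV reduction, in plies, for a few (depth, moveCount) entries
    const int depths[] = {4, 8, 16, 32, 63};
    const int moves[]  = {8, 32, 63};
    for (int hd : depths)
        for (int mc : moves) {
            double r = 0.33 + std::log(double(hd)) * std::log(double(mc)) / 2.25;
            std::printf("depth %2d, move %2d -> %d plies\n", hd, mc,
                        r >= 1.0 ? int(r) : 0);
        }
}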
Debugging is twice as hard as writing the code in the first
place. Therefore, if you write the code as cleverly as possible, you
are, by definition, not smart enough to debug it.
-- Brian W. Kernighan
-
Eelco de Groot
- Posts: 4694
- Joined: Sun Mar 12, 2006 2:40 am
- Full name: Eelco de Groot
Re: Should reduction depend on depth?
Eelco de Groot wrote: I have not calculated these numbers (I believe somebody once posted a table of this function in this forum), but if log is the base-10 log, the maximum reduction possible seems much smaller than the depth-reducing effect of the futility pruning in Step 13
Oh, I now see one place where my reasoning goes wrong: log in C is the natural, base-e logarithm. This increases all the numbers of course, by a factor of ln(10) per logarithm relative to base 10:
2.3025850929940456840179914546844
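A quick sanity check of the two readings, with a throwaway snippet (not from Stockfish) that evaluates the non-PV table formula both ways for the deepest entry:
Code: Select all
#include <cmath>
#include <cstdio>

int main() {
    // Non-PV formula for hd = 63, mc = 63, as in the init loop quoted above
    double nat = 0.33 + std::log(63.0)   * std::log(63.0)   / 2.25; // natural log: ~7.96 plies
    double ten = 0.33 + std::log10(63.0) * std::log10(63.0) / 2.25; // base-10 reading: ~1.77 plies
    std::printf("natural: %.2f, base-10: %.2f\n", nat, ten);
}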
Sorry!
Eelco
Debugging is twice as hard as writing the code in the first
place. Therefore, if you write the code as cleverly as possible, you
are, by definition, not smart enough to debug it.
-- Brian W. Kernighan
-
lkaufman
- Posts: 6281
- Joined: Sun Jan 10, 2010 6:15 am
- Location: Maryland USA
- Full name: Larry Kaufman
Re: Should reduction depend on depth?
lkaufman wrote: Should the amount of LMR reduction be an increasing function of depth, as in Stockfish, or should it be depth-independent, as in Rybka and Ippo and Critter? In Komodo we have gone both ways on this issue, and still we are not sure. What say you all? Does anyone have solid data on this issue?
bob wrote: Not yet for me. I have also tried both. The problem is, I am not sure that testing this with short games is safe, because the reductions become a major fraction of the search space in terms of depth removed...
Well, you can test mild depth dependence in fast games, at least to see if the principle is correct. Our results are inconclusive to date.
-
lkaufman
- Posts: 6281
- Joined: Sun Jan 10, 2010 6:15 am
- Location: Maryland USA
- Full name: Larry Kaufman
Re: Should reduction depend on depth?
Eelco de Groot wrote: [...] I do agree you can look at the futility pruning (Step 13) as a form of depth reduction too, but Joona specifically calls this LMP and Futility Pruning. The actual Late Move Reductions are only in Step 14 and, compared to Step 13, much less important IMO! Or am I overlooking something, in the magnitude of the reduction numbers in the Reductions array for instance...
I'm only a novice at reading code, but I think you are somehow confused. Futility pruning only affects the last few plies, whereas Late Move Reduction occurs at any depth above three plies; it is not a subset of Futility. LMR is thus far more consequential than futility pruning; if it knocks a ply or two off of move after move, the total reduction can be huge. Stockfish LMR reductions are the largest of any program with which I am familiar. This is the main reason that Stockfish reaches greater depths than any other program at any normal time limit.
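To illustrate the compounding, here is a hypothetical back-of-envelope sketch (not engine code): walk down a single line in which every move happens to be, say, the 20th one tried, applying the non-PV formula quoted earlier at each node.
Code: Select all
#include <cmath>
#include <cstdio>

// Non-PV reduction from the Stockfish 2.x formula quoted earlier
int reduction(int depth, int moveCount) {
    double r = 0.33 + std::log(double(depth)) * std::log(double(moveCount)) / 2.25;
    return r >= 1.0 ? int(r) : 0;
}

int main() {
    int depth = 20, removed = 0;
    while (depth > 3) {                // LMR applies above 3 plies
        int r = reduction(depth, 20);  // assume each move is the 20th tried
        removed += r;
        depth -= 1 + r;                // one ply for the move itself, plus the reduction
    }
    std::printf("plies removed along this one line: %d\n", removed); // 14 under these assumptions
}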
-
rvida
- Posts: 481
- Joined: Thu Apr 16, 2009 12:00 pm
- Location: Slovakia, EU
Re: Should reduction depend on depth?
lkaufman wrote: Futility pruning only affects the last few plies, whereas Late Move Reduction occurs at any depth above three plies;
True, but the condition "depth > 3 * ONE_PLY" is not mandatory (or not necessarily formulated that way).
lkaufman wrote: it is not a subset of Futility.
True.
lkaufman wrote: LMR is thus far more consequential than futility pruning; if it knocks a ply or two off of move after move, the total reduction can be huge.
True. The keyword here is "recursively".
lkaufman wrote: Stockfish LMR reductions are the largest of any program with which I am familiar. This is the main reason that Stockfish reaches greater depths than any other program at any normal time limit.
(At least partially) true. I just wonder why you didn't agree with this while discussing on the Rybka Forum?
-
Eelco de Groot
- Posts: 4694
- Joined: Sun Mar 12, 2006 2:40 am
- Full name: Eelco de Groot
Re: Should reduction depend on depth?
lkaufman wrote: I'm only a novice at reading code, but I think you are somehow confused. Futility pruning only affects the last few plies, whereas Late Move Reduction occurs at any depth above three plies; it is not a subset of Futility. LMR is thus far more consequential than futility pruning; if it knocks a ply or two off of move after move, the total reduction can be huge. Stockfish LMR reductions are the largest of any program with which I am familiar. This is the main reason that Stockfish reaches greater depths than any other program at any normal time limit.
Hello Larry,
You are right that until now I had underestimated the reductions from Stockfish's Late Move Reductions, mainly in my mind, but that was because I read log as a base-10 log; read that way, an awful lot of those LMR reductions are on the order of ONE_PLY or less, and at large depths that did not seem to have much of an impact. If it is used recursively, that changes things a bit too, of course. Actually, in Rainbow Serpent I had already increased the LMR for this reason, quite possibly too much, by taking the logarithm not of the depth but of (roughly) the depth squared:
Code: Select all
int d; // depth (ONE_PLY == 2)
int hd; // half depth (ONE_PLY == 1)
int mc; // moveCount
// Init reductions array
for (hd = 1; hd < 64; hd++) for (mc = 1; mc < 64; mc++)
{
double pvRed = log(double((hd + 1) * (hd - 1))) * log(double(mc)) / 3.0; // (hd+1)*(hd-1) == hd*hd - 1
double nonPVRed = 0.33 + log(double((hd + 1) * (hd - 1))) * log(double(mc)) / 2.25;
// At hd == 1 the argument is 0 and log(0) is -inf (or NaN when mc == 1),
// so the >= 1.0 tests below keep those entries at 0 either way.
Reductions[1][hd][mc] = (int8_t) ( pvRed >= 1.0 ? floor( pvRed * int(ONE_PLY)) : 0);
Reductions[0][hd][mc] = (int8_t) (nonPVRed >= 1.0 ? floor(nonPVRed * int(ONE_PLY)) : 0);
}
Code: Select all
// Init futility margins array
for (d = 1; d < 24; d++) for (mc = 0; mc < 64; mc++)
FutilityMargins[d][mc] = Value(std::max<int>(
    (100 + (d * d) + int(log(double(d * d))))
  * int(log(double(d * d) / 2) / log(2.0) + 1.001)
  - ((d + 5) * (mc + 1) - 45), 45));
// was: Value(112 * int(log(double(d * d) / 2) / log(2.0) + 1.001) - 8 * mc + 45);
// Init futility move count array
for (d = 0; d < 32; d++)
FutilityMoveCounts[d] = int(3.001 + 0.25 * pow(d, 2.0));
}
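For scale, a throwaway sketch (not engine code) of the move-count thresholds this loop produces, just re-evaluating the 3.001 + 0.25 * d^2 formula:
Code: Select all
#include <cmath>
#include <cstdio>

int main() {
    const int depths[] = {1, 2, 4, 8, 16, 24, 31};
    for (int d : depths)
        std::printf("d = %2d -> prune after move %3d\n",
                    d, int(3.001 + 0.25 * std::pow(double(d), 2.0)));
}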
But for Rainbow Serpent this still means that the Futility Pruning is having more of an effect, because there it is no longer confined to the last 8 plies as in Stockfish but applies over the last 24 plies. I do take some precautions by increasing the margins, but it is still pretty risky of course, especially in lines with sacrifices or semi-sacrifices; those are very hard now to get right. But this is not a problem for just Rainbow Serpent.
Regards, Eelco
Debugging is twice as hard as writing the code in the first
place. Therefore, if you write the code as cleverly as possible, you
are, by definition, not smart enough to debug it.
-- Brian W. Kernighan
-
lkaufman
- Posts: 6281
- Joined: Sun Jan 10, 2010 6:15 am
- Location: Maryland USA
- Full name: Larry Kaufman
Re: Should reduction depend on depth?
lkaufman wrote: Stockfish LMR reductions are the largest of any program with which I am familiar. This is the main reason that Stockfish reaches greater depths than any other program at any normal time limit.
rvida wrote: (At least partially) true. I just wonder why you didn't agree with this while discussing on the Rybka Forum?
On the Rybka Forum we were discussing LOW DEPTH pruning, which I took to mean the last four plies, which is the Ivanhoe definition and is what you call "selective search". On those plies I claim that Ivanhoe and Critter both prune much more than SF. SF uses higher move counts for pruning, larger margins for futility and static null, etc. I think we only disagreed because you had a different definition of LOW DEPTH in mind than I did. There is no question that SF uses more aggressive LMR than Ivanhoe or Critter at higher depths; I haven't done the calculation to determine at what depth SF LMR becomes more aggressive than Ivanhoe's or Critter's, but I imagine it is somewhere around 5 or 6 plies from the root. Please let us know whether you still disagree with me now that I have made clear what I meant by LOW DEPTH.
-
Eelco de Groot
- Posts: 4694
- Joined: Sun Mar 12, 2006 2:40 am
- Full name: Eelco de Groot
Re: Should reduction depend on depth?
Eelco de Groot wrote: But for Rainbow Serpent this still means that the Futility Pruning is having more of an effect, because there it is no longer confined to the last 8 plies as in Stockfish but applies over the last 24 plies.
Actually, to avoid more confusion I should edit that (I was just too late to edit the post): I think the futility pruning in Stockfish only applies when depth < 7 * ONE_PLY:
Code: Select all
inline Value futility_margin(Depth d, int mn) {
return d < 7 * ONE_PLY ? FutilityMargins[Max(d, 1)][Min(mn, 63)]
         : 2 * VALUE_INFINITE;
}
whereas in Rainbow Serpent the gate extends to 24 plies:
Code: Select all
inline Value futility_margin(Depth d, int mn) {
return d < 24 * ONE_PLY ? FutilityMargins[std::max<int>(d, 1)][std::min<int>(mn, 63)]
: 2 * VALUE_INFINITE;
}
inline int futility_move_count(Depth d) {
return d < 24 * ONE_PLY ? FutilityMoveCounts[d] : MAX_MOVES;
}
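A quick illustration of where each version's value-based futility gate is open, assuming ONE_PLY == 2 (hypothetical depths, just exercising the two cut-offs):
Code: Select all
#include <cstdio>

const int ONE_PLY = 2;

int main() {
    // "yes" = futility_margin() returns a finite margin at this depth d (in half-plies)
    for (int d = 6; d <= 48; d += 6)
        std::printf("d = %2d: Stockfish %-3s Rainbow Serpent %s\n", d,
                    d < 7  * ONE_PLY ? "yes" : "no",
                    d < 24 * ONE_PLY ? "yes" : "no");
}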
Debugging is twice as hard as writing the code in the first
place. Therefore, if you write the code as cleverly as possible, you
are, by definition, not smart enough to debug it.
-- Brian W. Kernighan
-
lkaufman
- Posts: 6281
- Joined: Sun Jan 10, 2010 6:15 am
- Location: Maryland USA
- Full name: Larry Kaufman
Re: Should reduction depend on depth?
Eelco de Groot wrote: [...] But for Rainbow Serpent this still means that the Futility Pruning is having more of an effect, because there it is no longer confined to the last 8 plies as in Stockfish but applies over the last 24 plies. I do take some precautions by increasing the margins, but it is still pretty risky of course, especially in lines with sacrifices or semi-sacrifices; those are very hard now to get right.
I don't quite follow where the number 24 plies comes from, but to my thinking ANY algorithm that applies to the last 24 plies is silly. Anything that is valid anywhere near that far from the leaf almost certainly must be valid at any distance from the leaf. Furthermore, it is almost certainly wrong to prune ANY move (except for something like bishop underpromotion, which is extremely rare) more than around ten moves back from the leaf, because pruning at ten moves back eliminates 99.9% of the underlying tree assuming a branching factor of 2; how much more can you save by pruning even earlier?
Can you clarify exactly what you are pruning or reducing 24 plies back, but not 25 plies back?
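One way to check the 99.9% figure (a trivial snippet; the branching factor of 2 is the assumption stated above):
Code: Select all
#include <cstdio>

int main() {
    // With branching factor 2, cutting a line 10 plies before the horizon
    // discards all but 1/2^10 of the leaves that full depth would reach.
    double kept = 1.0 / (1 << 10);
    std::printf("eliminated: %.1f%%\n", 100.0 * (1.0 - kept)); // 99.9%
}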
Regards,
Larry