Should reduction depend on depth?


bob
Posts: 20943
Joined: Mon Feb 27, 2006 7:30 pm
Location: Birmingham, AL

Re: Should reduction depend on depth?

Post by bob »

lkaufman wrote:Should the amount of LMR reduction be an increasing function of depth, as in Stockfish, or should it be depth-independent, as in Rybka and Ippo and Critter? In Komodo we have gone both ways on this issue, and still we are not sure. What say you all? Does anyone have solid data on this issue?
Not yet for me. I have also tried both. Problem is, I am not sure that testing this with short games is safe, because the reductions become a major fraction of the search space in terms of depth removed...
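To make the two alternatives concrete, they might look roughly like the sketch below. This is illustrative only; the constants and thresholds are invented and not taken from Komodo, Stockfish, Rybka, Ippo, or Critter.

Code: Select all

  #include <algorithm>
  #include <cmath>
  #include <cstdio>

  // Depth-independent LMR: the reduction depends only on how late the move
  // comes in the move list, never on the remaining depth.
  int reduction_flat(int moveCount) {
      if (moveCount < 4)  return 0;   // first few moves at full depth
      if (moveCount < 10) return 1;   // illustrative thresholds
      return 2;
  }

  // Depth-dependent LMR: the reduction grows with both remaining depth and
  // move number, here via the familiar log * log shape.
  int reduction_scaled(int depth, int moveCount) {
      if (depth < 3 || moveCount < 4) return 0;
      double r = std::log(double(depth)) * std::log(double(moveCount)) / 2.0;
      return std::min(int(r), depth - 1);  // keep at least one ply of search
  }

  int main() {
      for (int depth : {4, 8, 16, 32})
          std::printf("depth %2d, move 12: flat %d, scaled %d\n",
                      depth, reduction_flat(12), reduction_scaled(depth, 12));
      return 0;
  }

The question in this thread is whether the extra log(depth) term in the second version gains anything over the flat table once everything else is tuned around it.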
Eelco de Groot
Posts: 4694
Joined: Sun Mar 12, 2006 2:40 am
Full name:   Eelco de Groot

Re: Should reduction depend on depth?

Post by Eelco de Groot »

mcostalba wrote:
zamar wrote: There is a very logical reason to do this:
When remaining_depth = 5 and we are considering how many plies to reduce, there is an enormous impact on tactical strength. But when remaining_depth = 15 the effect is much milder, and we might want to be just a little bit more aggressive.

It's a known fact that the ELO gap between a 3-ply and a 4-ply fixed-depth search is much bigger than the gap between 13-ply and 14-ply.

Now reducing at the end of the move list is nothing more than making the program weaker on the less important moves so that it can search deeper, and thus become stronger, on the more important moves. You may also see this as a kind of ELO-strength transfer from the likely less important lines to the likely more important lines. If you want to keep this ELO-strength transfer close to constant at each search tree level, it automatically means reducing more heavily at higher depths.

But again we have to deal with the fact that LMR interacts heavily with null move, LMP (= late move pruning) and maybe even futility pruning, so what works best depends a lot on how the other things are done.

ELO-wise I do not expect it to be a big issue (~10 ELO) whether you decide to go with depth-dependent or depth-independent reductions.
Joona has written a very high quality post: I suggest newbie engine developers give it a very good read, because it is not common to find such a clear and deep description of how things work.
Marco, as far as I can follow it I agree ("of course", I should almost say) with what Joona was writing. The only trouble is that I can't seem to find much code in Stockfish that actually does these Late Move Reductions. The only reductions I see come after the futility pruning:

Code: Select all

      {
          // Step 15. Reduced depth search
          // If the move fails high it will be re-searched at full depth.
          bool doFullDepthSearch = true;

          if (    depth > 3 * ONE_PLY
              && !captureOrPromotion
              && !dangerous
              && !is_castle(move)
              &&  ss->killers[0] != move
              &&  ss->killers[1] != move
              && (ss->reduction = reduction<PvNode>(depth, moveCount)) != DEPTH_ZERO)
          {
              Depth d = newDepth - ss->reduction;
              alpha = SpNode ? sp->alpha : alpha;

              value = d < ONE_PLY ? -qsearch<NonPV>(pos, ss+1, -(alpha+1), -alpha, DEPTH_ZERO)
                                  : - search<NonPV>(pos, ss+1, -(alpha+1), -alpha, d);

              ss->reduction = DEPTH_ZERO;
              doFullDepthSearch = (value > alpha);
          }
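Note the design here: the reduced-depth search runs with a zero window around alpha, and doFullDepthSearch = (value > alpha) means any reduced move that manages to beat alpha is immediately re-searched at full depth, as the comment at the top of the block says. A reduction can therefore cost some re-search overhead, but it cannot on its own let a fail-high through on reduced-depth information.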
This is from older Stockfish code, before 2.2.1, but in the more recent Stockfish 2.2.1 code I don't think the reduced search is fundamentally different. The reduction calculations are not so easy to decipher, so maybe I got it wrong:

Code: Select all

  // Step 14. Reduced search

  // Reduction lookup tables (initialized at startup) and their access function
  int8_t Reductions[2][64][64]; // [pv][depth][moveNumber]

  template <bool PvNode> inline Depth reduction(Depth d, int mn) {

    return (Depth) Reductions[PvNode][Min(d / ONE_PLY, 63)][Min(mn, 63)];
  }
and

Code: Select all

  int d;  // depth (ONE_PLY == 2)
  int hd; // half depth (ONE_PLY == 1)
  int mc; // moveCount

  // Init reductions array
  for (hd = 1; hd < 64; hd++) for (mc = 1; mc < 64; mc++)
  {
      double    pvRed = log(double(hd)) * log(double(mc)) / 3.0;
      double nonPVRed = 0.33 + log(double(hd)) * log(double(mc)) / 2.25;
      Reductions[1][hd][mc] = (int8_t) (   pvRed >= 1.0 ? floor(   pvRed * int(ONE_PLY)) : 0);
      Reductions[0][hd][mc] = (int8_t) (nonPVRed >= 1.0 ? floor(nonPVRed * int(ONE_PLY)) : 0);
  }
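To get a feel for the magnitudes, the table can be computed directly from the formula above (a standalone sketch; note that log here is the natural logarithm, which matters for the discussion below):

Code: Select all

  #include <cmath>
  #include <cstdio>

  int main() {
      const int ONE_PLY = 2;  // half-ply resolution, as in the snippet above
      // Non-PV reduction, in plies, for a few depth / moveCount pairs.
      for (int hd : {4, 8, 16, 32, 63})
          for (int mc : {4, 16, 63}) {
              double nonPVRed = 0.33 + std::log(double(hd)) * std::log(double(mc)) / 2.25;
              int r = nonPVRed >= 1.0 ? int(std::floor(nonPVRed * ONE_PLY)) : 0;
              std::printf("hd=%2d mc=%2d -> %.1f plies\n", hd, mc, r / double(ONE_PLY));
          }
      return 0;
  }

With the natural log the non-PV entries top out around 7.5 plies at hd = mc = 63; read as a base-10 log they would top out around 1.5 plies, which is exactly the misreading corrected in the follow-up post below.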

These are not very aggressive reductions; moreover, they apply only to a special class of moves, ones that for instance had to get past the futility pruning first. And if the null move fails low in a reduced search, the reduction may even be undone entirely!

I have not calculated these numbers (I believe somebody once posted a table of this function in this forum), but if log is the base-10 log, the maximum possible reduction seems much smaller than the depth-reducing effect of the futility pruning in Step 13, even before considering that Step 13 is full (forward) pruning of moves, not just a reduction. I do agree that you can look at the futility pruning (Step 13) as a form of depth reduction too, but Joona specifically calls this LMP and Futility Pruning. The actual Late Move Reductions are only in Step 14 and, compared to Step 13, much less important IMO! Or am I overlooking something, in the magnitude of the numbers in the Reductions array for instance?

Code: Select all

      // Step 13. Futility pruning (is omitted in PV nodes)
      if (   !PvNode
          && !captureOrPromotion
          && !inCheck
          && !dangerous
          &&  move != ttMove
          && !is_castle(move))
      {
          // Move count based pruning
          if (   moveCount >= futility_move_count(depth)
              && (!threatMove || !connected_threat(pos, move, threatMove))
              && bestValue > VALUE_MATED_IN_PLY_MAX) // FIXME bestValue is racy
          {
              if (SpNode)
                  lock_grab(&(sp->lock));

              continue;
          }

          // Value based pruning
          // We illogically ignore reduction condition depth >= 3*ONE_PLY for predicted depth,
          // but fixing this made program slightly weaker.
          Depth predictedDepth = newDepth - reduction<PvNode>(depth, moveCount);
          futilityValue =  futilityBase + futility_margin(predictedDepth, moveCount)
                         + H.gain(pos.piece_on(move_from(move)), move_to(move));

          if (futilityValue < beta)
          {
              if (SpNode)
              {
                  lock_grab(&(sp->lock));
                  if (futilityValue > sp->bestValue)
                      sp->bestValue = bestValue = futilityValue;
              }
              else if (futilityValue > bestValue)
                  bestValue = futilityValue;

              continue;
          }

          // Prune moves with negative SEE at low depths
          if (   predictedDepth < 2 * ONE_PLY
              && bestValue > VALUE_MATED_IN_PLY_MAX
              && pos.see_sign(move) < 0)
          {
              if (SpNode)
                  lock_grab(&(sp->lock));

              continue;
          }
      }
Regards, Eelco
Debugging is twice as hard as writing the code in the first
place. Therefore, if you write the code as cleverly as possible, you
are, by definition, not smart enough to debug it.
-- Brian W. Kernighan
Eelco de Groot
Posts: 4694
Joined: Sun Mar 12, 2006 2:40 am
Full name:   Eelco de Groot

Re: Should reduction depend on depth?

Post by Eelco de Groot »

Eelco de Groot wrote: I have not calculated these numbers (I believe somebody once posted a table of this function in this forum), but if log is the base-10 log, the maximum possible reduction seems much smaller than the depth-reducing effect of the futility pruning in Step 13
Oh, I now see one place where my reasoning goes wrong: log in C is the natural, base-e logarithm, and this increases all the numbers of course! :oops: I still think the Late Move Reductions are less important than the Late Move Pruning in Stockfish, but the magnitude of the reductions has just increased by a factor of

2.3025850929940456840179914546844 (that is, ln 10) :oops:

Sorry!
Eelco
Debugging is twice as hard as writing the code in the first
place. Therefore, if you write the code as cleverly as possible, you
are, by definition, not smart enough to debug it.
-- Brian W. Kernighan
lkaufman
Posts: 6281
Joined: Sun Jan 10, 2010 6:15 am
Location: Maryland USA
Full name: Larry Kaufman

Re: Should reduction depend on depth?

Post by lkaufman »

bob wrote:
lkaufman wrote:Should the amount of LMR reduction be an increasing function of depth, as in Stockfish, or should it be depth-independent, as in Rybka and Ippo and Critter? In Komodo we have gone both ways on this issue, and still we are not sure. What say you all? Does anyone have solid data on this issue?
Not yet for me. I have also tried both. Problem is, I am not sure that testing this with short games is safe, because the reductions become a major fraction of the search space in terms of depth removed...
Well, you can test mild depth dependence in fast games, at least to see if the principle is correct. Our results are inconclusive to date.
lkaufman
Posts: 6281
Joined: Sun Jan 10, 2010 6:15 am
Location: Maryland USA
Full name: Larry Kaufman

Re: Should reduction depend on depth?

Post by lkaufman »

Eelco de Groot wrote: [full post snipped; quoted in full above] ...The actual Late Move Reductions are only in Step 14 and, compared to Step 13, much less important IMO! Or am I overlooking something, in the magnitude of the numbers in the Reductions array for instance?

I'm only a novice at reading code, but I think you are somehow confused. Futility pruning only affects the last few plies, whereas Late Move Reduction occurs at any depth above three plies; it is not a subset of futility. LMR is thus far more consequential than futility pruning: if it knocks a ply or two off move after move, the total reduction can be huge. Stockfish's LMR reductions are the largest of any program with which I am familiar. This is the main reason that Stockfish reaches greater depths than any other program at any normal time limit.
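The compounding that Larry describes can be illustrated with a back-of-the-envelope node count. The model below is a toy, with a uniform branching factor and a flat one-ply reduction for every move after the first three; real engines re-search and reduce by varying amounts, so treat the numbers as an order-of-magnitude sketch only.

Code: Select all

  #include <cstdio>

  // Nodes in a uniform tree of branching factor b searched to `depth`,
  // where the first `full` moves at each node keep full depth and the
  // rest are searched one ply shallower (applied recursively).
  long long nodes(int depth, int b, int full) {
      if (depth <= 0) return 1;
      return 1 + (long long)full       * nodes(depth - 1, b, full)
               + (long long)(b - full) * nodes(depth - 2, b, full);
  }

  int main() {
      const int b = 8, d = 12;
      std::printf("no reductions:         %lld nodes\n", nodes(d, b, b));
      std::printf("reduce moves 4 and up: %lld nodes\n", nodes(d, b, 3));
      return 0;
  }

Even this mild scheme shrinks the depth-12 tree by roughly three orders of magnitude, which is the point rvida makes below with the keyword "recursively".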
rvida
Posts: 481
Joined: Thu Apr 16, 2009 12:00 pm
Location: Slovakia, EU

Re: Should reduction depend on depth?

Post by rvida »

lkaufman wrote: Futility pruning only affects the last few plies, whereas Late Move Reduction occurs at any depth above three plies;
True, but the condition "depth > 3 * ONE_PLY" is not mandatory (or not necessarily formulated that way)
lkaufman wrote: it is not a subset of Futility.
True.
lkaufman wrote: LMR is thus far more consequential than futility pruning; if it knocks a ply or two off of move after move, the total reduction can be huge.
True. The keyword here is "recursively".
lkaufman wrote: Stockfish LMR reductions are the largest of any program with which I am familiar. This is the main reason that Stockfish reaches greater depths than any other program at any normal time limit.
(at least partially) true. I just wonder why you didn't agree with this while we were discussing it on the Rybka Forum?
Eelco de Groot
Posts: 4694
Joined: Sun Mar 12, 2006 2:40 am
Full name:   Eelco de Groot

Re: Should reduction depend on depth?

Post by Eelco de Groot »

lkaufman wrote: I'm only a novice at reading code, but I think you are somehow confused. Futility pruning only affects the last few plies, whereas Late Move Reduction occurs at any depth above three plies; it is not a subset of futility. LMR is thus far more consequential than futility pruning: if it knocks a ply or two off move after move, the total reduction can be huge. Stockfish's LMR reductions are the largest of any program with which I am familiar. This is the main reason that Stockfish reaches greater depths than any other program at any normal time limit.
Hello Larry,

You are right that until now I had underestimated, mainly in my mind, the reductions from Stockfish's Late Move Reductions, but that was because I read log as a base-10 log, in which case an awful lot of those LMR reductions are on the order of ONE_PLY or less, and for large depths that did not seem to have much impact. If it is applied recursively, that also changes things a bit of course. Actually, in Rainbow Serpent I had already increased the LMR for this reason, and quite possibly too much, by taking the logarithm not of the depth but of (nearly) the depth squared: log((hd + 1) * (hd - 1)) = log(hd * hd - 1), which is roughly 2 * log(hd), so the depth term about doubles. :)

Code: Select all

  int d;  // depth (ONE_PLY == 2)
  int hd; // half depth (ONE_PLY == 1)
  int mc; // moveCount

  // Init reductions array
  for (hd = 1; hd < 64; hd++) for (mc = 1; mc < 64; mc++)
  {
      // Note: for hd == 1, (hd + 1) * (hd - 1) is 0 and log(0.0) is -infinity,
      // so the ">= 1.0" guards below simply produce a zero reduction there.
      double    pvRed = log(double((hd + 1) * (hd - 1))) * log(double(mc)) / 3.0;
      double nonPVRed = 0.33 + log(double((hd + 1) * (hd - 1))) * log(double(mc)) / 2.25;
      Reductions[1][hd][mc] = (int8_t) (   pvRed >= 1.0 ? floor(   pvRed * int(ONE_PLY)) : 0);
      Reductions[0][hd][mc] = (int8_t) (nonPVRed >= 1.0 ? floor(nonPVRed * int(ONE_PLY)) : 0);
  }
But for Rainbow Serpent this still means that the futility pruning has more of an effect, because there it is no longer confined to the last 8 plies as in Stockfish but extends to the last 24 plies. I do take some precautions by increasing the margins, but it is still pretty risky of course, especially in lines with sacrifices or semi-sacrifices; those are now very hard to get right. And this is not a problem unique to Rainbow Serpent.

Code: Select all

  // Init futility margins array
  for (d = 1; d < 24; d++) for (mc = 0; mc < 64; mc++)
      FutilityMargins[d][mc] = Value(std::max<int>(
            (100 + (d * d) + int(log(double(d * d))))
                * int(log(double(d * d) / 2) / log(2.0) + 1.001)
          - ((d + 5) * (mc + 1) - 45), 45));
      // previously: Value(112 * int(log(double(d * d) / 2) / log(2.0) + 1.001) - 8 * mc + 45);

  // Init futility move count array
  for (d = 0; d < 32; d++)
      FutilityMoveCounts[d] = int(3.001 + 0.25 * pow(d, 2.0));
}
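For what it's worth, the move-count limits produced by that last loop grow quadratically, which shows how quickly late moves start being dropped outright. A standalone check of the same formula:

Code: Select all

  #include <cmath>
  #include <cstdio>

  int main() {
      // Same formula as the FutilityMoveCounts initialization above.
      for (int d = 0; d < 32; d += 4)
          std::printf("d = %2d -> move-count limit %d\n",
                      d, int(3.001 + 0.25 * std::pow(double(d), 2.0)));
      return 0;
  }

At the horizon only the first three moves escape move-count pruning, while near the top of the table the limit (over 200 moves at d = 31) is effectively never reached.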
I'm not being very precise here, but all these reductions and prunings are still only relative, of course, in my opinion, because they should only be measured against the moves that are not pruned or not reduced. The reduction numbers have no absolute meaning. In the case of Stockfish, one of the effects should be that the PV gets relatively more depth than all its side branches, and that tactical moves get more depth than quieter moves. You can also say that it allows Stockfish to do more iterations in a given amount of time, by making each iteration's tree smaller. But that mainly comes from a few extra iterations at the start! So maybe you should do some experiments with negative depths in the beginning, like Rybka :lol:

Regards, Eelco
Debugging is twice as hard as writing the code in the first
place. Therefore, if you write the code as cleverly as possible, you
are, by definition, not smart enough to debug it.
-- Brian W. Kernighan
lkaufman
Posts: 6281
Joined: Sun Jan 10, 2010 6:15 am
Location: Maryland USA
Full name: Larry Kaufman

Re: Should reduction depend on depth?

Post by lkaufman »

rvida wrote:
lkaufman wrote: Stockfish LMR reductions are the largest of any program with which I am familiar. This is the main reason that Stockfish reaches greater depths than any other program at any normal time limit.
(at least partially) true. I just wonder why you didn't agree with this while we were discussing it on the Rybka Forum?
On the Rybka Forum we were discussing LOW DEPTH pruning, which I took to mean the last four plies; that is the Ivanhoe definition, and it is what you call "selective search". On those plies I claim that Ivanhoe and Critter both prune much more than SF: SF uses higher move counts for pruning, larger margins for futility and static null, etc. I think we only disagreed because you had a different definition of LOW DEPTH in mind than I did. There is no question that SF uses more aggressive LMR than Ivanhoe or Critter at higher depths; I haven't done the calculation to determine at what depth SF's LMR becomes more aggressive than Ivanhoe's or Critter's, but I imagine it is somewhere around 5 or 6 plies from the root. Please let us know whether you still disagree with me now that I have made clear what I meant by LOW DEPTH.
Eelco de Groot
Posts: 4694
Joined: Sun Mar 12, 2006 2:40 am
Full name:   Eelco de Groot

Re: Should reduction depend on depth?

Post by Eelco de Groot »

Eelco de Groot wrote: But for Rainbow Serpent this still means that the futility pruning has more of an effect, because there it is no longer confined to the last 8 plies as in Stockfish but extends to the last 24 plies.
Actually, to avoid more confusion I should edit that; I was just too late to edit the post. I think the futility pruning in Stockfish is only applied when depth < 7 * ONE_PLY:

Code: Select all

  inline Value futility_margin(Depth d, int mn) {

    return d < 7 * ONE_PLY ? FutilityMargins[Max(d, 1)][Min(mn, 63)]
                           : 2 * VALUE_INFINITE;
  }
And in Rainbow Serpent

Code: Select all

  inline Value futility_margin(Depth d, int mn) {

    return d < 24 * ONE_PLY ? FutilityMargins[std::max<int>(d, 1)][std::min<int>(mn, 63)]
                           : 2 * VALUE_INFINITE;
  }

  inline int futility_move_count(Depth d) {

    return d < 24 * ONE_PLY ? FutilityMoveCounts[d] : MAX_MOVES;
  }
Eelco
Debugging is twice as hard as writing the code in the first
place. Therefore, if you write the code as cleverly as possible, you
are, by definition, not smart enough to debug it.
-- Brian W. Kernighan
lkaufman
Posts: 6281
Joined: Sun Jan 10, 2010 6:15 am
Location: Maryland USA
Full name: Larry Kaufman

Re: Should reduction depend on depth?

Post by lkaufman »

Eelco de Groot wrote: [quote and code snipped; see the full post above] ...But for Rainbow Serpent this still means that the futility pruning has more of an effect, because there it is no longer confined to the last 8 plies as in Stockfish but extends to the last 24 plies.

I don't quite follow where the number 24 plies comes from, but to my thinking ANY algorithm that applies to the last 24 plies is silly. Anything that is valid anywhere near that far from the leaf almost certainly must be valid at any distance from the leaf. Furthermore, it is almost certainly wrong to prune ANY move (except for something extremely rare, like a bishop underpromotion) more than around ten moves back from the leaf, because pruning ten moves back already eliminates 99.9% of the underlying tree, assuming a branching factor of 2; how much more can you save by pruning even earlier?

Can you clarify exactly what you are pruning or reducing 24 plies back, but not 25 plies back?
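The arithmetic behind Larry's 99.9% figure: in a uniform tree with an effective branching factor of 2, the bottom n levels contain all but a fraction of about 2^-n of the nodes, so a rule that fires in the last 10 plies already covers 99.9% of the tree. A minimal check of that fraction:

Code: Select all

  #include <cstdio>

  int main() {
      // The fraction of a branching-factor-2 tree lying strictly above its
      // bottom n levels is roughly 2^-n of all nodes.
      for (int n : {4, 8, 10, 16, 24})
          std::printf("above the last %2d plies: %.4f%% of the tree\n",
                      n, 100.0 / double(1 << n));
      return 0;
  }

Whatever a 24-ply rule could add beyond a 10-ply rule is therefore confined to roughly 0.1% of the nodes, which is the substance of the question above.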


Regards,
Larry