LMR

Discussion of chess software programming and technical issues.

Moderators: hgm, Rebel, chrisw

Michael Sherwin
Posts: 3196
Joined: Fri May 26, 2006 3:00 am
Location: WY, USA
Full name: Michael Sherwin

Re: LMR

Post by Michael Sherwin »

bob wrote:
Michael Sherwin wrote:
bob wrote:
Michael Sherwin wrote:
bob wrote:
Don wrote:I would like to add that LMR does result in a major improvement in the strength of my chess program. I think most people have found that too, but I have heard some claim there was little strength gain.
I ran this test a couple of months ago. If you have null-move already, it is worth about +40 Elo in Crafty. If I don't use null-move and add LMR it is worth about +80. Interestingly, if you don't have LMR, null-move adds about +80 also, and if you already have LMR then null-move is worth about +40. I was surprised at first, but when you think about it, the two ideas are complementary.
I am not sure that complementary is the right word.

Null move suffers some tactical shortcomings because of a more shallow search. However, Null move is much stronger overall.

LMR suffers some tactical shortcomings because of a more shallow search. However, LMR is much stronger overall.

When Null move and LMR reductions are both active at the same time, the combined reductions IMO reduce the total benefit.

Using a higher move count before allowing LMR while in null move seems to improve strength, and is stronger than making them mutually exclusive, except at shallower depths, where mutual exclusion seems stronger.

Edit: In RomiChess the remaining moves are first scored with a very shallow search that has an open window if remaining depth is greater than three. This allows for very good move ordering of the remaining moves which in turn improves LMR.
I tried that (higher count while searching null-move) as well as a dozen other ideas over the last few weeks. No improvement for me at all. Most ideas I have tried were "zero effect" in fact, although an occasional idea would be a small negative change. Your idea about searching to order moves sounds impossibly expensive...
Here are some numbers for RomiChess in the original position for a 16 ply search on a Q6600, 2.4 GHz using only one core.

These numbers just reflect how the 'remaining' moves are ordered.

No changes:
47,035,963 nodes, 18,919 msec, 2,486,175 nodes/sec

Not counting the nodes of the move ordering searches:
31,062,742 nodes, 18,778 msec, 1,654,209 nodes/sec

Not doing the move ordering searches, only using piece-square table (fs - ts) move ordering:
195,685,842 nodes, 76,736 msec, 2,553,445 nodes/sec

No ordering of remaining moves:
326,677,020 nodes, 117,960 msec, 2,769,387 nodes/sec

This looks to me as though move ordering searches are cheap.

And N then becomes very relevant for LMR.
I do not understand your "move ordering search" then. It appeared, based on the terminology, to be a real search that somehow gets a score back for each move, which would imply a wide window. In any case, I don't see how one can do a search to order _all_ nodes inside the tree by doing a search, since internal iterative deepening on all nodes would be horribly expensive, particularly since the ALL nodes have no best order anyway...
After the killers:

Code: Select all

    case ADDMOVES:
      h->phase = NEXTMOVE;
      AddMoves(h->node, depth);
      if(h->node == (h+1)->t) return FALSE;
      if(depth > 3) {
        for(node = h->node; node < (h+1)->t; node++) {
          Sort(node, (h+1)->t);
          MakeMove((moves *)&node->m);
          if(GetReps())
            node->score = 0;
          else {
            inShort++;
            node->score = -Search(-beta - 100, -alpha + 100, depth > 9 ? 3 : 1, extendBy);
            inShort--;
          }
          ClrReps();
          TakeBack();
        }
      }
    case NEXTMOVE:
      if(h->node + 1 == (h+1)->t) {
        h->phase = NOMOVES;
        return TRUE;
      }
      Sort(h->node, (h+1)->t);
  }
  return TRUE;

The LMR code:

Code: Select all

    reduce = 1;
    if(depth > 3 && h->phase == NEXTMOVE) {
      count++;
      if(h->node->score <= alpha) {
        if(didNull) reduce += 2; else
        if(count > 1 + ((inNull != 0) << 1) && board[ts] == EMPTY) {
          s32 g = histTblg[fig][fs][ts];
          s32 n = histTbln[fig][fs][ts];
          if(n > 99 && g / n < 12) {
            reduce += 2;
          }
        }
      }
    }
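The history-table condition in the snippet above can be read as a pure predicate. A minimal sketch, assuming the thresholds and the `s32` type from the snippet; the function name is mine:

```c
/* Sketch of the history gate from the LMR snippet above: a quiet
   move is treated as reducible only when the history table has a
   meaningful sample (n > 99 trials) and the average gain g/n is
   low (below 12). Thresholds follow the snippet; the function
   name is hypothetical. */
typedef int s32;   /* 32-bit signed, as in the snippet */

int history_says_reduce(s32 g, s32 n)
{
    return n > 99 && g / n < 12;
}
```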

bob
Posts: 20943
Joined: Mon Feb 27, 2006 7:30 pm
Location: Birmingham, AL

Re: LMR

Post by bob »

Michael Sherwin wrote:
bob wrote:
Michael Sherwin wrote:
bob wrote:
Michael Sherwin wrote:
bob wrote:
Don wrote:I would like to add that LMR does result in a major improvement in the strength of my chess program. I think most people have found that too, but I have heard some claim there was little strength gain.
I ran this test a couple of months ago. If you have null-move already, it is worth about +40 Elo in Crafty. If I don't use null-move and add LMR it is worth about +80. Interestingly, if you don't have LMR, null-move adds about +80 also, and if you already have LMR then null-move is worth about +40. I was surprised at first, but when you think about it, the two ideas are complementary.
I am not sure that complementary is the right word.

Null move suffers some tactical shortcomings because of a more shallow search. However, Null move is much stronger overall.

LMR suffers some tactical shortcomings because of a more shallow search. However, LMR is much stronger overall.

When Null move and LMR reductions are both active at the same time, the combined reductions IMO reduce the total benefit.

Using a higher move count before allowing LMR while in null move seems to improve strength, and is stronger than making them mutually exclusive, except at shallower depths, where mutual exclusion seems stronger.

Edit: In RomiChess the remaining moves are first scored with a very shallow search that has an open window if remaining depth is greater than three. This allows for very good move ordering of the remaining moves which in turn improves LMR.
I tried that (higher count while searching null-move) as well as a dozen other ideas over the last few weeks. No improvement for me at all. Most ideas I have tried were "zero effect" in fact, although an occasional idea would be a small negative change. Your idea about searching to order moves sounds impossibly expensive...
Here are some numbers for RomiChess in the original position for a 16 ply search on a Q6600, 2.4 GHz using only one core.

These numbers just reflect how the 'remaining' moves are ordered.

No changes:
47,035,963 nodes, 18,919 msec, 2,486,175 nodes/sec

Not counting the nodes of the move ordering searches:
31,062,742 nodes, 18,778 msec, 1,654,209 nodes/sec

Not doing the move ordering searches, only using piece-square table (fs - ts) move ordering:
195,685,842 nodes, 76,736 msec, 2,553,445 nodes/sec

No ordering of remaining moves:
326,677,020 nodes, 117,960 msec, 2,769,387 nodes/sec

This looks to me as though move ordering searches are cheap.

And N then becomes very relevant for LMR.
I do not understand your "move ordering search" then. It appeared, based on the terminology, to be a real search that somehow gets a score back for each move, which would imply a wide window. In any case, I don't see how one can do a search to order _all_ nodes inside the tree by doing a search, since internal iterative deepening on all nodes would be horribly expensive, particularly since the ALL nodes have no best order anyway...
After the killers:

Code: Select all

    case ADDMOVES:
      h->phase = NEXTMOVE;
      AddMoves(h->node, depth);
      if(h->node == (h+1)->t) return FALSE;
      if(depth > 3) {
        for(node = h->node; node < (h+1)->t; node++) {
          Sort(node, (h+1)->t);
          MakeMove((moves *)&node->m);
          if(GetReps())
            node->score = 0;
          else {
            inShort++;
            node->score = -Search(-beta - 100, -alpha + 100, depth > 9 ? 3 : 1, extendBy);
            inShort--;
          }
          ClrReps();
          TakeBack();
        }
      }
    case NEXTMOVE:
      if(h->node + 1 == (h+1)->t) {
        h->phase = NOMOVES;
        return TRUE;
      }
      Sort(h->node, (h+1)->t);
  }
  return TRUE;

The LMR code:

Code: Select all

    reduce = 1;
    if(depth > 3 && h->phase == NEXTMOVE) {
      count++;
      if(h->node->score <= alpha) {
        if(didNull) reduce += 2; else
        if(count > 1 + ((inNull != 0) << 1) && board[ts] == EMPTY) {
          s32 g = histTblg[fig][fs][ts];
          s32 n = histTbln[fig][fs][ts];
          if(n > 99 && g / n < 12) {
            reduce += 2;
          }
        }
      }
    }

The code after the killers looks terrible IMHO. You are forcing yourself to search _every_ move, with no chance to bail out on a fail high node. So _every_ node is going to look at that stuff??? I don't see how that can work at all, from an efficiency perspective... And it is violating the idea of a basic PVS search (which you might not be using)...
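The PVS scheme bob is referring to can be shown on a toy tree: after the first move, each sibling gets a zero-width window, and the move loop exits the moment one fails high, which is why pre-searching every move up front fights the whole idea. A minimal sketch; the 3x3 leaf table is a hypothetical stand-in for a real position:

```c
/* Minimal PVS (principal variation search) over a toy depth-2 tree. */
#define WIDTH 3
#define INF   100000

/* leaf scores, from the root player's point of view */
static const int leaf[WIDTH][WIDTH] = {
    {3, 7, 1},
    {5, 2, 8},
    {4, 9, 6},
};

static int root_move, reply_move;   /* tiny stand-in for a move stack */

int pvs(int alpha, int beta, int depth)
{
    if (depth == 0)                 /* leaf: side to move is the root player */
        return leaf[root_move][reply_move];

    int best = -INF;
    for (int m = 0; m < WIDTH; m++) {
        if (depth == 2) root_move = m; else reply_move = m;

        int score;
        if (best == -INF) {
            score = -pvs(-beta, -alpha, depth - 1);      /* first move: full window */
        } else {
            score = -pvs(-alpha - 1, -alpha, depth - 1); /* later moves: zero-width probe */
            if (score > alpha && score < beta)           /* re-search on fail high */
                score = -pvs(-beta, -alpha, depth - 1);
        }
        if (score > best) best = score;
        if (score > alpha) alpha = score;
        if (alpha >= beta)
            break;   /* cutoff: the remaining moves are never searched at all */
    }
    return best;
}
```

The minimax value of this table is max over rows of the row minimum, i.e. 4, and PVS finds it while giving every sibling after the first only a zero-width probe.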
Stan Arts

Re: LMR

Post by Stan Arts »

Just wondering, why doesn't picking moves upon static evaluation work?
Even just looking at score being up or down after making the move I'd figure this would automatically favour moves that attack the king, progress passed pawns, etc.
I'm aware a good move for one side is or has been a bad one for the other and that's a reason why many extensions don't work. However in this case that doesn't apply as you do the same for both sides and should end up with lines of "good" moves reduced less.
Perhaps with some ordering, saving a couple of quiet moves from reduction already gives enough of the same effect?
Too slow?
Chess involves too many quirky moves?

Just wondering. (I've done some experimentation with reductions over the past years but so far it wasn't useful at all: it looks good, then the normal version surprises it with a 0.10-pawn better move, that happens 3x and the game is lost. Still, it's an attractive idea to variably reduce the tree and I am going to try again.)

Stan
bob
Posts: 20943
Joined: Mon Feb 27, 2006 7:30 pm
Location: Birmingham, AL

Re: LMR

Post by bob »

Stan Arts wrote:Just wondering, why doesn't picking moves upon static evaluation work?
Even just looking at score being up or down after making the move I'd figure this would automatically favour moves that attack the king, progress passed pawns, etc.
I'm aware a good move for one side is or has been a bad one for the other and that's a reason why many extensions don't work. However in this case that doesn't apply as you do the same for both sides and should end up with lines of "good" moves reduced less.
Perhaps with some ordering, saving a couple of quiet moves from reduction already gives enough of the same effect?
Too slow?
Chess involves too many quirky moves?

Just wondering. (I've done some experimentation with reductions over the past years but so far it wasn't useful at all: it looks good, then the normal version surprises it with a 0.10-pawn better move, that happens 3x and the game is lost. Still, it's an attractive idea to variably reduce the tree and I am going to try again.)

Stan
I am not sure why it doesn't work with respect to LMR. Speed is not the issue. I did a full evaluation at all interior nodes and did not see a huge speed drop; in fact a full evaluation + swap() (SEE) on each move (which also requires a Make/Unmake) reduced the overall NPS by only about 10%. But it didn't help with LMR at all. There were some potential benefits for better move ordering, and I am going to do some testing there before long as well. But for the LMR issue, using Evaluate() to order the moves so that the bad ones could be reduced and the good ones not just didn't help, and the overall Elo actually dropped by maybe 10 (LMR only gives about 40 when put on top of null-move).
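Whether the score comes from a shallow search (as in RomiChess) or from a static Evaluate() as in the experiment above, the mechanical part of the scheme is the same: score every remaining move, sort descending, and let the move count drive the reductions. A hypothetical sketch; the `Move` struct and the function names are mine:

```c
#include <stdlib.h>

/* hypothetical move record: the packed move plus its ordering score */
typedef struct {
    int m;
    int score;
} Move;

/* descending by score; avoids subtraction overflow */
static int by_score_desc(const void *a, const void *b)
{
    const Move *x = (const Move *)a, *y = (const Move *)b;
    return (y->score > x->score) - (y->score < x->score);
}

/* Sort the remaining moves so that "late" moves -- the LMR
   candidates -- really are the ones the scorer liked least. */
void order_remaining(Move *moves, int n)
{
    qsort(moves, n, sizeof(Move), by_score_desc);
}
```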
diep
Posts: 1822
Joined: Thu Mar 09, 2006 11:54 pm
Location: The Netherlands

Re: LMR

Post by diep »

Stan Arts wrote:Just wondering, why doesn't picking moves upon static evaluation work?
Even just looking at score being up or down after making the move I'd figure this would automatically favour moves that attack the king, progress passed pawns, etc.
I'm aware a good move for one side is or has been a bad one for the other and that's a reason why many extensions don't work. However in this case that doesn't apply as you do the same for both sides and should end up with lines of "good" moves reduced less.
Perhaps with some ordering, saving a couple of quiet moves from reduction already gives enough of the same effect?
Too slow?
Chess involves too many quirky moves?

Just wondering. (I've done some experimentation with reductions over the past years but so far it wasn't useful at all: it looks good, then the normal version surprises it with a 0.10-pawn better move, that happens 3x and the game is lost. Still, it's an attractive idea to variably reduce the tree and I am going to try again.)

Stan
If LMR works for you, then you have some huge knowledge gap in your engine, you test at too few GHz minutes a move, or your search is crap.

This is to some extent mathematically provable.

There are a lot of alternatives to LMR that are far superior IMHO.

In diep's case, LMR is about GHz minutes.
In 1999 i played at bob's quad xeon. The fastest quad xeons that joined were 500MHz. Time control was 3 minutes a move, and games didn't take as long as they do now.

3 minutes * 2 GHz (4 cores * 500MHz) = 6 GHz minutes a move.

Most people who claim LMR works speak about tests with under, or close to, 6 GHz minutes a move.

Against some engines where diep scores well, when i turn on LMR it suddenly loses some games, thanks to all kinds of worst-case problems. Most weako engine programmers will soon conclude LMR works, because either their branching factor without it is just too ugly, or they can now suddenly win 1 game with it instead of scoring 0 out of 100.

So from everyone the perspective is different.

LMR is a method that basically searches your mainline deeper by way of reducing simple moves. My argument is that when simple trivial moves can flip your score in evaluation, your engine has a bug in its evaluation.

In case of Diep, LMR works when diep can't get to 12-14 ply without LMR.
When diep can get to 14 ply without LMR *every move*, so not having a worst case there, then LMR is a lot of elo worse.

In today's fast time controls, that still isn't easy. Just do the math:
4 cores * 2.4 GHz * 5 seconds a move = 9.6 * 1/12 = 0.8 GHz minutes a move.
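The yardstick used here, aggregate clock speed times minutes of thinking, can be written out as a one-line formula (a trivial sketch; the function name is mine):

```c
/* "GHz minutes a move": total core GHz times minutes of thinking
   time. A 1999 quad 500MHz xeon at 3 min/move gives 6.0; a quad
   2.4GHz at 5 s/move gives 0.8. */
double ghz_minutes(int cores, double ghz_per_core, double seconds_per_move)
{
    return cores * ghz_per_core * (seconds_per_move / 60.0);
}
```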

Most testgames of today simply do not give a program more system time than we gave software back in 1999. That's reality.

Computer chess has not progressed a lot there algorithmically. Most conclusions get drawn based upon these quickie time controls by most posters in CCC.

To quote SMK: "i see a big difference between testing at blitz time controls versus slower time controls".

Yes in diep i DEFINITELY see that.

LMR has 2 positional problems:
a) you compare different search line lengths with each other, so every inaccuracy in your evaluation suddenly makes your engine play worse positional moves, as some reduced line implicitly has a bigger horizon effect than a deeper line. I tried keeping track of that, but it didn't help either.
b) getting a fail high to a much better positional move is a lot harder. Even when LMR gives you 4 ply, not seldom a better positional move takes +5 or +6 ply to see.

LMR has numerous search problems, i'll just list a few:
a) the odds that the best move is within the selection of moves that you do not reduce are really tiny
b) when you do not reduce tactics with it, you hardly search deeper in some positions
c) the better your move ordering, the less plies LMR wins

I concluded LMR is a cheapo blitz algorithm that's perfect for those who do not want to work at having a good branching factor.

Way more interesting than LMR i'd say is forward pruning in the last few plies. I'm soon gonna revive an experiment where i do search replacement: in the last few plies, when for example the score is far above beta, i replace diep's slow search by some very fast tactical searcher that just has a look whether there is some sort of threat preventing us from giving a cutoff here statically.

It's not as good as nullmove, yet it's stronger than direct static forward pruning.

Yet the fast tactical searcher searches at 2.5 million nps at a k7 2.1Ghz (that includes hashtable, which of course for just last few plies i won't be using) and diep's main search searches at 100k nps at this hardware thanks to its slow evaluation function.

Say 90% of the nodes in the last 3 ply or so are 3 pawns away from beta.
Nullmove has a 70% cutrate in the last 3 ply. These nullmoves eat on average 7 nodes. In total, doing nullmove in the last 3 plies accounts for 50% of the total nodes of Diep.

Getting all this to work is not as easy as it seems. A lot remains to be done to get it working.
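The decision rule being described, stand pat only when the static score is far above beta AND a cheap tactical probe finds no threat, might look like this. A sketch under stated assumptions: the names, the 3-pawn margin packaging, and the int-flag interface are mine, not diep's actual code:

```c
/* Sketch of "search replacement" in the last few plies: a static
   cutoff is trusted only when the score is far above beta AND a
   cheap tactical probe (its verdict passed in as threat_found)
   sees no refutation. Plain futility/static pruning would skip
   the probe entirely. */
enum { NO_CUTOFF = 0, CUTOFF = 1 };

static const int PRUNE_MARGIN = 300;   /* ~3 pawns, as in the post */

int try_static_cutoff(int static_eval, int beta, int threat_found)
{
    if (static_eval - PRUNE_MARGIN < beta)
        return NO_CUTOFF;        /* not far enough above beta */
    if (threat_found)
        return NO_CUTOFF;        /* the fast tactical searcher vetoed the cut */
    return CUTOFF;               /* safe to stand pat */
}
```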

You see, i don't make a secret of what i'm doing in search there.

Note the idea in itself is not new. Doing a fast simple search in the last few plies has been done in so many programs, over so many decades, and in so many forms that nothing of this idea is new.

But will it win elo for Diep?

I don't know. Whatever i tried there in past years, it all failed Elo-wise.

Correcting evaluation by means of search seems to be really important. All these forward pruning ideas as well as LMR are not doing that. They do the opposite. They cause misevaluations to play a more dominant role.

I'd argue LMR can only work in evaluations that have been really well tuned.

That implicitly means your eval is utmost tiny.

Vincent
bob
Posts: 20943
Joined: Mon Feb 27, 2006 7:30 pm
Location: Birmingham, AL

Re: LMR

Post by bob »

Vincent, that is a bunch of nonsense. LMR can easily be proven to be perfectly sound. Here's how.

1. Program A, Crafty with no LMR is run on a set of test positions. All I extend at present is moves that give check. Nothing else. We record the trees this program searches for each position to depth D. Remember, we extend checks, don't extend anything else (except for null-move which all versions will use).

2. Program B. Crafty again. But this version does not extend checks. But it does reduce all non-checks by one ply. We record the trees on the test positions again, except this time we do search to depth D+1 instead of D.

Now we compare the trees, and guess what? They are _identical_. The LMR version produces the same result, with the same tree search space, in the same amount of time, as the non-LMR version. The LMR version reports one ply deeper for its depth, but who cares?

So LMR does work. But we get a bit more than that at present.

Because in the simple examples above, we either extend checks and nothing else, or reduce non-checks and nothing else. So we have categorized moves into two classes, moves we extend and moves we don't. But with current LMR we take this further. We extend checks. We don't extend or reduce some moves that appear to be interesting, and then we reduce the rest. So we recognize three classes of moves, those we extend, those we leave alone, and those we reduce. Which gives us more potential accuracy, for the same search space. Somewhat akin to what Amir does with his odd ply values in Junior.

If you can't make it work, that's your problem. You need to test for thousands of games to verify whether it works or not. I have used everything from 1+1 to 60+60 and it works in both of those and even in a much faster test I use for a quick check of an eval term. I know it works for me, and that it gives about 40 Elo on top of null move. without null-move it adds about 80 Elo. Verified over a huge number of games with no opening book to confuse the issue.
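The three classes of moves described above can be stated as code. A minimal sketch; the predicate flags are hypothetical stand-ins for engine-specific tests:

```c
/* The three-way split: extend checks, leave "interesting" moves
   at full depth, reduce the rest. */
typedef enum { EXTEND, LEAVE_ALONE, REDUCE } MoveClass;

MoveClass classify_move(int gives_check, int is_interesting)
{
    if (gives_check)
        return EXTEND;        /* searched one ply deeper */
    if (is_interesting)
        return LEAVE_ALONE;   /* e.g. hash move, good capture, killer */
    return REDUCE;            /* LMR candidate: searched with reduced depth */
}
```

With only the first class (old scheme) there are two categories; adding the third gives more potential accuracy for the same search space, which is the point being made.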
diep
Posts: 1822
Joined: Thu Mar 09, 2006 11:54 pm
Location: The Netherlands

Re: LMR

Post by diep »

bob wrote:Vincent, that is a bunch of nonsense. LMR can easily be proven to be perfectly sound. Here's how.

1. Program A, Crafty with no LMR is run on a set of test positions. All I extend at present is moves that give check. Nothing else. We record the trees this program searches for each position to depth D. Remember, we extend checks, don't extend anything else (except for null-move which all versions will use).

2. Program B. Crafty again. But this version does not extend checks. But it does reduce all non-checks by one ply. We record the trees on the test positions again, except this time we do search to depth D+1 instead of D.

Now we compare the trees, and guess what? They are _identical_. The LMR version produces the same result, with the same tree search space, in the same amount of time, as the non-LMR version. The LMR version reports one ply deeper for its depth, but who cares?
I hope you realize that you have to clearly define which moves not to reduce with LMR in order to have the search see them to the correct depth; if you already have the static knowledge of what not to reduce, why bother searching at all?

Note both trees are not identical in fact. You search with program B basically to depth D/2 at most or so.

Additionally with LMR you're not reducing first 5 moves or so, so everything depends upon your move ordering. Again that move ordering is not exactly very knowledgeable. It doesn't know that in order to attack the opponent you first have to sac this pawn in order to get an open diagonal towards the opponent king.

Additionally we can safely assume that the node, if anything has been stored at all, had been stored as an upperbound. Hence your odds of selecting the best move are not much better than say 5 / 40 (average number of moves) = 12.5% or so.

So it is easy to obtain the insight that in like 80%+ of the nodes where you get a lowerbound, you applied LMR to the best move anyway, so you didn't even have the CHANCE to get a cutoff at full depth.

Hence my argument that LMR is not a method that allows you to fail high. It just seemingly gives you a bigger depth while not achieving it in practice, as the statistical odds are just too big that you fail to search the best move at full depth.

If you select captures and checks to not get reduced, i know another 100 methods to see tactics better and a lot safer than with LMR.

If your claim is that LMR works for you, i'd advise you to take a look in some old ICCA journals and look for the algorithm ProbCut. Probability Cut is exactly the same as LMR, just a tad more dramatic.
bob wrote: So LMR does work. But we get a bit more than that at present.

Because in the simple examples above, we either extend checks and nothing else, or reduce non-checks and nothing else. So we have categorized moves into two classes, moves we extend and moves we don't. But with current LMR we take this further. We extend checks. We don't extend or reduce some moves that appear to be interesting, and then we reduce the rest. So we recognize three classes of moves, those we extend, those we leave alone, and those we reduce. Which gives us more potential accuracy, for the same search space. Somewhat akin to what Amir does with his odd ply values in Junior.

If you can't make it work, that's your problem. You need to test for thousands of games to verify whether it works or not. I have used everything from 1+1 to 60+60 and it works in both of those and even in a much faster test I use for a quick check of an eval term. I know it works for me, and that it gives about 40 Elo on top of null move. without null-move it adds about 80 Elo. Verified over a huge number of games with no opening book to confuse the issue.
bob
Posts: 20943
Joined: Mon Feb 27, 2006 7:30 pm
Location: Birmingham, AL

Re: LMR

Post by bob »

diep wrote:
bob wrote:Vincent, that is a bunch of nonsense. LMR can easily be proven to be perfectly sound. Here's how.

1. Program A, Crafty with no LMR is run on a set of test positions. All I extend at present is moves that give check. Nothing else. We record the trees this program searches for each position to depth D. Remember, we extend checks, don't extend anything else (except for null-move which all versions will use).

2. Program B. Crafty again. But this version does not extend checks. But it does reduce all non-checks by one ply. We record the trees on the test positions again, except this time we do search to depth D+1 instead of D.

Now we compare the trees, and guess what? They are _identical_. The LMR version produces the same result, with the same tree search space, in the same amount of time, as the non-LMR version. The LMR version reports one ply deeper for its depth, but who cares?
I hope you realize that you have to clearly define which moves not to reduce with LMR in order to have the search see them to the correct depth; if you already have the static knowledge of what not to reduce, why bother searching at all?
I already know, in my example, which moves to reduce (everything but checks). In the current (real) version, I have ideas about what to not reduce / reduce, and I'm still looking at other possibilities learned dynamically as the search progresses. It clearly works. I found no significant difference between very fast games and 60+60 games which take about 4 hours or so to play, when I compare with/without LMR. LMR adds about +40 Elo no matter the time control.

Note both trees are not identical in fact. You search with program B basically to depth D/2 at most or so.
You did not read my explanation. In one program I extend checks and leave everything else alone. In the other I leave checks alone and reduce everything else. Those are _identical_ algorithms. The second will search 1 ply deeper, but the PV and tree will be identical for both...


Additionally with LMR you're not reducing first 5 moves or so, so everything depends upon your move ordering. Again that move ordering is not exactly very knowledgeable. It doesn't know that in order to attack the opponent you first have to sac this pawn in order to get an open diagonal towards the opponent king.
Maybe or maybe not. I might reduce _all_ moves. I only exclude the hash move, non-losing captures, and killers before I start reducing... It is possible to have no hash move, no good captures (or even no captures at all), and no usable killers. So I can reduce the very first move I search.



Additionally we can safely assume that the node, if anything has been stored at all, had been stored as an upperbound. Hence your odds of selecting the best move are not much better than say 5 / 40 (average number of moves) = 12.5% or so.

So it is easy to obtain the insight that in like 80%+ of the nodes where you get a lowerbound, you applied LMR to the best move anyway, so you didn't even have the CHANCE to get a cutoff at full depth.

Hence my argument that LMR is not a method that allows you to fail high. It just seemingly gives you a bigger depth while not achieving it in practice, as the statistical odds are just too big that you fail to search the best move at full depth.

All I can say is "where is the +40 Elo coming from?" It is not imaginary, it is based on tens of thousands of test games against multiple opponents.

If you select captures and checks to not get reduced, i know another 100 methods to see tactics better and a lot safer than with LMR.

If your claim is that LMR works for you, i'd advise you to take a look in some old ICCA journals and look for the algorithm ProbCut. Probability Cut is exactly the same as LMR, just a tad more dramatic.
bob wrote: So LMR does work. But we get a bit more than that at present.

Because in the simple examples above, we either extend checks and nothing else, or reduce non-checks and nothing else. So we have categorized moves into two classes, moves we extend and moves we don't. But with current LMR we take this further. We extend checks. We don't extend or reduce some moves that appear to be interesting, and then we reduce the rest. So we recognize three classes of moves, those we extend, those we leave alone, and those we reduce. Which gives us more potential accuracy, for the same search space. Somewhat akin to what Amir does with his odd ply values in Junior.

If you can't make it work, that's your problem. You need to test for thousands of games to verify whether it works or not. I have used everything from 1+1 to 60+60 and it works in both of those and even in a much faster test I use for a quick check of an eval term. I know it works for me, and that it gives about 40 Elo on top of null move. without null-move it adds about 80 Elo. Verified over a huge number of games with no opening book to confuse the issue.
Uri Blass
Posts: 10268
Joined: Thu Mar 09, 2006 12:37 am
Location: Tel-Aviv Israel

Re: LMR

Post by Uri Blass »

diep wrote:I'd argue LMR can only work in evaluations that have been really well tuned.

That implicitly means your eval is utmost tiny.

Vincent
And it also means that you can later explain the reason that you lost games.

You will claim that, because your evaluation is too big, it is not well tuned, and that somebody with a small tuned evaluation not only has a better evaluation than yours but can also use LMR productively, unlike you.

Uri
diep
Posts: 1822
Joined: Thu Mar 09, 2006 11:54 pm
Location: The Netherlands

Re: LMR

Post by diep »

Uri Blass wrote:
diep wrote:I'd argue LMR can only work in evaluations that have been really well tuned.

That implicitly means your eval is utmost tiny.

Vincent
And it also means that you can later explain the reason that you lost games.

You will claim that, because your evaluation is too big, it is not well tuned, and that somebody with a small tuned evaluation not only has a better evaluation than yours but can also use LMR productively, unlike you.

Uri
And what did you have to add to this algorithmic discussion?

Thanks,
Vincent