Joost Buijs wrote: ↑Mon Oct 11, 2021 7:21 pm
My experience is that the quality of the evaluation has a very big influence on the gain you can get with LMR.
With a better evaluation function you can use more aggressive LMR without making big mistakes.
It's all connected - the quality of the evaluation, history, and LMR - so it's very difficult to predict what the gain should be.
Hmm, thanks, I see. That makes sense. I've recently finished writing a Texel tuner for Blunder, and I used it to add mobility to my evaluation. So I suppose I'll keep improving the evaluation for now and see how that pans out with LMR.
From observing more games played, it looks like LMR is stronger at longer time controls, which makes sense. The 32 Elo figure came from SPRT testing at a time control of 3+0.08s, and the gauntlet I ran to test Blunder's current strength used the same time control. I'm now running another gauntlet at 15+0.1s with the same pool of engines, and the preliminary results look pretty promising.
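For context, the quantity a Texel tuner minimizes is just the mean squared error between a sigmoid of the static evaluation and the actual game results. A rough sketch in Go (the Position type, the evaluate stub, and the scaling constant k are illustrative placeholders here, not Blunder's actual code):

package texel

import "math"

// Position pairs a training position with its game outcome:
// 1.0 for a white win, 0.5 for a draw, 0.0 for a black win.
type Position struct {
	FEN    string
	Result float64
}

// evaluate is a stand-in for the engine's static evaluation,
// in centipawns from white's point of view.
func evaluate(fen string) int { return 0 }

// sigmoid maps a centipawn score to an expected result in [0, 1].
// k is a scaling constant fitted to the data set before tuning starts.
func sigmoid(score, k float64) float64 {
	return 1 / (1 + math.Pow(10, -k*score/400))
}

// meanSquaredError is what the tuner minimizes by nudging evaluation
// weights (material, PSTs, mobility, ...) up and down until no single
// adjustment lowers the error any further.
func meanSquaredError(positions []Position, k float64) float64 {
	var sum float64
	for _, p := range positions {
		diff := p.Result - sigmoid(float64(evaluate(p.FEN)), k)
		sum += diff * diff
	}
	return sum / float64(len(positions))
}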
3+0.08s is too short to be meaningful, really; even SF's STC is longer than that... I would suggest using at least 8+0.8s or something along those lines. After all, you want to exercise the search prunings well before merging in a change.
Thanks, that didn’t occur to me, although it makes sense now that I think about it.
And I haven't merged anything into main yet, since I wasn't convinced that 30 Elo was all I could get from LMR. So I'll be running some more tests at longer time controls, something like 15+0.1s.
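For reference, the SPRT itself boils down to accumulating a log-likelihood ratio over the game results and stopping once it crosses a bound. A sketch in Go of the classic trinomial version (the BayesElo model used by tools like cutechess-cli); this is a simplified illustration, and the elo hypotheses here are in BayesElo units, whereas real testers convert from logistic Elo:

package sprt

import "math"

// llr computes the log-likelihood ratio for a running win/draw/loss
// count under two elo hypotheses. The test stops when llr crosses
// math.Log(beta/(1-alpha)) (reject the patch) or
// math.Log((1-beta)/alpha) (accept the patch).
func llr(wins, draws, losses int, elo0, elo1 float64) float64 {
	if wins == 0 || draws == 0 || losses == 0 {
		return 0 // the model needs at least one of each outcome
	}
	n := float64(wins + draws + losses)
	w := float64(wins) / n
	l := float64(losses) / n

	// Fit the draw model to the observed results.
	drawElo := 200 * math.Log10((1-l)/l*(1-w)/w)

	// Win/draw/loss probabilities predicted by a given elo difference.
	probs := func(elo float64) (pw, pd, pl float64) {
		pw = 1 / (1 + math.Pow(10, (drawElo-elo)/400))
		pl = 1 / (1 + math.Pow(10, (drawElo+elo)/400))
		pd = 1 - pw - pl
		return
	}

	w0, d0, l0 := probs(elo0)
	w1, d1, l1 := probs(elo1)
	return float64(wins)*math.Log(w1/w0) +
		float64(draws)*math.Log(d1/d0) +
		float64(losses)*math.Log(l1/l0)
}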
4. Late Move Reduction is implemented.
----------------------------------------
Score of Drofa_dev vs Drofa_1.0.3: 277 - 124 - 199 [0.627] 600
Elo difference: 90.59 +/- 23.10
Finished match
Though this might not be representative of the real value of LMR, as I added LMR some time ago, and from that point on I tuned my engine to work best with this LMR implementation. So removing LMR removes not only the value of LMR itself, but also all the synergies with other methods (evaluation, history heuristic, move ordering, PVS, etc.).
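For readers who haven't implemented it yet, the basic shape of LMR inside a negamax search is roughly the sketch below. The move-count and depth thresholds and all the board/move helpers are illustrative stand-ins, not Drofa's actual code:

package lmr

// Minimal stand-ins so the sketch compiles; a real engine has proper
// board representation, move generation, ordering, and evaluation.
type Move struct{ Quiet bool }

func (m Move) IsQuiet() bool { return m.Quiet }

type Board struct{}

func (b *Board) Apply(m Move) *Board { return &Board{} }
func (b *Board) InCheck() bool       { return false }

func orderedMoves(b *Board) []Move { return nil }
func evaluate(b *Board) int        { return 0 }

// search is a bare-bones fail-hard negamax with late move reductions:
// moves late in the ordering are probed at reduced depth with a null
// window, and only re-searched at full depth if they beat alpha.
func search(pos *Board, depth, alpha, beta int) int {
	if depth <= 0 {
		return evaluate(pos)
	}
	for i, mv := range orderedMoves(pos) {
		child := pos.Apply(mv)
		var score int
		// Reduce only late, quiet, non-checking moves at sufficient depth.
		if i >= 4 && depth >= 3 && mv.IsQuiet() && !child.InCheck() {
			score = -search(child, depth-2, -alpha-1, -alpha)
			if score > alpha {
				// The reduced probe beat expectations: re-search fully.
				score = -search(child, depth-1, -beta, -alpha)
			}
		} else {
			score = -search(child, depth-1, -beta, -alpha)
		}
		if score >= beta {
			return beta
		}
		if score > alpha {
			alpha = score
		}
	}
	return alpha
}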
lithander wrote: ↑Wed Oct 13, 2021 1:55 pm
Took a while to get it to work. The trick was - if I remember correctly - to also introduce history sorting of quiet moves. Before sorting quiet moves I couldn't really get LMR to work for my engine. Also, all my reductions (LMR included) are always multiples of 2, and that kind of symmetry - where you call the eval only on positions with the same color to move - seemed to help with ensuring that evaluations are actually comparable.
Right, that's what I noticed too. LMR didn't become an Elo gain for me until I introduced the history heuristic.
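For anyone following along, the history heuristic itself is tiny: a side-to-move/from-square/to-square table that gets bumped whenever a quiet move causes a beta cutoff and is read back during move ordering. A sketch (the depth*depth bonus is a common convention, not any particular engine's exact scheme):

package history

// table[side][from][to] accumulates a score for quiet moves that
// caused beta cutoffs; higher scores get sorted earlier.
var table [2][64][64]int

// update is called when a quiet move fails high. The depth*depth bonus
// weights cutoffs near the root more heavily than ones near the leaves.
func update(side, from, to, depth int) {
	table[side][from][to] += depth * depth
}

// quietScore is what the move orderer uses to sort quiet moves, after
// the hash move, good captures, and killers have been tried.
func quietScore(side, from, to int) int {
	return table[side][from][to]
}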
I still use (since the '80s) the PST values for (quiet) move ordering, because in those days history tables were impossible with only 8 KB of RAM. Several times I tried to replace them with history tables, because that's the standard, however to no avail. I think that's because the whole move-ordering system within ProDeo is fused with the use of PSTs. Nevertheless, one can try.
90% of coding is debugging, the other 10% is writing bugs.
Thanks, those Drofa results are in line with what I've heard reported about LMR. I'm currently in the process of adding features back in after debugging PVS, so I'll see what kind of impact LMR makes this time around.
I've heard some people propose using PSTs for move ordering, and although I've never seen an implementation myself, I'm certainly not opposed to the idea; it makes sense to me intuitively. If moving a certain piece to a certain square is usually pretty good, you'd probably want to try that move before other quiet moves. I think I'll do some experimenting with PSTs versus history tables to see which one is more effective.
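The PST flavor of quiet-move ordering is about as small as the history version: score each quiet move by the piece-square gain of its destination over its origin. A sketch (the zeroed table is a placeholder; an engine would reuse the same PSTs its evaluation already has):

package ordering

// pst[piece][square] would be the piece-square tables the evaluation
// already uses; zeroed here as placeholders.
var pst [6][64]int

// pstScore ranks a quiet move by how much the moving piece's
// piece-square value improves, so moves toward "usually good" squares
// get tried before other quiet moves.
func pstScore(piece, from, to int) int {
	return pst[piece][to] - pst[piece][from]
}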