The improving variable is described in practically all the sources on GitHub as "very important"...
It is defined as: the static evaluation at the current node > the static evaluation two plies earlier (ply - 2), i.e. the last time the same side was to move.
When this is the case we can apply all kinds of fuzzy logic:
- reduce less in LMR and/or prune less in LMP, and adjust null move, in zillions of variations (see the sketch below).
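To make that concrete, here is a minimal C++ sketch (not taken from any particular engine; the stack layout, the margins and the helper names are illustrative assumptions) of how an improving flag is typically computed and then used to soften LMR and LMP:

```cpp
#include <algorithm>

struct StackEntry {
    int staticEval;   // static evaluation stored when the node was entered
};

// 'ss' points at the current ply's entry; ss[-2] is the entry two plies back.
bool isImproving(const StackEntry* ss) {
    // The side to move is "improving" if its static eval is better than
    // it was two plies ago, i.e. the previous time it was to move.
    return ss->staticEval > ss[-2].staticEval;
}

int lateMoveReduction(int depth, int moveCount, bool improving) {
    // Illustrative base reduction; real engines use tuned tables/formulas.
    int r = depth / 4 + moveCount / 8;
    if (!improving)
        r += 1;                    // reduce one ply more when not improving
    return std::max(r, 0);
}

int lateMovePruningLimit(int depth, bool improving) {
    // Allow more quiet moves to be searched before pruning when improving.
    int base = 3 + depth * depth;
    return improving ? base : base / 2;
}
```

The idea is simply that when the side to move is doing better than it was two plies ago, its quiet moves are less safe to reduce or prune away, so the thresholds are relaxed.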
Things like this can only work when you have a very smooth evaluation function. When there is noise on your evaluation function it will probably hurt more than it helps, and make your search very unstable.
Nowadays every engine looks alike; everybody seems to be doing what the others are doing. I don't think this is a very good development.
You have to go back to roughly the Stockfish 11 timeframe for its introduction. Without Marco Costalba's rigorous approach to atomic testing I doubt it would have worked well, and he knew how to get it technically perfect. We later got word from Mark Lefler and another programmer posting under a pseudonym (not Larry Kaufman, as I initially thought) that it worked well for them too; I think it was on the Rybka forum, but I don't know the exact source. I am glad it has been adopted so widely! That it worked more or less universally strengthens the 'theory', if you may call it that, which is not difficult to follow really: more chess than programming.
Debugging is twice as hard as writing the code in the first
place. Therefore, if you write the code as cleverly as possible, you
are, by definition, not smart enough to debug it.
-- Brian W. Kernighan
Sounds like religion, Eelco. All due respect to Marco C. for his good work, but in what sense did he make it "technically perfect"? There is no direct proof of that. Or is there?
What it obviously does is "shape the tree" in some way. But in what way...
As we speak: the Ed Schroeder reduction, a.k.a. "IIR", is also a white raven, shaping the tree in a way that is very difficult to understand.
Joost Buijs wrote: ↑Mon Oct 20, 2025 7:45 am
Things like this can only work when you have a very smooth evaluation function. When there is noise on your evaluation function it will probably hurt more than it helps, and make your search very unstable.
Nowadays every engine looks alike; everybody seems to be doing what the others are doing. I don't think this is a very good development.
Joost and Eelco: 15 + 16 November, the Leiden tournament, for old times' sake maybe...
The following people have already signed up (participants so far)!
Be quick to join!
Dog Folkert van Heusden
Knight Clubbing Ralf van Aert
Lunar Patrick Hilhorst
Single Malt Hans Secelle/Bart Weststrate/Jeroen Noomen
The Baron Richard Pijl
The King Johan de Koning
Titan4 Aldo Voogt
Arminius Volker Annuss
Spartacus Harm Geert Muller
Bart Weststrate wrote: ↑Mon Oct 20, 2025 9:24 pm
As we speak: the Ed Schroeder reduction, a.k.a. "IIR", is also a white raven, shaping the tree in a way that is very difficult to understand.
Is it so difficult to understand? If we're at a PV node in a part of the tree that has only a low-depth TT entry or none at all, this implies that the previous PV move in the parent node failed low, and we find ourselves in the critical situation of having to discover a new PV.
Because we have a low-depth or missing TT entry, we expect poor move ordering and are consequently at risk of a search explosion in this part of the tree; by reducing the depth, and hence the size of the subtree, we mitigate that risk.
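For anyone following along, a minimal sketch of that reduction, assuming a simple TT entry layout and thresholds of my own choosing (the exact conditions differ per engine):

```cpp
struct TTEntry {
    bool found = false;   // was anything stored for this position?
    int  depth = 0;       // depth of the stored search
    int  move  = 0;       // 0 = no hash move available for ordering
};

int iirAdjustedDepth(int depth, bool pvNode, const TTEntry& tte) {
    // With no hash move (or only a shallow entry) move ordering is expected
    // to be poor, so shrink the subtree to limit a possible search explosion.
    bool weakTTInfo = !tte.found || tte.move == 0 || tte.depth < depth - 4;
    if (pvNode && depth >= 4 && weakTTInfo)
        return depth - 1;
    return depth;
}
```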
But what I meant was that it is shaping the tree in unexpected ways. Hence the confusion.
IID was meant to speed things up somewhat; IIR shapes the tree in a totally different way.
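For contrast, a sketch of classic IID: instead of lowering the depth (as IIR does), it runs a cheaper preliminary search of the same node purely to obtain a move for ordering. Everything here (the SearchResult type, the depth threshold, the depth - 2 offset) is an illustrative assumption, and searchNode merely stands in for an engine's real recursive search:

```cpp
struct SearchResult {
    int score    = 0;
    int bestMove = 0;     // 0 = no move found
};

// Stand-in for the engine's real recursive search, defined elsewhere.
SearchResult searchNode(int depth, int alpha, int beta);

int hashMoveWithIID(int depth, int alpha, int beta, bool pvNode, int ttMove) {
    if (ttMove == 0 && pvNode && depth >= 6) {
        // Cheaper preliminary search of this same node, used only to obtain
        // a move to try first; the node is then searched again at full depth.
        SearchResult r = searchNode(depth - 2, alpha, beta);
        ttMove = r.bestMove;
    }
    return ttMove;        // first move to try in the full-depth search
}
```

So IID spends extra nodes to improve ordering at the same depth, while IIR simply searches shallower, which is exactly why the resulting tree shape feels so different.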