All and Cut nodes
Moderator: Ras
-
- Posts: 4186
- Joined: Tue Mar 14, 2006 11:34 am
- Location: Ethiopia
Re: All and Cut nodes
When I first implemented LMR with node types, my target was to limit where reductions are to be started. That was like 10 years ago, but I had something like nmove>=12:6:3 for PV:CUT:ALL LMR, and it worked well until n_moves >3 or even >1 became the norm. No one I remember did that at the time; in fact most were using the !=PV condition of Fruit. I was using double reductions too, at both CUT and ALL nodes, without using history information. But what I never did is reduce more at CUT nodes, or lower the LMR start move at CUT nodes, because it doesn't make sense. First there is the issue of node type classification, so you would always have ALL nodes for nmoves >=6 or less. I would guess people who reduce more at CUT nodes are reducing for the sake of reducing more! I say this because you have more CUT nodes on average throughout the tree: 50-50 at even plies, but much larger at odd plies (CUT/ALL may be >10x)! Reducing CUT nodes more is very un-intuitive, so the guy who implemented it first (Vasik probably) must have had a revelation, or a bug that worked, probably because CUT nodes are more numerous and the engine benefited from more reductions. I take other explanations for why it works, if indeed it does, with a huge grain of salt.
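In rough C, that scheme might have looked like the sketch below (the 12:6:3 thresholds are from this post; everything else, including the names and the double-reduction threshold, is illustrative, not taken from any actual engine):
Code:
enum { PV_NODE, CUT_NODE, ALL_NODE };

/* Move-count thresholds for starting LMR, indexed by predicted node type */
static const int lmr_start[3] = { 12, 6, 3 };  /* PV : CUT : ALL */

int lmr_reduction(int node_type, int move_count)
{
    if (move_count < lmr_start[node_type])
        return 0;                               /* early move: no reduction */
    if (move_count >= 2 * lmr_start[node_type])
        return 2;                               /* very late move: double reduction */
    return 1;                                   /* late move: single reduction */
}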
-
- Posts: 3241
- Joined: Mon May 31, 2010 1:29 pm
- Full name: lucasart
Re: All and Cut nodes
Daniel Shawul wrote: When I first implemented LMR with node types, my target was to limit where reductions are to be started. That was like 10 years ago, but I had something like nmove>=12:6:3 for PV:CUT:ALL LMR, and it worked well until n_moves >3 or even >1 became the norm.
Yes, that's history. Indeed, testing shows that it's best to simply use > 1 in a basic LMR implementation (with the usual exclusion conditions and with a binary reduction: one ply or nothing).
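A minimal sketch of such a basic implementation, assuming a plain negamax move loop and hypothetical helpers (next_move, is_quiet, and so on are illustrative names, not from any engine):
Code:
typedef int Move;
typedef struct Position Position;
extern Move next_move(Position *pos);
extern void make_move(Position *pos, Move m);
extern void unmake_move(Position *pos, Move m);
extern int  is_quiet(const Position *pos, Move m);
extern int  search(Position *pos, int depth, int alpha, int beta);

int search_moves(Position *pos, int depth, int alpha, int beta)
{
    int move_count = 0, score;
    Move m;
    while ((m = next_move(pos)) != 0) {
        make_move(pos, m);
        if (++move_count > 1 && depth >= 3 && is_quiet(pos, m)) {
            score = -search(pos, depth - 2, -beta, -alpha);  /* reduced by one ply */
            if (score > alpha)                               /* beat alpha: verify */
                score = -search(pos, depth - 1, -beta, -alpha);
        } else {
            score = -search(pos, depth - 1, -beta, -alpha);  /* full depth */
        }
        unmake_move(pos, m);
        if (score >= beta)
            return score;                                    /* fail high */
        if (score > alpha)
            alpha = score;
    }
    return alpha;
}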
Daniel Shawul wrote: No one I remember did that at the time and in fact most were using the !=PV of Fruit.
That's history again. I don't think that's a good idea. In fact, even Fabien himself recognized that it was arbitrary, and that it was likely to be useless or even harmful if properly tested. He was right about that, and these PV-dependent conditions are all useless in my testing, except not doing null move at PV nodes (and even that makes a very small difference, because PV nodes are few). The way Fabien thought of it at the time was to be able to print longer PVs and make Fruit more useful for analysis, that's all. Then people took it for gospel truth because they saw it in Fruit without understanding it, and Toga spread the PV misconceptions further (recapture extension at PV nodes, and similar nonsense).
Daniel Shawul wrote: First there is the issue of node type classification, so you would always have ALL nodes for nmoves >=6 or less.
I think you misread me here. I clearly explained that you should use the initially predicted node type. That's what you should put in the TT entry, since it's what conditions the kind of search that is going to be done at this node. Of course the node type gets refined dynamically during the search.
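For example, the usual static prediction looks something like this sketch (the names are illustrative):
Code:
enum NodeType { PV_NODE, CUT_NODE, ALL_NODE };

/* Predicted type of a child node: the first child of a PV node is PV and
   later children are CUT; children of CUT nodes are ALL and children of
   ALL nodes are CUT.  This prediction is what goes into the TT entry; the
   search then refines it when a node behaves differently than predicted. */
enum NodeType child_type(enum NodeType parent, int move_count)
{
    if (parent == PV_NODE)
        return move_count == 1 ? PV_NODE : CUT_NODE;
    return parent == CUT_NODE ? ALL_NODE : CUT_NODE;
}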
Daniel Shawul wrote: I would guess people who reduce more at CUT nodes are reducing for the sake of reducing more! I say this because you have more CUT nodes on average throughout the tree: 50-50 at even plies, but much larger at odd plies (CUT/ALL may be >10x)! Reducing CUT nodes more is very un-intuitive, so the guy who implemented it first (Vasik probably) must have had a revelation, or a bug that worked, probably because CUT nodes are more numerous and the engine benefited from more reductions. I take other explanations for why it works, if indeed it does, with a huge grain of salt.
Yes. It's un-intuitive, and I really don't see why it would work. But some people have reported success with it, and it's one of the "secrets" of Ippolit, I'm told (I've never managed to read the Ippo source code, because it's so ugly and illegible that I give up after two minutes...).
Theory and practice sometimes clash. And when that happens, theory loses. Every single time.
-
- Posts: 6401
- Joined: Thu Mar 09, 2006 8:30 pm
- Location: Chicago, Illinois, USA
Re: All and Cut nodes
lucasart wrote: Yes, that's history. Indeed, testing shows that it's best to simply use > 1 in a basic LMR implementation (with the usual exclusion conditions and with a binary reduction: one ply or nothing). [...]
Depending on what an engine does, it is logical. Reducing more at a cut node works as a sort of IID. At a cut node, after a few failed attempts, you may be eager to find the move that will fail high as quickly as possible (this is the key). One way to explore this quickly is to reduce more, to scan all the moves fast. Once you find the move that fails high, you re-search it with a proper depth, so you win by skipping all the others at a low depth. What if I do not find the move that fails high? Well, in that case, I may want to re-search ALL of them with a proper depth, since I started with the assumption that this is a cut node. But that is not needed, since returning "fail-low" will automatically trigger a re-search in the parent node, and then the child node will become an ALL node, which will reduce less. I think Don explained something like this somewhere in the thread.
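As a sketch, one move of that scan might look like this (illustrative names only; pos is the position after the move has been made, and a single extra ply stands in for "reduce more"):
Code:
typedef struct Position Position;
extern int search(Position *pos, int depth, int alpha, int beta);

/* At an expected CUT node: scan a move with an extra reduction, and verify
   a fail-high at the proper depth before the cutoff is trusted.  If every
   move fails low, the node simply returns fail-low, and the re-search from
   the parent visits it again as an ALL node with less reduction. */
int scan_move(Position *pos, int depth, int alpha, int beta, int r)
{
    int score = -search(pos, depth - 1 - (r + 1), -beta, -alpha); /* fast scan */
    if (score >= beta)                                            /* candidate cut-move */
        score = -search(pos, depth - 1, -beta, -alpha);           /* proper depth */
    return score;
}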
But this may be very engine dependent. I tried several things and this does not work for me; in fact, now I do the complete opposite. I never reduce twice in a row; that is, at most cut nodes I do not even do LMR. So far I am pretty conservative with LMR, and I will have to revisit all this at some point. But I prefer to err on the conservative side for now, until I get other things improved first.
I did not know this was an "Ippolit secret". I may need to recheck all of this if it works well for others.
Miguel
-
- Posts: 4186
- Joined: Tue Mar 14, 2006 11:34 am
- Location: Ethiopia
Re: All and Cut nodes
michiguel wrote: Depending on what an engine does, it is logical. Reducing more at a cut node works as a sort of IID. [...] But that is not needed, since returning "fail-low" will automatically trigger a re-search in the parent node, and then the child node will become an ALL node, which will reduce less.
After a CUT node you go into ALL nodes directly, so this doesn't make a difference.
Code:
4 ply CUT + 2 ply ALL = 2 ply CUT + 4 ply ALL so why not 3 ply CUT + 3 ply ALL?
-
- Posts: 4186
- Joined: Tue Mar 14, 2006 11:34 am
- Location: Ethiopia
Re: All and Cut nodes
I didn't misread you, but made the note to stress the fact that dynamic node type classification is found to be better for almost everything except this new reduce-CUT-nodes-more thing. The history is kind of important, because so far node type classification was used to fix the starting point of LMR reduction, but this new thing of reducing more with it (and even that at CUT nodes) is weird at the very least.
-
- Posts: 28387
- Joined: Fri Mar 10, 2006 10:06 am
- Location: Amsterdam
- Full name: H G Muller
Re: All and Cut nodes
michiguel wrote: Reducing more at a cut node works as a sort of IID. At a cut node, after a few failed attempts, you may be eager to find the move that will fail high as quickly as possible (this is the key). One way to explore this quickly is to reduce more, to scan all the moves fast. Once you find the move that fails high, you re-search it with a proper depth, so you win by skipping all the others at a low depth.
This indeed sounds a lot like my preferred way of doing IID. I usually deepen cut-moves in a separate loop, without going to the next iteration globally. Like:
Code:
Search(depth)
{
    for(iterDepth = 1; iterDepth <= depth; iterDepth++) {
        for(ALL_MOVES) {
            MakeMove();
            cutDepth = iterDepth;
            do {   /* deepen this move only while it keeps failing high */
                score = -Search(cutDepth - 1);
            } while(score >= beta && ++cutDepth <= depth);
            UnMake();
            if(score >= beta)
                return score;   /* cut-move, now verified at full depth */
        }
    }
}
On later iterDepth iterations, the moves you have already deepened in the cutDepth loop will be satisfied by immediate hash hits.
-
- Posts: 2684
- Joined: Sat Jun 14, 2008 9:17 pm
Re: All and Cut nodes
Daniel Shawul wrote: I must admit my patience runs out quickly when an explanation for a very un-intuitive thing is offered, but I would try to listen if someone posted pseudo-code explaining how re-searches save the day, which is claimed to be the key.
We have committed a patch that
1) Increases the reduction at CUT nodes
2) Limits the increase to nodes whose parent ALL node is reduced
Details are here:
https://github.com/mcostalba/Stockfish/ ... 7386b1742a
I'd second Don's explanation in that this trick is a way to trigger "difficult" positions that otherwise would go unnoticed. So if we expect a node to fail high at depth n, then we assume it should fail high also at depth n-1; if this doesn't happen, it means the position is tricky enough to deserve a re-search at depth n+1.
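Schematically, the committed logic amounts to something like the sketch below; this is not the actual Stockfish code, and all the names here are illustrative:
Code:
extern int base_reduction(int depth, int move_count);

/* One extra ply of reduction at expected CUT nodes, but only when the node
   was itself reached through a reduced move at its ALL-node parent
   (conditions 1) and 2) above). */
int lmr_reduction(int depth, int move_count, int cut_node, int parent_reduced)
{
    int r = base_reduction(depth, move_count);
    if (cut_node && parent_reduced)
        r += 1;
    return r;
}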
-
- Posts: 2292
- Joined: Mon Sep 29, 2008 1:50 am
Re: All and Cut nodes
mcostalba wrote: I'd second Don's explanation in that this trick is a way to trigger "difficult" positions that otherwise would go unnoticed.
I am not sure this was really his explanation. I think his explanation was that at a CUT node you want to find a cutoff as fast as possible, so you apply very aggressive LMR. You can do this because being too aggressive is harmless: if you don't find a cutoff, the node will be re-searched at full depth.
Of course this is not free, as re-searching the node costs time. So it is a tradeoff, as usual. But AFAICS it has nothing to do with triggering tricky positions.
-
- Posts: 2684
- Joined: Sat Jun 14, 2008 9:17 pm
Re: All and Cut nodes
Michel wrote: I am not sure this was really his explanation. I think his explanation was that at a CUT node you want to find a cutoff as fast as possible, so you apply very aggressive LMR. You can do this because being too aggressive is harmless: if you don't find a cutoff, the node will be re-searched at full depth.
Hmmm... in this case I prefer my explanation.
At a CUT node you find the cut moves mostly at the TT move / capture stage, and TT moves / captures are not reduced... so this explanation of speeding up cut-move finding is naive IMHO.
If you reach the stage of quiet moves (the ones that are reduced) and you still haven't found a cut move, it means the node could be a troubled one. In this case this trick could detect issues in the position.
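To make the staging point concrete, here is a rough sketch of a typical staged move picker (the stage names are illustrative, not Stockfish's actual enum):
Code:
/* Typical move-generation stages at a CUT node: the TT move and good
   captures come first and are searched unreduced, so any extra CUT-node
   reduction only ever applies to the quiet moves tried after them. */
enum Stage { STAGE_TT_MOVE, STAGE_GOOD_CAPTURES, STAGE_KILLERS,
             STAGE_QUIETS, STAGE_BAD_CAPTURES };

int may_reduce(enum Stage stage)
{
    return stage == STAGE_QUIETS;   /* only quiets are candidates for LMR */
}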
-
- Posts: 2292
- Joined: Mon Sep 29, 2008 1:50 am
Re: All and Cut nodes
mcostalba wrote: If you reach the stage of quiet moves (the ones that are reduced) and you still haven't found a cut move, it means the node could be a troubled one. In this case this trick could detect issues in the position.
How could reducing the quality of the search possibly detect issues?
The only reason to reduce the quality of the search is to speed things up.
The added thing here is that there is a safety net for the reduced quality of the search at a CUT node: the re-search triggered in the parent node.
EDIT: I have tested this idea in GNU Chess but so far it is at best even. So it could be that Daniel is right and this whole thing is just a way to get a bit more reductions....