Preliminary tests still inconclusive after ~5700 games.
At least it doesn't obviously hurt...
Reductions from internal iterative deepening
Moderator: Ras
-
Evert
- Posts: 2929
- Joined: Sat Jan 22, 2011 12:42 am
- Location: NL
-
Rebel
- Posts: 7475
- Joined: Thu Aug 18, 2011 12:04 pm
- Full name: Ed Schröder
Re: Reductions from internal iterative deepening
Sven Schüle wrote: You assume a different IID implementation, one that actually has >1 iterations. That is of course the original meaning of the name "IID", but today it seems that many programmers, including Evert, just do one Search() call with reduced depth and no iterations there.
Precisely. IID comes in many forms. I use a one-step approach of RD/2 (with a minimum RD of 4) and then re-search at full depth. Various attempts to reduce or extend the re-search failed. One tries the craziest ideas; you never know for sure if you haven't tried.
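The one-step variant described above could be sketched as follows. The depth rule mirrors the RD/2-with-a-minimum-of-4 formula quoted, but the function name and the flow in the comment are hypothetical illustrations, not Rebel's actual code.

```c
#include <assert.h>

/* Hypothetical sketch of the reduced depth for a single-call IID
   pilot search: half the remaining depth, floored at 4, as in the
   RD/2 rule described above. Not taken from any real engine. */
static int iid_depth(int remaining_depth) {
    int rd = remaining_depth / 2;
    return rd < 4 ? 4 : rd;
}

/* The node would then do something like:
     pilot  = search(pos, alpha, beta, iid_depth(d));  // cheap pilot
     result = search(pos, alpha, beta, d);             // full re-search
   with the pilot's best move sorted first in the re-search. */
```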
-
syzygy
- Posts: 5840
- Joined: Tue Feb 28, 2012 11:56 pm
Re: Reductions from internal iterative deepening
Evert wrote:
syzygy wrote: Hgm's point is that your proposed reduction (of all moves of the current node) has more or less the same effect as a reduction performed in the parent for the move leading to the current position. That turns your fail-low condition for the current node into a fail-high condition for the parent node. Your proposal is to not search further (i.e. not do the re-search) if IID returned a value below alpha.
Wrong.
If IID indicates that the node will likely fail low, then the search is reduced. That's the idea, but the devil is, as always, in the details.
So what is wrong? If your details make it wrong, maybe you should have added them in your initial post?
Evert wrote:
syzygy wrote: The point is that your scheme, viewed from the parent node, is similar to an LMR reduction, except that it forgoes the re-search upon a fail high, which I believe most think is necessary.
Sure, but I don't see how that is unique to this proposed scheme: any move that is reduced might not have failed low if searched at full depth. That's why reducing arbitrarily is a bad idea.
That parent node is not going to do the re-search either, because it had no idea the move was being reduced through your IID reduction. (Or if the parent had reduced the move, your IID reduction is reducing it once more.)
Btw, why only reduce when you use IID and not also if you find the node in the TT but with insufficient depth (but with a score that would otherwise cause a cut-off)?
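The symmetric treatment asked about here could look like the following sketch, which treats a too-shallow TT score and an IID pilot score as the same kind of evidence. The helper name and parameters are made up for illustration.

```c
#include <assert.h>

/* Sketch: does a score obtained from a shallower search (a TT entry
   of insufficient depth, or an IID pilot) predict a fail low for the
   current node? Hypothetical helper, not from any real engine. */
static int predicts_fail_low(int have_score, int score,
                             int score_depth, int needed_depth,
                             int alpha) {
    /* The shallow score is only a hint: it came from less depth than
       we need, so it can justify a reduction but not a cutoff. */
    return have_score && score_depth < needed_depth && score <= alpha;
}
```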
-
Evert
- Posts: 2929
- Joined: Sat Jan 22, 2011 12:42 am
- Location: NL
Re: Reductions from internal iterative deepening
syzygy wrote: So what is wrong?
This:
syzygy wrote: Your proposal is to not search further (i.e. not do the re-search) if IID returned a value below alpha.
is not what I suggested.
syzygy wrote: If your details make it wrong, maybe you should have added them in your initial post?
Your snarkiness is misplaced, particularly since I already elaborated on some of the search peculiarities in Jazz. Besides, this is an idea, a suggestion; I didn't say anywhere that it is the perfect and exact final version.
syzygy wrote: The point is that your scheme, viewed from the parent node, is similar to an LMR reduction, except that it forgoes the re-search upon a fail high, which I believe most think is necessary.
I already explained why this is actually not the case.
However, the more I think about it, the more it seems to me that the idea is really not very different from LMR (at the current node that is) and so it may well be redundant with that. I haven't tried tweaking LMR based on the results from IID but that could be worth looking into more.
syzygy wrote: Btw, why only reduce when you use IID and not also if you find the node in the TT but with insufficient depth (but with a score that would otherwise cause a cut-off)?
No reason; in fact I mentioned this inconsistency in an earlier post.
The main difference would be with respect to the search bounds: when I do IID, I use the current [alpha, beta], which will in general be different from the window that was used to generate the TT entry.
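A small sketch of the bound issue mentioned above: a TT score only predicts a fail low against the current [alpha, beta] if its bound type allows it, whereas an IID pilot is run with the current window to begin with. The flag names below are hypothetical.

```c
#include <assert.h>

/* Hypothetical TT bound types. */
enum { BOUND_UPPER, BOUND_LOWER, BOUND_EXACT };

/* A TT score searched with some other window is only usable as a
   fail-low predictor for the current alpha if it is an upper bound
   (or exact) lying at or below alpha; a lower bound says nothing. */
static int tt_predicts_fail_low(int bound, int score, int alpha) {
    return (bound == BOUND_UPPER || bound == BOUND_EXACT) && score <= alpha;
}
```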
-
hgm
- Posts: 28447
- Joined: Fri Mar 10, 2006 10:06 am
- Location: Amsterdam
- Full name: H G Muller
Re: Reductions from internal iterative deepening
Evert wrote: However, the more I think about it, the more it seems to me that the idea is really not very different from LMR (at the current node, that is), and so it may well be redundant with that.
This is still something I don't get. The essence of LMR is that you only reduce 'late' (= presumably useless) moves. The way you described it sounded like you would reduce all moves if an earlier iteration suggested you are in an all-node. And that would make it very different: reducing all moves in a node often boils down to just another way to define depth.
-
Evert
- Posts: 2929
- Joined: Sat Jan 22, 2011 12:42 am
- Location: NL
Re: Reductions from internal iterative deepening
hgm wrote: This is still something I don't get. The essence of LMR is that you only reduce 'late' (= presumably useless) moves. The way you described it sounded like you would reduce all moves if an earlier iteration suggested you are in an all-node.
The (implicit) assumption you make with LMR is that you are in an all-node once you get to the "late" move stage. If you have reason to believe that the current node is not an all-node (despite having gone far down the move list), then you should not apply LMR, because you expect one of the moves to improve alpha and reducing it may hide that.
This is where the similarity comes in: you reduce the search when you have reason to believe that the current node is an all-node. The difference is in when you think you have a reason to believe that: either while going through the move list, or before you start.
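The contrast drawn here can be sketched as two trigger conditions; the thresholds (move 4, depth 3) are arbitrary illustrative values, not tuned constants from any engine.

```c
#include <assert.h>

/* Classic LMR: infer "probable all-node" from being late in the
   move list, and reduce only those late moves. */
static int lmr_reduces(int move_number, int depth) {
    return move_number >= 4 && depth >= 3;
}

/* IID-based variant: infer it from the pilot search before the move
   loop starts, and reduce every move except the first. */
static int iid_reduces(int pilot_failed_low, int move_number) {
    return pilot_failed_low && move_number > 1;
}
```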
-
hgm
- Posts: 28447
- Joined: Fri Mar 10, 2006 10:06 am
- Location: Amsterdam
- Full name: H G Muller
Re: Reductions from internal iterative deepening
Well, that is not my idea of LMR. I would like to see branches with reasonable moves searched at full depth, and only nonsense reduced. Because in a deep tree there are many branches that end in scores just marginally below that of the PV, and upon elongation might actually greatly exceed it and become the new PV. And, not being on the previous PV, they would go through a lot of all-nodes.
By reducing everything, you won't be able to see superior alternative lines before you also reach the same depth as where the superiority shows for all of the garbage branches. And that makes it incredibly expensive to find anything. What you need is a method that reduces the garbage, but has not too much impact on reasonable alternatives.
-
Evert
- Posts: 2929
- Joined: Sat Jan 22, 2011 12:42 am
- Location: NL
Re: Reductions from internal iterative deepening
hgm wrote: Well, that is not my idea of LMR. I would like to see branches with reasonable moves searched at full depth, and only nonsense reduced.
Sure, but doesn't that just become semantics?
hgm wrote: Because in a deep tree there are many branches that end in scores just marginally below that of the PV, and upon elongation might actually greatly exceed it and become the new PV. And, not being on the previous PV, they would go through a lot of all-nodes.
Evidently, but that is true of any reduction scheme.
hgm wrote: By reducing everything, you won't be able to see superior alternative lines before you also reach the same depth as where the superiority shows for all of the garbage branches. And that makes it incredibly expensive to find anything. What you need is a method that reduces the garbage but has not too much impact on reasonable alternatives.
Again, that's true of any reduction scheme, so I fail to see how it would be a specific problem with "IID reductions"?
Note that the main idea is not "reduce everything" (as I said already, the first move is excluded anyway for instance) but the condition used for making reduction decisions.
-
hgm
- Posts: 28447
- Joined: Fri Mar 10, 2006 10:06 am
- Location: Amsterdam
- Full name: H G Muller
Re: Reductions from internal iterative deepening
Let me make sure I understand this, then:
If your IID pilot search (at depth d-4, say) fails low, you would do a d-1 iteration for all moves except the first (which you would search at d), and then call it quits if they all failed low?
That would indeed make it look very much like LMR, except that you would not do the LMR when the pilot search did not fail low.
I am not sure how such a conditional application of LMR would be helpful. If a move in the pilot search fails high, it causes a beta cutoff there, and you would immediately proceed with that move at full depth, as you would sort it in front. If it fails high again, you are done. But if it fails low now, I see no reason why you would expect any of the other moves to fail high in its place, or to have more chance to switch from a fail low at d-1 to a fail high at d than when the first move had not dropped in score to below alpha. So I would prefer to reduce as always.
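The d / d-1 split restated in this post could be written as a depth selector like this sketch (function and parameter names are hypothetical):

```c
#include <assert.h>

/* Sketch of the depth each move would get under the scheme as
   restated above: first move at full depth d; when the d-4 pilot
   search failed low, every later move one ply less. Illustrative only. */
static int move_search_depth(int d, int move_number, int pilot_failed_low) {
    if (pilot_failed_low && move_number > 1)
        return d - 1;   /* reduced iteration for the later moves */
    return d;           /* first move, or no pilot fail low: full depth */
}
```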