Reductions from internal iterative deepening

Discussion of chess software programming and technical issues.

Moderator: Ras

syzygy
Posts: 5911
Joined: Tue Feb 28, 2012 11:56 pm

Re: Reductions from internal iterative deepening

Post by syzygy »

Evert wrote:
hgm wrote:This is kind of the opposite of what I do in Spartacus. Reducing a node when it fails low is logically equivalent to reducing the move in the caller leading to it when it fails high. Usually this is not what you want, as reducing moves that fail high is an invitation to horizon effect.
I see what you're saying, but I'm not sure I follow the logic entirely. Why can't you use the same argument against reducing any move, or LMR in general?
With LMR, you would normally re-search a reduced move that failed high at full depth.
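
Something like this, just as a sketch (Position and search() are placeholders, not from any particular engine):

Code:

/* Sketch of the usual LMR convention: a reduced move that fails high
   is re-searched at full depth before it is trusted. */
typedef struct Position Position;
int search(Position *pos, int depth, int alpha, int beta);

int search_late_move(Position *pos, int depth, int alpha, int reduction)
{
    /* reduced null-window probe for a likely-poor late move */
    int score = -search(pos, depth - 1 - reduction, -alpha - 1, -alpha);

    /* the reduction is never trusted on a fail high: re-search at
       the unreduced depth before letting the move raise alpha */
    if (score > alpha && reduction > 0)
        score = -search(pos, depth - 1, -alpha - 1, -alpha);

    return score;
}
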
Evert
Posts: 2929
Joined: Sat Jan 22, 2011 12:42 am
Location: NL

Re: Reductions from internal iterative deepening

Post by Evert »

syzygy wrote: With LMR, you would normally re-search a reduced move that failed high at full depth.
I always do a verification search if reduced moves fail high, so this is no different.
hgm
Posts: 28464
Joined: Fri Mar 10, 2006 10:06 am
Location: Amsterdam
Full name: H G Muller

Re: Reductions from internal iterative deepening

Post by hgm »

Evert wrote:I see what you're saying, but I'm not sure I follow the logic entirely. Why can't you use the same argument against reducing any move, or LMR in general?
Well, in LMR you reduce (likely) poor moves that fail low. That is much less dangerous than reducing poor moves that fail high, because in the latter case the search would start to abuse the reduction for pushing insolvable threats over the horizon with a sequence of poor moves. With LMR, moves that fail high never keep the reduction, but are re-searched.

You could of course argue that one man's fail low is another man's fail high, so that when I reduce all low-failing moves in an all node, I effectively reduce the move leading to it in the parent node that fails high. But what saves LMR is that you don't reduce all moves, but only those that are likely poor. As it presumably would only be possible to cause trouble for the high-failing side with a sequence of non-poor (i.e. unreduced) moves, the branches leading to the trouble would not suffer reduction, so that there is no exacerbation of the horizon effect. Whether trouble for the low-failing side is pushed over the horizon by poor moves is not relevant, if the poor moves themselves are already sufficient to make him fail low.
Evert wrote:
syzygy wrote: With LMR, you would normally re-search a reduced move that failed high at full depth.
I always do a verification search if reduced moves fail high, so this is no different.
The crux is the depth at which you re-search. I must admit that Spartacus also handles the same-depth re-search required by PVS completely in the daughter node. In a PV node it will never call Search() with a null window, but a non-PV daughter node will start by increasing alpha to beta-1, in an attempt to verify that it will fail high. If it fails low with an upper bound between the original alpha and beta, however, it then lowers alpha back to the original value and redoes the iteration. If it fails high and it was a late node, it does an extra iteration at unreduced depth.
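
In rough C that would be something like this (illustrative names only, not the actual Spartacus source; iterate() stands for one iteration of this node):

Code:

typedef struct Position Position;
int iterate(Position *pos, int depth, int alpha, int beta);

int daughter(Position *pos, int depth, int origAlpha, int beta,
             int lateNode, int reduction)
{
    /* non-PV daughter: raise alpha to beta-1 to try to verify the
       expected fail high */
    int score = iterate(pos, depth - reduction, beta - 1, beta);

    /* fail low with an upper bound inside the original window:
       lower alpha back and redo the iteration */
    if (score > origAlpha && score < beta)
        score = iterate(pos, depth - reduction, origAlpha, beta);

    /* fail high of a reduced late node: extra unreduced iteration */
    if (score >= beta && lateNode && reduction > 0)
        score = iterate(pos, depth, origAlpha, beta);

    return score;
}
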
Evert
Posts: 2929
Joined: Sat Jan 22, 2011 12:42 am
Location: NL

Re: Reductions from internal iterative deepening

Post by Evert »

hgm wrote: Well, in LMR you reduce (likely) poor moves that fail low. That is much less dangerous than reducing poor moves that fail high, because in the latter case the search would start to abuse the reduction for pushing insolvable threats over the horizon with a sequence of poor moves. With LMR, moves that fail high never keep the reduction, but are re-searched.
Sure. But as I said, nothing stops you from doing that for IID reductions (in fact I always do that).
You could of course argue that one man's fail low is another man's fail high, so that when I reduce all low-failing moves in an all node, I effectively reduce the move leading to it in the parent node that fails high. But what saves LMR is that you don't reduce all moves, but only those that are likely poor.
Yes. But at a node that scores below alpha all moves are poor.
Again though, it's easy to avoid the reduction for TT moves, winning captures, killer moves, etc.
Perhaps all the idea is good for is an indication that LMR could be more aggressive at this node - which is still a useful result.
Sven
Posts: 4052
Joined: Thu May 15, 2008 9:57 pm
Location: Berlin, Germany
Full name: Sven Schüle

Re: Reductions from internal iterative deepening

Post by Sven »

hgm wrote:
Sven Schüle wrote:How is that equivalent? The move failing low is just one of several moves, and presumably not the best one in this case.
Evert is not talking about reducing one move when it fails low, but about doing one iteration less when an earlier iteration fails low. That is reducing all moves.
There is no such case where you "reduce a move (at the parent node) when it fails low", since if it fails low then you have already searched it.
You do the reduced search first, of course. That is how IID works: you don't start with the maximum depth and then iterate over lower depths. Then it would have been called IIR instead of IID...
You assume a different IID implementation, one that actually has more than one iteration. That is of course the original meaning of the name "IID", but today it seems that many programmers, including Evert, just do one Search() call with reduced depth and no iterations there. I can't say which approach is the better one, and discussing that would be a different topic for a different thread. But I am pretty sure that Evert does not use more than one iteration for his "IID". Sure, such "iterations" will indeed happen indirectly through the recursion involved. But when we say that we are in the normal search and IID has returned, those "iterations" are done and you don't do any further IID at that point.

So I think you misunderstand the way Evert intends to implement the reduction he is talking about: I think he does not speak about doing one iteration less in the IID, but wants to reduce the full-width search of the current node that starts after IID.

Note also that this is not the same as "reducing a move (at the parent node) that fails low", since that would imply knowledge of the "fail low" result already. But the decision about failing high or low at the current (child) node (and thus for the move leading to it at the parent) has not yet been taken at the point where we are, since we have just finished IID. The reduced normal search that follows may or may not return a value <= alpha. If it does, so that the move fails high at the parent, Evert says he does a re-search at full depth in that case, to avoid wrong cutoffs.

So in general I think the approach may be sound. The only reservation I have is that the actual saving might be too small to show any improvement, but that can only be settled by testing.

Sven
Sven
Posts: 4052
Joined: Thu May 15, 2008 9:57 pm
Location: Berlin, Germany
Full name: Sven Schüle

Re: Reductions from internal iterative deepening

Post by Sven »

Don wrote:Why can't you reduce a move and have it fail high or low?
That was not what I meant. I meant there is no "reduce a move when it fails low" or, more precisely, no "reduce a move after it has failed low". As I also wrote in my reply to HGM, it is the point where we are within the search that matters. If you search a move M1 at node P1, and that search returns with a value meaning that M1 fails high or low, then you don't decide at that point to reduce M1 and search it again. The reduction idea of Evert is located at a different point in the search: if it happens, it happens before the search of M1 has returned and control is back at P1. I don't think I need to explain further to you :-)

Sven
syzygy
Posts: 5911
Joined: Tue Feb 28, 2012 11:56 pm

Re: Reductions from internal iterative deepening

Post by syzygy »

Evert wrote:
syzygy wrote: With LMR, you would normally re-search a reduced move that failed high at full depth.
I always do a verification search if reduced moves fail high, so this is no different.
You don't here, so it is different.

Hgm's point is that your proposed reduction (of all moves of the current node) has more or less the same effect as a reduction performed in the parent for the move leading to the current position. That turns your fail-low condition for the current node into a fail-high condition for the parent node. Your proposal is to not search further (i.e. not do the re-search) if IID returned a value below alpha. That means you back up a value that is at or above beta for the parent node, i.e. a fail high. That parent node is not going to do the re-search either, because it had no idea the move was being reduced through your IID reduction. (Or if the parent had reduced the move, your IID reduction is reducing it once more.)
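
(Spelling out the negamax bookkeeping, for concreteness: if the parent searches the move with window (alphaP, betaP), the child is searched with (-betaP, -alphaP). A child fail low, v <= -betaP, backs up as -v >= betaP, which the parent takes as an ordinary fail high and cuts off without re-searching.)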

(Hgm may correct me if I didn't get his point correctly.)
hgm
Posts: 28464
Joined: Fri Mar 10, 2006 10:06 am
Location: Amsterdam
Full name: H G Muller

Re: Reductions from internal iterative deepening

Post by hgm »

Evert wrote:Yes. But at a node that scores below alpha all moves are poor.
No, that is not what I mean by 'poor'. Moves that score below alpha are just not good enough, but that doesn't mean they are poor. There could be sensible moves amongst them, which could conceivably allow you to fight back (e.g. passer pushes, attacks on valuable opponent pieces that gain a tempo), and are good attempts. Those had better not be reduced.
Sven Schüle wrote:You assume a different IID implementation, one that actually has more than one iteration. That is of course the original meaning of the name "IID", but today it seems that many programmers, including Evert, just do one Search() call with reduced depth and no iterations there. I can't say which approach is the better one, and discussing that would be a different topic for a different thread. But I am pretty sure that Evert does not use more than one iteration for his "IID". Sure, such "iterations" will indeed happen indirectly through the recursion involved.
That doesn't really matter. Any search at reduced depth is equivalent to an IID iteration. So if searching at d/2 or d-4 fails low, and he continues with d-1 because of that, rather than with d as he would do otherwise, he just slips in another iteration, with the intention to stop early. (This is in fact what I do in Spartacus too, but with the opposite fail.)

I am not sure what Evert does when the d-1 search fails high after all. (Maybe he said so, and I missed it.) Logical would be to cancel the reduction and proceed with depth d after all.
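
In outline that view looks like this (a sketch only; search_width() stands for one full-width pass over the node, and all names are illustrative):

Code:

typedef struct Position Position;
int search_width(Position *pos, int depth, int alpha, int beta);

int iid_node(Position *pos, int depth, int alpha, int beta)
{
    /* early iteration at reduced depth (d-4 in the example above) */
    int score = search_width(pos, depth - 4, alpha, beta);

    /* the probe predicts a fail low: stop one ply early */
    int finalDepth = (score <= alpha) ? depth - 1 : depth;

    score = search_width(pos, finalDepth, alpha, beta);

    /* the d-1 iteration fails high after all: cancel the reduction
       and proceed with depth d */
    if (score >= beta && finalDepth < depth)
        score = search_width(pos, depth, alpha, beta);

    return score;
}
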
But when we say that we are in the normal search and IID has returned, those "iterations" are done and you don't do any further IID at that point. So I think you misunderstand the way Evert intends to implement the reduction he is talking about: I think he does not speak about doing one iteration less in the IID, but wants to reduce the full-width search of the current node that starts after IID.
In my terminology the full-depth search is merely the final iteration of IID. That is just semantics, but I do think it is the correct way to refer to it. Doing one search at reduced depth is not 'deepening'. The deepening is when you search at full depth.
Note also that this is not the same as "reducing a move (at the parent node) that fails low", since that would imply knowledge of the "fail low" result already. But the decision about failing high or low at the current (child) node (and thus for the move leading to it at the parent) has not yet been taken at the point where we are, since we have just finished IID. The reduced normal search that follows may or may not return a value <= alpha. If it does, so that the move fails high at the parent, Evert says he does a re-search at full depth in that case, to avoid wrong cutoffs.
I would say conventional LMR reduces moves that fail low. Ronald got it exactly right.

So in general I think the approach may be sound. The only reservation I have is that the actual saving might be too small to show any improvement, but that can only be settled by testing.

Sven
Evert
Posts: 2929
Joined: Sat Jan 22, 2011 12:42 am
Location: NL

Re: Reductions from internal iterative deepening

Post by Evert »

hgm wrote: No, that is not what I mean by 'poor'. Moves that score below alpha are just not good enough, but that doesn't mean they are poor. There could be sensible moves amongst them, which could conceivably allow you to fight back (e.g. passer pushes, attacks on valuable opponent pieces that gain a tempo), and are good attempts. Those had better not be reduced.
The question is this: *if* after a search at full depth no move scores above alpha, then this node fails low and we could (probably) have saved time by searching it at reduced depth. This is what LMR is about: reducing the time spent on moves that will not improve alpha.
It is the same here, except that the assumption that moves can be reduced is not based on their position (or score) in the move list, but on information from a reduced-depth search (which is speculative and may be wrong).
I am not sure what Evert does when the d-1 search fails high after all. (Maybe he said so, and I missed it.) Logical would be to cancel the reduction and proceed with depth d after all.
That's what I do - but only if a reduced move fails high. I don't re-search the entire node.
What is important, I think, is that even after the reduction the search depth is larger than the depth of the IID search, otherwise we're not going to learn anything and we might as well accept the reduced-depth result - which is probably not a good idea.
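
In code that constraint is just a clamp on the reduction (iid_depth and reduction are hypothetical names):

Code:

/* keep the reduced search strictly deeper than the IID probe */
if (depth - reduction <= iid_depth)
    reduction = depth - iid_depth - 1;   /* never negative: iid_depth < depth */
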
I would say conventional LMR reduces moves that fail low. Ronald got it exactly right.
Indeed. This is the same idea.

So the idea may not be worth much on top of conventional LMR, although it could perhaps be used as a speculative hint that more aggressive LMR is possible.
Evert
Posts: 2929
Joined: Sat Jan 22, 2011 12:42 am
Location: NL

Re: Reductions from internal iterative deepening

Post by Evert »

syzygy wrote: Hgm's point is that your proposed reduction (of all moves of the current node) has more or less the same effect as a reduction performed in the parent for the move leading to the current position. That turns your fail-low condition for the current node into a fail-high condition for the parent node. Your proposal is to not search further (i.e. not do the re-search) if IID returned a value below alpha.
Wrong.
If IID indicates that the node will likely fail low, then the search is reduced. That's the idea, but the devil is, as always, in the details.
Details include the possibility of excluding certain moves from reduction (in Jazz's search the first move is always excluded from reductions; this could easily be extended to winning captures or killer moves), re-searching at full depth if a reduced move fails high, and whether the reduced search is still deeper than the IID search (it should be, or we might as well accept the IID result as-is).
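
Put together, the flow would look roughly like this (a sketch only; all names are illustrative, and the exclusion set and reduction amount are open choices):

Code:

typedef struct Position Position;
typedef int Move;
int search(Position *pos, int depth, int alpha, int beta);
int gen_moves(Position *pos, Move *list);
void make(Position *pos, Move m);
void unmake(Position *pos, Move m);

int node_search(Position *pos, int depth, int alpha, int beta)
{
    /* one reduced-depth probe, the 'IID' of the scheme */
    int iidDepth = depth / 2;
    int iidScore = search(pos, iidDepth, alpha, beta);

    /* speculative hint that this will be an all node */
    int reduceNode = (iidScore <= alpha);

    Move list[256];
    int n = gen_moves(pos, list);
    int best = -32000;

    for (int i = 0; i < n; i++) {
        /* the first move is always exempt; winning captures and
           killers could be exempted the same way */
        int r = (reduceNode && i > 0) ? 1 : 0;

        /* keep the reduced search deeper than the IID probe */
        if (depth - r <= iidDepth)
            r = 0;

        make(pos, list[i]);
        int score = -search(pos, depth - 1 - r, -beta, -alpha);

        /* a reduced move that fails high is re-searched at full depth */
        if (score >= beta && r > 0)
            score = -search(pos, depth - 1, -beta, -alpha);
        unmake(pos, list[i]);

        if (score > best) {
            best = score;
            if (best > alpha)
                alpha = best;
            if (best >= beta)
                break;
        }
    }
    return best;
}
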
That parent node is not going to do the re-search either, because it had no idea the move was being reduced through your IID reduction. (Or if the parent had reduced the move, your IID reduction is reducing it once more.)
Sure, but I don't see how that is unique to this proposed scheme: any move that is reduced might not have failed low if searched at full depth. That's why reducing arbitrarily is a bad idea.