Reduction Research Question

Discussion of chess software programming and technical issues.

Moderators: hgm, Dann Corbit, Harvey Williamson

Henk
Posts: 7210
Joined: Mon May 27, 2013 10:31 am

Re: Reduction Research Question

Post by Henk »

D Sceviour wrote: What do you mean by “search horizon is cause of biggest errors"?
A search that does not search deep enough is the cause of the biggest errors. For instance, if you don't see that there is a combination below level 0 resulting in the loss of a relatively important piece. If the search had gone deeper, that is, if it had started with a much larger depth, it might have caught that combination.
hgm
Posts: 27703
Joined: Fri Mar 10, 2006 10:06 am
Location: Amsterdam
Full name: H G Muller

Re: Reduction Research Question

Post by hgm »

D Sceviour wrote:However, here lies another question. What we are trying to find is a better move, not just spin through the move list as fast as possible. What should be done if a reduction research returns a score > alpha and score < beta? What I assume you are saying is to ignore it.
Well, one thing is sure: that move will become the new PV if you accept such a score and none of the other remaining moves in this node, or in the nodes leading to it (which, btw, must all be PV nodes, or the window would not be open in this node), scores even higher. For which there is no guarantee; it should even be unlikely (or PVS would be a losing deal).

The node could lie at the end of a long stretch (say 3) of very late moves (alternating with cut-moves from the hash), each reduced by (say) 3, 4 or 5 ply. Each of the expected all-nodes will now see that reduced move score between alpha and beta. If they all accepted that score, and no later moves (which would probably be reduced even more...) superseded it, you would be left with a PV that was reduced by 12 ply. It seems pretty suicidal to me to then trust that move in the root and play it.
Volker Annuss
Posts: 180
Joined: Mon Sep 03, 2007 9:15 am

Re: Reduction Research Question

Post by Volker Annuss »

hgm wrote:For PVS null-window fail highs you only need to research if alpha < score < beta.
Then your beta is different from mine. In my search beta == alfa + 1 in this case and I only do the research when the score < StartBeta, the value beta had before the search window was closed.
hgm
Posts: 27703
Joined: Fri Mar 10, 2006 10:06 am
Location: Amsterdam
Full name: H G Muller

Re: Reduction Research Question

Post by hgm »

Well, when your beta is his alpha+1, then score >= your beta obviously means score > his alpha. So the answer to the question would be 'yes'.
D Sceviour
Posts: 570
Joined: Mon Jul 20, 2015 5:06 pm

Re: Reduction Research Question

Post by D Sceviour »

Volker Annuss wrote:In my search beta == alfa + 1 in this case and I only do the research when the score < StartBeta, the value beta had before the search window was closed.
Do you mean "I only do the reduction research when the return score > StartBeta"?
Volker Annuss
Posts: 180
Joined: Mon Sep 03, 2007 9:15 am

Re: Reduction Research Question

Post by Volker Annuss »

D Sceviour wrote:Do you mean "I only do the reduction research when the return score >StartBeta"?
No, I always do the reduction research when score >= beta, where beta is the one used for the reduced search.

But I did the PVS research when score >= beta and score < StartBeta.

I wrote I did because I have just changed my condition for research to

Code: Select all

if ( score > alfa && (reduced || (StartBeta != beta && score < StartBeta)) )
I fixed a bug somewhere else, where not immediately closing the alfa-beta window was meant to be helpful, but it wasn't, and it broke the research condition. Now the changed research condition is equivalent to the one I used before, but I think it is more stable in case I want to keep the alfa-beta window open for some other reason in the future.
D Sceviour
Posts: 570
Joined: Mon Jul 20, 2015 5:06 pm

Re: Reduction Research Question

Post by D Sceviour »

Volker Annuss wrote:

Code: Select all

if ( score > alfa && (reduced || (StartBeta != beta && score < StartBeta)) )
The formula meets the conditions for:

if (score>alfa) && (reduced) && (score<StartBeta)

You are essentially doing a research that, if it returns > alpha, will produce a new PV. In many cases this will not happen because of the null window, but when it does, the conditions for the reduction have been violated in the first place, as the line was reduced because it was assumed it would fail low. It seems that not just the move but the whole line would have to be backed up for research to the previous move. Would it not be better to stop there and trigger a research on the previous ply? As hgm pointed out, it would be suicidal to accept such a move as a new PV. The whole issue can be avoided by only testing if score >= beta.

However, this is very interesting. It allows the two independent research conditions to be performed in one test. One of the implied conditions is that a reduction research and PVS research cannot and should not be performed at the same time. I will try it, and get back with suggestions in the future, if any. One thing is for sure, not everyone is in agreement.
hgm
Posts: 27703
Joined: Fri Mar 10, 2006 10:06 am
Location: Amsterdam
Full name: H G Muller

Re: Reduction Research Question

Post by hgm »

There are indeed some engines that do every combination, continuing down the list for as long as score > alpha:

1) reduced null-window
2) reduced open-window
3) unreduced null-window
4) unreduced open-window

Whether skipping (2), for instance, would be better depends on how stable your search is in general. If a fail high on (1) would almost always mean (2) fails high as well, there is little point in doing it.

Note that if you do self-deepening IID, (2) would be a sort of natural precursor of (4), as it would be an earlier iteration in the daughter. I also don't know how useful (3) is. If you went via (1) to (2), apparently this is a surprise discovery of a new best move; if it had been known before to be the best move, it would be the hash move and not reduced. Is there really any basis for the belief that a move that suddenly starts failing high at some depth would go back to failing low just by slamming on more depth?
D Sceviour
Posts: 570
Joined: Mon Jul 20, 2015 5:06 pm

Re: Reduction Research Question

Post by D Sceviour »

hgm wrote:There are indeed some engines that do every combination, continuing down the list for as long as score > alpha:
Dividing the various conditions may be the better route to go, especially when writing and testing new code. However, extra debug code and asserts have to be added to make sure there is no cross-dependency or conflict. After all, there are five variables that have to be tested: Score, Alpha, Beta, StartBeta, and Reduction. Once that is complete, it may be possible to combine the tests.
D Sceviour
Posts: 570
Joined: Mon Jul 20, 2015 5:06 pm

Re: Reduction Research Question

Post by D Sceviour »

hgm wrote:One supposes that the move ordering is based on some quality measure, e.g. history. Otherwise increasing reduction with move number would not be justified at all. And yes, the disadvantage of reduction is that you might overlook things when moves are unpredictably good.
The example poisoned-pawn position was tested using a history table (butterfly board), and the move Bd2 moved to #26 in the move list. The difference was too insignificant to affect the amount of reduction. In other articles it has been reported that many programmers are abandoning history tables; in many cases they could push the best move even further down the list.