variable reductions


lucasart
Posts: 3232
Joined: Mon May 31, 2010 1:29 pm
Full name: lucasart

Re: variable reductions

Post by lucasart »

Henk wrote:
lucasart wrote:
bob wrote:
tpetzke wrote:I don't do LMR at the root. I just do a scout search for the moves beyond the first. If it fails high, I re-search with an open window, and only if it still increases alpha is it a new best move.

Thomas...
For the record, LMR at root works, and is effective. If it is good at ply=2, why not at ply=1? Ply=1 also has "lousy moves" near the end of the list.
Completely agree. I never understood why people always want to treat the root node differently. I use the same search function for all kinds of nodes, and view the root node like any other PV node.
In the root node you certainly don't have a fail high or a fail low, so you have exact values for all moves unless there is a checkmate. But if you use LMR at the root, the values are computed at different depths.

At the root they do an aspiration search. I don't know if that gives any profit.

Futility pruning and null move make no sense in the root node.
Yes, the score is expected to be within the alpha-beta window at the root. How is that different from PV nodes in general?

Null move, futility pruning, razoring, and generally anything used to predict a fail low or a fail high without searching anything makes no sense at PV nodes, not just the root node.

This is the theory. In practice none of that is true in the presence of aspiration windows and search instability...
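
To make the point concrete, here is a minimal sketch (not any engine's actual code; Position, qsearch(), evaluate(), null_move_search() and the constants are placeholders I'm assuming): one search() for every node type, where the speculative cuts are simply gated on whether the window is wider than a null window, so the root is automatically treated like any other PV node.

Code:

// Hypothetical fragment, not any engine's actual code: one search() for all
// node types. Position, qsearch(), evaluate(), null_move_search(),
// has_non_pawn_material() and the constants are assumed to exist elsewhere.
struct Position;
int  qsearch(Position&, int alpha, int beta);
int  evaluate(Position&);
int  null_move_search(Position&, int beta, int depth);
bool has_non_pawn_material(const Position&);
const int RazorDepth = 3, RazorMargin = 300;

int search(Position& pos, int alpha, int beta, int depth)
{
    // A node is a PV node iff its window is wider than a null window, so the
    // root qualifies automatically and needs no special-case code.
    const bool pv_node = (beta - alpha > 1);

    if (depth <= 0)
        return qsearch(pos, alpha, beta);

    if (!pv_node) {
        // Speculative cuts only at non-PV nodes: at PV nodes the score is
        // expected to land inside (alpha, beta), so predicting a fail low or
        // fail high without searching anything is not worth much there.
        int static_eval = evaluate(pos);

        if (depth <= RazorDepth && static_eval + RazorMargin <= alpha)
            return qsearch(pos, alpha, beta);                  // razoring

        if (static_eval >= beta && has_non_pawn_material(pos)
            && null_move_search(pos, beta, depth) >= beta)
            return beta;                                       // null move
    }

    // ... normal move loop follows, identical for root and interior nodes ...
    return alpha;
}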
Theory and practice sometimes clash. And when that happens, theory loses. Every single time.
bob
Posts: 20943
Joined: Mon Feb 27, 2006 7:30 pm
Location: Birmingham, AL

Re: variable reductions

Post by bob »

hgm wrote:I guess it depends a lot on how stable your search is. That taking away the reduction will cause the score to change is normal, but that opening the window will turn the fail high into a fail low can be pretty unlikely if your search is quite stable.
The problem is, opening the window CHANGES the reduction value in that search, because we are now back to this move producing a PV node below it, as opposed to the previous search where it produced a normal null-window (non-PV node) below it. If you reduce PV nodes less, then on this last re-search, you now reduce less, you see more, and those fail high / worse score searches happen a bit more frequently...
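
One way to see the effect: if the reduction formula itself depends on whether the node is a PV node, the open-window re-search automatically reduces less than the null-window search that failed high, so it sees more and can come back with a worse score. A rough sketch (the formula and constants are illustrative only, not Crafty's):

Code:

#include <algorithm>
#include <cmath>

// Hypothetical LMR reduction formula (constants are illustrative only).
// Assumes depth >= 1 and move_count >= 1. The pv_node term is exactly what
// makes the open-window re-search of a null-window fail high reduce less
// than the search it is verifying.
int lmr_reduction(int depth, int move_count, bool pv_node)
{
    int r = int(std::log(double(depth)) * std::log(double(move_count)) / 2.0);
    if (pv_node)
        r -= 1;               // PV nodes reduced one ply less
    return std::max(0, r);
}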
bob
Posts: 20943
Joined: Mon Feb 27, 2006 7:30 pm
Location: Birmingham, AL

Re: variable reductions

Post by bob »

Henk wrote:
lucasart wrote:
bob wrote:
tpetzke wrote:I don't do LMR at the root. I just do a scout search for the moves beyond the first. If it fails high, I re-search with an open window, and only if it still increases alpha is it a new best move.

Thomas...
For the record, LMR at root works, and is effective. If it is good at ply=2, why not at ply=1? Ply=1 also has "lousy moves" near the end of the list.
Completely agree. I never understood why people always want to treat the root node differently. I use the same search function for all kinds of nodes, and view the root node like any other PV node.
In the root node you certainly don't have a fail high or a fail low, so you have exact values for all moves unless there is a checkmate. But if you use LMR at the root, the values are computed at different depths.

At the root they do an aspiration search. I don't know if that gives any profit.

Futility pruning and null move make no sense in the root node.
At the root you can (a) fail high due to an aspiration window or (b) fail high on a null-window search which is done for every move after the first one.

Null-move at the root clearly is wrong. But I don't see why futility pruning is bad.
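
For what it's worth, here is a rough sketch of a root loop in which both kinds of fail highs show up (all names are placeholders, not any particular engine's API): the aspiration window around the previous iteration's score can fail, and every move after the first is tried with a null window and re-searched when it beats alpha.

Code:

#include <algorithm>
#include <cstddef>
#include <vector>

// Hypothetical root loop; Position, make_move(), unmake_move(), search() and
// the constants are assumed to exist elsewhere. Move is a trivial placeholder.
struct Position;
struct Move { int from = 0, to = 0; };
int  search(Position&, int alpha, int beta, int depth);
void make_move(Position&, const Move&);
void unmake_move(Position&, const Move&);
const int Infinite = 32000, AspirationDelta = 25;

int search_root(Position& pos, const std::vector<Move>& moves, int prev_score, int depth)
{
    int alpha = prev_score - AspirationDelta;   // (a) aspiration window around
    int beta  = prev_score + AspirationDelta;   //     the previous iteration's score
    int best  = -Infinite;

    for (std::size_t i = 0; i < moves.size(); ++i) {
        make_move(pos, moves[i]);
        int score;
        if (i == 0)
            score = -search(pos, -beta, -alpha, depth - 1);
        else {
            // (b) null-window search for every move after the first
            score = -search(pos, -alpha - 1, -alpha, depth - 1);
            if (score > alpha)                  // fail high: open-window re-search
                score = -search(pos, -beta, -alpha, depth - 1);
        }
        unmake_move(pos, moves[i]);

        best  = std::max(best, score);
        alpha = std::max(alpha, score);         // a real bound once move 1 is done
    }
    return best;  // may still fall outside the aspiration window: widen and repeat
}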
Henk
Posts: 7216
Joined: Mon May 27, 2013 10:31 am

Re: variable reductions

Post by Henk »

bob wrote:
Henk wrote:
lucasart wrote:
bob wrote:
tpetzke wrote:I don't do LMR at the root. I just do a scout search for the moves beyond the first. If it fails high, I re-search with an open window, and only if it still increases alpha is it a new best move.

Thomas...
For the record, LMR at root works, and is effective. If it is good at ply=2, why not at ply=1? Ply=1 also has "lousy moves" near the end of the list.
Completely agree. I never understood why people always want to treat the root node differently. I use the same search function for all kinds of nodes, and view the root node like any other PV node.
In the root node you certainly don't have a fail high or a fail low, so you have exact values for all moves unless there is a checkmate. But if you use LMR at the root, the values are computed at different depths.

At the root they do an aspiration search. I don't know if that gives any profit.

Futility pruning and null move make no sense in the root node.
At the root you can (a) fail high due to an aspiration window or (b) fail high on a null-window search which is done for every move after the first one.

Null-move at the root clearly is wrong. But I don't see why futility pruning is bad.
Only if a bound stayed equal to infinity would the comparison be useless, since val - margin < infinity and val + margin > -infinity always hold.
bob
Posts: 20943
Joined: Mon Feb 27, 2006 7:30 pm
Location: Birmingham, AL

Re: variable reductions

Post by bob »

lucasart wrote:
Henk wrote:
lucasart wrote:
bob wrote:
tpetzke wrote:I don't do LMR at the root. I just do a scout search for the moves beyond the first. If it fails high, I re-search with an open window, and only if it still increases alpha is it a new best move.

Thomas...
For the record, LMR at root works, and is effective. If it is good at ply=2, why not at ply=1? Ply=1 also has "lousy moves" near the end of the list.
Completely agree. I never understood why people always want to treat the root node differently. I use the same search function for all kinds of nodes, and view the root node like any other PV node.
In the root node you certainly don't have a fail high or a fail low, so you have exact values for all moves unless there is a checkmate. But if you use LMR at the root, the values are computed at different depths.

At the root they do an aspiration search. I don't know if that gives any profit.

Futility pruning and null move make no sense in the root node.
Yes, the score is expected to be within the alpha-beta window at the root. How is that different from PV nodes in general?

Null move, futility pruning, razoring, and generally anything used to predict a fail low or a fail high without searching anything makes no sense at PV nodes, not just the root node.

This is the theory. In practice none of that is true in the presence of aspiration windows and search instability...
Why do you say they make no sense? You never see any horrible moves at PV nodes or at the end of the root move list? Is there any justification that says "at a PV node, the LAST move in the move list is likely to be more important than the LAST move at a non-PV node?"

I tested this idea to death a good while back, and even recently with different reduction values for PV/non-PV. It always tests worse for me. Treating them the same is always better, for my program at least. And no, I don't do any of the other nonsense either (no exact TT cutoffs at PV nodes, no trying to exclude mate scores or draw scores, and such).
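
Concretely, this corresponds to a reduction lookup that is a function of depth and move number only, with no node-type term at all. A rough sketch (the log formula and divisor are only illustrative, not Crafty's actual numbers):

Code:

#include <cmath>

// Hypothetical reduction table indexed only by depth and move number; PV,
// non-PV and root nodes all read the same entry. The formula and divisor
// are illustrative, not Crafty's actual numbers.
const int MaxDepth = 64, MaxMoves = 64;
int reduction[MaxDepth][MaxMoves];

void init_reductions()
{
    for (int d = 1; d < MaxDepth; ++d)
        for (int m = 1; m < MaxMoves; ++m)
            reduction[d][m] = int(std::log(double(d)) * std::log(double(m)) / 1.95);
}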
bob
Posts: 20943
Joined: Mon Feb 27, 2006 7:30 pm
Location: Birmingham, AL

Re: variable reductions

Post by bob »

tpetzke wrote:
For the record, LMR at root works, and is effective. If it is good at ply=2, why not at ply=1? Ply=1 also has "lousy moves" near the end of the list.
So far I have left it out because I don't know how well my root move list is ordered. There might be lousy moves at the end of the list, but there might just as well be good moves. In the inner nodes I have killers, counter moves, and history to order the quiet moves a bit. I don't have that at the root.

I do the usual stuff for the captures and also moves that were best once are maintained at the top spots. They might become good again. But beyond that the list is more or less in the order the moves were generated.

So I have no reason to believe that move 20 in the list is any better than move 30, so why reduce move 30 but not move 20?

In the past I maintained fail high counters for each root move and ordered the list with them but it was not really worth the effort. So I removed it.
My root move list ordering is pretty simple, and pretty reasonable.

1. Before starting a new search (not new iteration, but new search) I generate the list of moves, and sort them by making each move, one at a time, and calling quiesce() with an infinite window to get a real score back.

2. I sort the root move list based on these values, which might just be static evaluations, or might include a deep sequence of captures first if something is hanging after the root move is made.

3. If a move fails high at any iteration, it is removed from the list, everything above it moves down one slot, and that move goes to the top of the list.

This means best moves collect at the front, and reducing those at the end works quite well for me. One minor trick I do use is that when a move fails high, it won't be reduced for this or the next two iterations (at the root only, although I suppose that with the history information this sort of thing happens everywhere, except that I don't use the history information to order moves at the root at all).
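
A rough sketch of that scheme (RootMove, quiesce(), make_move() and the rest are placeholder names I'm assuming, not Crafty's actual code):

Code:

#include <algorithm>
#include <cstddef>
#include <vector>

// Hypothetical sketch of the root-ordering scheme above; all names are
// placeholders, not Crafty's actual code.
struct Position;
struct Move { int from = 0, to = 0; };
int  quiesce(Position&, int alpha, int beta);
void make_move(Position&, const Move&);
void unmake_move(Position&, const Move&);
const int Infinite = 32000;

struct RootMove {
    Move move;
    int  sort_score      = 0;   // quiesce() score, used only for the initial sort
    int  no_reduce_until = 0;   // iteration before which this move is never reduced
};

// Steps 1-2: before a new search, score every root move with a full-window
// quiesce() call and sort on that value (a static eval, or the value of a
// deep capture sequence if something hangs after the move).
void order_root_moves(Position& pos, std::vector<RootMove>& root)
{
    for (RootMove& rm : root) {
        make_move(pos, rm.move);
        rm.sort_score = -quiesce(pos, -Infinite, Infinite);
        unmake_move(pos, rm.move);
    }
    std::stable_sort(root.begin(), root.end(),
        [](const RootMove& a, const RootMove& b) { return a.sort_score > b.sort_score; });
}

// Step 3 plus the "minor trick": a move that fails high goes to the front of
// the list and is exempt from reduction for this and the next two iterations.
void on_root_fail_high(std::vector<RootMove>& root, std::size_t idx, int iteration)
{
    RootMove rm = root[idx];
    rm.no_reduce_until = iteration + 3;
    root.erase(root.begin() + idx);
    root.insert(root.begin(), rm);
}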
bob
Posts: 20943
Joined: Mon Feb 27, 2006 7:30 pm
Location: Birmingham, AL

Re: variable reductions

Post by bob »

Henk wrote:
bob wrote:
Henk wrote:
lucasart wrote:
bob wrote:
tpetzke wrote:I don't do LMR at the root. I just do a scout search for the moves beyond the first. If it fails high, I re-search with an open window, and only if it still increases alpha is it a new best move.

Thomas...
For the record, LMR at root works, and is effective. If it is good at ply=2, why not at ply=1? Ply=1 also has "lousy moves" near the end of the list.
Completely agree. I never understood why people always want to treat the root node differently. I use the same search function for all kinds of nodes, and view the root node like any other PV node.
In the root node you certainly don't have a fail high or a fail low, so you have exact values for all moves unless there is a checkmate. But if you use LMR at the root, the values are computed at different depths.

At the root they do an aspiration search. I don't know if that gives any profit.

Futility pruning and null move make no sense in the root node.
At the root you can (a) fail high due to an aspiration window or (b) fail high on a null-window search which is done for every move after the first one.

Null-move at the root clearly is wrong. But I don't see why futility pruning is bad.
Only if a bound stayed equal to infinity would the comparison be useless, since val - margin < infinity and val + margin > -infinity always hold.
Why would a bound stay at infinity? Surely after searching one move at the root, you have alpha = best score, and most programs search with beta = best score + 1. I only do futility in the last 5-6 plies, so it won't be done at the root for very long, just for the first 6 iterations or so, which go by in microseconds.
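
A minimal sketch of what that restriction looks like (the margins and depth limit are purely illustrative): since alpha is a real best-score bound once the first root move has been searched, the same test is valid at the root, and the depth condition stops it from firing there after the first few iterations.

Code:

// Hypothetical frontier futility test; margins and depth limit are purely
// illustrative. Because alpha equals the best score so far once the first
// root move has been searched, the same test is valid at the root, and the
// depth condition stops it from firing there after the first few iterations.
const int FutilityDepth = 6;   // "last 5-6 plies"
const int FutilityMargin[FutilityDepth + 1] = { 0, 100, 200, 300, 400, 500, 600 };

bool futile(int static_eval, int alpha, int depth)
{
    return depth >= 1 && depth <= FutilityDepth
        && static_eval + FutilityMargin[depth] <= alpha;
}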
Henk
Posts: 7216
Joined: Mon May 27, 2013 10:31 am

Re: variable reductions

Post by Henk »

bob wrote:
Henk wrote:
bob wrote:
Henk wrote:
lucasart wrote:
bob wrote:
tpetzke wrote:I don't do LMR at the root. I just do a scout search for the moves beyond the first. If it fails high, I re-search with an open window, and only if it still increases alpha is it a new best move.

Thomas...
For the record, LMR at root works, and is effective. If it is good at ply=2, why not at ply=1? Ply=1 also has "lousy moves" near the end of the list.
Completely agree. I never understood why people always want to treat the root node differently. I use the same search function for all kinds of nodes, and view the root node like any other PV node.
In the root node you certainly don't have a fail high or a fail low, so you have exact values for all moves unless there is a checkmate. But if you use LMR at the root, the values are computed at different depths.

At the root they do an aspiration search. I don't know if that gives any profit.

Futility pruning and null move make no sense in the root node.
At the root you can (a) fail high due to an aspiration window or (b) fail high on a null-window search which is done for every move after the first one.

Null-move at the root clearly is wrong. But I don't see why futility pruning is bad.
Only if a bound stayed equal to infinity would the comparison be useless, since val - margin < infinity and val + margin > -infinity always hold.
Why would a bound stay at infinity? Surely after searching one move at the root, you have alpha = best score, and most programs search with beta = best score + 1. I only do futility in the last 5-6 plies, so it won't be done at the root for very long, just for the first 6 iterations or so, which go by in microseconds.
Beta stays at infinity at the root if you only do pure alpha-beta there.
If you only do minimax at the root, there is not even an alpha or a beta.

But as I already posted, I made a mistake in my first post: I forgot that alpha is updated after every move.
bob
Posts: 20943
Joined: Mon Feb 27, 2006 7:30 pm
Location: Birmingham, AL

Re: variable reductions

Post by bob »

Henk wrote:
bob wrote:
Henk wrote:
bob wrote:
Henk wrote:
lucasart wrote:
bob wrote:
tpetzke wrote:I don't do LMR at the root. I just do a scout search for the moves beyond the first. If it fails high, I re-search with an open window, and only if it still increases alpha is it a new best move.

Thomas...
For the record, LMR at root works, and is effective. If it is good at ply=2, why not at ply=1? Ply=1 also has "lousy moves" near the end of the list.
Completely agree. I never understood why people always want to treat the root node differently. I use the same search function for all kinds of nodes, and view the root node like any other PV node.
In the root node you certainly don't have a fail high or a fail low, so you have exact values for all moves unless there is a checkmate. But if you use LMR at the root, the values are computed at different depths.

At the root they do an aspiration search. I don't know if that gives any profit.

Futility pruning and null move make no sense in the root node.
At the root you can (a) fail high due to an aspiration window or (b) fail high on a null-window search which is done for every move after the first one.

Null-move at the root clearly is wrong. But I don't see why futility pruning is bad.
Only if a bound stayed equal to infinity would the comparison be useless, since val - margin < infinity and val + margin > -infinity always hold.
Why would a bound stay at infinity? Surely after searching one move at the root, you have alpha = best score, and most programs search with beta = best score + 1. I only do futility in the last 5-6 plies, so it won't be done at the root for very long, just for the first 6 iterations or so, which go by in microseconds.
Beta stays at infinity at the root if you only do pure alpha-beta there.
If you only do minimax at the root, there is not even an alpha or a beta.

But as I already posted, I made a mistake in my first post: I forgot that alpha is updated after every move.
I would hope that everybody at LEAST does PVS at the root (null-window on all but the first move). But I also do aspiration, which adds another layer of "fail highs" at times. I also do an incremental addition to beta when I fail high on the aspiration window, which can further save significant time in the right positions.
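
A rough sketch of an aspiration loop along those lines (delta values and names are purely illustrative, not Crafty's): only the bound on the side that failed is pushed out, rather than the window being thrown wide open at once.

Code:

#include <algorithm>

// Hypothetical iterative-deepening wrapper; search_root() and the constants
// are assumed to exist elsewhere, and the delta handling is illustrative only.
struct Position;
int search_root(Position&, int alpha, int beta, int depth);
const int Infinite = 32000, AspirationDelta = 25;

int aspiration_search(Position& pos, int prev_score, int depth)
{
    int delta_lo = AspirationDelta, delta_hi = AspirationDelta;

    for (;;) {
        int alpha = std::max(prev_score - delta_lo, -Infinite);
        int beta  = std::min(prev_score + delta_hi,  Infinite);

        int score = search_root(pos, alpha, beta, depth);

        if (score >= beta)           // fail high: push only the upper bound out
            delta_hi *= 2;
        else if (score <= alpha)     // fail low: push only the lower bound out
            delta_lo *= 2;
        else
            return score;            // exact score inside the window
    }
}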
Henk
Posts: 7216
Joined: Mon May 27, 2013 10:31 am

Re: variable reductions

Post by Henk »

bob wrote:
Henk wrote:
bob wrote:
Henk wrote:
bob wrote:
Henk wrote:
lucasart wrote:
bob wrote:
tpetzke wrote:I don't do LMR at the root. I just do a scout search for the moves beyond the first. If it fails high, I re-search with an open window, and only if it still increases alpha is it a new best move.

Thomas...
For the record, LMR at root works, and is effective. If it is good at ply=2, why not at ply=1? Ply=1 also has "lousy moves" near the end of the list.
Completely agree. I never understood why people always want to treat the root node differently. I use the same search function for all kinds of nodes, and view the root node like any other PV node.
In the root node you certainly don't have a fail high or a fail low, so you have exact values for all moves unless there is a checkmate. But if you use LMR at the root, the values are computed at different depths.

At the root they do an aspiration search. I don't know if that gives any profit.

Futility pruning and null move make no sense in the root node.
At the root you can (a) fail high due to an aspiration window or (b) fail high on a null-window search which is done for every move after the first one.

Null-move at the root clearly is wrong. But I don't see why futility pruning is bad.
Only if a bound stayed equal to infinity would the comparison be useless, since val - margin < infinity and val + margin > -infinity always hold.
Why would a bound stay at infinity? Surely after searching one move at the root, you have alpha = best score, and most programs search with beta = best score + 1. I only do futility in the last 5-6 plies, so it won't be done at the root for very long, just for the first 6 iterations or so, which go by in microseconds.
Beta stays at infinity at the root if you only do pure alpha-beta there.
If you only do minimax at the root, there is not even an alpha or a beta.

But as I already posted, I made a mistake in my first post: I forgot that alpha is updated after every move.
I would hope that everybody at LEAST does PVS at the root (null-window on all but the first move). But I also do aspiration, which adds another layer of "fail highs" at times. I also do an incremental addition to beta when I fail high on the aspiration window, which can further save significant time in the right positions.
For me the idea is to get a list of moves with exact values at the root, which you sort and then use as input for the next, deeper iteration.
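
A tiny sketch of that idea as I read it (all names hypothetical): carry the exact score of every root move over from the last completed iteration and sort on it before the next one starts.

Code:

#include <algorithm>
#include <vector>

// A tiny sketch of the idea (all names hypothetical): keep the exact score of
// every root move from the last completed iteration and sort on it before the
// next, deeper iteration starts.
struct ScoredRootMove {
    int move_index = 0;    // stands in for the actual move
    int prev_score = 0;    // exact value from the last completed iteration
};

void reorder_root(std::vector<ScoredRootMove>& root)
{
    std::stable_sort(root.begin(), root.end(),
        [](const ScoredRootMove& a, const ScoredRootMove& b)
        { return a.prev_score > b.prev_score; });
}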