variable reductions

Discussion of chess software programming and technical issues.

Moderators: hgm, Rebel, chrisw

bob
Posts: 20943
Joined: Mon Feb 27, 2006 7:30 pm
Location: Birmingham, AL

Re: variable reductions

Post by bob »

Henk wrote:
bob wrote:
Henk wrote:
bob wrote:
Henk wrote:
bob wrote:
Henk wrote:
lucasart wrote:
bob wrote:
tpetzke wrote:I don't do LMR at the root. I just do a scout search for the moves beyond the first. If it fails high, I re-search with an open window, and only if it still increases alpha is it a new best move.

Thomas...
For the record, LMR at root works, and is effective. If it is good at ply=2, why not at ply=1? Ply=1 also has "lousy moves" near the end of the list.
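The point about reducing "lousy moves" at ply 1 the same way as at ply 2 can be put in sketch form. This is a toy illustration; the thresholds and the divisor below are invented, not taken from Crafty or any other engine:

```python
def lmr_reduction(move_index, depth, is_quiet):
    """Plies to reduce this move by (0 = search at full depth).

    The same rule applies at the root as anywhere else: late, quiet
    moves get searched at reduced depth first, with a full-depth
    re-search only if one unexpectedly improves alpha.
    """
    if move_index < 3 or depth < 3 or not is_quiet:
        return 0                       # never reduce early or tactical moves
    return 1 + move_index // 8         # later moves tolerate bigger reductions
```

Applied at ply 1, this means the moves near the end of the root list get a shallower first look, exactly as they would at any interior node.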
Completely agree. I never understood why people always want to treat the root node differently. I use the same search function for all kinds of nodes, and view the root node like any other PV node.
In the root node you certainly don't have a fail high or fail low, so you have exact values for all moves unless there is a checkmate. But if you use LMR at the root, the values are computed at different levels.

At the root they do aspiration search. I don't know if that gives a profit.

Futility pruning and null move make no sense in the root node.
At the root you can (a) fail high due to an aspiration window or (b) fail high on a null-window search which is done for every move after the first one.

Null-move at the root clearly is wrong. But I don't see why futility pruning is bad.
Only if a bound stays equal to infinity is the futility compare useless, for then val - margin < +infinity and val + margin > -infinity always hold.
Why would a bound stay at infinity? Surely after searching one move at the root, you have alpha = best score, and most programs search with beta = best score + 1. I only do futility in the last 5-6 plies, so it won't be done at the root for very long, just for the first 6 iterations or so, which go by in microseconds.
beta stays at infinity in the root if you only do pure alpha beta in the root.
If you only do minimax in the root there even are no alpha and beta.

But as I already posted, I made a mistake in my first post, because I forgot that alpha is updated after every move.
I would hope that everybody at LEAST does PVS at the root (null-window on all but the first move). But I also do aspiration, which adds another layer of "fail highs" at times. I also do an incremental addition to beta when I fail high on the aspiration window, which can further save significant time in the right positions.
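The aspiration scheme described here, widening only the failed bound by a growing increment instead of jumping straight to infinity, could be sketched as follows. The starting `delta` of 25 is an invented constant, and `search` stands in for a real root search:

```python
def aspiration_search(search, prev_score, depth, delta=25, inf=32000):
    """Re-search with incrementally widened bounds on a fail high/low."""
    alpha, beta = prev_score - delta, prev_score + delta
    while True:
        score = search(alpha, beta, depth)
        if score <= alpha:                      # fail low: widen lower bound only
            alpha = max(alpha - delta, -inf)
        elif score >= beta:                     # fail high: widen upper bound only
            beta = min(beta + delta, inf)
        else:
            return score                        # inside the window: exact score
        delta *= 2                              # grow the increment each re-search
```

Most searches land inside the narrow window and finish in one pass; the loop only pays for extra passes when the score genuinely moved.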
For me it is about the idea of getting a list of moves with exact values in the root, which you sort and use as input for the next higher levels.
You can't get exact values beyond the first iteration... So I don't quite understand what you mean. My root scores for initial sorting are produced by a call to Quiesce(), which will catch hanging pieces, and also tacks on a static evaluation just like any Quiesce() call. But that is a 0-ply search. If a move becomes best but is not first in the list, it gets moved to the front without changing any other ordering, so I am not sure how that might be improved.
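That root-ordering scheme, score each move once with a cheap 0-ply search, sort, then only promote a new best move to the front, might look like this sketch; `qsearch_score` is a stand-in for a real Quiesce() call:

```python
def order_root_moves(moves, qsearch_score):
    """Initial sort of root moves by a cheap 0-ply (quiescence) score."""
    return sorted(moves, key=qsearch_score, reverse=True)

def promote_best(moves, best_index):
    """Move a new best move to the front without disturbing the rest."""
    moves.insert(0, moves.pop(best_index))
    return moves
```

The key property is that `promote_best` preserves the relative order of every other move, so the cheap initial sort keeps paying off across iterations.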
Henk
Posts: 7216
Joined: Mon May 27, 2013 10:31 am

Re: variable reductions

Post by Henk »

bob wrote:
You can't get exact values beyond the first iteration, so I don't quite understand what you mean. My root scores for initial sorting are produced by a call to Quiesce(), which will catch hanging pieces, and also tacks on a static evaluation just like any Quiesce() call. But that is a 0-ply search. If a move becomes best but is not first in the list, it gets moved to the front without changing any other ordering, so I am not sure how that might be improved.
If you search each root move at level N with the window (-infinity, +infinity), then all these moves get exact values. Later, when you search the root node at level N + I, you can use all these values for move ordering.
bob
Posts: 20943
Joined: Mon Feb 27, 2006 7:30 pm
Location: Birmingham, AL

Re: variable reductions

Post by bob »

Henk wrote:
If you search each root move at level N with the window (-infinity, +infinity), then all these moves get exact values. Later, when you search the root node at level N + I, you can use all these values for move ordering.
You ARE kidding, right? Otherwise you have just found a way to slow your search down by 20x or so. Take a normal chess program and watch how much time it spends on move one (a lot) vs. on the rest of the moves (hardly any). You would make every one of the remaining moves take a LOT of time. You are slowing the thing down by a factor of 20 to improve move ordering that won't even get you a factor of 2x in return...
Henk
Posts: 7216
Joined: Mon May 27, 2013 10:31 am

Re: variable reductions

Post by Henk »

bob wrote:
You ARE kidding, right? Otherwise you just found a way to slow your search down by 20x or so. Take a normal chess program and watch how much time it spends on move one (a lot) vs on the rest of the moves (hardly any). You make it take a LOT of time for each of the rest of the moves. You are slowing the thing down by a factor of 20, to improve move ordering that won't even get you a factor of 2x in return...
No, not kidding. What if N is small and I > N?

20 times a shallow search doesn't cost much.
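The proposal, as stated: once, at a small depth N, score every root move with a full window, and reuse that order at deeper iterations. In sketch form, with `shallow_search` as a hypothetical callback returning a score from the opponent's point of view:

```python
def initial_root_order(root_moves, shallow_search, depth=1, inf=32000):
    """Score every root move once with a shallow, full-window search,
    then sort; the resulting order seeds the deeper iterations."""
    scored = [(-shallow_search(move, -inf, inf, depth - 1), move)
              for move in root_moves]
    scored.sort(key=lambda pair: pair[0], reverse=True)
    return [move for _, move in scored]
```

At depth 1 this is cheap; the debate in the thread is whether doing it at any depth where the full window actually costs something can ever pay for itself.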
Joerg Oster
Posts: 937
Joined: Fri Mar 10, 2006 4:29 pm
Location: Germany

Re: variable reductions

Post by Joerg Oster »

Look at this test on fishtest: http://tests.stockfishchess.org/tests/v ... 37ae3e1f5a
It does cost Elo.
Jörg Oster
Henk
Posts: 7216
Joined: Mon May 27, 2013 10:31 am

Re: variable reductions

Post by Henk »

Maybe not for Skipper. I think that even if I only used the move values from N = 1, the move ordering for the remaining root moves at, say, level 12 would be better than what I have now.
bob
Posts: 20943
Joined: Mon Feb 27, 2006 7:30 pm
Location: Birmingham, AL

Re: variable reductions

Post by bob »

Henk wrote:
No, not kidding. What if N is small and I > N?

20 times a shallow search doesn't cost much.
I don't see how that can happen. The first move generally takes MOST of the time for the iteration. The rest are dismissed quickly thanks to alpha/beta and reductions. If you search with a full window, every root move takes as much time as the first. I don't see how that can even pay for a fraction of the extra work, just to improve move ordering slightly.

Alpha/beta does NOT need perfect ordering anywhere; it just needs to search a first move that is good enough to produce a cutoff. At the root, getting the best move first is good, but the order of the rest is irrelevant.
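The root scheme this argument rests on, full window on the first move, null window on the rest, re-search only on a fail high, can be sketched like this; `search` is a stand-in for a real recursive search:

```python
INF = 32000

def pvs_root(moves, search, depth):
    """PVS at the root: open window for move one, null window after."""
    best_move, alpha = None, -INF
    for i, move in enumerate(moves):
        if i == 0:
            score = -search(move, -INF, INF, depth - 1)
        else:
            # null-window probe: it can only tell us "worse" or "better"
            score = -search(move, -alpha - 1, -alpha, depth - 1)
            if score > alpha:                   # fail high: re-search open
                score = -search(move, -INF, -alpha, depth - 1)
        if score > alpha:
            alpha, best_move = score, move
    return best_move, alpha
```

This is why only the first move is expensive: every later move normally costs just one cheap null-window probe, and ordering among the non-best moves never matters.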
Henk
Posts: 7216
Joined: Mon May 27, 2013 10:31 am

Re: variable reductions

Post by Henk »

bob wrote:
I don't see how that can happen. The first move generally takes MOST of the time for the iteration. The rest are dismissed quickly thanks to alpha/beta and reductions. If you search with a full window, every root move takes as much time as the first. I don't see how that can even pay for a fraction of the extra work, just to improve move ordering slightly.

Alpha/beta does NOT need perfect ordering anywhere; it just needs to search a first move that is good enough to produce a cutoff. At the root, getting the best move first is good, but the order of the rest is irrelevant.
If N == 1 costs me only 20 * 1 milliseconds of search time, and I use those values to (slightly) improve the move ordering in the root node at higher levels, then there is still a (small) gain.

The first move is not always the best move. A re-search at move I is better than a re-search at move I + K, unless the new value of the best move is almost equal to the previous best. Or am I wrong?
bob
Posts: 20943
Joined: Mon Feb 27, 2006 7:30 pm
Location: Birmingham, AL

Re: variable reductions

Post by bob »

Henk wrote:
If N == 1 costs me only 20 * 1 milliseconds of search time, and I use those values to (slightly) improve the move ordering in the root node at higher levels, then there is still a (small) gain.

The first move is not always the best move. A re-search at move I is better than a re-search at move I + K, unless the new value of the best move is almost equal to the previous best. Or am I wrong?
I am not sure we are talking about the same thing, so for clarity:

In a normal search where you do not change your mind at the root, the first move takes M units of time and each remaining move takes N, where N is much less than M. Here's real output from a 1 min/move game Crafty played in testing last night:

Code: Select all

         26->  21.65/50.26    0.62   21. Rd1 Re8 22. Nf3 Bc5 23. Bxf6 gxf6 24.
                                     Bf1 h6 25. Rd2 Bb6 26. Re2 Rxe2 27. Bxe2
                                     d4 28. Nh4 Bc8 29. Kg2 Kb7 30. Kf3 c5 31.
                                     Kf4 Ba5 32. Bd3 Bd2+ 33. Ke4 Bd7 34. Ng6
         27    39.31/50.26    0.62   21. Rd1 Re8 22. Nf3 Bc5 23. Re1 Bb4 24.
                                     Rd1 Bc5 25. Bxf6 gxf6 26. Bf1 h6 27. Rd2
                                     Bb6 28. Re2 Rxe2 29. Bxe2 d4 30. Nh4 Bc8
                                     31. Kg2 Kb7 32. Kf3 c5 33. Kf4 Ba5 34.
                                     Bd3 Bd2+ 35. Ke4 Bd7 36. Ng6
         27->  41.81/50.26    0.62   21. Rd1 Re8 22. Nf3 Bc5 23. Re1 Bb4 24.
                                     Rd1 Bc5 25. Bxf6 gxf6 26. Bf1 h6 27. Rd2
                                     Bb6 28. Re2 Rxe2 29. Bxe2 d4 30. Nh4 Bc8
                                     31. Kg2 Kb7 32. Kf3 c5 33. Kf4 Ba5 34.
                                     Bd3 Bd2+ 35. Ke4 Bd7 36. Ng6
It finished depth=26 after 21.65 seconds. It finished the first move at depth=27 after 39.31 seconds. And it searched the remainder of the root moves and completed them all at 41.81 seconds.

The first move took 17.66 seconds; the rest of the moves took 2.5 seconds in total. If I had searched each root move with a full window, that is 31 moves at around 17 seconds EACH. That search would have taken 31 * 17 ≈ 530 seconds, not 20 seconds. That's a killer. Here is my ply-one move ordering, which if you recall is just a simple call to q-search after making each root move:

Code: Select all

31 moves at root
        move   score
         Rd1      85
        Bxf6      73
         Bf4      62
         Nf3      55
         Nb1      19
        cxd5    -129
         Ne4    -230
        Bxd5    -260
         Re1    -287
         Be4    -308
          f4    -309
         Bf3    -318
         Rc1    -320
          h4    -323
          a4    -323
         Rb1    -325
         Kh1    -325
          h3    -325
          a3    -325
         Bd4    -328
         Ra1    -328
         Bh3    -331
          f3    -332
         Bb2    -340
         Ba1    -340
          g4    -342
         Bh1    -347
       Bxc7+    -350
          c5    -364
         Bc3    -415
         Bd6    -447
In this case, ordering was correct from the get-go, but in general it is pretty good overall even when it does change its mind due to seeing some deeper tactic that the simple q-search did not see. Now here's the question: what is the potential gain for improving move ordering?

Suppose the first move is not currently best. Crafty will spend 17 seconds on the first, 17 seconds on the one that turns out better, and 2 seconds on the rest. By spending over 500 seconds you get better ordering, but since you are searching every root move with a full window anyway, the ordering doesn't gain you a thing, and it has a HUGE cost.

If you stop doing the full-window searches after some specific depth limit (such as iteration #12 for example) you still incurred that big cost early on for very little benefit. I don't see how it can pay off.
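The cost argument above can be reduced to a back-of-the-envelope model using the numbers from the post (31 root moves, ~17 s for a full-window search of one root move, ~2.5 s for all the remaining moves under PVS null windows):

```python
# Per-iteration cost under the two schemes, using the post's measurements.
PVS_ITERATION = 17.66 + 2.5         # first move at full window + cheap rest
FULL_WINDOW_ITERATION = 31 * 17.0   # every root move costs like the first

# Roughly a 26x slowdown for the full-window-everywhere scheme.
slowdown = FULL_WINDOW_ITERATION / PVS_ITERATION
```

Even a perfect move-ordering improvement is bounded well below a 2x speedup, so a ~26x per-iteration cost cannot be recovered.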
Angrim
Posts: 97
Joined: Mon Jun 25, 2012 10:16 pm
Location: Forks, WA
Full name: Ben Nye

Re: variable reductions

Post by Angrim »

You were not talking about the same thing. He was talking about using a 1-ply (or at least extremely shallow) full-width search at the beginning to get an initial order for all of the root moves, then using a normal search after that. Frankly, that was pretty obvious from what he wrote. I think an extra second or two spent reading would have saved you a minute of writing.
bob wrote:
I am not sure we are talking about the same thing, so for clarity: [snip]