bob wrote:
...
If I fail low at any node, this is stored in the hash table as an "UPPER" position and there is obviously no best move to store. In Crafty, as a search is carried out, I save the first move searched (which may be the hash move, or the first capture tried, or the first killer (if there is no hash or safe-capture move), or just the first move generated in the worst case). When I discover that this is an "UPPER" position with no best move, I store the first move searched as the best move. Why? The next time I hit this position, I will search that move without a move generation, and without a best move that is the move I would end up searching first anyway, so why not?
...
However, this now means that _every_ hash hit will have a best move (the EXACT/LOWER positions obviously would, and now UPPER also).
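A minimal sketch of the store Bob describes, with illustrative names (`TTEntry`, `store_fail_low` and the field layout are assumptions, not Crafty's actual code):

```c
/* Illustrative TT entry -- not Crafty's actual layout. */
typedef enum { BOUND_UPPER, BOUND_LOWER, BOUND_EXACT } Bound;

typedef struct {
    unsigned long long key;
    int   depth;
    int   score;   /* for UPPER/LOWER entries this is a bound, not a score */
    Bound bound;
    int   move;    /* 0 = no move stored */
} TTEntry;

/* At a fail-low (ALL) node no move raised alpha, so there is no "best"
 * move.  Store the move that was searched first anyway (hash move, first
 * capture, killer, or first generated move): on a later hit it is the
 * move we would start with regardless, and now we get it without running
 * the move generator. */
void store_fail_low(TTEntry *e, unsigned long long key, int depth,
                    int alpha, int first_move_searched)
{
    e->key   = key;
    e->depth = depth;
    e->score = alpha;          /* best score <= alpha: an upper bound */
    e->bound = BOUND_UPPER;
    e->move  = first_move_searched;
}
```

The point is that the stored move costs nothing extra: it is exactly the move the next visit would pick first anyway.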
...
I agree that having a move available will improve the search, and results in a positive Elo gain.
I don't agree that it has to be the first move given by the move selector (which, as we all know, tries: ttMove, captures, ...).
I am pretty sure that using the move with the highest fail-low score is the better choice; at least it is more dynamic. Further, I believe it will result in smaller trees, and speed up the search more than your approach.
The problem is in your terminology. You do not think of "highest fail-lo score" as that makes no sense. For each fail-low move/score you get back, are you _certain_ that your opponent found the _best_ refutation? Or did a killer just happen to be good enough? The point is, the scores you get back are not scores, they are all bounds. And you can't compare bounds to figure out which one might be best, because these are _very_ loose bounds at best.
Simple test. Ed Schroeder did exactly what you are doing years ago. We had this same discussion. He removed that code and found his trees got _smaller_. Easy to test.
Chan Rasjid wrote: Hello Michael,
Your scheme differs from that of Bob and it is not what he wants.
In an all-node, Bob's scheme either stores the original tt-move or the first move done, i.e. the move with the highest original ordering. In your scheme, the move stored might be way down in the original ordering. Such a move, if searched first on a hash hit in a subsequent search, will not be the move with the highest ordering, and it would be a bad move to try.
Rasjid.
Yes, I try a move which might be way down in the _original_ move ordering. But there is no argument (for now) telling us it would be worse (both moves failed low, the first move and move n with the higher fail-low score).
In the end, if we have a real allNode, which stays an allNode for all revisits, we might as well store a move at random. That would not hurt.
But my opinion is, if we don't do it at random, then we need a _formula_ to pick a (best) move.
Bob's formula: (1)
the first move given by the move-selection code.
My formula: (2)
the move with the highest fail-low score. (BTW, the least important advantage is not needing _any_ extra code.)
Why should (1) be better than (2)?
Michael
(1) has zero extra code. Only considering ALL nodes, you have two possibilities when you arrive at one. (a) you have a hash hit but the bounds or score don't let you stop searching, but you do have a hash move to try first. (b) you do not have a hash move, so you try the move your move generator pops out first.
My approach does both. If you have an existing hash move, it will be preserved when we store the hash entry for this ALL node after we finish. If not, we will store the first move we searched, which is as good as any place to start since that is what we would do with no hash move at all. Requires no tweaking in the hash store code to preserve an existing hash move either... So simple _and_ very effective.
And, of course, verified with a "few" games on our cluster.
I think you should still check that you don't overwrite a tt move so that you don't lose it when you call HashStore() with no move from a null move search.
I think it is better not to save anything at all after a null-move search. We had a thread about it some time ago, originated by someone who was saving it with depth-R rather than depth. Depth-R seemed to work better, despite being a bug. For me, it works even better not to save anything, which may mean that saving depth-R is good only because the entry gets overwritten faster.
Miguel
When I added the HashStore() after a null-move search I found a gain, not a loss. I don't see why a loss would occur. You just did a null-move search. It failed high. Why, the next time you reach search with this same position, you'd want to have to repeat the null-move search again is beyond me. In Crafty, the first one fails high, the second search terminates with a hash hit. And no, you do _not_ store depth-R, that is wrong. You store "depth" because a null-move search with "depth" plies remaining just failed high. At any other point in the tree you reach this identical position, with depth <= draft (depth from table) you again should fail high...
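The depth-vs-depth-R point can be sketched as follows (illustrative names; a real entry would also carry a move, age, etc.):

```c
#define R 3   /* typical null-move depth reduction (an assumption) */

typedef struct {
    unsigned long long key;
    int draft;        /* remaining depth the bound was proved at */
    int lower_bound;  /* "score >= lower_bound" was proved */
    int valid;
} NullEntry;

/* The null-move search at remaining depth "depth" (searched internally
 * at depth - R) just failed high.  What was proved is "this position,
 * with depth plies remaining, fails high" -- so store draft = depth,
 * not depth - R. */
void store_null_fail_high(NullEntry *e, unsigned long long key,
                          int depth, int value)
{
    e->key         = key;
    e->draft       = depth;   /* depth, NOT depth - R */
    e->lower_bound = value;
    e->valid       = 1;
}

/* A later probe cuts off when its remaining depth is covered by the
 * stored draft and the stored lower bound reaches beta. */
int probe_cutoff(const NullEntry *e, unsigned long long key,
                 int depth, int beta)
{
    return e->valid && e->key == key
        && depth <= e->draft && e->lower_bound >= beta;
}
```

So the second visit to the same position, at the same or shallower remaining depth, terminates on the probe instead of repeating the null-move search.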
This is a trivial change, and since I don't remember the Elo change, I have this one queued up to run as well thru the usual 30K game test. However, there are a couple of other tests queued up first, so it might be tomorrow before I have results.
Because you are occupying a slot that may be more important to reserve for other positions, which are more expensive to re-search (they cannot be cut off with a null move, for instance).
You may not see this effect if you never fill the hash-table. In super fast games, that won't happen unless you use a very small hash table (~4MB or so). What HT size do you use for your testing?
Miguel
Not sure this is a reasonable assumption. In Crafty, I do not hash in q-search. On the 8-core box I have been using for about 3 years, I search about 20M-30M nodes per second. If you assume 100 seconds for a search, that becomes 2-3 billion nodes per search. But only 10% or so of nodes get hashed, since the rest are q-search nodes, so 200-300M positions. At 16 bytes per position, 4 gigs of RAM can store every last position, assuming no collisions. With 16 gigs of RAM, you can pretty nearly fit everything anyway.
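The arithmetic above, spelled out (25M nps is the midpoint of the quoted 20M-30M range; the 10% hashed fraction and 16-byte entries are the figures from the post):

```c
/* Total nodes in one search, of which only ~10% are non-q-search
 * nodes that reach the hash table, at 16 bytes per entry. */
long long hash_bytes_needed(long long nps, long long seconds)
{
    long long nodes  = nps * seconds;  /* nodes in the whole search */
    long long hashed = nodes / 10;     /* ~10% get hashed */
    return hashed * 16;                /* 16-byte entries */
}
```

With nps = 25,000,000 and 100 seconds this gives 4,000,000,000 bytes, i.e. roughly 4 GB, matching the estimate in the post.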
This seems to be the old "bird in the hand vs 2 in the bush" idea. Here, the bird in the hand is the result of a search that just failed high at reduced depth. The two in the bush represent the position that this might overwrite, and which might be useful later.
I use 256M, which equates to 16M entries. On ICC I typically use 32M entries unless it is a major tournament with longish time controls, where I usually use 8 gigs of RAM or 512M entries...
Correct, and it is a gamble, like many things in computer chess. Bird in the hand or two in the bush? Which choice gives you more birds in the long run depends on the resources available. Since you use plenty of memory for your test, it is logical that the way you do it now gives you a benefit under the conditions of your test. That does not mean you can extrapolate the results to a situation where resources are limited (slots become very valuable all of a sudden).
If you test with a ratio of HT size to time per move more similar to a typical game, your results may (or may not) be reversed.
Miguel
I can ramp up the time control, but in reality, my 16 million entries for game-in-30-seconds would need roughly 60 times as much for game in 30 minutes (both have increments), and for 2-hour games, 4x that. My 8 GB of RAM more than keeps up when you think about it. In testing I use 16M entries; in a tournament I use 512M entries, which is 32x more entries for a search space that is maybe 32x bigger, so pretty equal.
I think that most if not all of the Elo difference comes from keeping the "old" hash move, since it has been the best move in some other search of the same position.
Greetings Volker
That is the effect of this approach. If there _was_ a hash move for this position, it is always searched first. So it would be the one that is stored in an ALL node, which is the only place this trick is used. If there was no hash move to suggest what to search first, I just use the first move searched, since that's a reasonable place to start (if I had no best move that is the same move I would start with since it is the first one produced by the generator).
Hi Bob,
I did understand your optimization (using the first searched move as the hash move in all-nodes). But for testing the ttSE trick you commented it out and lost a lot of Elo. My point is that the Elo loss comes from losing a former best move, not from calling some more move generations. Thus, instead of commenting out the whole line, try replacing it with the keep-the-hash-move version. It should bring back all the Elo you lost. If not, that is a hint of an improvement for me.
Mincho Georgiev wrote: It gave me -5 Elo without the check for an existing move when saving the new entry. Could be a wrong result though.
Simple change so I have this queued up for a test run...
When you mentioned this, I originally thought that I was doing this. And at one point, I was, but it was broken. I used to stuff the hash-move into a variable "current_move[ply]" and the null-move search was storing this value. But I later modified the code to store that hash move elsewhere so that I could make sure to not search it a second time (along with killers) when I try "the rest of the moves".
Fixing this was a +2 Elo improvement. Again, not big, but not insignificant.
Thanks for prompting me to check and notice it was broken...
In recent versions, the move I stored was always "0"...
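A sketch of the guard discussed in this subthread, with illustrative names (not Crafty's actual code): when a store arrives with no best move, e.g. after a null-move fail-high, keep any move already recorded for the same position instead of overwriting it with 0.

```c
typedef struct {
    unsigned long long key;
    int depth;
    int score;
    int move;   /* 0 = none */
} HashEntry;

void hash_store(HashEntry *e, unsigned long long key,
                int depth, int score, int move)
{
    /* Preserve an existing hash move if this store brings none. */
    if (move == 0 && e->key == key && e->move != 0)
        move = e->move;
    e->key   = key;
    e->depth = depth;
    e->score = score;
    e->move  = move;
}
```

With this guard, a null-move store that carries no move updates the depth and score but leaves a previously found best move in place, which is the behavior the +2 Elo fix above restored.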