gladius wrote:Wow, Garbochess JS failed miserably on this position, until I tested out your suggestion and moved to always replace. Then it solved it instantly.
Guess I have some work to do on my replacement strategy .
Here is some important advice that quite a few overlook: "NEVER, and I do mean NEVER, fail to store the current position in the hash table. Let me repeat, NEVER. You can do whatever you want to choose what to replace, but you MUST replace something."
That seems counter-intuitive, but it is critical. I use a bucket of 4 entries and choose the best of the 4 (actually the worst, but you get my drift) to replace, and I always store the entry from a fail-high, a fail-low, or an EXACT search.
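A minimal sketch of the bucket idea described above (the entry layout, the age field, and the eviction score are illustrative assumptions, not Crafty's actual code). The one invariant that matters is that a slot is always returned, so the current position is always stored:

```c
#include <stdint.h>

/* Hypothetical TT entry layout; names are illustrative only. */
typedef struct {
    uint64_t key;    /* zobrist signature; 0 = empty              */
    int16_t  depth;  /* draft of the stored search                */
    uint8_t  age;    /* search counter when the entry was written */
    /* score, bound type, and best move would live here too */
} TTEntry;

enum { BUCKET_SIZE = 4 };

/* Pick the slot to overwrite in a 4-entry bucket.  If the position is
 * already present (or a slot is empty) we reuse that slot; otherwise we
 * evict the "least valuable" entry, here the one from the oldest search,
 * breaking ties by shallowest draft.  The value function is a tuning
 * choice; the point is that some slot is ALWAYS returned. */
int tt_choose_victim(TTEntry *bucket, uint64_t key, uint8_t cur_age)
{
    int victim = 0;
    for (int i = 0; i < BUCKET_SIZE; i++) {
        if (bucket[i].key == key || bucket[i].key == 0)
            return i;                   /* same position or empty slot */
        /* prefer evicting older searches, then shallower drafts */
        int vi = (cur_age - bucket[i].age) * 256 - bucket[i].depth;
        int vv = (cur_age - bucket[victim].age) * 256 - bucket[victim].depth;
        if (vi > vv)
            victim = i;
    }
    return victim;                      /* never "no store" */
}
```

Note that the same-key check comes first: a re-search of a position already in the bucket refreshes its own slot instead of creating a duplicate.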
Thanks for your answer.
When I turn off the hash tables I take about 15 seconds to reach ply 15. With the hash tables I reach ply 23 (2.5 million nodes) in 15 seconds. So the hash tables are doing some good. I'm thinking it might be the replacement scheme that is bad. I use 2 slots: in the first slot I replace if the new entry is deeper; in the second slot I always replace.
I usually use equidistributed-draft replacement: replace the same position if it is in the bucket, otherwise replace the primary entry if the draft it contains is over-represented in the table, and if not, replace the entry with the lowest draft in the bucket.
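A rough sketch of how such an equidistributed-draft scheme could be wired up (the per-draft histogram and the "fair share" threshold are my own assumptions, just to make the three-step rule concrete):

```c
#include <stdint.h>

enum { BUCKET = 4, MAX_DRAFT = 64 };

typedef struct { uint64_t key; int draft; } Entry;

/* Count of stored entries per draft, maintained on every store. */
static long draft_count[MAX_DRAFT];
static long total_entries;

/* Equidistributed-draft victim choice (sketch): reuse the slot for the
 * same position or an empty one if present; otherwise take the primary
 * slot if its draft holds more than its fair share of the table, and
 * failing that, the slot with the lowest draft in the bucket. */
int choose_victim(Entry *bucket, uint64_t key)
{
    int low = 0;
    for (int i = 0; i < BUCKET; i++) {
        if (bucket[i].key == key || bucket[i].key == 0)
            return i;
        if (bucket[i].draft < bucket[low].draft)
            low = i;
    }
    long fair = (total_entries + MAX_DRAFT - 1) / MAX_DRAFT;
    if (draft_count[bucket[0].draft] > fair)
        return 0;   /* primary entry's draft is over-represented */
    return low;     /* otherwise evict the shallowest draft */
}

void tt_store(Entry *bucket, uint64_t key, int draft)
{
    int v = choose_victim(bucket, key);
    if (bucket[v].key != 0)
        draft_count[bucket[v].draft]--;   /* un-count the evicted draft */
    else
        total_entries++;
    bucket[v].key = key;
    bucket[v].draft = draft;
    draft_count[draft]++;
}
```

The histogram is what keeps one very deep (and hard to replace) draft from slowly taking over the whole table.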
gladius wrote:Wow, Garbochess JS failed miserably on this position, until I tested out your suggestion and moved to always replace. Then it solved it instantly.
Guess I have some work to do on my replacement strategy .
Here is some important advice that quite a few overlook: "NEVER, and I do mean NEVER, fail to store the current position in the hash table. Let me repeat, NEVER. You can do whatever you want to choose what to replace, but you MUST replace something."
That seems counter-intuitive, but it is critical. I use a bucket of 4 entries and choose the best of the 4 (actually the worst, but you get my drift) to replace, and I always store the entry from a fail-high, a fail-low, or an EXACT search.
To expand on this concept, I would say that an entry should be stored in such a way that it can be read back immediately. For instance, if you replace without checking whether the same position is already in the bucket, you could overwrite one slot but read back a stale copy from another.
gladius wrote:Wow, Garbochess JS failed miserably on this position, until I tested out your suggestion and moved to always replace. Then it solved it instantly.
Guess I have some work to do on my replacement strategy .
Here is some important advice that quite a few overlook: "NEVER, and I do mean NEVER, fail to store the current position in the hash table. Let me repeat, NEVER. You can do whatever you want to choose what to replace, but you MUST replace something."
That seems counter-intuitive, but it is critical. I use a bucket of 4 entries and choose the best of the 4 (actually the worst, but you get my drift) to replace, and I always store the entry from a fail-high, a fail-low, or an EXACT search.
To expand on this concept, I would say that an entry should be stored in such a way that it can be read back immediately. For instance, if you replace without checking whether the same position is already in the bucket, you could overwrite one slot but read back a stale copy from another.
I never considered the idea of storing the same entry twice in a hash table, any more than I would want a cache to store the same memory block twice. As they used to write on old 14th-century ocean maps, "here there be dragons..."
I would consider two copies of anything to be a bug. And failing to store something at every opportunity is almost as serious...
jacobbl wrote:Thanks for your answer.
When I turn off the hash tables I take about 15 seconds to reach ply 15. With the hash tables I reach ply 23 (2.5 million nodes) in 15 seconds. So the hash tables are doing some good. I'm thinking it might be the replacement scheme that is bad. I use 2 slots: in the first slot I replace if the new entry is deeper; in the second slot I always replace.
What replacement scheme would you suggest?
That is commonly called the "Belle hash idea," as developed by Ken Thompson. It works well. But you have to avoid a potential flaw: the same signature can end up stored twice when the first (depth-preferred) slot holds a worthless but deep-draft entry that is difficult to replace. If you then store the same signature in the second slot, but on a probe get a hit on the first one, you have a serious flaw...
Ken actually used two hash tables; the depth-preferred table was 1/2 the size of the always-store table.
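A toy version of the two-table Belle layout, written to avoid exactly the duplicate-signature flaw described above (table sizes and entry fields are placeholders, not Thompson's actual implementation). A same-key hit always refreshes the depth-preferred slot, so a probe can never pick up a stale copy there while the fresh result sits in the other table:

```c
#include <stdint.h>
#include <stddef.h>

typedef struct { uint64_t key; int depth; int score; } Slot;

/* Toy sizes; per the post, the always-store table is twice the size
 * of the depth-preferred one. */
enum { DP_SIZE = 1 << 10, AR_SIZE = 1 << 11 };

static Slot dp[DP_SIZE];   /* depth-preferred table */
static Slot ar[AR_SIZE];   /* always-replace table  */

/* Belle-style store (sketch): never let the same signature live in
 * both tables at once. */
void tt_store(uint64_t key, int depth, int score)
{
    Slot *d = &dp[key % DP_SIZE];
    if (d->key == key || depth >= d->depth) {
        d->key = key; d->depth = depth; d->score = score;
        return;                        /* do NOT also store in 'ar' */
    }
    Slot *a = &ar[key % AR_SIZE];      /* otherwise always replace here */
    a->key = key; a->depth = depth; a->score = score;
}

/* Probe depth-preferred first, then always-replace. */
Slot *tt_probe(uint64_t key)
{
    if (dp[key % DP_SIZE].key == key) return &dp[key % DP_SIZE];
    if (ar[key % AR_SIZE].key == key) return &ar[key % AR_SIZE];
    return NULL;
}
```

Without the `d->key == key` check, a shallow re-search of a position stuck behind a deep depth-preferred entry would land in `ar`, and the probe order above would keep returning the stale copy.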
bob wrote:I never considered the idea of storing the same entry twice in a hash table, any more than I would want a cache to store the same memory block twice. As they used to write on old 14th-century ocean maps, "here there be dragons..."
I would consider two copies of anything to be a bug. And failing to store something at every opportunity is almost as serious...
jacobbl wrote:Looks like I have a serious flaw...
Nothing could be better: it means there is hope for an improvement
Is there a lot to gain on more complicated replacement schemes?
Not in this type of endgame, because not many positions are visited. First, I suggest debugging everything with a simple "always replace" table and a material-only eval.
bob wrote:
Here is some important advice that quite a few overlook: "NEVER, and I do mean NEVER, fail to store the current position in the hash table. Let me repeat, NEVER. You can do whatever you want to choose what to replace, but you MUST replace something."
Quibble: if the hash table causes a cutoff, you don't want to store anything.
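The quibble amounts to this control flow (a single-entry "table" and a stubbed search, purely to show where the store is skipped; none of this is a real engine):

```c
#include <stdint.h>

/* Toy stand-ins, just to illustrate the control flow. */
typedef struct { uint64_t key; int depth; int score; } Stub;
static Stub tt;
static int stores;                     /* how often the table is written */

static int full_search_stub(void) { return 17; }   /* pretend search */

int node(uint64_t key, int depth)
{
    if (tt.key == key && tt.depth >= depth)
        return tt.score;               /* hash cutoff: store nothing  */
    int score = full_search_stub();    /* no cutoff: really search... */
    tt.key = key; tt.depth = depth; tt.score = score;
    stores++;                          /* ...and ALWAYS store         */
    return score;
}
```

When the probe itself terminates the node, the entry that caused the cutoff is already in the table; re-storing it would only churn the replacement logic.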