Testing Transposition Tables

Discussion of chess software programming and technical issues.

Moderator: Ras

bob
Posts: 20943
Joined: Mon Feb 27, 2006 7:30 pm
Location: Birmingham, AL

Re: Testing Transposition Tables

Post by bob »

Pradu wrote:
cms271828 wrote:[d]4k3/8/8/8/8/8/4P3/4K3 w - - 0 1
Out of curiosity, does anyone know how many plies it takes to find the checkmate in this position? I had my engine go up to 36 ply, and it still hadn't found it.
Thanks
http://www.k4it.de/index.php?topic=egtb&lang=en
Mate in 22, by Kd2 or Kf2
On my laptop, Crafty finds it at depth=41... this is a mate in 22 according to the endgame tables, so you should probably find it at depth=45 if check extensions don't help you shorten that a bit once the pawn promotes...

Code:

               38     3.56     +3   1. Kf2!!   (4.1Mnps)             
               38     3.83     +M   1. Kf2!!   (4.1Mnps)             
               38    36.90  Mat22   1. Kf2 Kd8 2. Kf3 Kd7 3. Ke4 Ke6 4.
                                    e3 Kd6 5. Kf5 Ke7 6. Ke5 Kd7 7. Kf6
                                    Kc6 8. Ke6 Kb5 9. e4 Kc5 10. e5 Kc6
                                    11. Kf7 Kd7 12. e6+ <HT>
               38->  37.06  Mat22   1. Kf2 Kd8 2. Kf3 Kd7 3. Ke4 Ke6 4.
                                    e3 Kd6 5. Kf5 Ke7 6. Ke5 Kd7 7. Kf6
                                    Kc6 8. Ke6 Kb5 9. e4 Kc5 10. e5 Kc6
                                    11. Kf7 Kd7 12. e6+ <HT>
               39    39.10  Mat22   1. Kf2 Kd8 2. Kf3 Kd7 3. Ke4 Ke6 4.
                                    e3 Kd6 5. Kf5 Ke7 6. Ke5 Kd7 7. Kf6
                                    Kc6 8. Ke6 Kb5 9. e4 Kc5 10. e5 <HT>
               39->  39.17  Mat22   1. Kf2 Kd8 2. Kf3 Kd7 3. Ke4 Ke6 4.
                                    e3 Kd6 5. Kf5 Ke7 6. Ke5 Kd7 7. Kf6
                                    Kc6 8. Ke6 Kb5 9. e4 Kc5 10. e5 <HT>
               40    45.61  Mat22   1. Kf2 Kd8 2. Kf3 Kd7 3. Ke4 Ke6 4.
                                    e3 Kd6 5. Kf5 Ke7 6. Ke5 Kd7 7. Kf6
                                    Kc6 8. Ke6 Kb5 9. e4 <HT>
               40->  45.69  Mat22   1. Kf2 Kd8 2. Kf3 Kd7 3. Ke4 Ke6 4.
                                    e3 Kd6 5. Kf5 Ke7 6. Ke5 Kd7 7. Kf6
                                    Kc6 8. Ke6 Kb5 9. e4 <HT>
               41     1:02  Mat22   1. Kf2 Kd8 2. Kf3 Kd7 3. Ke4 Ke6 4.
                                    e3 Kd6 5. Kf5 Ke7 6. Ke5 Kd7 7. Kf6
                                    Kc6 8. Ke6 Kb5 9. e4 Kc5 10. e5 Kc6
                                    11. Kf7 Kb5 12. e6 Kb4 13. e7 Kc4 14.
                                    e8=Q Kd3 15. Ke6 Ke2 16. Qd8 Ke1 17.
                                    Qd3 Kf2 18. Kf5 <HT>

jesper_nielsen

Re: Testing Transposition Tables

Post by jesper_nielsen »

cms271828 wrote:Yes, you are right, I overlooked that.

So if you are at the root node and you probe the hash, which immediately returns a value, you then need a best move, which is why I have...

Code:

PROBEHASH()
{
    /* use this entry only if it was searched at least as deep */
    if (entry && entry.depth >= DEPTH)
    {
        /* at the root, remember the move so there is something to play */
        if (DEPTH == MAX_DEPTH)
            BESTMOVE_ = entry.move;

        ...
    }
}
Does that make sense? If I leave it out, I have no best move for the root node, and nothing to play :)

Thanks
At root, I would use the hash move regardless of the depth.
If it was best at a shallower depth, it might still be best.

I guess that is really the point of having the best move in the hash table.
If you do not get a cutoff/exact value, at least you might get a good guess at what the best move might be.

Another point is that you can try the hash move before generating the rest of the moves, saving the cost of move generation whenever the hash move turns out to cause a cutoff.
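For illustration, here is a minimal sketch of that ordering; hash_move, make_move, unmake_move, gen_moves and search are placeholder names, not anyone's actual engine code:

Code:

/* Sketch: try the hash move before paying for move generation.
   All routine names here are hypothetical placeholders. */
int move = hash_move(pos);                /* from the TT probe, may be 0 */
if (move) {
    make_move(pos, move);
    int score = -search(pos, -beta, -alpha, depth - 1);
    unmake_move(pos, move);
    if (score >= beta)
        return score;                     /* cutoff: movegen never ran */
    if (score > alpha)
        alpha = score;
}
gen_moves(pos, list);                     /* only now generate the rest */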

Sorry to jump into the middle of the discussion, and sorry if this has been said before :-)

Best regards,
Jesper
User avatar
cms271828
Posts: 316
Joined: Wed Apr 12, 2006 10:47 pm

Re: Testing Transposition Tables

Post by cms271828 »

I was thinking about my hash table replacement scheme; perhaps I'm not doing it right, or maybe I could do it better.

I currently look for the entry with the 64-bit key at positions
index, index+1, index+2, and index+3.

A) If one of those contains the key, I always overwrite that entry.

B) If none of those four entries contains the key, I overwrite the entry with the smallest depth (or if possible, fill the first empty entry I find).

Regarding A), is it better to always overwrite, or to overwrite only if the new depth is >= the depth of the stored entry? I read that the table can fill up with stale entries if you only replace on greater depth, so I wasn't sure which approach is best.
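A sketch of the scheme described above (names, the empty-slot convention, and the bucket indexing are just illustrative):

Code:

#include <stdint.h>

#define TABLE_SIZE (1 << 21)                     /* entries, power of two */
typedef struct { uint64_t key; int depth, score, move; } Entry;
static Entry table[TABLE_SIZE];

/* Store: overwrite a same-key slot (case A), else the first empty
   slot, else the shallowest of the four (case B).  key == 0 marks
   an empty slot here, purely for illustration. */
void store(uint64_t key, int depth, int score, int move)
{
    int base = (int)(key & (TABLE_SIZE - 4));    /* bucket of 4 slots */
    int victim = base;
    for (int i = 0; i < 4; i++) {
        Entry *e = &table[base + i];
        if (e->key == key || e->key == 0) {      /* case A, or empty  */
            victim = base + i;
            break;
        }
        if (e->depth < table[victim].depth)      /* case B: shallowest */
            victim = base + i;
    }
    table[victim] = (Entry){ key, depth, score, move };
}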

Thanks
Colin
Harald Johnsen

Re: Testing Transposition Tables

Post by Harald Johnsen »

It should be safe to always replace.
Perhaps it is even needed: when an all-node becomes a PV node, you want to store the best move in your table, replacing the 'no move' that is currently in the entry. This best move could come from an IID search (a search with a lower remaining depth).
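Schematically, that IID fallback might look like this (hash_move and search are placeholder names, and the depth threshold is just an example):

Code:

/* Sketch of internal iterative deepening: if the table gives no
   move at a node where ordering matters, search shallower first
   just to obtain one. */
int move = hash_move(pos);
if (move == 0 && depth >= 4) {
    search(pos, alpha, beta, depth - 2);   /* fills the hash entry */
    move = hash_move(pos);                 /* re-probe for a move  */
}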

HJ.
User avatar
hgm
Posts: 28391
Joined: Fri Mar 10, 2006 10:06 am
Location: Amsterdam
Full name: H G Muller

Re: Testing Transposition Tables

Post by hgm »

Always store the new entry, even if it has smaller depth than what is already there. It is just as important to store recent entries as it is to store deep entries. And in your scheme the deep entries are protected in 75% of the table, which should be enough to not lose really valuable ones.
bob
Posts: 20943
Joined: Mon Feb 27, 2006 7:30 pm
Location: Birmingham, AL

Re: Testing Transposition Tables

Post by bob »

cms271828 wrote:I was thinking about my hash table replacement scheme; perhaps I'm not doing it right, or maybe I could do it better.

I currently look for the entry with the 64-bit key at positions
index, index+1, index+2, and index+3.

A) If one of those contains the key, I always overwrite that entry.

B) If none of those four entries contains the key, I overwrite the entry with the smallest depth (or if possible, fill the first empty entry I find).

Regarding A), is it better to always overwrite, or to overwrite only if the new depth is >= the depth of the stored entry? I read that the table can fill up with stale entries if you only replace on greater depth, so I wasn't sure which approach is best.

Thanks
First, let me explain the problem, then a workable solution...

The problem with hashing is that it has both a global and a local utility. Global in the sense that an entry stored when searching Mi at the root might be used to shorten a search on Mj, because Mi and Mj lead to a transposition of moves that results in the same position (hence the name transposition/refutation table). That is important, particularly in positions like Fine #70.

The local characteristic stems from the above: when the trees are very large, the table can become saturated with deep-draft entries, making it impossible to store any shallow-draft positions. This hurts the sub-tree searches, since they will get no hash hits at all.

The solution, originally implemented by Ken Thompson in Belle, was to use a two-table approach: one table of size N using the depth-preferred overwrite policy, and a second table of size 2N using an "always store" policy.

The depth-preferred table handles the global transposition case, while the always-store table improves the efficiency of sub-tree searches. Crafty uses this in an improved form with just one table but three entries per set: the first entry is the "depth-preferred" entry, while the other two comprise the "always store" entries...
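A sketch of that single-table variant (the layout and names are mine for illustration, not Crafty's actual code):

Code:

#include <stdint.h>

typedef struct { uint64_t key; int depth, score, move; } Entry;
#define NUM_SETS (1 << 20)
static Entry table[3 * NUM_SETS];         /* 3 entries per set */

/* Slot 0 is depth-preferred; slots 1-2 are "always store". */
void tt_store(uint64_t key, int depth, int score, int move)
{
    Entry *set = &table[3 * (key % NUM_SETS)];
    if (set[0].key == key || depth >= set[0].depth) {
        set[0] = (Entry){ key, depth, score, move };
    } else {
        /* always-store pair: reuse a same-key slot, else replace
           the shallower of the two */
        int i = (set[1].key == key) ? 1 :
                (set[2].key == key) ? 2 :
                (set[1].depth <= set[2].depth) ? 1 : 2;
        set[i] = (Entry){ key, depth, score, move };
    }
}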

This approach is both simple and effective, and has been used by many programs. An added advantage is that it is easy to implement, which also makes it very fast...
User avatar
cms271828
Posts: 316
Joined: Wed Apr 12, 2006 10:47 pm

Re: Testing Transposition Tables

Post by cms271828 »

Thanks, that's pretty much what it says on Bruce Moreland's site.
I can only seem to get my table size up to 2^21 entries (using Java)
without expanding the memory allowed.
I think I'll stick with one hash table for now.

When using the hash table with QS, is it worth probing the hash during QS (depth <= 0)? :?

And for storing, I only store when depth > 0 (this saves time on stores, and leaves more space for the more important entries at higher depths).
Is this generally how it's done?

Thanks :)
Colin
User avatar
hgm
Posts: 28391
Joined: Fri Mar 10, 2006 10:06 am
Location: Amsterdam
Full name: H G Muller

Re: Testing Transposition Tables

Post by hgm »

Hard to answer. Just try it out, and measure whether it is faster, both for the storing and for the probing.

If you find it is faster not to store or probe in QS, you might consider the following: make a second, small hash table that always fits in your L2 cache (say 256KB if you have a 1MB L2 cache), and use it exclusively for storing and probing QS nodes.
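For example, sized so the whole table stays cache-resident (the entry layout here is just an illustration):

Code:

#include <stdint.h>

/* Sketch: a dedicated QS table small enough to live in a 1MB L2
   cache.  16-byte entries, 256KB total -> 16384 entries. */
typedef struct { uint64_t key; int16_t score; uint16_t flags; uint32_t unused; } QsEntry;
#define QS_ENTRIES ((256 * 1024) / sizeof(QsEntry))
static QsEntry qs_table[QS_ENTRIES];

/* probe and store use the same slot, so the lines stay hot */
static QsEntry *qs_slot(uint64_t key)
{
    return &qs_table[key % QS_ENTRIES];
}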
bob
Posts: 20943
Joined: Mon Feb 27, 2006 7:30 pm
Location: Birmingham, AL

Re: Testing Transposition Tables

Post by bob »

cms271828 wrote:Thanks, that's pretty much what it says on Bruce Moreland's site.
I can only seem to get my table size up to 2^21 entries (using Java)
without expanding the memory allowed.
I think I'll stick with one hash table for now.

When using the hash table with QS, is it worth probing the hash during QS (depth <= 0)? :?
I don't probe or store in the qsearch. I found that doing so slightly reduced the size of the tree, but the cost of the probes and stores offset the gain, making it a wash. Not storing also reduces the load on the table, which is good.


And for storing, I only store when depth > 0 (this saves time on stores, and leaves more space for the more important entries at higher depths).
Is this generally how it's done?

Thanks :)
bob
Posts: 20943
Joined: Mon Feb 27, 2006 7:30 pm
Location: Birmingham, AL

Re: Testing Transposition Tables

Post by bob »

cms271828 wrote:Thanks, that's pretty much what it says on Bruce Moreland's site.
I can only seem to get my table size up to 2^21 entries (using Java)
without expanding the memory allowed.
I think I'll stick with one hash table for now.

When using the hash table with QS, is it worth probing the hash during QS (depth <= 0)? :?

And for storing, I only store when depth > 0 (this saves time on stores, and leaves more space for the more important entries at higher depths).
Is this generally how it's done?

Thanks :)
The depth=0 question is harder to answer. If you hash in the q-search, I would certainly store there as well; otherwise, why are you hashing after making a capture?
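Schematically, the probe and store then bracket the node the same way as in the full-width search; qs_probe, qs_store, evaluate and the capture loop are placeholders:

Code:

/* Sketch: a qsearch that both probes and stores, so capture
   sequences searched once are reused. */
int qsearch(Position *pos, int alpha, int beta)
{
    int score;
    if (qs_probe(pos, &score, alpha, beta))   /* usable hit? */
        return score;
    score = evaluate(pos);                    /* stand-pat */
    if (score < beta) {
        /* ... search the captures here, raising score/alpha ... */
    }
    qs_store(pos, score);                     /* store on the way out */
    return score;
}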