RubiChess wrote: ↑Sun Oct 06, 2019 10:27 am
xr_a_y wrote: ↑Sun Oct 06, 2019 6:41 am
Here is some input, at TC 40/20sec TT=128Mb
...
Code: Select all
Rank Name              Elo  +/-  Games  Score  Draws
   1 komodo-13-64-ja   108   23    692  65.1%  28.2%
...
Is this really Komodo 13? Only 11 Elo better than Rodent?

Andreas, I am quite sure (JA compile!) it is Komodo 1.3.
Looking for TT policy advice
- Guenther
- Posts: 3098
- Joined: Wed Oct 01, 2008 4:33 am
- Location: Regensburg, Germany
- Full name: Guenther Simon
Re: Looking for TT policy advice
Current foe list count : [101]
http://rwbc-chess.de/chronology.htm
Re: Looking for TT policy advice
Yes, 1.3 of course.
Minic is totally killed by Komodo 10; I haven't even tried 13 yet ...
Vivien Clauzon
Weini
https://github.com/tryingsomestuff/Weini
Minic
https://github.com/tryingsomestuff/Minic
Re: Looking for TT policy advice
hgm wrote: ↑Fri Oct 04, 2019 1:18 pm
At short TC the entire tree often fits in the TT, so the replacement algorithm does not matter, as it is hardly exercised. My experience is that you only start to notice a slowdown if you shrink the TT size to less than 10% of the node count, even with rather simple replacement schemes.
For running with a very much overloaded TT, it is important to make sure intermediate depths are not completely pushed out of the table: what is not deep enough to fight its way into a depth-preferred slot will end up in the always-replace slot. There the survival time is very short, as these slots are constantly bombarded by a flood of d=0 (or whatever your lowest stored depth is) entries.
One way around that is allowing shallow entries to replace deeper ones when the depth of the latter is overrepresented in the table (e.g. when there are more entries of that depth than d=1 entries). For this you would have to keep a histogram of table depths, however. An alternative is to use under-cut replacement, where you replace a depth-preferred entry by a new entry of a depth that is equal, higher, or exactly one lower. This will eventually flush any depth out of the table, so one should view it as a kind of always-replace slot. But it tends to hang on longer to the deeper entries.
My current favorite scheme is to probe only 3 entries in each bucket, the hash key determining which 3: one that keeps track of the highest depth so far (subject to aging during game play, but not in analysis), one under-cut slot, and one always-replace slot. If the total number of slots in a bucket is not a multiple of 3, some keys can share some of the entries (e.g. only a single always-replace entry per bucket).

After working on the little bug I found previously, I tried some different TT schemes, without any success. I use the same table to store eval and score, and it is also used in qsearch. It feels like something about depth is needed, or maybe about entry age, but I couldn't find anything better than just replacing the worst depth (with only 2 buckets ...).
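For what it's worth, a minimal sketch of the three-slot policy hgm describes could look like this; the Entry layout and field names (key, depth) are assumptions for illustration, not code from any actual engine:

Code: Select all
#include <cstdint>

struct Entry  { uint64_t key; int depth; /* move, score, bound, ... */ };
struct Bucket { Entry slot[3]; }; // [0] depth-preferred, [1] under-cut, [2] always-replace

void ttStore(Bucket& b, const Entry& e) {
    for (Entry& s : b.slot)                    // same position already stored?
        if (s.key == e.key) { s = e; return; } // then just overwrite it in place
    if (e.depth >= b.slot[0].depth)            // depth-preferred: keep the highest depth
        b.slot[0] = e;
    else if (e.depth >= b.slot[1].depth - 1)   // under-cut: equal, higher, or exactly one lower
        b.slot[1] = e;
    else
        b.slot[2] = e;                         // always-replace
}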
Vivien Clauzon
Weini
https://github.com/tryingsomestuff/Weini
Minic
https://github.com/tryingsomestuff/Minic
- hgm
- Posts: 23772
- Joined: Fri Mar 10, 2006 9:06 am
- Location: Amsterdam
- Full name: H G Muller
Re: Looking for TT policy advice
I suppose you mean "with only two slots per bucket".
True, you don't have much leeway in that case, as it is very important to always store in the TT. You could use under-cut replacement, replacing the highest depth of the pair rather than the lowest when the new depth is exactly one less. Or do that in only 25% (say) of the cases, based on the low bits of the node counter.
But it would be better to have more slots per bucket. I usually try to organize the TT such that buckets are exactly 64 bytes (so they fit in a cache line, and a probe will need only a single memory (burst) access). That often gives me 5 or 6 slots per bucket. Then I use a probe scheme that probes 3 of those slots, like 0, 1 and 2, or 0, 3 and 4 when I have 5 slots (depending on the hash key), where 0 would be the always-replace slot.
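To make that concrete, here is one possible layout for such a 64-byte bucket: 5 slots of 12 bytes plus 4 bytes of padding. All names and field widths are illustrative assumptions, not taken from any particular engine:

Code: Select all
#include <cstdint>

struct Slot {
    uint32_t signature;   // upper bits of the hash key
    uint16_t move;
    int16_t  score, eval;
    uint8_t  depth;
    uint8_t  boundAndAge;
};                        // 12 bytes, no internal padding

struct Bucket {
    Slot    slot[5];      // 60 bytes
    uint8_t pad[4];       // round the bucket up to a full cache line
};
static_assert(sizeof(Slot) == 12 && sizeof(Bucket) == 64,
              "a bucket must fit exactly one cache line");

// Probe slots {0,1,2} or {0,3,4} depending on a hash-key bit;
// slot 0 is the always-replace slot shared by both groups.
inline int probeBase(uint64_t key) { return (key & 1) ? 1 : 3; }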
Re: Looking for TT policy advice
hgm wrote: ↑Fri Oct 11, 2019 7:07 am
I suppose you mean "with only two slots per bucket".
True, you don't have much leeway in that case, as it is very important to always store in the TT. You could use under-cut replacement, replacing the highest depth of the pair rather than the lowest when the new depth is exactly one less. Or do that in only 25% (say) of the cases, based on the low bits of the node counter.
But it would be better to have more slots per bucket. I usually try to organize the TT such that buckets are exactly 64 bytes (so they fit in a cache line, and a probe will need only a single memory (burst) access). That often gives me 5 or 6 slots per bucket. Then I use a probe scheme that probes 3 of those slots, like 0, 1 and 2, or 0, 3 and 4 when I have 5 slots (depending on the hash key), where 0 would be the always-replace slot.

Well, for now I use a quite big TT entry: 24 bytes ...
Code: Select all
struct Entry{
    Move m;                // int
    Hash h;                // int64
    ScoreType score, eval; // short int
    Bound b;               // char
    DepthType d;           // signed char
    short int generation;
};
Vivien Clauzon
Weini
https://github.com/tryingsomestuff/Weini
Minic
https://github.com/tryingsomestuff/Minic
Re: Looking for TT policy advice
xr_a_y wrote: ↑Fri Oct 11, 2019 2:54 pm
Code: Select all
struct Entry{
    Move m;                // int
    Hash h;                // int64
    ScoreType score, eval; // short int
    Bound b;               // char
    DepthType d;           // signed char
    short int generation;
};

This is not 20 bytes?
4 + 8 + 2 + 2 + 1 + 1 + 2 = 20
Re: Looking for TT policy advice
Yes, but as the struct is ill-organized, there is some padding in the middle. That should be optimized in the first place.
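To make the padding concrete, assuming the widths given in the comments above (a 32-bit Move, a 64-bit Hash) on a typical 64-bit platform: the 8-byte h must start on an 8-byte boundary, so 4 padding bytes follow m. Note that reordering the members alone would still leave sizeof(Entry) at 24, since the 8-byte alignment rounds the total up; shrinking the stored hash, as suggested in the next reply, is what enables a real saving.

Code: Select all
#include <cstdint>
#include <cstddef>

// Assumed widths, matching the comments in the struct above.
using Move      = int32_t;
using Hash      = uint64_t;
using ScoreType = int16_t;
using DepthType = int8_t;
enum Bound : char { B_exact, B_alpha, B_beta };

struct Entry {
    Move m;                // offset 0
    Hash h;                // offset 8 -- bytes 4..7 are padding
    ScoreType score, eval; // offsets 16 and 18
    Bound b;               // offset 20
    DepthType d;           // offset 21
    short int generation;  // offset 22
};
static_assert(offsetof(Entry, h) == 8, "4 padding bytes before h");
static_assert(sizeof(Entry) == 24, "20 bytes of data, 24 of storage");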
Vivien Clauzon
Weini
https://github.com/tryingsomestuff/Weini
Minic
https://github.com/tryingsomestuff/Minic
- hgm
- Posts: 23772
- Joined: Fri Mar 10, 2006 9:06 am
- Location: Amsterdam
- Full name: H G Muller
Re: Looking for TT policy advice
Sure, a large part of the hash key is completely redundant, as it can only be the index of the entry. Usually storing 32 bits as signature is good enough, and I was told Stockfish even uses only 16 bits. Wasting memory on padding is of course never a good idea. Only for very large or complex variants do I need more than 2 bytes for the move.
It is also not clear to me what 'generation' is used for.
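Following that suggestion, a possible 16-byte layout keeps only the high 32 bits of the key as a signature, while the low bits select the table slot. Names and widths here are assumptions for illustration, not Minic's actual code:

Code: Select all
#include <cstdint>
#include <cstddef>

struct PackedEntry {
    uint32_t signature;  // high 32 bits of the hash key
    int32_t  move;
    int16_t  score, eval;
    int16_t  generation;
    int8_t   bound;
    int8_t   depth;
};
static_assert(sizeof(PackedEntry) == 16, "no padding, 16 bytes per entry");

// Low key bits index the table (tableSize a power of two),
// high key bits serve as the stored signature.
inline size_t   indexOf(uint64_t key, size_t tableSize) { return size_t(key) & (tableSize - 1); }
inline uint32_t signatureOf(uint64_t key)               { return uint32_t(key >> 32); }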
Re: Looking for TT policy advice
"generation" is "age" of the entry (it could be smaller or even a bool the way it is used for now...).hgm wrote: ↑Fri Oct 11, 2019 5:20 pmSure, a large part of the hash key is completely redundant, as it can only be the index of the entry. Usually storing 32 bits as signature is good enough, and I was told Stockfish even uses only 16 bits. Wasting memory on padding is of course never a good idea. Only or very large or complex variants I need more than 2 bytes or the move.
It is also not clear to me what 'generation' is used for.
a move is coded so that
Code: Select all
inline ScoreType Move2Score(Move h) { assert(h != INVALIDMOVE); return (h >> 16) & 0xFFFF; }
inline Square Move2From (Move h) { assert(h != INVALIDMOVE); return (h >> 10) & 0x3F ; }
inline Square Move2To (Move h) { assert(h != INVALIDMOVE); return (h >> 4) & 0x3F ; }
inline MType Move2Type (Move h) { assert(h != INVALIDMOVE); return MType(h & 0xF) ; }
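A hypothetical packing helper, the inverse of these accessors (reusing the Move, Square, MType and ScoreType types from the code above, and assuming Move is a 32-bit int):

Code: Select all
// Score in bits 16..31, from-square in 10..15, to-square in 4..9, type in 0..3.
inline Move ToMove(Square from, Square to, MType type, ScoreType score) {
    return (Move((unsigned short)score) << 16) // sorting score, masked to 16 bits
         | (Move(from & 0x3F)           << 10) // from square (0..63)
         | (Move(to   & 0x3F)           << 4)  // to square (0..63)
         |  Move(type & 0xF);                  // move type
}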
Vivien Clauzon
Weini
https://github.com/tryingsomestuff/Weini
Minic
https://github.com/tryingsomestuff/Minic