Weaker play with TT

Discussion of chess software programming and technical issues.

Moderators: hgm, Rebel, chrisw

Sven
Posts: 4052
Joined: Thu May 15, 2008 9:57 pm
Location: Berlin, Germany
Full name: Sven Schüle

Re: Weaker play with TT

Post by Sven »

maksimKorzh wrote: Mon Jan 11, 2021 3:29 pm
hgm wrote: Mon Jan 11, 2021 1:53 pm I suppose you use separate 32-bit keys for the signature and the index, since JavaScript doesn't have a 64-bit type. For the signature you don't care what the sign is, it is only used for a comparison of numbers. You want the index to be within the range of the TT, though, which also means non-negative.

Note that the fact that the index key can be negative is self-inflicted. In Zobrist hashing the key is an XOR of all the Zobrist keys. If you had defined all Zobrist keys with the sign bit zero (i.e. positive), their XOR would also be positive. In fact there is no need to do any masking during TT access; you could have done the masking on the Zobrist keys, so that the bits you want to mask away would always be zero to start with.
re: I suppose you use separate 32-bit keys for the signature and the index, since JavaScript doesn't have a 64-bit type. For the signature you don't care what the sign is, it is only used for a comparison of numbers. You want the index to be within the range of the TT, though, which also means non-negative.

I didn't exactly get what you mean.
I have my own simple XOR-shift-based pseudo random number generator so that I get the same hash keys in any environment.
My hash keys are 32-bit signed (I tried to make them unsigned via new Uint32Array(hashKey)[0], but after XORing they drop back to being negative).
I assume by "signature" you mean a unique position identifier to detect repetitions; if so - yes.
And yes, I want the index to be within the range of the TT; Martin's solution "(hashKey & 0x7FFFFFFF) % hashEntries" seems to be working fine.
Masking the Zobrist keys themselves is a very interesting idea, but I'd like to keep them as they are for simplicity -
I mean, when the masking is done in the TT index calculation it's clear that the sign bit is being stripped to fit the table length, while
with masked Zobrist keys it might be a bit tricky to understand WHY it is done. Say someone wants to take my engine
as a basis: long before implementing a TT they would have no use for the stripped sign bit in the Zobrist keys,
and it might be an unclear part of the code - I try to avoid code that forces the reader to guess what the author meant.
Only buggy code should leave you wondering :)
That confuses me a bit. Why isn't it possible to simply work with unsigned integers (in this case 32 bit), and therefore to avoid any possible errors introduced by "hiding"/"stripping" the sign bit?

My other point is related to your number of hash table entries and its relation to your hash table size. In order to keep both under control you would urgently need to know the exact size of one hash entry in bytes. "80" does not appear to be correct (that would be 5 integers of 16*8=128 bit which I suppose you don't have). If that size is 20 bytes (5 32-bit integers) then your hash table size in bytes should be some (2^N) * 20 where hashEntries := 2^N. So for a 320 MB hash table you would need N=24, for instance. Your index would simply be (hashKey % hashEntries).
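
For illustration, here is a minimal JavaScript sketch of both points - keeping a 32-bit key unsigned and indexing a power-of-two table (hashKey and hashEntries are the names used in this thread; everything else is an assumption, not code from Maksim's engine):

```javascript
// Sketch: bitwise XOR in JavaScript yields a *signed* 32-bit result;
// ">>> 0" reinterprets it as unsigned, so the key never turns negative.
let hashKey = 0;

function xorIntoKey(zobristKey) {
  hashKey = (hashKey ^ zobristKey) >>> 0;   // always in [0, 2^32 - 1]
}

// With a power-of-two entry count (2^24 entries * 20 bytes ~ 320 MB),
// the index is just the low bits of the key - no sign bit to strip.
const hashEntries = 1 << 24;

function ttIndex(key) {
  return key & (hashEntries - 1);           // equals key % hashEntries here
}

// hgm's alternative: generate (or mask) the Zobrist keys themselves so the
// unwanted high bits are already zero, and no masking is needed at probe time.
```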
Sven Schüle (engine author: Jumbo, KnockOut, Surprise)
maksimKorzh
Posts: 771
Joined: Sat Sep 08, 2018 5:37 pm
Location: Ukraine
Full name: Maksim Korzh

Re: Weaker play with TT

Post by maksimKorzh »

mar wrote: Mon Jan 11, 2021 3:25 pm
maksimKorzh wrote: Mon Jan 11, 2021 3:17 pm P.S. Martin, somewhere in the Cheng thread you mentioned something like: "because I've made no progress since 4.40dev and because CEGT is already testing Guenther's compiles" - I just wanted to ask what you do when you feel you've reached that kind of limit - when you feel like you can't make an engine stronger or can't make any progress. What do you do in that case? I mean, Stockfish kind of never gets stuck and keeps improving forever, but many
other engines reach some Elo level and then just stay there forever.

What is the secret of keeping the progress going?
Thanks in advance!
ah yes, that statement was not entirely correct, as my actual version was ~10-15 Elo stronger than dev,
but since CEGT had already started to test Guenther's compile, I decided to make it an official release;
I didn't want a dev version to pollute the rating lists. Also, had I released something that's only 10 Elo stronger in self-play,
due to error bars it might as well be listed below 4.40 dev, so I was sort of forced to release.
lesson learned: never push to master until the release is ready :)

the secret is simply motivation and patience, easier said than done of course.
when I can't make progress I switch to something other than chess (= take a break), and perhaps at some point in the future
the motivation will return for a while
Very insightful, thank you.

P.S. Unfortunately fixing the hash entry indexing didn't resolve the issue of weaker play.
I ran around 100 games and they show that the TT version is 100 (!!!) Elo points weaker compared to no TT.
I'm very confused... why the heck can that be...???
op12no2
Posts: 490
Joined: Tue Feb 04, 2014 12:25 pm
Full name: Colin Jenkins

Re: Weaker play with TT

Post by op12no2 »

Hi Maksim, I did reply to your email re TT - did you see it?
Dann Corbit
Posts: 12540
Joined: Wed Mar 08, 2006 8:57 pm
Location: Redmond, WA USA

Re: Weaker play with TT

Post by Dann Corbit »

maksimKorzh wrote: Mon Jan 11, 2021 11:23 pm P.S. Unfortunately fixing the hash entry indexing didn't resolve the issue of weaker play.
I ran around 100 games and they show that the TT version is 100 (!!!) Elo points weaker compared to no TT.
I'm very confused... why the heck can that be...???
There is pretty much no way that a complete evaluation is faster than a hash table lookup, so the answer is probably "a bug"

Usually, the problem is caused by the hash key itself.
Null move, e.p. status, castle flags, side to move are common things that did not get calculated correctly, or possibly did not get undone.
I assume that you calculate the hash for the position incrementally.
If so, create a method that calculates the hash from the current game state.
Then compare the incremental hash with the fully recalculated hash.

Hash is important for lots of things. You will get better move ordering with a proper hash table.
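
A hedged sketch of that check in JavaScript (pieceKeys, epKeys, castleKeys, sideKey and the board fields are hypothetical stand-ins for whatever Zobrist tables and state the engine actually uses):

```javascript
// Sketch: recompute the hash from the current game state and compare it with
// the incrementally maintained key. Call it after every make/unmake in a debug
// build; the first mismatch points at the missing or wrong incremental update
// (e.p. square, castling rights and side to move are the usual suspects).
const BLACK = 1;   // assumed side-to-move encoding

function hashFromScratch(board) {
  let key = 0;
  for (let sq = 0; sq < 64; sq++) {
    const piece = board.pieces[sq];                        // 0 = empty (assumed)
    if (piece) key = (key ^ pieceKeys[piece][sq]) >>> 0;
  }
  if (board.enPassant >= 0) key = (key ^ epKeys[board.enPassant]) >>> 0;
  key = (key ^ castleKeys[board.castlingRights]) >>> 0;
  if (board.side === BLACK) key = (key ^ sideKey) >>> 0;
  return key;
}

function verifyHash(board) {
  if (hashFromScratch(board) !== (board.hashKey >>> 0)) {
    throw new Error('incremental hash out of sync at this position');
  }
}
```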
Taking ideas is not a vice, it is a virtue. We have another word for this. It is called learning.
But sharing ideas is an even greater virtue. We have another word for this. It is called teaching.
maksimKorzh
Posts: 771
Joined: Sat Sep 08, 2018 5:37 pm
Location: Ukraine
Full name: Maksim Korzh

Re: Weaker play with TT

Post by maksimKorzh »

op12no2 wrote: Mon Jan 11, 2021 11:55 pm Hi Maksim, I did reply to your email re TT - did you see it?
Hi Colin, nope - no email from you - only the one regarding the PRNG you sent previously. Could you please resend it?
maksimKorzh
Posts: 771
Joined: Sat Sep 08, 2018 5:37 pm
Location: Ukraine
Full name: Maksim Korzh

Re: Weaker play with TT

Post by maksimKorzh »

Dann Corbit wrote: Tue Jan 12, 2021 3:37 am
maksimKorzh wrote: Mon Jan 11, 2021 11:23 pm P.S. Unfortunately fixing the hash entry indexing didn't resolve the issue of weaker play.
I ran around 100 games and they show that the TT version is 100 (!!!) Elo points weaker compared to no TT.
I'm very confused... why the heck can that be...???
There is pretty much no way that a complete evaluation is faster than a hash table lookup, so the answer is probably "a bug"

Usually, the problem is caused by the hash key itself.
Null move, e.p. status, castle flags, side to move are common things that did not get calculated correctly, or possibly did not get undone.
I assume that you calculate the hash for the position incrementally.
If so, create a method that calculates the hash from the current game state.
Then compare the incremental hash with the fully recalculated hash.

Hash is important for lots of things. You will get better move ordering with a proper hash table.
Hi Dann

I checked hash key correctness when I implemented the keys initially - I checked whether the incrementally updated key matches the one generated from scratch, and it always did. Indeed I do get improved move ordering, and the TT cuts around half of the nodes at depth 5 in the initial position.
maksimKorzh
Posts: 771
Joined: Sat Sep 08, 2018 5:37 pm
Location: Ukraine
Full name: Maksim Korzh

Re: Weaker play with TT

Post by maksimKorzh »

Sven wrote: Mon Jan 11, 2021 6:12 pm
maksimKorzh wrote: Mon Jan 11, 2021 3:29 pm
hgm wrote: Mon Jan 11, 2021 1:53 pm I suppose you use separate 32-bit keys for the signature and the index, since JavaScript doesn't have a 64-bit type. For the signature you don't care what the sign is, it is only used for a comparison of numbers. You want the index to be within the range of the TT, though, which also means non-negative.

Note that the fact that the index key can be negative is self-inflicted. In Zobrist hashing the key is an XOR of all the Zobrist keys. If you had defined all Zobrist keys with the sign bit zero (i.e. positive), their XOR would also be positive. In fact there is no need to do any masking during TT access; you could have done the masking on the Zobrist keys, so that the bits you want to mask away would always be zero to start with.
re: I suppose you use separate 32-bit keys for the signature and the index, since JavaScript doesn't have a 64-bit type. For the signature you don't care what the sign is, it is only used for a comparison of numbers. You want the index to be within the range of the TT, though, which also means non-negative.

I didn't exactly get what you mean.
I have my own simple XOR-shift-based pseudo random number generator so that I get the same hash keys in any environment.
My hash keys are 32-bit signed (I tried to make them unsigned via new Uint32Array(hashKey)[0], but after XORing they drop back to being negative).
I assume by "signature" you mean a unique position identifier to detect repetitions; if so - yes.
And yes, I want the index to be within the range of the TT; Martin's solution "(hashKey & 0x7FFFFFFF) % hashEntries" seems to be working fine.
Masking the Zobrist keys themselves is a very interesting idea, but I'd like to keep them as they are for simplicity -
I mean, when the masking is done in the TT index calculation it's clear that the sign bit is being stripped to fit the table length, while
with masked Zobrist keys it might be a bit tricky to understand WHY it is done. Say someone wants to take my engine
as a basis: long before implementing a TT they would have no use for the stripped sign bit in the Zobrist keys,
and it might be an unclear part of the code - I try to avoid code that forces the reader to guess what the author meant.
Only buggy code should leave you wondering :)
That confuses me a bit. Why isn't it possible to simply work with unsigned integers (in this case 32 bit), and therefore to avoid any possible errors introduced by "hiding"/"stripping" the sign bit?

My other point is related to your number of hash table entries and its relation to your hash table size. In order to keep both under control you would urgently need to know the exact size of one hash entry in bytes. "80" does not appear to be correct (that would be 5 integers of 16*8=128 bit which I suppose you don't have). If that size is 20 bytes (5 32-bit integers) then your hash table size in bytes should be some (2^N) * 20 where hashEntries := 2^N. So for a 320 MB hash table you would need N=24, for instance. Your index would simply be (hashKey % hashEntries).
Hi Sven

Do you mean stripping the sign bit might have caused the weaker play?

re: 80
- I didn't find a way to get the size of a single number in JS (the docs vaguely say they are 64-bit double precision and that it depends on the JavaScript implementation). So my 80 = 16 + 16 + 16 + 16 + 16, where 16 = 8 + 8 bytes, assuming a 64-bit double. I was checking the actual memory taken on my
laptop when changing the TT size and it was roughly matching - I mean, when I changed 16 MB to 64 MB, RAM usage was around 64 MB bigger.
I have a concern regarding 32-bit integers - it would certainly be true for C, but probably not in JS, unless the JS "Number" type is dynamically converted
to a 32-bit integer somehow - but I can't see that in the docs.

So are you sure we can assume that we deal with 32-bit integers in JavaScript??
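
For reference, a small sketch of what JavaScript actually guarantees here (plain browser/Node JavaScript, nothing engine-specific assumed): Numbers are 64-bit doubles, but bitwise operators and typed arrays both give true 32-bit integer behaviour.

```javascript
// JavaScript Numbers are 64-bit doubles, but:
// 1) bitwise operators coerce their operands to 32-bit integers
//    ("| 0" gives the signed view, ">>> 0" the unsigned view), and
// 2) typed arrays store genuine fixed-width integers.
const x = 0xFFFFFFFF;          // as a Number: 4294967295
console.log(x | 0);            // -1          (signed 32-bit)
console.log(x >>> 0);          // 4294967295  (unsigned 32-bit)

// A typed array makes the memory per entry exact and predictable,
// e.g. 5 32-bit fields per entry = 20 bytes per entry:
const hashEntries = 1 << 20;                      // ~1M entries
const tt = new Uint32Array(hashEntries * 5);
console.log(tt.BYTES_PER_ELEMENT * tt.length);    // 20971520 bytes, ~20 MB
```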
op12no2
Posts: 490
Joined: Tue Feb 04, 2014 12:25 pm
Full name: Colin Jenkins

Re: Weaker play with TT

Post by op12no2 »

maksimKorzh wrote: Tue Jan 12, 2021 12:50 pm Hi Colin, nope - no email from you - only the one regarding the PRNG you sent previously. Could you please resend it?
Done. PS As I recall you can get up to 54 reliable bits :roll: with a plain JavaScript number - even though there are more floating (ha) around - p4wn does that, I think. But I use two 32-bit ints. Since 2015-ish there has been a BigInt concept in JavaScript but I've never looked at it in any detail - when I glanced at it I decided it was not useful for hashes, but I may well be wrong. I sometimes plead with the V8 developers to give us native 64-bit ints but it always falls on deaf ears - I even offered them free pizza for life once :) You can get uint32s using typed arrays, which is what I do. One could argue that for speed you should use typed arrays for everything, but the optimiser does an amazing job - you can watch the nps increase during analysis.

https://developer.mozilla.org/en-US/doc ... cts/BigInt
http://p4wn.sourceforge.net/
https://developer.mozilla.org/en-US/doc ... ped_arrays
https://github.com/op12no2/lozza

Colin
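
For reference, a rough sketch of the "two 32-bit ints plus typed arrays" idea (an illustration with assumed names and entry layout, not Lozza's actual code):

```javascript
// Sketch: the TT as one flat Uint32Array, four slots per entry:
// [keyLo, keyHi, bestMove, packed score/depth/flag]. The two 32-bit halves
// together act as the 64-bit signature that a double cannot hold exactly.
const ENTRY_SLOTS = 4;
const hashEntries = 1 << 22;                          // power of two
const tt = new Uint32Array(hashEntries * ENTRY_SLOTS);

function ttStore(keyLo, keyHi, move, data) {
  const i = ((keyLo >>> 0) & (hashEntries - 1)) * ENTRY_SLOTS;
  tt[i] = keyLo;                                      // stored as unsigned
  tt[i + 1] = keyHi;
  tt[i + 2] = move;
  tt[i + 3] = data;
}

function ttProbe(keyLo, keyHi) {
  const i = ((keyLo >>> 0) & (hashEntries - 1)) * ENTRY_SLOTS;
  if (tt[i] === (keyLo >>> 0) && tt[i + 1] === (keyHi >>> 0)) {
    return { move: tt[i + 2], data: tt[i + 3] };
  }
  return null;                                        // miss
}
```

Using the low half for the index and both halves as the stored signature keeps probes cheap while still catching nearly all index collisions.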
maksimKorzh
Posts: 771
Joined: Sat Sep 08, 2018 5:37 pm
Location: Ukraine
Full name: Maksim Korzh

Re: Weaker play with TT

Post by maksimKorzh »

op12no2 wrote: Tue Jan 12, 2021 1:26 pm
maksimKorzh wrote: Tue Jan 12, 2021 12:50 pm Hi Colin, nope - no email from you - only the one regarding the PRNG you sent previously. Could you please resend it?
Done. PS As I recall you can get up to 54 reliable bits :roll: with a plain JavaScript number - even though there are more floating (ha) around - p4wn does that, I think. But I use two 32-bit ints. Since 2015-ish there has been a BigInt concept in JavaScript but I've never looked at it in any detail - when I glanced at it I decided it was not useful for hashes, but I may well be wrong. I sometimes plead with the V8 developers to give us native 64-bit ints but it always falls on deaf ears - I even offered them free pizza for life once :) You can get uint32s using typed arrays, which is what I do. One could argue that for speed you should use typed arrays for everything, but the optimiser does an amazing job - you can watch the nps increase during analysis.

https://developer.mozilla.org/en-US/doc ... cts/BigInt
http://p4wn.sourceforge.net/
https://developer.mozilla.org/en-US/doc ... ped_arrays
https://github.com/op12no2/lozza

Colin
Yup. I'm aware of BigInts and typed arrays - I was considering them but rejected them for the same reasons you did.
Free pizza for life is cool) It reminds me of a meme where an Indian pauper says: "writing C++ code for food"
(no discrimination or racism - I have many Indian fellow developers and respect them and what they do;
if any Indian developers are reading this - please don't get me wrong)
op12no2
Posts: 490
Joined: Tue Feb 04, 2014 12:25 pm
Full name: Colin Jenkins

Re: Weaker play with TT

Post by op12no2 »

maksimKorzh wrote: Tue Jan 12, 2021 3:03 pm Yup. I'm aware of BigInts and typed arrays - I was considering them but rejected them for the same reasons you did.
Just to clarify, I do use typed arrays; I don't use BigInts.

cj