syzygy wrote:
diep wrote:
Empirically we can show that using 32-bit numbers in a sequential manner (two adjacent numbers forming a 64-bit number, with the top 32-bit number just a few cycles away from the low 32 bits) is not safe if the same SIMPLE generator generates them, if you are going to use the generated 64-bit number 'as is' and run it on a couple of dozen cores during a game; provided that you want 0 errors.
I take it that by SIMPLE you mean BAD.
There is no reason a BAD random generator generating 64-bit integers would do worse or better than a BAD random generator generating 32-bit integers which are then pairwise concatenated into 64-bit integers.
The same holds for a GOOD random generator. My advice is to use a GOOD one. As long as it is GOOD, it does not matter whether it generates 64 bits at a time, 32 bits at a time, or just a single bit at a time.
Btw, random bits can be downloaded from
http://www.random.org/files/
Just read 64 bits at a time to produce your 64-bit ints.
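(For illustration, a minimal C sketch of filling a Zobrist table that way; the filename and the 12x64 piece-square layout are placeholders, not from the post:)

Code:

/* Fill a Zobrist table from a raw binary dump of random bits,
 * e.g. a file downloaded from random.org/files/.
 * "random.bin" and the 12x64 dimensions are placeholders. */
#include <stdio.h>
#include <stdint.h>

#define PIECES  12
#define SQUARES 64

uint64_t zobrist[PIECES][SQUARES];

int main(void) {
    FILE *f = fopen("random.bin", "rb");
    if (!f) { perror("random.bin"); return 1; }
    /* Read 64 bits (8 bytes) per table entry. */
    if (fread(zobrist, sizeof(uint64_t), PIECES * SQUARES, f)
            != PIECES * SQUARES) {
        fprintf(stderr, "file too short\n");
        fclose(f);
        return 1;
    }
    fclose(f);
    return 0;
}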
It does matter: for Zobrist in computer chess you really need every single bit of those 64 bits, as engines search so deep.
So the total search space they span is pretty big.
Any flaw will backfire.
If we have entries like:
{R(x),x},
{R(R(R(x))),R(R(x))}
etc.
then there will always be a multilinear connection between them.
That reduces the number of safety bits you have.
Numbers with a multilinear connection are of course easier to crack,
even for Zobrist; cracking in this case means a collision.
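(To make the multilinear point concrete, a sketch assuming a plain 32-bit LCG as the SIMPLE generator; the constants are the well-known Numerical Recipes ones, and the pairing follows the {R(x), x} pattern above:)

Code:

#include <stdint.h>

/* The 'SIMPLE' generator: a plain 32-bit LCG. */
static uint32_t lcg_state = 12345;

static uint32_t lcg(void) {
    lcg_state = lcg_state * 1664525u + 1013904223u;
    return lcg_state;
}

/* Two adjacent outputs concatenated into one 64-bit Zobrist key:
 * key = { R(x), x }.  Because R(x) = a*x + c (mod 2^32), the top
 * half of EVERY key is the same affine function of the bottom half,
 * so the table carries far fewer independent bits than 64. */
uint64_t next_key(void) {
    uint32_t lo = lcg();
    uint32_t hi = lcg();   /* hi == lo * 1664525u + 1013904223u */
    return ((uint64_t)hi << 32) | lo;
}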
Murphy's law will haunt you.
You can easily test this by measuring the number of collisions per game.
Measuring can be done really fast by using, for example, a 128-bit Zobrist key instead of 64 bits.
Then you measure during games, like I did, and you'll see you get more collisions and errors.
Even in 5-minute blitz games you'll get collisions more easily this way.
Just play a bunch and compare.
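(One way to instrument that, sketched with invented names: keep an extra, independently generated 128-bit check key next to the normal 64-bit key in each hash entry, and count every 64-bit hit whose 128-bit check fails:)

Code:

#include <stdint.h>

/* Illustrative hash entry carrying both keys. */
typedef struct {
    uint64_t key64;        /* normal 64-bit Zobrist key            */
    uint64_t key128_hi;    /* extra 128-bit check key (two words), */
    uint64_t key128_lo;    /* built from independent random tables */
    /* ... move, score, depth, bound ... */
} HashEntry;

static uint64_t collisions;  /* 64-bit hits that were false */

/* Probe: a 64-bit match whose 128-bit check fails is a collision. */
int probe(const HashEntry *e, uint64_t k64,
          uint64_t k128_hi, uint64_t k128_lo) {
    if (e->key64 != k64)
        return 0;                 /* ordinary miss       */
    if (e->key128_hi != k128_hi || e->key128_lo != k128_lo) {
        collisions++;             /* 64-bit key collided */
        return 0;
    }
    return 1;                     /* genuine hit         */
}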
I did a bunch of extensive tests on this back in 2003.
Nowadays everyone has a hash table of this size and gets enough search depth to be able to measure that.
Also, we assume you overwrite in the hash table depth-based; so not some idiotic manner of overwriting that only amateurs use.
Just measure.
Doing searches at a static position is not a good idea, as the total search space you span with your search is then of course smaller than 2^64.
This is one of the big mistakes made in the collision research published previously.
They didn't play games... just searched 8 plies... and the hash table didn't do normal depth-based overwriting.
Most programs already follow what I posted back in the 90s.
Have a few bits in the hash table, say 8 or so. Now just overwrite using depthleft + searchesdonefromopeningsposition.
You add those two up and overwrite the smallest entry you find within the 4 or 8 entries or so that you scan sequentially.
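(A sketch of that replacement scheme with invented names, assuming buckets of 4 entries; 'age' is the counter of searches done since the opening position:)

Code:

#include <stdint.h>

#define BUCKET 4   /* entries scanned sequentially per probe */

typedef struct {
    uint64_t key;
    int8_t   depthleft;
    uint8_t  age;      /* search number when the entry was stored */
    /* ... move, score, bound ... */
} Entry;

/* Overwrite the entry with the smallest depthleft + age in the bucket,
 * so shallow entries from long-gone searches go first while deep or
 * recent entries survive across moves. */
void store(Entry *bucket, uint64_t key, int depthleft, int age) {
    int victim = 0;
    int worst  = bucket[0].depthleft + bucket[0].age;
    for (int i = 1; i < BUCKET; i++) {
        int v = bucket[i].depthleft + bucket[i].age;
        if (v < worst) { worst = v; victim = i; }
    }
    bucket[victim].key       = key;
    bucket[victim].depthleft = (int8_t)depthleft;
    bucket[victim].age       = (uint8_t)age;
}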
That means that during games a lot of old info from previous moves is still there. That suddenly expands the search space to above 10^19, especially in the endgame.
Then suddenly the collisions start coming if you were stubborn and combined 32-bit numbers... to give one example...
I put months of my time into this. All those researchers are always busy for just 5 minutes and then publish another piece of crap research. If you just search for 5 minutes at one position, you'll find nothing, of course.
Even 200 cores running for some hours with a 100+ GB hash table got just 1 collision when done at the opening position.
That's not interesting research, you know. If the search space your search spanned is not huge, you can write anything in any paper anyway, as there simply wasn't a chance in heaven of getting a single collision.
Gotta play games!
Most collisions are in the endgame; if you think about it you'll realize why.