I'm going to have a simple configuration file - which specifies the client and how it's invoked. You can also have a separate configuration file for the perft binary. I want to keep it simple. At the moment the binary is hard-coded into the client - but that is easily fixed.

ibid wrote: OK, thanks. Since mine couldn't output that even if I wanted it to.

Don wrote: The program expects to see each move output for display purposes, but it's not required. The GUI displays the board and all the moves, and reports the results of each move as things progress. So if the perft binary does not report intermediate results, it will just affect the display, nothing else, and certainly not the integrity of the calculations.
I am reducing my perft program to something more appropriate for this right now, and this thought comes up: how about the possibility of the client passing a thread count to the perft binary as an extra argument? That way people could enter it in the GUI and change it as needed (although clearly it would not take effect until the next position), without each program needing to get it from an initialization file. Just a thought.
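A minimal sketch of what that could look like on the binary's side, assuming the client just appends the count as a trailing argument; the usage line and defaults here are made up, not CookieCat's actual interface:

#include <stdio.h>
#include <stdlib.h>

int main(int argc, char **argv)
{
    if (argc < 3) {
        fprintf(stderr, "usage: %s <fen> <depth> [threads]\n", argv[0]);
        return 1;
    }
    const char *fen = argv[1];
    int depth   = atoi(argv[2]);
    int threads = (argc > 3) ? atoi(argv[3]) : 1;   /* hypothetical default: one thread */

    printf("running perft(%d) on \"%s\" with %d thread(s)\n", depth, fen, threads);
    /* ... split the root moves over <threads> workers and sum their counts ... */
    return 0;
}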
-paul
-
- Posts: 5694
- Joined: Tue Feb 28, 2012 11:56 pm
Re: CookieCat reloaded
I only quickly read through this thread, so maybe this is redundant, but to detect hashing errors it would seem a good idea to use a (good!) random generator for generating the Zobrist keys and seed the generator differently on every run, e.g. based on the system clock. This way, hashing errors will not be duplicated and go undetected in the verification run of a position.

rbarreira wrote: Maybe you have thought about this more than I have, but I always thought that a good way to ensure the integrity of results is:
1- Give out work units randomly, not in sequence.
2- Always assign each work unit at least twice (and never to the same person).
3- Don't accept results for a given work unit unless the server has assigned this work unit to that person.
Even with this, I'd still use at least 128-bit hashkeys.
Also it would seem to make sense to spend some effort in getting the client to be as efficient as possible, if only to save the world.
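A minimal sketch of the per-run seeding idea, assuming a splitmix64-style generator; the generator choice and the table shape are illustrative, not anything CookieCat actually does:

#include <stdint.h>
#include <time.h>

static uint64_t rng_state;

/* splitmix64: a small, well-mixed 64-bit generator */
static uint64_t splitmix64(void)
{
    uint64_t z = (rng_state += 0x9E3779B97F4A7C15ULL);
    z = (z ^ (z >> 30)) * 0xBF58476D1CE4E5B9ULL;
    z = (z ^ (z >> 27)) * 0x94D049BB133111EBULL;
    return z ^ (z >> 31);
}

static uint64_t zobrist[12][64];            /* [piece][square] */

static void init_zobrist(void)
{
    rng_state = (uint64_t)time(NULL);       /* fresh seed on every run */
    for (int p = 0; p < 12; p++)
        for (int sq = 0; sq < 64; sq++)
            zobrist[p][sq] = splitmix64();
}

With keys that differ between runs, a collision that corrupts one run is very unlikely to corrupt the verification run in the same way.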
-
- Posts: 4675
- Joined: Mon Mar 13, 2006 7:43 pm
Re: Perft(14)
Don't run after the bus when the next bus, a much better one, will be along in a few minutes (days).

Don wrote: Here is an example bash script that tries to do the right thing, but it will exit BEFORE the calculation is written to stdout:
#!/bin/bash
# Presumably invoked as: ./script.sh "<fen>" <depth>
# Build the command sequence for cookie: set the position, run a
# bulk perft to the requested depth, then exit.
export s=`cat <<EOF
sfen $1 2 2
perftbulk $2
exit
EOF
`
# Pipe the commands into cookie and echo each line of its output.
echo "$s" | ./cookie | while read line ; do
    echo $line
done
-
- Posts: 5106
- Joined: Tue Apr 29, 2008 4:27 pm
Re: CookieCat reloaded
That is in fact redundant. I made the same suggestion. I called them rotating tables, but that is probably not very descriptive.

syzygy wrote: I only quickly read through this thread, so maybe this is redundant, .....
rbarreira wrote: Maybe you have thought about this more than I have, but I always thought that a good way to ensure the integrity of results is:
1- Give out work units randomly, not in sequence.
2- Always assign each work unit at least twice (and never to the same person).
3- Don't accept results for a given work unit unless the server has assigned this work unit to that person.
What I will also now suggest is that you generate more than 64 bits but keep the table size the same. The other random bits should be used to generate the address at which to store the key.

If you are generating random keys on each run, there is no need to go crazy with bits as long as you are doing a verification run anyway. I am speaking from a purely performance perspective: if the hashing slows you down by even 1% you have lost, because only 1 in many hundreds of thousands of positions will have to be re-checked. Adding a longer key to the hash table entry will slow you down by much more than 1%, however. A good pragmatic compromise is to use extra hash bits for the address and to generate the Zobrist tables randomly.
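A minimal sketch of that compromise, assuming 96 random bits per (piece, square): 64 kept as the signature that is stored and compared, and 32 independent bits used only to form the table index. All names and sizes are illustrative:

#include <stdint.h>

#define TABLE_BITS 20                        /* 2^20 entries; table size unchanged */
#define TABLE_MASK ((1u << TABLE_BITS) - 1)

typedef struct {
    uint64_t sig;     /* 64-bit signature stored in the entry */
    uint64_t count;   /* cached perft count for this subtree  */
} Entry;

static Entry table[1u << TABLE_BITS];

/* Maintained incrementally from two separate Zobrist tables,
   i.e. 64 + 32 random bits per (piece, square). */
typedef struct {
    uint64_t sig;
    uint32_t addr;
} PosHash;

static Entry *probe(PosHash h)
{
    /* The index bits are independent of the signature bits, so a hit
       effectively matches 64 + 20 random bits without storing more. */
    return &table[h.addr & TABLE_MASK];
}

static int is_hit(const Entry *e, PosHash h)
{
    return e->sig == h.sig;
}

Unlike widening the stored key, this costs no extra bytes per entry, which is the point of the 1% argument above.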
syzygy wrote: ... but to detect hashing errors it would seem a good idea to use a (good!) random generator for generating the Zobrist keys and seed the generator differently on every run, e.g. based on the system clock. This way, hashing errors will not be duplicated and go undetected in the verification run of a position.
Even with this, I'd still use at least 128-bit hashkeys.
Also it would seem to make sense to spend some effort in getting the client to be as efficient as possible, if only to save the world.
Capital punishment would be more effective as a preventive measure if it were administered prior to the crime.
-
- Posts: 5694
- Joined: Tue Feb 28, 2012 11:56 pm
Re: CookieCat reloaded
Ok, then never mind.

Don wrote: That is in fact redundant. I made the same suggestion. I called them rotating tables, but that is probably not very descriptive.
syzygy wrote: I only quickly read through this thread, so maybe this is redundant, .....
rbarreira wrote: Maybe you have thought about this more than I have, but I always thought that a good way to ensure the integrity of results is:
1- Give out work units randomly, not in sequence.
2- Always assign each work unit at least twice (and never to the same person).
3- Don't accept results for a given work unit unless the server has assigned this work unit to that person.

Yes, I did see that. Certainly an improvement over what is normal (and sufficient) for chess engines.

Don wrote: What I will also now suggest is that you generate more than 64 bits but keep the table size the same. The other random bits should be used to generate the address at which to store the key.
True, but more important of course is that the final result is correct, i.e. that all hashing errors are caught. If 1 in 100 runs had a collision, then with millions of runs there would be a significant probability of one such collision going unnoticed, simply because a run and its verification run both have a (different) collision but the numbers happen to be off by the same amount.

Don wrote: If you are generating random keys on each run, there is no need to go crazy with bits as long as you are doing a verification run anyway. I am speaking from a purely performance perspective: if the hashing slows you down by even 1% you have lost, because only 1 in many hundreds of thousands of positions will have to be re-checked.
On the other hand, 64 bits will probably give collisions far less often than 1 in 100 runs...
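As a rough back-of-the-envelope, assuming independent keys per run: if a single run is corrupted by a collision with probability p, a work unit and its verification run are both corrupted with probability about p^2, i.e. 1 in 10,000 units even at p = 1/100, and on top of that the two independent errors would have to shift the count by exactly the same amount to slip through. With 64-bit keys, p itself should be far smaller than 1/100 anyway.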
-
- Posts: 900
- Joined: Tue Apr 27, 2010 3:48 pm
Re: CookieCat reloaded
Having different hash keys for each run is a great idea. Just make sure that the hash keys (or the information to generate them) are saved so that a client's work can be perfectly reproduced, in order to be able to detect cheating.
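If the key generator is deterministic, saving the single seed is enough to regenerate every key. A sketch of the kind of record the server might keep; the field names are hypothetical:

#include <stdint.h>

typedef struct {
    uint64_t work_unit_id;   /* which position/subtree was assigned    */
    uint64_t zobrist_seed;   /* seed the client used to build its keys */
    uint64_t perft_count;    /* the count the client reported          */
} WorkResult;

Replaying a work unit with the recorded seed reproduces the client's run bit for bit, including any collision it hit, so a fabricated result would not survive a replay.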