The effect of Hash Table use on move-evaluations

Another request for help if I may.
I am evaluating positions with SHREDDER 10 using searches constrained by ply-depth rather than time. Following best scientific practice, I want the evaluations to be reproducible by independent parties, assuming that is possible, so I am wondering what SHREDDER options to set via UCI commands.
I'm not using 'Position Learning' and I clear out any 'Position Learning' already done. Eiko B advised me on that one.
The real puzzle is whether I can safely use Hash Tables or not. I appreciate that they will affect the evaluations if the search is set to some time-limit. But when I'm searching to 'p' plies of depth, do they just speed up the search, or can they still affect the evaluations?
Thanks for help in advance - Guy
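For reference, a minimal UCI session for a reproducible fixed-depth search might look like the sketch below. "Hash" and "Ponder" are standard UCI option names; the thread-count option name ("Threads" here), the hash size and the depth are illustrative assumptions to be checked against the option list the engine prints in reply to "uci".

uci
setoption name Hash value 128
setoption name Ponder value false
setoption name Threads value 1
ucinewgame
isready
position fen <FEN of the position to evaluate>
go depth 20

Here "go depth 20" stands for a search to p = 20 plies; "ucinewgame", "position" and "go depth" would be repeated for each position to be evaluated.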
Re: The effect of Hash Table use on move-evaluations
Hello Guy,
You can safely use the hash tables, but of course the results may differ depending on the hash size you use. Sometimes using smaller tables can actually speed up the search, even in terms of ply depth to solution; usually it is the opposite, but there are exceptions. With a fixed hash size and starting absolutely from zero - restart the program each time if necessary to make sure it is in its "ground state" (probably not the proper term, but I hope the meaning is clear) - the results should be 100% reproducible, unless a) there are bugs in the program or b) the program itself is designed with some random factors in the search.
That last point, b), is something I think I've seen happening more frequently than some people would expect; or it could be an influence of small timing differences, as Robert Hyatt says, leading to different games from identical set-ups. I don't think it happens in Analysis mode though, only in games.
Restarting a program should not really be necessary, but in some programs the hash contents are not fully cleared when you just start a new analysis. Only testing can show whether this is needed for a given program. Some programs have a UCI option to clear the hash contents, but I usually restart the whole GUI if it is a new, or untested, program that I would like to analyse with and want reproducible output from.
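As a hedged illustration: for engines that advertise the standard "Clear Hash" button option in their "uci" output, the tables can be reset between analyses with something like

ucinewgame
isready
setoption name Clear Hash

Whether Shredder 10 exposes "Clear Hash" is something to check in its own option list; if it does not, restarting the engine remains the safe route.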
I assume you use the native Shredder UCI GUI? It should not make a difference, but you never know whether different GUIs have slightly different implementations or do not incorporate every part of the protocol. You would assume Shredder 10 and its own GUI are totally compliant with the UCI protocol, because Stefan Meyer-Kahlen wrote the protocol himself.
Testing some standard position first, one that should give identical output each time, independent of the GUI or of other differences caused simply by using a different computer, can be used to verify that two users have the same correct set-up of Shredder 10 for a particular new analysis, or that you are using the correct set-up for any new analysis session. If you are using Shredder 10 with more than one thread, though, reproducibility no longer applies! Results can then differ substantially each time you run the same position.
Regards, Eelco
Correction Re: The effect of Hash Table
Eelco de Groot wrote: ....speed up the search even in terms of plydepth to solution, usually it is the opposite, ...
Actually, now that I think about it, the last part of that sentence is not true: there is no speed-up in terms of ply depth to solution; only, usually, in terms of time to solution is there a speed-up when using more hash. You would think that the program should normally be able to find the solution within the same number of plies, independent of the amount of hash. It may depend on the implementation though, and it was often seen in the test positions of Michael Gurevich in CSS that less hash was sometimes faster, and more hash sometimes led to a slower solution time-wise.
I'm not sure whether this can also happen ply-wise; I can't rule it out, because otherwise how could there be a speed-up in time with less hash if the solution is not found at an earlier ply depth? I'm not sure about the mechanism though...

But given a fixed amount of hash, the results should be reproducible for that amount.
Eelco
Re: The effect of Hash Table use on move-evaluations
I wonder whether you really have a choice here. Can you run analysis in Shredder without using hash tables at all?
Anyway, there are more things than hash tables that can ruin the reproducibility of results, such as move-ordering statistics that are not cleared, or simply bugs in engines. Of course you need to remember to clear the hash table before each search. The best way to check is whether you get the same node count for multiple attempts on the same position, including when you have done other positions in between (i.e. position A, then B and C, then A again).
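A rough sketch of that node-count check, written here as a Python script driving a UCI engine over stdin/stdout (the engine path, FENs, depth and hash size are placeholder assumptions, not Shredder-specific values):

#!/usr/bin/env python3
# Sketch: verify that a UCI engine reports identical node counts for repeated
# fixed-depth searches (positions A, B, C, then A again). If the second run of
# A differs, some state (hash, move-ordering statistics, ...) is leaking
# between searches.
import subprocess

ENGINE = "./engine"   # placeholder path to the UCI engine binary
DEPTH = 14            # fixed search depth in plies
FEN_A = "r1bqkbnr/pppp1ppp/2n5/4p3/4P3/5N2/PPPP1PPP/RNBQKB1R w KQkq - 2 3"
FEN_B = "rnbqkbnr/pp1ppppp/8/2p5/4P3/8/PPPP1PPP/RNBQKBNR w KQkq - 0 2"
FEN_C = "8/2p5/3p4/KP5r/1R3p1k/8/4P1P1/8 w - - 0 1"

def wait_for(proc, token):
    # Read engine output until a line equal to the given token appears.
    while proc.stdout.readline().strip() != token:
        pass

def search_nodes(proc, fen, depth):
    # Start from a cleared state, run one fixed-depth search and return the
    # last node count the engine reported before "bestmove".
    proc.stdin.write("ucinewgame\nisready\n")
    proc.stdin.flush()
    wait_for(proc, "readyok")
    proc.stdin.write("position fen %s\ngo depth %d\n" % (fen, depth))
    proc.stdin.flush()
    nodes = None
    while True:
        parts = proc.stdout.readline().split()
        if "nodes" in parts:
            nodes = int(parts[parts.index("nodes") + 1])
        if parts and parts[0] == "bestmove":
            return nodes

proc = subprocess.Popen([ENGINE], stdin=subprocess.PIPE, stdout=subprocess.PIPE,
                        universal_newlines=True, bufsize=1)
proc.stdin.write("uci\n")
proc.stdin.flush()
wait_for(proc, "uciok")
proc.stdin.write("setoption name Hash value 128\nisready\n")
proc.stdin.flush()
wait_for(proc, "readyok")

for label, fen in [("A", FEN_A), ("B", FEN_B), ("C", FEN_C), ("A again", FEN_A)]:
    print(label, search_nodes(proc, fen, DEPTH))

proc.stdin.write("quit\n")
proc.stdin.flush()

If the two "A" lines print different node counts, something is not being cleared between searches.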
Of course, when publishing your results, you should also specify the exact software version, hashtable size (as different sizes may give different evaluation scores and/or moves) and all other engine settings ...
Be aware that in some positions hash tables give a bigger speed-up than in others, so whatever you have established for one hash size may not hold for another.
Richard.
Re: The effect of Hash Table use on move-evaluations
The main thing I see missing is the use of multiple processors. For completely reproducible results you have to use a single processor.