Suppose I had a 16-core machine and wanted to find the "best" value for Min Split Depth (for use in Stockfish on deep searches, not blitz games). My inclination would be to modify the code in benchmark.cpp like this:
LZsMacPro-OSX6: ~/Documents/Chess/Stockfish/src] diff benchmark.cpp_orig benchmark.cpp
83,85c83,85
< string ttSize = (is >> token) ? token : "32";
< string threads = (is >> token) ? token : "1";
< string limit = (is >> token) ? token : "13";
---
> string ttSize = (is >> token) ? token : "4096";
> string threads = (is >> token) ? token : "16";
> string limit = (is >> token) ? token : "35";
90a91,92
> Options["Min Split Depth"] = 7; // Vary this to tune
>
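Incidentally, the first three of those edits may be unnecessary: on Stockfish versions of this vintage, bench accepts the same three values as optional positional arguments (hash size in MB, thread count, depth limit), matching the ttSize/threads/limit reads in benchmark.cpp. If that holds for your source tree, only the Min Split Depth line actually requires a rebuild. A sketch, assuming the binary name and that argument order (check benchmark.cpp to confirm):

```shell
#!/bin/sh
# Assumed bench invocation: hash MB, threads, depth limit as
# positional arguments, in the same order benchmark.cpp reads them.
HASH_MB=4096
THREADS=16
DEPTH=35
if [ -x ./stockfish ]; then
    ./stockfish bench "$HASH_MB" "$THREADS" "$DEPTH"
fi
```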
I'd then run Stockfish's bench, and get an output like this one (which came from my current 8-core machine):
===========================
Total time (ms) : 1339757
Nodes searched : 9722597395
Nodes/second : 7256985
My inclination is to minimize time to depth. Of course, whatever I do, I'd average several runs of the bench command, then increment the Min Split Depth value, rebuild, and repeat.
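That rebuild-and-average loop could be scripted along these lines. This is a minimal sketch, assuming a POSIX shell, that the bench summary is printed on stderr (hence the 2>&1), and leaving the actual edit-and-rebuild step as a placeholder, since it depends on how you patch the value in:

```shell
#!/bin/sh
# Sweep candidate Min Split Depth values and average "Total time (ms)"
# over several bench runs per value.

# Parse the "Total time (ms)" figure out of a bench summary.
total_time() {
    awk -F':' '/Total time/ { gsub(/ /, "", $2); print $2 }'
}

RUNS=3
if [ -x ./stockfish ]; then
    for msd in 4 5 6 7 8 9 10; do
        sum=0
        i=1
        while [ "$i" -le "$RUNS" ]; do
            # Rebuild with Min Split Depth = $msd here (not shown),
            # then time one bench run.
            t=$(./stockfish bench 2>&1 | total_time)
            sum=$((sum + t))
            i=$((i + 1))
        done
        echo "Min Split Depth $msd: average time $((sum / RUNS)) ms"
    done
fi
```

The awk field split keys on the "Total time (ms) :" label shown in the sample output above, so if your build formats the summary differently, the parser needs adjusting to match.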
Thanks for any advice.
