syzygy wrote:It makes sense to only measure selective search depth in PV nodes. It saves some CPU cycles, and reporting selective search depth is not much more than a gimmick anyway.
Exactly! "Gimmick" is the word I was looking for. I don't think the selective depth makes any sense nowadays. Regardless of how you define it, it is rather meaningless in engines with massive reductions. Perhaps in the old days, when you had a plain alpha-beta search with extensions and no reductions, printing the selective depth made some sense. But today...?
My advice is that we should simply remove this useless feature. Many engines do not print a selective depth at all.
I agree that removing it is better than what we have today.
The information about the length of the longest line that the computer searches is interesting, but it is not what the selective depth that we see actually measures.
syzygy wrote: If you mean the code that builds the relevant entry of the material hash table, yes that code loops through the specialised endgames. But the hit rate of the material hash table is probably 99.99%, so those few cycles are irrelevant.
Yes, you are correct: looking up the specialized function is on the cold path, because in the majority of cases we find the value already cached and return immediately.
Moreover, I just want to nitpick that we don't 'loop through' the endgames: because a std::map is used, the lookup time is logarithmic in the number of endgames.
Even if that bothers anyone, std::unordered_map (or boost::hash_map if not using C++11) could be used and then the lookup time would be constant.
AlvaroBegue wrote: Even if that bothers anyone, std::unordered_map (or boost::hash_map if not using C++11) could be used and then the lookup time would be constant.
std::unordered_map is C++11, and boost::hash_map is not even standard. Adding a full Boost library dependency just for this would be laughable.
AlvaroBegue wrote: Even if that bothers anyone, std::unordered_map (or boost::hash_map if not using C++11) could be used and then the lookup time would be constant.
std::unordered_map is C++11, and boost::hash_map is not even standard. Adding a full Boost library dependency just for this would be laughable.
I wasn't proposing this for Stockfish. I mentioned it because it might be of general interest. Using C++11 or Boost in a private engine is perfectly reasonable (I am using C++11 myself).
By the way, it looks like I got the name wrong, and Boost also calls its container unordered_map.
AlvaroBegue wrote: Even if that bothers anyone, std::unordered_map (or boost::hash_map if not using C++11) could be used and then the lookup time would be constant.
std::unordered_map is C++11, and boost::hash_map is not even standard. Adding a full Boost library dependency just for this would be laughable.
With the bcp tool, you can extract only the necessary dependencies and include those in your source tree (I am not sure whether that is compatible with the GPL, though).
A linear loop would probably be fastest in this particular case (there are only about 16 cases). But whatever solution is chosen, the difference won't be measurable given the hit rate of the material hash table.