It requires setting the environment variables RTBWDIR and RTBZDIR to the paths of the .rtbw and .rtbz files. I suppose it would be nicer if UCI options were used instead.
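Something like the following should work before launching the engine; the directory paths here are placeholders for wherever the table files actually live:

```shell
# Placeholder paths -- point these at your own table directories.
export RTBWDIR=/path/to/wdl   # directory containing the .rtbw (WDL) files
export RTBZDIR=/path/to/dtz   # directory containing the .rtbz (DTZ) files
echo "$RTBWDIR $RTBZDIR"
```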
The present version uses the WDL tables during the search. Once the game has reached a tablebase position, it uses the DTZ tables to select "good moves" (moves that preserve the win under the 50-move rule, preserve the draw, or might convert a loss into a 50-move draw given suboptimal play from the opponent) and lets the engine search among those without further accessing the tables. The resulting game play is much nicer than when playing "DTZ-optimal". It also has a better chance of converting a draw into a win.
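The root-filtering idea can be sketched roughly as follows. This is only an illustration of the concept, not the actual Stockfish/Syzygy probing code; the move list, WDL encoding, and `good_moves` helper are all assumptions made for the example:

```python
# Illustrative sketch: at a tablebase root, keep only the "good moves"
# (those that preserve the best achievable WDL result), then let the
# normal search choose among them without further table probes.

WIN, DRAW, LOSS = 1, 0, -1

def good_moves(root_moves):
    """root_moves: list of (move, wdl_after) pairs, where wdl_after is
    the WDL value from the opponent's perspective after the move."""
    # Our value for each move is the negation of the opponent's value.
    values = [(move, -wdl_after) for move, wdl_after in root_moves]
    best = max(v for _, v in values)
    # Keep every move achieving the best result; the engine then
    # searches only these, so play among them is engine-driven.
    return [move for move, v in values if v == best]

moves = [("Kb6", LOSS), ("Kb7", DRAW), ("a7", LOSS)]
print(good_moves(moves))  # the two winning moves: ['Kb6', 'a7']
```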
I cannot rule out that some bugs remain, so if anyone tests this and runs into problems, just let me know.
The present version is not very suitable for simply probing the tables from the GUI to see whether a position is a win or a loss. Once you give the engine a tablebase position, it will search the "good moves" and report a score found by that search. To solve this I could add an option to switch between DTZ-optimal play and the present approach, but that is not a very elegant solution.
Hi Ronald,
Can you integrate your TB code with Stockfish 4? I am thinking of running some tests to compare it with regular Stockfish 4, because I suspect the Elo gain may be larger than that from Nalimov or other tablebases (which is hardly measurable).
Thanks,
Kirill
I've updated to the latest sources and created a tag "sf_4_tb".
To get Stockfish 4 with my changes the following should work:
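The actual command list did not survive in this copy of the thread. A hypothetical reconstruction is below; the repository URL is a placeholder (it is not given in the post), while the tag name "sf_4_tb" is from the post:

```shell
# Hypothetical reconstruction -- the repository URL is a placeholder.
REPO="<repo-url>"   # not given in the post
TAG="sf_4_tb"       # tag name from the post
echo "git clone $REPO Stockfish && cd Stockfish && git checkout $TAG"
```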
I finished generating the full 3-4-5-6 men set in both WDL and DTZ. With an i7-3770 with 32 GB of RAM it took 14 days, followed by just under 4 days to do a verify.
jshriver wrote: I finished generating the full 3-4-5-6 men set in both WDL and DTZ. With an i7-3770 with 32 GB of RAM it took 14 days, followed by just under 4 days to do a verify.
Sharaf_DG wrote: Thanks for the update... Is the current counter accurate when using multiple threads?
My CPU has 4 cores, unlike Ronald's, which has 8. He was using 16 threads, and I was using 8.
I probably could have shaved a day or two off by not doing the verify after each generation. In fact, after day 4 or 5 I did stop it and went on without the per-table verification, relying instead on a full verify after generation.
Not sure why the verifier takes so long on my system (perhaps a slow disk?). Anyway, that's why I made the md5sum list as a quick and dirty validity test.
I created the md5sum list, then did a fresh verify on the full set using the built-in tools (with no errors), then ran md5sum -c against the list again: 0 errors.
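The quick-and-dirty md5sum workflow described above looks roughly like this. The demo below uses a throwaway file in a temporary directory instead of real tables; real Syzygy files would have .rtbw/.rtbz suffixes:

```shell
# Self-contained demo of the md5sum workflow, using a dummy file
# instead of a real tablebase file.
tmpdir=$(mktemp -d)
cd "$tmpdir"
echo "dummy table data" > KQvK.rtbw   # stand-in for a real table
md5sum *.rtbw > checksums.md5         # create the checksum list once
md5sum -c checksums.md5               # later: quick integrity re-check
```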