What is the best way to obtain the 7-piece tablebases?

Nordlandia
Posts: 2822
Joined: Fri Sep 25, 2015 9:38 pm
Location: Sortland, Norway

Re: What is the best way to obtain the 7-piece tablebases?

Post by Nordlandia »

How much RAM would KRPPvKRPP.wdl consume in a rook endgame?

An order of magnitude more than KRPPvKRP.wdl?
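
For a rough feel of the scaling, here is a back-of-envelope Python sketch. It ignores symmetry reductions, illegal positions, and Syzygy's real encoding entirely, so it only shows how the raw index space grows when one pawn is added:

# Naive upper bound on indexed positions -- NOT Syzygy's real indexing,
# just the raw combinatorics of placing the pieces on the board.
def naive_index_size(non_pawns, pawns):
    total = 1
    for i in range(non_pawns):
        total *= 64 - i          # kings and rooks: any free square
    for j in range(pawns):
        total *= 48 - j          # pawns: ranks 2-7 only
    return total

krpp_krp  = naive_index_size(4, 3)   # KRPPvKRP:  2 kings, 2 rooks, 3 pawns
krpp_krpp = naive_index_size(4, 4)   # KRPPvKRPP: 2 kings, 2 rooks, 4 pawns
print(krpp_krpp // krpp_krp)         # -> 45, roughly 45x more raw positions

So very roughly, yes, an order of magnitude or more in raw index space; the actual on-disk and in-RAM cost depends on compression and caching, as discussed below.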
Dann Corbit
Posts: 12777
Joined: Wed Mar 08, 2006 8:57 pm
Location: Redmond, WA USA

Re: What is the best way to obtain the 7-piece tablebases?

Post by Dann Corbit »

In traditional tablebase files, the data is compressed into blocks. Those blocks are decompressed into RAM.
Because the files are enormous, it is inevitable that enormous amounts of disk are read and enormous amounts of data are decompressed.
But there can be other schemes to store and retrieve the data. For instance, the data could be held in a MonetDB database, with pages of positions as key values. These pages could be made as small as you like, down to bare single records. But the smaller you make the records, the less effective the data compression will be. If you go clear to single records, the disk cost will be much higher, but you could make the memory requirement unimportant.
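
A toy Python illustration of that trade-off (a sketch only: zlib stands in for the real compressor, and the data is synthetic):

import random
import zlib

# Compress the same WDL-like data at several block sizes. Smaller blocks
# give the compressor less context and pay fixed per-block overhead, so
# the total compressed size grows as the blocks shrink.
random.seed(0)
data = bytes(random.choice(b"\x00\x00\x00\x00\x01\x02") for _ in range(1 << 16))

for block_size in (64, 1024, 16384):
    blocks = [data[i:i + block_size] for i in range(0, len(data), block_size)]
    total = sum(len(zlib.compress(b)) for b in blocks)
    print(f"{block_size:>6}-byte blocks -> {total} bytes compressed")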

Now, this is all theoretical. And it is hard to know what technologies we will have five years from now. But I guess it will be like the other formats.
At first, the Edwards 4-man files were plenty and all we could handle. Then came five- and six-man files in Nalimov and other formats. Finally we had Lomonosov and Syzygy 7-man files. As time moved along we were able to use them. The 7-man files would have been a cruel joke in 1988, because the cost of the disk and the memory for decompression would have been outlandish. Even if the 8-man files were built today, we could not use them very effectively. But in five years we will have a lot more RAM, and chances are that disk will keep accelerating too. I have 5 GB/sec M.2 disks in my machine, each the size of a stick of chewing gum and holding 2 TB. In five years, they will be a lot faster, smaller, and cheaper.

So if we have patience, we will see more and more miracles. That's how technology works.
Taking ideas is not a vice, it is a virtue. We have another word for this. It is called learning.
But sharing ideas is an even greater virtue. We have another word for this. It is called teaching.
duncan
Posts: 12038
Joined: Mon Jul 07, 2008 10:50 pm

Re: What is the best way to obtain the 7-piece tablebases?

Post by duncan »

Dann Corbit wrote: Tue Jun 23, 2020 5:10 am
So RAM cost will dominate when it becomes feasible for ordinary {albeit a bit nuts} humans to do it.
I might even be able to give it a go. But who knows.

The only ones who are a bit nuts are the government, for not spending a paltry million dollars on a stunning 8-piece database. If you do it, you will be the sane one. :)
syzygy
Posts: 5693
Joined: Tue Feb 28, 2012 11:56 pm

Re: What is the best way to obtain the 7-piece tablebases?

Post by syzygy »

duncan wrote: Fri Jun 26, 2020 10:56 am
Dann Corbit wrote: Thu Jun 25, 2020 6:20 pm You don't need enormous RAM to use the tablebase files.
You need enormous RAM to build them.
But even accessing 8 men files, you will need a lot of RAM.
Either that, or there will be a speed trade-off.
And by "a lot of RAM" to access 8-men files, do you mean many TBs of RAM?
To use them in a search, the more RAM the better: it will be used to cache the TB data retrieved from SSD, which reduces the number of random SSD reads (still much slower than RAM accesses).
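
A sketch of that caching layer (the names here are hypothetical stand-ins; the real probing code manages its own cache over memory-mapped files):

from functools import lru_cache

BLOCK_SIZE = 64  # tiny compressed blocks, as in Syzygy WDL files

# The cache size is the RAM/SSD trade-off: more entries held in RAM
# means fewer slow random SSD reads when nearby positions are re-probed.
@lru_cache(maxsize=1_000_000)
def cached_block(path, block_no):
    with open(path, "rb") as f:
        f.seek(block_no * BLOCK_SIZE)
        return f.read(BLOCK_SIZE)    # random SSD read only on a cache miss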

If you just want to probe a few positions, you need almost no RAM. But then you might as well not download them at all and instead access them over the internet.
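
For occasional probes, a public web endpoint works; the lichess tablebase server is one such service (URL and response fields as I understand them, so verify against its documentation):

import json
import urllib.parse
import urllib.request

# Probe a single position without downloading any tablebase files.
fen = "8/8/8/8/8/6k1/4r3/6K1 w - - 0 1"   # bare K vs K+R, White to move
url = ("https://tablebase.lichess.ovh/standard?fen="
       + urllib.parse.quote(fen))

with urllib.request.urlopen(url) as resp:
    result = json.load(resp)
print(result["category"], result.get("dtz"))   # e.g. "loss" plus a DTZ value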
syzygy
Posts: 5693
Joined: Tue Feb 28, 2012 11:56 pm

Re: What is the best way to obtain the 7-piece tablebases?

Post by syzygy »

Dann Corbit wrote: Fri Jun 26, 2020 1:06 pm In traditional tablebase files, the data is compressed into blocks. Those blocks are decompressed into RAM.
Because the files are enormous, it is inevitable that enormous amounts of disk are read and enormous amounts of data are decompressed.
The blocks of Syzygy WDL files are either 32 or 64 bytes, so not a lot of data is decompressed per access. They are decompressed on the fly, so the decompressed data is never stored.

But access is much faster if those 32 or 64 bytes (and the part of the index structure necessary to locate that block) are already in RAM. At least until you can afford 1,000 TB of Optane memory.
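
Putting the pieces together, a probe in this kind of scheme looks roughly like the toy model below. The format, the index layout, and the use of zlib are all invented stand-ins, not Syzygy's actual encoding:

import io
import zlib

BLOCK_SIZE = 64  # uncompressed values per block, echoing the tiny WDL blocks

def build(values):
    """Pack WDL-like values (0/1/2) into compressed blocks; return file + index."""
    out, index = io.BytesIO(), []
    for i in range(0, len(values), BLOCK_SIZE):
        block = zlib.compress(bytes(values[i:i + BLOCK_SIZE]))
        index.append((out.tell(), len(block)))   # offset + compressed length
        out.write(block)
    return out.getvalue(), index

def probe(f, index, pos_no):
    offset, clen = index[pos_no // BLOCK_SIZE]   # RAM-resident index lookup
    f.seek(offset)                               # one small random read
    block = zlib.decompress(f.read(clen))        # decompressed on the fly...
    return block[pos_no % BLOCK_SIZE]            # ...and dropped after one value

data, idx = build([v % 3 for v in range(10_000)])
print(probe(io.BytesIO(data), idx, 4242))        # -> 0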