hgm wrote: ↑Sat Jun 01, 2024 11:05 am
So nowadays you don't really need a super-computer to do a 7-men. A 4GB machine with a 1 TB HD should do.
I wonder if anyone has ever managed to implement this, though
Perhaps the generator of Marc Bourzutschky and Yakov Konoval uses such an approach, but I don't think they can do pawnful tables.
(I am not saying it cannot be done.)
Pawnful EGTs require only little memory to generate, because you would generate them one P-slice at a time. Even a single Pawn reduces the memory requirement by a factor of 8, and every additional Pawn by another factor of 64.
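The factors 8 and 64 can be illustrated with a toy sizing model (my own numbers, not an actual generator's indexing): roughly 64 squares per man, an 8-fold board symmetry that is only usable without pawns, and pawns dropping out of the in-RAM index entirely because their squares are fixed within the current P-slice:

```python
def ram_positions(pieces, pawns):
    """Rough count of positions that must be held in RAM during generation."""
    n = 64 ** pieces           # pieces roam the whole board
    if pawns == 0:
        n //= 8                # pawnless: full 8-fold board symmetry applies
    return n                   # pawns: squares fixed inside the current slice

five_man_pawnless = ram_positions(5, 0)
one_pawn_slice    = ram_positions(4, 1)
two_pawn_slice    = ram_positions(3, 2)

print(five_man_pawnless // one_pawn_slice)   # first pawn: factor 8
print(one_pawn_slice // two_pawn_slice)      # each further pawn: factor 64
```

The first pawn gains a factor 64 from its fixed square but costs the factor 8 of board symmetry; each further pawn gains the full factor 64.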
As for being implemented: the problem is of course that the EGT already exist. So why would anyone do it?
hgm wrote: ↑Tue Jun 11, 2024 6:47 am
As for being implemented: the problem is of course that the EGT already exist. So why would anyone do it?
To generate 8-men TBs on decent hardware.
But storage requirements may be the bigger problem here, especially if you go to pawnful tables, which require you to actually keep the pawnless (and also pawnful) tables around in a usable format.
I’m generating a few 9-man endings to explore this, but am limited by my computer resources (two lower-end workstations with 1.5 TB RAM) to endings with significant permutation symmetries.
Let's consider a 9-man ending with maximum symmetry such as KNNNNvKRRR. If we place the Ks together in 462 ways using the board's symmetry, then we have Bin(62,4) ways to place the NNNN and Bin(58,3) ways to place the RRR. With 1 byte per position, this would require 2 * 462 * (62*61*60*59*58*57*56) / (24 * 6) bytes = 14.5 TB. Bourzutschky and Konoval can apparently at least do this ending with 1.5 TB, so they do something "smarter".
And apparently they computed KQRBvKQRN, which would need 2 * 462 * 62*61*60*59*58*57 bytes = 37.2 TB.
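Both figures check out if "TB" is read in the binary sense (2^40 bytes); a quick verification:

```python
from math import comb

TIB = 2 ** 40

# KNNNNvKRRR: 462 king placements, C(62,4) for the knights, C(58,3) for
# the rooks, times 2 for the side to move, at 1 byte per position.
knnnn_krrr = 2 * 462 * comb(62, 4) * comb(58, 3)
print(knnnn_krrr / TIB)        # ~14.5

# KQRBvKQRN: no identical pieces, so six full placement factors.
kqrb_kqrn = 2 * 462 * 62 * 61 * 60 * 59 * 58 * 57
print(kqrb_kqrn / TIB)         # ~37.2
```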
I guess they don't store those tables, and they don't do pawns. The full 8-men set probably needs about 1 petabyte for compressed WDL alone. It is possible but not at all practical.
syzygy wrote: ↑Thu Jun 13, 2024 11:41 pm
Let's consider a 9-man ending with maximum symmetry such as KNNNNvKRRR. If we place the Ks together in 462 ways using the board's symmetry, then we have Bin(62,4) ways to place the NNNN and Bin(58,3) ways to place the RRR. With 1 byte per position, this would require 2 * 462 * (62*61*60*59*58*57*56) / (24 * 6) bytes = 14.5 TB. Bourzutschky and Konoval can apparently at least do this ending with 1.5 TB, so they do something "smarter".
With the Wu & Beal algo, they only need 1 bit per position in RAM (doing lots of sequential I/O from and to disk), so they would need only 0.9 TB in RAM.
And apparently they computed KQRBvKQRN, which would need 2 * 462 * 62*61*60*59*58*57 bytes = 37.2 TB.
I think they can do 1 bishop color at a time which means 18.6 TB on disk for that endgame before compression.
Doing 1-bit in RAM and a lot of sequential I/O (Wu & Beal algo) they need 1.16 TB of RAM for the generation of that table.
I guess they don't store those tables, and they don't do pawns. The full 8-men set probably needs about 1 petabyte for compressed WDL alone. It is possible but not at all practical.
If they do the Wu & Beal 1-bit RAM algo, it should take a lot of I/O to and from disk. A 1 TB bitmap from a HDD @ 80 MB/s takes 3.5 hours. With an SSD @ 200 MB/s it still takes 1.5 hours. *Per iteration*, so times 400+, should take about a month per tablebase. Perhaps they have very fast SSDs and manage not to trash them with all the I/O.
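The wall-clock estimate works out as follows (simple streaming model, ignoring seeks and any compression):

```python
TB = 1e12

def pass_hours(bitmap_bytes, mb_per_s):
    """Hours to stream a bitmap once at the given sustained rate."""
    return bitmap_bytes / (mb_per_s * 1e6) / 3600

hdd = pass_hours(1 * TB, 80)     # ~3.5 h per full pass on the HDD
ssd = pass_hours(1 * TB, 200)    # ~1.4 h per full pass on the SSD
print(hdd, ssd, 400 * ssd / 24)  # 400+ iterations -> roughly a month
```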
syzygy wrote: ↑Thu Jun 13, 2024 11:41 pm
And apparently they computed KQRBvKQRN, which would need 2 * 462 * 62*61*60*59*58*57 bytes = 37.2 TB.
I think they can do 1 bishop color at a time which means 18.6 TB on disk for that endgame before compression.
I don't think that is true. The color of a bishop is not preserved by all board symmetries. For example, if you place the two Ks in 462 ways as usual, then a king move can change the color of the bishop, because normalizing the king position may require mirroring the board. Or look at it this way: if you have the information for all positions with the bishop on the light squares, then you also have the information for all positions with the bishop on the dark squares, since you can mirror the board in either the horizontal or vertical axis.
So with 1 bit per position, you would need over 2.3 TB of RAM (if my calculation is correct).
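The mirror argument is easy to check: reflecting the board in the vertical axis (file f → 7−f) always flips a square's colour, so the light-bishop and dark-bishop halves are mirror images of each other rather than independent sub-tables:

```python
def colour(sq):
    """Colour of square 0..63 (a1 = 0): 0 = dark, 1 = light."""
    return (sq // 8 + sq % 8) % 2

def mirror_files(sq):
    """Reflect a square in the vertical axis (a-file <-> h-file)."""
    return sq - sq % 8 + (7 - sq % 8)

# every square changes colour under the file mirror:
assert all(colour(mirror_files(sq)) != colour(sq) for sq in range(64))
```

The same holds for the horizontal (rank) mirror, since both reflections change one coordinate's parity.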
If they do the Wu & Beal 1-bit RAM algo, it should take a lot of I/O to and from disk. A 1 TB bitmap from a HDD @ 80 MB/s takes 3.5 hours. With an SSD @ 200 MB/s it still takes 1.5 hours. *Per iteration*, so times 400+, should take about a month per tablebase. Perhaps they have very fast SSDs and manage not to trash them with all the I/O.
I'm afraid an SSD would wear out very quickly when abused in this way. But HDDs perform well on sequential reads and writes, and compression should also help a lot.
This is why I suggested storing the white x black matrix as 1K x 4K blocks, rather than by row or column. That way you can bring 4K rows into memory by contiguous access, while 1K columns can still be read in contiguous sections of 4K. This reduces the required number of seek operations and rotational delays.
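One way to read the 1K x 4K suggestion (the concrete layout and names below are my own guess, not hgm's code): blocks of 4K rows by 1K columns, stored column-major inside a block and block-row after block-row on disk. A band of 4K rows is then one long sequential read, while any single column still comes in contiguous 4K-entry sections, one per band, instead of isolated bytes:

```python
R, C = 4096, 1024              # block height (rows) x block width (columns)

def offset(row, col, ncols):
    """Entry offset of matrix[row][col] in the blocked file (1 byte/entry)."""
    block = (row // R) * (ncols // C) + col // C     # block-row-major order
    return block * R * C + (col % C) * R + (row % R) # column-major in block

# consecutive rows of one column inside a block are adjacent on disk:
assert offset(1, 0, 1 << 20) - offset(0, 0, 1 << 20) == 1
```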
syzygy wrote: ↑Fri Jun 14, 2024 10:43 pm
I'm afraid an SSD would wear out very quickly when abused in this way. But HDDs perform well on sequential reads and writes, and compression should also help a lot.
I wonder whether SSD wear-out is still a real concern in 2024. But even for HDDs: a Western Digital Gold Enterprise 24 TB HDD is rated for 550 TB of writes per year in its 5-year warranty. At its max write speed of 298 MB/s that translates to just over 1 TB/h, so after about 550 hours per year you have spent a year's TBW. That translates to 16 weeks of sustained writing before the 5-year warranty TBW has been exceeded. I find it hard to believe that an HDD would break down under that load.
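A quick sanity check of that arithmetic (warranty figures as quoted in the post, which I have not verified against the datasheet):

```python
tbw_per_year = 550          # rated TB of writes per warranty year
speed = 298e6               # max sustained write, bytes/s

tb_per_hour = speed * 3600 / 1e12                # ~1.07 TB written per hour
hours_per_year = tbw_per_year / tb_per_hour      # ~513 h of flat-out writing
weeks_total = hours_per_year * 5 / (24 * 7)      # ~15 weeks over 5 years
print(round(tb_per_hour, 2), round(hours_per_year), round(weeks_total, 1))
```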
There are “Data Centre” SSDs, e.g. the Kingston DC600M with 7.7 TB, which can sustain 1 Drive Write Per Day for 5 years. It goes for €700 here in the Netherlands. Wouldn’t that be robust enough for on-disk generation?