Ender8pieces wrote: ↑Thu Feb 05, 2026 3:08 pm
As for Marc Bourzutschky's work: from what I've gathered, Marc calculated multiple 8-piece pawnless tables, but not all of them, and he did not store them for distribution. Why did he stop there?
The available data suggest that what stopped him was the generator's RAM requirement of 1 bit per position (a requirement I have not read anywhere, but which seems to fit the data). KQRBvKQRN requires 2.32 TB, which does not fit in 1.5 TB and indeed seems not to have been generated. KQRRvKQRB requires only 1.16 TB (the pair of identical rooks halves the position count), which does fit in 1.5 TB. He might have generated KQRBvKQRB because it can be split into positions with like-colored bishops and positions with opposite-colored bishops. He probably did not generate KQRNvKQRN.
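The 1-bit-per-position figure is easy to sanity-check with a back-of-envelope count (a sketch, assuming the standard indexing of 462 canonical king placements and 62 free squares for the remaining pieces, all treated as distinct):

```python
# Rough size of an 8-piece pawnless bitmap at 1 bit per position.
# 462 = canonical (wK, bK) placements after symmetry reduction;
# the 6 remaining pieces go on the 62 squares left (all distinct here).
positions = 462 * 62 * 61 * 60 * 59 * 58 * 57
tib = positions / 8 / 2**40              # bytes -> TiB
print(f"KQRBvKQRN: {tib:.2f} TiB")       # ~2.32 TiB, does not fit in 1.5 TB
# A pair of identical pieces (the rooks in KQRRvKQRB) halves the count:
print(f"KQRRvKQRB: {tib / 2:.2f} TiB")   # ~1.16 TiB, fits in 1.5 TB
```

The result matches the 2.32 TB and 1.16 TB figures above.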
If he indeed did not store the generated pawnless tables, that explains why he did not generate the pawnful tables (apart from the "opposing pawns" ones). Even if he did store the pawnless tables at least temporarily, the form in which they were stored may have been unsuitable for initializing promotions at the start of generating a pawnful table.
As for the 19-slice model, to clarify the memory layout: are these 19 K-slices out of the standard 462, out of 924 (turn-dependent), or out of 64? If you load the slice for bKa2, does that include all possible wK squares under symmetry?
It seems it is enough to be able to hold in RAM 19 K-slices (of a single color) out of 2x462 K-slices at 1 bit per position. This has yet to be proven by an actual implementation. The amount of data streamed to/from disk would stay the same. If you can only hold 9 K-slices, it can still be made to work, but with a lot more streaming to/from disk.
Marc's generator seems to require holding 462 K-slices (of 1 color) in RAM.
I think his generator used Thompson's 1996 algorithm, which basically generates WDL plus DTZ for white losses only. You then need to run it again to get DTZ for black losses, and you need to do a 1-ply search to get DTZ for winning positions.
The Wu-Beal algorithm has the same RAM requirement but generates full DTZ (or, with a bit more work, DTM). I think that according to the Wu-Beal paper it streams more data than Thompson's algorithm run twice (once for each side), but it also produces a more complete table.
In the end some of the generated information will be thrown away again during compression, so generating the complete table might be overkill.

DTZ storing only losses, but for both sides, is conceptually quite nice: the 1-ply search to get DTZ for a winning position only requires you to check the winning moves, which might be few (and checking whether a move wins can be done by probing WDL, which is cheap(er)). However, it is annoying for tables that are almost completely won for one side, because then (1) nearly every move will be winning, so there is no reduction in the moves to search, and, worse, (2) the side with the losses will typically have far more legal positions than the side with the wins, so the compressed size is much larger than a half-sided syzygy DTZ table would be. (E.g. a randomly picked KQQQQvK position with white to move will probably be illegal because black is in check, whereas the same position with black to move will be legal because white is not in check; the btm position might be unreachable, but that is typically not checked during generation.)
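To make the 1-ply search concrete, here is a minimal sketch (all names are hypothetical, not any generator's actual API) of recovering DTZ for a winning position from a losses-only DTZ table plus WDL probes:

```python
# Hypothetical sketch: DTZ of a winning position when the table stores
# DTZ only for lost positions. Only moves leading to a position that is
# lost for the opponent need a DTZ probe; whether a move wins at all is
# answered by the cheap(er) WDL probe.
LOSS = -1  # WDL value meaning "lost for the side to move"

def dtz_of_winning_position(moves, probe_wdl, probe_dtz_loss):
    """moves: list of (child_position, is_zeroing) after each legal move."""
    best = None
    for child, is_zeroing in moves:
        if probe_wdl(child) != LOSS:       # skip non-winning moves
            continue
        # A zeroing (capture or pawn) move resets the 50-move counter.
        d = 1 if is_zeroing else 1 + probe_dtz_loss(child)
        best = d if best is None else min(best, d)
    return best  # None would mean the position was not winning after all
```

In a nearly-all-won table, the `probe_wdl(child) != LOSS` filter skips almost nothing, which is exactly point (1) above.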
19 K-slices require about 98 GB, so they fit easily on a machine with 128 GB.
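The 98 GB figure checks out against the same counting as before (a sketch, assuming a K-slice is one of the 462 canonical king placements for one side to move, with the six non-king pieces treated as distinct):

```python
# One K-slice fixes the canonical king placement, leaving 6 pieces
# on the 62 remaining squares, at 1 bit per position.
slice_bits = 62 * 61 * 60 * 59 * 58 * 57         # positions per K-slice
slice_gib = slice_bits / 8 / 2**30               # bytes -> GiB
print(f"one K-slice: {slice_gib:.1f} GiB")       # ~5.2 GiB
print(f"19 K-slices: {19 * slice_gib:.0f} GiB")  # ~98 GiB, fits in 128 GB
```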
A machine with 256 GB of RAM could hold two sets of 19 K-slices (or 19 K-slices at 2 bits per position), which might help speed up generation by modifying the algorithm to overlap some of the stages (my generator works very similarly to the Wu-Beal algorithm, but overlaps the second half of one iteration with the first half of the next, which halves the number of iterations). This needs a closer look.
But first the basic idea should be tested, e.g. on some 5- or 6-piece tables.