My idea is to perform a verification via distributed computing using manual assignment of work units. Each work unit would consist of one million ply 7 positions taken from the list of 96,400,068 unique ply 7 positions. Each work unit would be sent to at least two different participants for confirmation of the calculation.
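Confirmation could be as simple as a byte-for-byte comparison of the two returned copies. A minimal Python sketch (the file names here are made up for illustration):

Code: Select all
import filecmp

# Byte-for-byte comparison of two participants' returned files;
# any difference flags the unit for recalculation by a third participant.
if filecmp.cmp("wu42_alice.txt", "wu42_bob.txt", shallow=False):
    print("work unit 42 confirmed")
else:
    print("work unit 42 mismatch - needs a third calculation")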
Each input work unit would be a text file with one million lines. Each line has seven fields: the six fields of the FEN record followed by the occurrence count. The output work unit would be the same, with perft(7) of each FEN position, multiplied by the occurrence count, appended to the corresponding input record. (This eighth field is called the product field.)
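For illustration only, here is a rough Python sketch of processing one record, using the python-chess library for move generation; a real participant would of course run a fast dedicated perft program, and the function names here are my own:

Code: Select all
import chess  # pip install python-chess

def perft(board, depth):
    # Count leaf nodes of the legal-move tree to the given depth.
    if depth == 0:
        return 1
    nodes = 0
    for move in board.legal_moves:
        board.push(move)
        nodes += perft(board, depth - 1)
        board.pop()
    return nodes

def process_line(line):
    # Input record: the six FEN fields followed by the occurrence count.
    fields = line.split()
    fen = " ".join(fields[:6])
    count = int(fields[6])
    # Append the product field: perft(7) times the occurrence count.
    product = count * perft(chess.Board(fen), 7)
    return f"{line.rstrip()} {product}\n"

(A Python perft(7) would take minutes per position; the sketch only pins down the file format.)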
Each work unit would require computer time ranging from a few days to a couple of months, with the variance depending on several factors. For every second needed to handle one record, the total work time increases by 11 days, 13 hours, 46 minutes, and 40 seconds (one million seconds).
Participants would use Dropbox, or at least my Dropbox, for work unit interchange, unless everyone would be happy with 65+ MB email attachments.
There would be 96 input work units of one million lines each (ca. 65 MB) plus a 97th unit holding the remaining 400,068 lines (ca. 26 MB). An output work unit might be about 80 MB in size.
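Carving the master list into work units would be straightforward; a sketch, assuming a numbered file name scheme such as wu01.txt:

Code: Select all
def split_master(master_path, lines_per_unit=1_000_000):
    # Write successive slices of the sorted master list to
    # numbered work unit files (wu01.txt, wu02.txt, ...).
    out = None
    with open(master_path) as master:
        for i, line in enumerate(master):
            if i % lines_per_unit == 0:
                if out:
                    out.close()
                out = open(f"wu{i // lines_per_unit + 1:02d}.txt", "w")
            out.write(line)
    if out:
        out.close()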
The perft(14) result would be the grand total of the product fields of all the output work units.
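In code, the final summation might look like this (the output file naming is again hypothetical):

Code: Select all
import glob

def perft14_total(pattern="wu*_out.txt"):
    # perft(14) is the sum of the eighth (product) field over
    # every record of every output work unit.
    total = 0
    for path in sorted(glob.glob(pattern)):
        with open(path) as unit:
            for record in unit:
                total += int(record.split()[7])
    return total

print(perft14_total())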
Work unit data is sorted in ascending ASCII order. The very first record of the first work unit is:
Code: Select all
1Bbqkb1r/1p1ppppp/r4n2/p7/3P4/8/PPP1PPPP/RN1QKBNR b KQk - 0 4 3
The very last record of the last work unit is:
Code: Select all
rnqQkbnr/ppp1pppp/8/8/8/4P2b/PPPP1PPP/RNB1KBNR b KQkq - 2 4 2

