After the tournament I didn’t have much time to work on rofChade anymore, so it took a while to release the first NN version, but here it is!
The current NN network architecture is HALFKA (2×256)x32x32x1; for now, this still gives the best results for rofChade. The network is trained on 2.8B positions generated by rofChade, around 600M of which are FRC positions. The positions have been rescored a few times with intermediate networks to simulate reinforcement learning.
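To make the (2×256)x32x32x1 shape concrete, here is a minimal PyTorch sketch of such a network, in the spirit of the nnue-pytorch trainer. It assumes the usual clipped-ReLU activations and dense 0/1 input vectors; the feature count and all names are illustrative and do not reflect rofChade’s actual code, and the exact input size depends on the HalfKA variant used.

```python
import torch
import torch.nn as nn

# Illustrative feature count: king square x piece type x piece square.
# The exact number depends on the HalfKA variant (assumption, not rofChade's value).
NUM_FEATURES = 64 * 11 * 64

class HalfKaSketch(nn.Module):
    """Sketch of a HALFKA (2x256)x32x32x1 evaluation network."""
    def __init__(self, num_features: int = NUM_FEATURES):
        super().__init__()
        # Feature transformer: sparse HalfKA features -> 256 accumulators per perspective
        self.ft = nn.Linear(num_features, 256)
        # Dense layers on the concatenated (side to move, other side) accumulators
        self.l1 = nn.Linear(2 * 256, 32)
        self.l2 = nn.Linear(32, 32)
        self.out = nn.Linear(32, 1)

    def forward(self, stm_features: torch.Tensor, nstm_features: torch.Tensor) -> torch.Tensor:
        # stm_features / nstm_features: dense 0/1 feature vectors for each perspective
        stm = torch.clamp(self.ft(stm_features), 0.0, 1.0)    # clipped ReLU
        nstm = torch.clamp(self.ft(nstm_features), 0.0, 1.0)
        x = torch.cat([stm, nstm], dim=1)                      # 2 x 256 = 512
        x = torch.clamp(self.l1(x), 0.0, 1.0)
        x = torch.clamp(self.l2(x), 0.0, 1.0)
        return self.out(x)                                     # scalar evaluation
```

In the engine itself only the feature-transformer output needs to be updated incrementally per move; the small 512x32x32x1 tail is cheap to recompute each evaluation.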
The NN code is implemented from scratch; the matrix calculations are heavily influenced by the Cfish implementation. For training the networks, an older version of the nnue-pytorch trainer from Stockfish is used as the base.
To be able to generate FRC positions for training, rofChade 3.0 also supports the FRC and DFRC chess variants.
I also upgraded the version of the fathom library, so 7-men Syzygy tablebases are now supported as well (although I haven’t been able to test whether this works).
Special thanks for this version go to the following persons/groups:
- Ronald de Man for his matrix calculations in Cfish and the Syzygy tablebase library
- The group that develops the nnue-pytorch trainer (and their interesting discussions on Discord)
- Andrew Grant for sharing his experience with his FRC implementation and his perft test data; it helped me discover some unexpected FRC bugs
- Jon Dart for the fathom library and his updates to it
- Frank Quisinsky for letting intermediate rofChade versions play in his tournament
- IpmanChess for testing some intermediate rofChade versions
- Graham Banks and the rest of the CCRL team for doing a great testing job for so long
- The TCEC and CCC teams for organizing great tournaments
- Everybody else who enjoys (computer) chess!