Details
Another release with an improved NNUE evaluation and small search patches (same as last time).
The newest network was trained using a custom trainer on 1.2B FENs from Berserk 6 self-play games. Most of the improvements in this release come from tweaks to the trainer, not from a significantly improved architecture or better data. I was hoping to improve the architecture for this release, but all attempts so far have failed (miserably, I may add).
Minor search patches are included in this release as well.
STC
Code:

ELO   | 68.92 +- 4.47 (95%)
CONF  | 8.0+0.08s Threads=1 Hash=8MB
GAMES | N: 10000 W: 3160 L: 1202 D: 5638

LTC
Code:

ELO   | 47.90 +- 3.68 (95%)
CONF  | 40.0+0.40s Threads=1 Hash=64MB
GAMES | N: 10000 W: 2183 L: 813 D: 7004

Choosing a binary
Binaries currently require `popcnt`.
Binaries with the label `avx512` require your processor to support `avx512`.
Binaries with the label `avx2` require your processor to support `avx2`.
Binaries with the label `pext` require your processor to support `bmi2`.
If you're unsure which to use or what your processor supports:
- Download them all
- Open each one and run `go depth 24`
- If it crashes, your processor can't run that binary
- Pick the fastest one (or check your CPU's features directly, as in the sketch below)
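Alternatively, on GCC or Clang you can query the relevant CPU features directly with the compiler's `__builtin_cpu_supports`. A small standalone sketch (not part of Berserk) that maps the features to the binary labels above:

Code:

// check_features.c -- compile with: gcc check_features.c -o check_features
// Reports which instruction sets this CPU supports, mapped to the binary
// labels above. Standalone illustration, not part of Berserk.
#include <stdio.h>

int main(void) {
  __builtin_cpu_init();
  printf("popcnt (required):  %s\n", __builtin_cpu_supports("popcnt")  ? "yes" : "no");
  printf("avx2 binaries:      %s\n", __builtin_cpu_supports("avx2")    ? "yes" : "no");
  printf("pext binaries:      %s\n", __builtin_cpu_supports("bmi2")    ? "yes" : "no");
  printf("avx512 binaries:    %s\n", __builtin_cpu_supports("avx512f") ? "yes" : "no");
  return 0;
}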
Changes
- NN Trainer improvements (see the loss sketch after this list)
  - Tuned hyperparameters
  - Tuned Eval vs WDL weighting from training data
  - L1 Penalty
- LMR Tweaks (sketch below)
  - Cut-node reduction added to captures/promotions in LMR
  - Simplified out useless reductions
- RFP Tweaks (see the combined RFP/NMP sketch below)
  - RFP Margin adjusted
- NMP Tweaks
  - Margin adjusted via the `improving` flag
  - Raw null-search score returned
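For readers unfamiliar with the trainer terms: "Eval vs WDL weighting" means the training target is a blend of the (sigmoid-squashed) search eval and the game result, and "L1 Penalty" is taken here to mean L1 regularization of the network weights (an assumption for this sketch). A minimal illustration, where LAMBDA, L1_COEFF, and SIGMOID_SCALE are placeholder constants rather than the trainer's actual values:

Code:

// Sketch of a blended Eval/WDL training target with an L1 weight penalty.
// LAMBDA, L1_COEFF and SIGMOID_SCALE are placeholders, not Berserk's values,
// and "L1 Penalty" is assumed to mean L1 regularization of the weights.
#include <math.h>
#include <stdlib.h>

#define LAMBDA        0.5f           // 1.0 = pure eval target, 0.0 = pure WDL target
#define L1_COEFF      1e-7f          // strength of the L1 penalty
#define SIGMOID_SCALE (1.0f / 400.0f)

static float Sigmoid(float cp) { return 1.0f / (1.0f + expf(-cp * SIGMOID_SCALE)); }

// eval: search score in centipawns, wdl: game result (1.0 / 0.5 / 0.0),
// prediction: network output already squashed to [0, 1].
float PositionLoss(float prediction, float eval, float wdl) {
  float target = LAMBDA * Sigmoid(eval) + (1.0f - LAMBDA) * wdl;
  float diff   = prediction - target;
  return diff * diff; // simple squared error against the blended target
}

// L1 penalty: scaled sum of absolute weight values, added to the batch loss.
float L1Penalty(const float* weights, size_t n) {
  float sum = 0.0f;
  for (size_t i = 0; i < n; i++) sum += fabsf(weights[i]);
  return L1_COEFF * sum;
}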
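Roughly, the LMR cut-node change looks like this in a generic reduction function; the log formula and the +2 amount are illustrative, not Berserk's actual numbers:

Code:

// Rough sketch of the LMR cut-node change. The base formula and the +2
// amount are generic/illustrative, not Berserk's actual numbers.
#include <math.h>

int LmrReduction(int depth, int moveCount, int cutNode, int isCaptureOrPromo) {
  if (depth < 1 || moveCount < 1) return 0;

  // Typical log-based base reduction.
  int r = (int)(0.5 + log((double)depth) * log((double)moveCount) / 2.25);

  // Before: the extra cut-node reduction applied only to quiet moves,
  //   i.e. if (cutNode && !isCaptureOrPromo) r += 2;
  // Now captures and promotions are reduced at cut nodes as well.
  if (cutNode) r += 2;

  // Quiet moves would still get their history-based adjustments here.
  if (!isCaptureOrPromo) {
    // e.g. r -= historyScore / someDivisor;
  }

  return r;
}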
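And a condensed view of the general RFP/NMP shape, with placeholder margins rather than Berserk's tuned values; `improving` is true when the static eval is better than it was two plies ago:

Code:

// Condensed sketch of the RFP and NMP conditions. All constants are
// placeholders, not Berserk's tuned margins.
#include <stdbool.h>

#define RFP_MAX_DEPTH 8
#define RFP_MARGIN    75   // centipawns per remaining ply
#define NMP_MARGIN    120  // extra requirement when not improving

// Reverse futility pruning: if the static eval beats beta by a depth-scaled
// margin, return it without searching the node.
bool CanApplyRFP(int depth, int staticEval, int beta) {
  return depth <= RFP_MAX_DEPTH && staticEval - RFP_MARGIN * depth >= beta;
}

// Null move pruning precondition: the margin now depends on `improving`
// (static eval better than two plies ago). When the reduced null search
// then fails high, the raw score is returned rather than clamped, i.e.
//   if (nullScore >= beta) return nullScore;   // instead of returning beta
bool CanApplyNMP(int staticEval, int beta, bool improving) {
  return staticEval >= beta + (improving ? 0 : NMP_MARGIN);
}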
Bugs
- Resolved an issue where Berserk suffered from endgame blindness due to NMP (Example)
Thanks
- Connor McMonigle (Seer's Author) for continued guidance and advice on NNs
- Andrew Grant (Ethereal's Author) for OpenBench; development of Berserk would not be possible without this resource
- Kim Kåhre (Koivisto's Author) for rubber duck debugging and theorizing
- Ipmanchess for validating that my AVX512 binary works