First success with neural nets

jonkr
Posts: 93
Joined: Wed Nov 13, 2019 12:36 am
Full name: Jonathan Kreuzer

Re: First success with neural nets

Post by jonkr » Sat Nov 21, 2020 9:19 pm

I updated the code. I refactored a lot of it to make it easier to reuse across multiple games, and added some more standard search features and faster neural net calculation.

GuiNN 2.04 against GuiCheckers 1.11 (an old pre-NN version) at 0.2s per move:
1169-31-1680 W-L-D (+145 Elo)
That's a pretty dominating result for checkers. It still lost some games, but I expect that with better training (and more effort making sure the search is strong and bug-free) it would be very hard to beat.

I did a first pass at incremental update of the first layer, which I think was a small gain in 8x8 checkers (about 5% faster in the early game, trailing off to even speed). I think incremental updates will be more useful in some other games, and they would also have a bigger impact if I were evaluating every board during search, as is pretty common, instead of only the final boards.
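
To make the idea concrete, here is a minimal sketch of an incrementally updated first layer. This is not the actual GuiNN code: it assumes binary piece/square input features and int16 accumulators, and the sizes and the FirstLayer/Accumulator names are made up for illustration.

```
// Sketch only: incrementally updated first layer with binary inputs.
#include <cstdint>

constexpr int kInputs  = 128;   // e.g. piece-type x square features (assumed)
constexpr int kOutputs = 64;    // first-layer neurons (assumed)

struct FirstLayer {
    int16_t weights[kInputs][kOutputs]; // one weight row per input feature
    int16_t biases[kOutputs];
};

struct Accumulator {
    int16_t values[kOutputs]; // running sum of active feature rows + biases

    // Full recompute, used at the root or after big changes.
    void Refresh(const FirstLayer& layer, const bool* inputs) {
        for (int o = 0; o < kOutputs; ++o) values[o] = layer.biases[o];
        for (int i = 0; i < kInputs; ++i)
            if (inputs[i]) AddFeature(layer, i);
    }
    // A move that turns one feature off and another on only touches two rows,
    // instead of redoing the full kInputs x kOutputs multiply.
    void AddFeature(const FirstLayer& layer, int feature) {
        for (int o = 0; o < kOutputs; ++o) values[o] += layer.weights[feature][o];
    }
    void RemoveFeature(const FirstLayer& layer, int feature) {
        for (int o = 0; o < kOutputs; ++o) values[o] -= layer.weights[feature][o];
    }
};
```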

However, it did get me wondering why the rest of the net was as slow as it was. It turns out that even simple stuff like ReLU activation clamping, and to a lesser extent converting the 32-bit intermediate values to 16-bit fixed-point values, can cause a big slowdown when written as regular scalar code poorly mixed into the program flow. I converted all of this to SIMD instructions operating on a full array at a time, for a 20% speed gain. One nice thing about general components like NNs is improving multiple programs at once: after pasting the neuralNet cpp files into Slow Chess, the endgame test versus SF11 was now -27.
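
For illustration, here is a minimal SSE2 sketch of doing the ReLU clamp and the 32-to-16-bit conversion over a whole array in one pass. This is not the actual GuiNN / Slow Chess code, and any per-engine scaling or shifting before the pack is left out.

```
// Sketch only: clamp 32-bit sums with ReLU and pack to 16-bit in one pass.
#include <emmintrin.h>  // SSE2 intrinsics
#include <cstdint>

// Assumes 'count' is a multiple of 8 and both pointers are 16-byte aligned.
void ReluAndPackTo16(const int32_t* sums, int16_t* out, int count) {
    const __m128i zero = _mm_setzero_si128();
    for (int i = 0; i < count; i += 8) {
        __m128i lo = _mm_load_si128(reinterpret_cast<const __m128i*>(sums + i));
        __m128i hi = _mm_load_si128(reinterpret_cast<const __m128i*>(sums + i + 4));
        // Signed saturation handles the 32->16 bit narrowing.
        __m128i packed = _mm_packs_epi32(lo, hi);
        // ReLU: clamp negative activations to zero.
        packed = _mm_max_epi16(packed, zero);
        _mm_store_si128(reinterpret_cast<__m128i*>(out + i), packed);
    }
}
```

Doing the clamp and narrowing on whole arrays, rather than per element inside the layer loop, is what keeps this kind of code out of the hot program flow.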
