I tried to draw a small diagram illustrating the structure of the NNUE neural network. It is based on a Japanese publication and its translations. The translation of the first part of this publication was done by Terje; the rest I translated using Google Translate and DeepL.
If there are any errors, either in my English (I am not a native speaker) or in my understanding of the NNUE structure, please let me know:
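For readers who prefer code to a diagram, here is a minimal sketch of my understanding of the net's forward pass. The layer sizes (41024 HalfKP features per perspective, a 256-wide feature transformer per side, then 32, 32, 1) are taken from public descriptions of the original publication; the weights below are random placeholders, so this shows only the structure, not the real evaluator.

```python
# Structural sketch of the HalfKP NNUE topology (assumed sizes: 41024 -> 2x256 -> 32 -> 32 -> 1).
# Weights are random stand-ins; this is an illustration, not Stockfish's actual code.
import numpy as np

FEATURES = 41024   # HalfKP input features per perspective (assumed)
HIDDEN1 = 256      # feature-transformer output per perspective
HIDDEN2 = 32
HIDDEN3 = 32

rng = np.random.default_rng(0)
W_ft = rng.standard_normal((FEATURES, HIDDEN1)) * 0.01
W1 = rng.standard_normal((2 * HIDDEN1, HIDDEN2)) * 0.1
W2 = rng.standard_normal((HIDDEN2, HIDDEN3)) * 0.1
W3 = rng.standard_normal((HIDDEN3, 1)) * 0.1

def clipped_relu(x):
    # NNUE uses a clipped ReLU so activations stay in a small fixed range.
    return np.clip(x, 0.0, 1.0)

def evaluate(white_features, black_features):
    """Forward pass given the lists of active feature indices per perspective."""
    # The feature transformer is a sparse accumulation: sum the weight rows
    # of the active features (this is the part the accumulator caches and
    # updates incrementally as pieces move).
    acc_w = W_ft[white_features].sum(axis=0)
    acc_b = W_ft[black_features].sum(axis=0)
    x = clipped_relu(np.concatenate([acc_w, acc_b]))
    x = clipped_relu(x @ W1)
    x = clipped_relu(x @ W2)
    return (x @ W3).item()  # scalar evaluation

# Example: a position with a few active feature indices per side (arbitrary).
score = evaluate([5, 123, 40000], [17, 2048, 39999])
print(f"eval: {score:.4f}")
```

The key point the diagram tries to capture is that only the first layer is large, and its output is a sum over active features, which is why it can be updated incrementally instead of recomputed from scratch.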
Dariusz Orzechowski wrote: ↑Wed Jun 17, 2020 2:04 am
Some more results with two nets from this thread against the latest sf dev. 1 thread, tc 1m+1s, the same 100 short normal openings for both matches. Bigger net is more solid.
Can I take a net and improve it with a new "training phase" using newly generated training data? I have the evalsave folder from the first training run.
Thanks.
Raphexon wrote: ↑Sat Jun 20, 2020 10:45 am
"nps 49143364"
"nps 52486448"
What's your normal nps? The NN should be around 40-50% slower than regular Stockfish.
Also what binary are you using?
I'm using this binary: "/Stockfish/src/stockfish"
I included two outputs, one with the NN loaded and the other without. The rates were 49143364 and 52486448 nps, which is nowhere near the 40-50% slowdown you describe.
The 40-50% is on one thread. I suggest you try it; it's possible that it behaves differently with that many threads.