Stockfish NN release (NNUE)

Discussion of anything and everything relating to chess playing software and machines.

Moderators: bob, hgm, Harvey Williamson

ChickenLogic
Posts: 69
Joined: Sun Jan 20, 2019 10:23 am
Full name: Julian Willemer

Re: Stockfish NN release (NNUE)

Post by ChickenLogic » Thu Jun 04, 2020 2:25 pm

Raphexon wrote:
Thu Jun 04, 2020 2:16 pm
muppetmuppet wrote:
Thu Jun 04, 2020 10:36 am
I can't seem to get the bigger net to work properly; it keeps giving up its queen for a piece. Do I have to do something other than use the half exe and put the net in the eval dir?
If you're using the latest binary (the one with the speedup), it will need a new net because its parameters have changed.
It can use an old net, but it might have eval problems.

Also, I've found some extra information, so when I'm home I'll add a new txt file to this thread.
Regarding my latest post: I'm using the sped-up version, but this is still the net that comes with it... I don't know if it has the same problem.
Currently working on SFNN.

Raphexon
Posts: 302
Joined: Sun Mar 17, 2019 11:00 am
Full name: Henk Drost

Re: Stockfish NN release (NNUE)

Post by Raphexon » Thu Jun 04, 2020 2:26 pm

Joerg Oster wrote:
Thu Jun 04, 2020 2:13 pm
ChickenLogic wrote:
Thu Jun 04, 2020 12:13 pm
Sorry for the double post. Please note that the net is just a test run to look for ideal parameters. It could very well be that the depth=4 training data is showing through.
Training with fixed depth is probably the worst method for Stockfish.
Fixed depth testing/tuning has proven to be a bad idea in the past!

Fixed movetime (maybe 10 ms) would already be much better, I guess.
Yes and no.

Fixed depth testing is nigh-useless if you're testing out search patches. (Hopefully for self-explanatory reasons)
Fixed depth testing is also problematic if you're testing out an eval that changes the speed of the evaluation function.

NNUE nets don't change the search, and any two nets evaluate at the same speed.
So fixed depth is fine.
A net that's better at depth=x will be better at depth=x+y.

Fixed depth (just like fixed nodes) also has the advantage of causing no problems when you run it with high concurrency.
Fixed depth lets you do stupid stuff like running 16 concurrent games on a 6-core/12-thread CPU with no loss in strength. Running 12 concurrent games on a 6-core/12-thread CPU with 10 ms/move will give you 6 normal-strength games and 6 weaker-than-normal games.
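The concurrency point above can be illustrated with a toy sketch (this is not engine code; the function names and numbers are made up for illustration): a fixed-node or fixed-depth budget is independent of wall-clock speed, while a movetime budget shrinks when games compete for cores.

```python
def nodes_searched_fixed(node_budget: int, cpu_share: float) -> int:
    # Fixed nodes/depth: the budget does not depend on CPU speed, so
    # oversubscribing cores only makes games take longer, not weaker.
    return node_budget

def nodes_searched_movetime(movetime_ms: int, nps: int, cpu_share: float) -> int:
    # Fixed movetime: a game that only gets a fraction of a core fits
    # fewer nodes into the same wall-clock slice, weakening the search.
    return int(movetime_ms / 1000 * nps * cpu_share)

# 16 games on 12 threads: each game gets roughly 12/16 of a core.
print(nodes_searched_fixed(50_000, 12 / 16))          # unchanged: 50000
print(nodes_searched_movetime(10, 1_000_000, 1.0))    # 10000 nodes
print(nodes_searched_movetime(10, 1_000_000, 12/16))  # 7500 nodes
```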

Raphexon
Posts: 302
Joined: Sun Mar 17, 2019 11:00 am
Full name: Henk Drost

Re: Stockfish NN release (NNUE)

Post by Raphexon » Thu Jun 04, 2020 2:28 pm

ChickenLogic wrote:
Thu Jun 04, 2020 2:25 pm
Raphexon wrote:
Thu Jun 04, 2020 2:16 pm
muppetmuppet wrote:
Thu Jun 04, 2020 10:36 am
I can't seem to get the bigger net to work properly; it keeps giving up its queen for a piece. Do I have to do something other than use the half exe and put the net in the eval dir?
If you're using the latest binary (the one with the speedup), it will need a new net because its parameters have changed.
It can use an old net, but it might have eval problems.

Also, I've found some extra information, so when I'm home I'll add a new txt file to this thread.
Regarding my latest post: I'm using the sped-up version, but this is still the net that comes with it... I don't know if it has the same problem.
It's most likely the old net.
I sincerely doubt the author has trained a new net for it.

ChickenLogic
Posts: 69
Joined: Sun Jan 20, 2019 10:23 am
Full name: Julian Willemer

Re: Stockfish NN release (NNUE)

Post by ChickenLogic » Thu Jun 04, 2020 2:31 pm

Yeah, when I was uploading the files your post wasn't there yet :D I nearly got a heart attack. I will probably redo the depth=4 big-net training for the sped-up version, since the net did produce a nice result after all. :)
Currently working on SFNN.

ChickenLogic
Posts: 69
Joined: Sun Jan 20, 2019 10:23 am
Full name: Julian Willemer

Re: Stockfish NN release (NNUE)

Post by ChickenLogic » Thu Jun 04, 2020 2:43 pm

Now for another game. This one is by the latest small net I trained, which is still not fully saturated. The opponent is Laser 1.7. Our opponents are getting stronger.

12 threads with Syzygy tablebases. TC: 10+2. I forced them to play the Sicilian, but nothing more than e4 c5. The rest was their "own will".
Currently working on SFNN.

ChickenLogic
Posts: 69
Joined: Sun Jan 20, 2019 10:23 am
Full name: Julian Willemer

Re: Stockfish NN release (NNUE)

Post by ChickenLogic » Thu Jun 04, 2020 3:29 pm

https://drive.google.com/file/d/1-v_Oy2 ... xZ5Tj/view
This net scored 94-89-17 (W/D/L) against the one provided by the author. This is probably the best net to date.
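For scale, a 94-89-17 result over 200 games can be turned into an approximate Elo difference with the standard logistic model (a rough sketch only; confidence intervals are ignored here):

```python
import math

def elo_diff(wins: int, draws: int, losses: int) -> float:
    """Approximate Elo difference from a W/D/L score (logistic model)."""
    games = wins + draws + losses
    score = (wins + 0.5 * draws) / games
    return -400.0 * math.log10(1.0 / score - 1.0)

print(round(elo_diff(94, 89, 17)))  # roughly +141 Elo
```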
Currently working on SFNN.

pferd
Posts: 125
Joined: Thu Jul 24, 2014 12:49 pm

Re: Stockfish NN release (NNUE)

Post by pferd » Thu Jun 04, 2020 6:42 pm

Thanks, ChickenLogic, for sharing your net. It plays some pretty impressive games for me. Both engines ran on 8 cores each.


ChickenLogic
Posts: 69
Joined: Sun Jan 20, 2019 10:23 am
Full name: Julian Willemer

Re: Stockfish NN release (NNUE)

Post by ChickenLogic » Fri Jun 05, 2020 11:35 am

I have 3 more games! 2 of them are against Lc0 running on my 980 Ti. I used the Stein 14.0 net at 2.5-3 knps. SFNN is about half the speed of regular SF, which means the Leela ratio, which usually favors SF on my setup, is now about equal. The net SF uses is a new halfkp net trained on depth=6 instead of depth=4 positions.





Admittedly, these are rather short games, but I would have expected Leela to crush this net.
Then I decided that it is time for this net to show how it defends at a rather severe disadvantage. This game is against SF dev in the KGA, where SF dev is black!



Of course, these might be one-off results, but they show there is a lot of potential, as this net is once more just a test run that didn't converge because I stopped it early.
Currently working on SFNN.

Raphexon
Posts: 302
Joined: Sun Mar 17, 2019 11:00 am
Full name: Henk Drost

Re: Stockfish NN release (NNUE)

Post by Raphexon » Fri Jun 05, 2020 11:37 am

Earlier info:

Code: Select all

1. The trainer converts the searched score in a training sample to a winning percentage with the formula below.
https://twitter.com/issei_y/status/589642166818877440
2. The trainer converts the game result to a winning percentage.  Win=1.0 Lose=0.0
3. The trainer calculates the linear sum of 1. and 2. with the training parameter "lambda".  This is the teaching signal.
4. The trainer calculates the score of a quiescence search from the position.
5. The trainer converts 4. to a winning percentage.
6. The trainer adjusts the NN parameters so that 5. gets closer to 3.

A sample in training data consists of the parameters below.
- A position.
- The searched score of the position.
- The move in the position.
- Game ply.
- Game result.
Please refer to the following source code.  https://github.com/nodchip/Stockfish/blob/master/src/learn/learn.h#L194
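Steps 1-3 above can be sketched in a few lines. Note the sigmoid scale of 600 is an assumption on my part (the constant commonly cited for the Ponanza/YaneuraOu winrate formula), not something stated in this thread:

```python
import math

def winrate(score_cp: float, scale: float = 600.0) -> float:
    # Steps 1/5: map a search score (centipawns) to a winning percentage.
    # The scale constant 600 is assumed, not taken from this thread.
    return 1.0 / (1.0 + math.exp(-score_cp / scale))

def teaching_signal(search_score: float, game_result: float, lam: float) -> float:
    # Steps 2-3: blend the searched-score winrate with the game result
    # (win=1.0, lose=0.0) using the training parameter "lambda".
    return lam * winrate(search_score) + (1.0 - lam) * game_result

# The trainer then nudges the NN parameters so the winrate of a
# quiescence search (steps 4-5) moves toward this signal (step 6).
```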
Later info:

"The format of training data is the array of PackedFenValue.
https://github.com/nodchip/Stockfish/bl ... arn.h#L194
PackedFenValue consists of PackedFen, the searched score, the move, the game ply, and the game result. PackedFen is a binary format of FEN. It is modified for Stockfish+NNUE, and is Huffman-encoded FEN. Please refer to the encoder and decoder in the following source code.
https://github.com/nodchip/Stockfish/bl ... r.cpp#L156
https://github.com/nodchip/Stockfish/bl ... r.cpp#L265
The name of the struct should be PackedFenValue, but it is PackedSfenValue because I just copied it from YaneuraOu.
If we want to convert existing records to the training data format, we need to use PackedSfenValue."

Raphexon
Posts: 302
Joined: Sun Mar 17, 2019 11:00 am
Full name: Henk Drost

Re: Stockfish NN release (NNUE)

Post by Raphexon » Fri Jun 05, 2020 1:30 pm

ChickenLogic wrote:
Fri Jun 05, 2020 11:35 am
I have 3 more games! 2 of them are against Lc0 running on my 980 Ti. I used the Stein 14.0 net at 2.5-3 knps. SFNN is about half the speed of regular SF, which means the Leela ratio, which usually favors SF on my setup, is now about equal. The net SF uses is a new halfkp net trained on depth=6 instead of depth=4 positions.

Admittedly, these are rather short games, but I would have expected Leela to crush this net.
Then I decided that it is time for this net to show how it defends at a rather severe disadvantage. This game is against SF dev in the KGA, where SF dev is black!



Of course, these might be one-off results, but they show there is a lot of potential, as this net is once more just a test run that didn't converge because I stopped it early.
That's a pretty awesome hold.
