I'm currently generating new training data with it; I hope to be able to release something new in a few days.
Are you still training without draws?
Probably not a good idea, imho.
In my copy of SF-NNUE, https://github.com/joergoster/Stockfish-NNUE, I enabled the use of draw results as well.
Hopefully, I did everything right.
When compiling from source, it will use the halfkp_256x2-32-32 architecture by default.
I also think that the initial learning rate (eta = 1.0) might be too high.
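For reference, one common way a learner uses draw results is to blend the game outcome with the shallow search score in the training target, so a draw contributes 0.5 instead of the game being skipped. A minimal sketch of that idea (the interpolation weight `lam` and the centipawn scaling constant here are assumptions, not necessarily the values this learner uses):

```python
import math

def training_target(search_score_cp, game_result, lam=0.5, scale=600.0):
    """Blend of search eval and game outcome, NNUE-learner style.

    search_score_cp: shallow search score in centipawns (side to move)
    game_result: +1 win, 0 draw, -1 loss (side to move); draws map to 0.5
    lam: interpolation weight (assumed value)
    scale: centipawn-to-winrate scaling constant (assumed value)
    """
    eval_winrate = 1.0 / (1.0 + math.exp(-search_score_cp / scale))
    result_winrate = (game_result + 1) / 2.0  # -1/0/+1 -> 0.0/0.5/1.0
    return lam * eval_winrate + (1.0 - lam) * result_winrate

# A drawn game contributes 0.5 through the result term:
t = training_target(100, 0)
```

With draws excluded, the result term only ever sees 0 or 1, which biases the target toward decisive outcomes; including them pulls the target toward 0.5 in balanced positions.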
Can you share all of your binaries (gensfen, half-kp, half-kp learn, etc.) on Google Drive or as GitHub releases? I know it's a lot to ask, but I'd really appreciate it.
Raphexon wrote:
Do the BLAS binaries also generate draw fens?
You're an angel in that case.
Yes.
But everything is basically untested; I'm still in the process of cleaning up and formatting the code,
which also helps me better understand how things are put together and how they work. Hopefully!
I'm currently generating new training data with it; I hope to be able to release something new in a few days.
Thanks! I tested it in a short 100-game match against SF 11 (1 thread, tc 1m+1s, 2-move book) and it lost, but not by much: sf11 - sf nnue +28-21=51. It's only one test, but it looks like under these conditions SF NNUE is probably already within 50 Elo of the latest SF dev. Very promising.
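As a sanity check on that estimate, the Elo difference implied by a match score follows from the standard logistic model. A quick sketch (keep in mind that a 100-game sample carries error bars far wider than the point estimate):

```python
import math

def elo_diff(wins, losses, draws):
    """Elo difference implied by a match score (logistic model)."""
    games = wins + losses + draws
    score = (wins + 0.5 * draws) / games
    return -400.0 * math.log10(1.0 / score - 1.0)

# sf11 - sf nnue was +28 -21 =51 from sf11's point of view,
# so from NNUE's side the score is +21 -28 =51:
d = elo_diff(21, 28, 51)  # roughly -24 Elo for NNUE
```

So the single 100-game match puts NNUE around 24 Elo behind SF 11, consistent with the "within 50 Elo of SF dev" reading, though more games are needed for a tight estimate.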
Sounds good. I started a 5000-game test run for my Stockfish testing rating list with the net linked in that post and that binary (stockfish.nnue-learn-use-blas.halfkp_256x2-32-32.exe). For the testing conditions, see my website: https://www.sp-cc.de
Same 5 opponents as in my latest tests of Stockfish-dev (200601): Komodo 14, Houdini 6, Fire 7.1, Ethereal 12, Xiphos 0.6.
That will take 6-7 days. I will report if there are crashes or other problems. When it is done, we will have a first valid rating of SF NNUE.
Stay tuned!
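For a sense of how precise a 5000-game run can be, the 95% confidence half-width of a match result can be approximated from the game count and draw ratio. A rough sketch (the near-even score and the 50% draw ratio are assumptions; more draws shrink the error):

```python
import math

def elo_error_95(games, draw_ratio=0.5, score=0.5):
    """Approximate 95% confidence half-width in Elo for a match.

    Assumes the score is near 50%; draw_ratio is an assumed
    fraction of drawn games (draws reduce per-game variance).
    """
    # per-game variance of the score: wins/losses contribute 0.25, draws 0
    var = (1.0 - draw_ratio) * 0.25
    se = math.sqrt(var / games)
    # slope of the logistic Elo curve at `score`
    slope = 400.0 / (math.log(10.0) * score * (1.0 - score))
    return 1.96 * se * slope

margin = elo_error_95(5000)  # roughly +/- 7 Elo
```

So a 5000-game run should pin the rating down to within single-digit Elo, a big step up from a 100-game test.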
pohl4711 wrote: ↑Sun Jun 14, 2020 2:05 pm
I have trained a halfkp384 net. It is 50% slower than the 400kb one, while being 30MB in size.
At 30s+0.3s against my crep net (CCRL estimate 3461, thanks muppetmuppet!), the big net scored +123 =220 -57. (FT 2-move opening book; other testers saw a smaller Elo gain, but their results are within the error bars of my test. Expect a boost of at least 10 Elo.)
I stopped this test in favour of a 1+1 test with the noobs 8-move book used in regression tests, testing against SF dev and my crep net.
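For context on the sizes mentioned above: the bulk of a HalfKP net is the feature-transformer weight matrix, so the size difference between architectures can be estimated directly. A rough sketch (assuming 41024 HalfKP input features per perspective, i.e. 64 king squares x 641 piece-square indices, and int16 weights; the later, much smaller layers and file headers are ignored):

```python
def halfkp_ft_bytes(accum_size, n_features=41024, bytes_per_weight=2):
    """Approximate size of the HalfKP feature-transformer weights.

    n_features: 64 king squares x 641 piece-square indices (HalfKP)
    bytes_per_weight: int16 weights assumed; the remaining layers
    are comparatively tiny and are not counted here.
    """
    return n_features * accum_size * bytes_per_weight

mb = 1024 * 1024
size_256 = halfkp_ft_bytes(256) / mb  # about 20 MB
size_384 = halfkp_ft_bytes(384) / mb  # about 30 MB, matching the halfkp384 net
```

The 384-wide accumulator is 1.5x the weights of the 256-wide one, which also explains most of the speed difference: a larger accumulator costs more per incremental update.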
ChickenLogic wrote: ↑Sun Jun 14, 2020 3:40 pm
Also a big thanks to Raphexon, as half of the 600M fens were generated by him.
I'll try to increase the quality of the validation files and maybe even use depth-10 training data.
@Cucumber, it would be great if you shared your conversion tool so we can finally use the lichess data for training!
Yes, that would be great. It would not only allow us to use already available data from several sources,
but also to play training games with time controls and from different starting positions, books, etc.
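As a sketch of what such a conversion could emit, here is a tiny formatter for one position in a plain-text training entry. The field layout follows the learner's text format as I understand it, but the score units (centipawns) and the result perspective (relative to the side to move: 1 win, 0 draw, -1 loss) are assumptions that should be verified against the learner's own conversion output:

```python
def to_plain(fen, move, score_cp, ply, result):
    """Format one position as a plain-text training entry.

    Assumed semantics: score_cp in centipawns from the side to move,
    result in {1, 0, -1} relative to the side to move. Verify both
    against the learner's convert output before relying on this.
    """
    return (f"fen {fen}\n"
            f"move {move}\n"
            f"score {score_cp}\n"
            f"ply {ply}\n"
            f"result {result}\n"
            "e\n")

entry = to_plain("rnbqkbnr/pppppppp/8/8/8/8/PPPPPPPP/RNBQKBNR w KQkq - 0 1",
                 "e2e4", 25, 1, 0)
```

A converter along these lines could walk the moves of a PGN game, record the engine's score at each position, and stamp every entry with the final game result.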
I'm curious: how do these nets compare to Leela's, in terms of size and speed, for example?
It plays very interestingly, and sometimes differently from Stockfish and Lc0. Sometimes it also sees that a position is clearly winning before other engines do.