Modern Times wrote: ↑Sat Apr 24, 2021 10:36 am
Does DanaSah 8.8 play chess960 under Cutechess GUI, either as UCI or WinBoard?
Currently trying it as XBoard at chess960 under Cutechess. Only 10 games played but I think it is probably fine.
DanaSah can play FRC/Chess960 with either the xboard or UCI protocol. No user configuration is required. It also recognises the two commonly used FEN formats, Shredder-FEN and X-FEN.
I have tried to play in cutechess with the uci protocol:
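The two formats differ only in how the castling field of the FEN is written: Shredder-FEN records the files of the castling rooks (e.g. HAha for the standard start position), while X-FEN keeps KQkq and falls back to file letters only for ambiguous rooks. A minimal sketch (a hypothetical helper, not DanaSah's actual code) that classifies a castling field:

```python
def castling_format(castling: str) -> str:
    """Classify the castling field of a Chess960 FEN.

    Shredder-FEN encodes castling rights as the files of the
    castling rooks (A-H / a-h); X-FEN keeps K/Q/k/q and only
    falls back to file letters for ambiguous cases.
    Hypothetical helper, not DanaSah's actual code.
    """
    if castling == "-":
        return "none"
    if set(castling) <= set("KQkq"):
        return "X-FEN style (KQkq)"
    if set(castling) <= set("ABCDEFGHabcdefgh"):
        return "Shredder-FEN style (rook files)"
    return "mixed (X-FEN with file letters)"

# Standard start position, written both ways:
print(castling_format("KQkq"))   # X-FEN style (KQkq)
print(castling_format("HAha"))   # Shredder-FEN style (rook files)
```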
pedrox wrote: ↑Sat Apr 24, 2021 2:10 pm
DanaSah can play FRC/Chess960 with either the xboard or UCI protocol. No user configuration is required. It also recognises the two commonly used FEN formats, Shredder-FEN and X-FEN.
I have tried to play in cutechess with the uci protocol:
Ah thank you. I'll try it as UCI instead. I ran previous versions as UCI under ChessGUI, but I no longer use that.
Pedro, I'd like to ask: how recent is the stock2bd10.nnue net?
Hi Ed,
Was stock2bd10.nnue used with DanaSah or with Stockfish? I guess with DanaSah.
In mid-March I downloaded from a Google Drive page 2,000 million positions in binpack format that I think were created with the Stockfish master network (SF13?) at depth 10. The file is called gensfen_multipvdiff_100_d10.binpack; I think it is quite well known in the Stockfish NNUE group on Discord.
Then I used the nodchip trainer to train a fresh network. I haven't done a serious test of its strength, but I think it's something like 75 Elo points weaker than a Stockfish net I tested. To make it stronger it would have been better to have more positions and more depth, even up to 16,000 million positions.
If I'm not mistaken, with a SIM value of 54 it could even pass as original work. However, I preferred to do something different: instead of using positions with Stockfish evaluations or one of their networks, I used CCRL positions, and instead of learning by depth I learned by result. That is the ccrl402net network, which I imagine will have a very low SIM. The dananet1 network that I use by default, which is tactically stronger because it was trained at depth 5 with ccrlnet evaluations, will have a higher SIM than the CCRL net but lower than the one in your test.
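The distinction drawn above between learning by depth and learning by result can be sketched as two labeling schemes (a toy illustration, not pedrox's actual trainer; the SCALE constant and the blending weight are assumptions):

```python
import math

# A minimal sketch (not the actual trainer) of the two ways of
# labeling NNUE training positions mentioned above.

SCALE = 400.0  # hypothetical centipawn-to-probability scaling

def target_from_eval(score_cp: float) -> float:
    """Depth-based labeling: squash a search score (centipawns)
    into a win probability with a logistic curve."""
    return 1.0 / (1.0 + math.exp(-score_cp / SCALE))

def target_from_result(result: float) -> float:
    """Result-based labeling: use the game outcome directly
    (1 = win, 0.5 = draw, 0 = loss for the side to move)."""
    return result

def blended_target(score_cp: float, result: float, lam: float = 0.75) -> float:
    """Many trainers blend the two with a mixing weight lambda."""
    return lam * target_from_eval(score_cp) + (1 - lam) * target_from_result(result)

print(round(target_from_eval(0.0), 2))            # 0.5: equal position
print(round(blended_target(100.0, 1.0, lam=0.5), 3))  # 0.781
```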
Pedro, I'd like to ask: how recent is the stock2bd10.nnue net?
Hi Ed,
Was stock2bd10.nnue used with DanaSah or with Stockfish? I guess with DanaSah.
Yep.
In mid-March I downloaded from a Google Drive page 2,000 million positions in binpack format that I think were created with the Stockfish master network (SF13?) at depth 10. The file is called gensfen_multipvdiff_100_d10.binpack; I think it is quite well known in the Stockfish NNUE group on Discord.
Then I used the nodchip trainer to train a fresh network. I haven't done a serious test of its strength, but I think it's something like 75 Elo points weaker than a Stockfish net I tested. To make it stronger it would have been better to have more positions and more depth, even up to 16,000 million positions.
If I'm not mistaken, with a SIM value of 54 it could even pass as original work.
Well noticed, and that was the reason I asked. SIM-SCORE is like SIMEX: the numbers can only raise doubt, not prove innocence, so to say.
However, I preferred to do something different: instead of using positions with Stockfish evaluations or one of their networks, I used CCRL positions, and instead of learning by depth I learned by result. That is the ccrl402net network, which I imagine will have a very low SIM. The dananet1 network that I use by default, which is tactically stronger because it was trained at depth 5 with ccrlnet evaluations, will have a higher SIM than the CCRL net but lower than the one in your test.
Well done and welcome to the NNUE family
90% of coding is debugging, the other 10% is writing bugs.
With any luck, it should be more than 100 Elo stronger than v1.2.1. With this version, the network weights are now embedded in the binaries. The embedded network is trained entirely on data generated by Seer's search, starting from a randomly initialized network; it is no longer directly or indirectly trained on Stockfish-derived training data. The training technique employed is a variant of semi-supervised learning and involves starting with a large number of <=6-man positions labeled using Syzygy EGTBs. A lengthier description of the unique training process can be found in the README.
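The iterative scheme described above can be sketched as a toy loop (purely illustrative stand-ins, not Seer's actual code): the seed labels come from tablebase WDL probes, and later rounds relabel positions using the current network's own output:

```python
# Toy sketch of Syzygy-seeded semi-supervised training
# (illustrative stand-ins only, not Seer's actual code).

def tablebase_wdl(position: str) -> int:
    """Stand-in for a Syzygy probe: returns -1/0/+1 (loss/draw/win)
    for a <=6-man position. Here just a deterministic dummy."""
    return sum(map(ord, position)) % 3 - 1

def train(samples):
    """Stand-in for gradient training: returns an 'evaluator' that
    memorizes the average label (a trivially simple model)."""
    avg = sum(label for _, label in samples) / len(samples)
    return lambda pos: avg

def search_label(net, position: str) -> float:
    """Later rounds: label positions with the current net's search
    (here the net itself stands in for a shallow search)."""
    return net(position)

positions = [f"pos{i}" for i in range(100)]

# Round 0: supervised seed from tablebase labels.
net = train([(p, tablebase_wdl(p)) for p in positions])

# Rounds 1..n: self-labeled (semi-supervised) iterations.
for _ in range(3):
    net = train([(p, search_label(net, p)) for p in positions])
```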
The improvement is modest this time (I'm playing with fire - no pun intended): estimated +15 against other engines, about half of the 4.40 vs 4.39 gain (self-play hyperbullet 56% against 4.40, so who knows; I usually get about half of the self-play gain). My apologies to the testers for not jumping hundreds of Elo points.
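For reference, the arithmetic here follows the standard logistic Elo model: 56% in self-play is about +42 Elo, and half of that is roughly consistent with the estimated +15 against other engines. A quick check:

```python
import math

def elo_from_score(score: float) -> float:
    """Elo difference implied by an expected score in (0, 1),
    using the standard logistic model."""
    return -400.0 * math.log10(1.0 / score - 1.0)

# 56% in self-play:
print(round(elo_from_score(0.56)))      # 42
# "about half of self-play" against other engines:
print(round(elo_from_score(0.56) / 2))  # 21
```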
Note that I dropped the built-in book because some people abused it for engine-engine testing (this was never intended) to make Cheng look bad.
I'm constantly frustrated and annoyed by computer chess - probably not a hobby I intend to keep, we'll see...
Anyway - have fun (hopefully 4.41 will do better in D5 than 4.40 did in D4).