Speculations about NNUE development

Discussion of anything and everything relating to chess playing software and machines.

Moderators: hgm, Dann Corbit, Harvey Williamson

Madeleine Birchfield
Posts: 278
Joined: Tue Sep 29, 2020 2:29 pm
Location: Dublin, Ireland
Full name: Madeleine Birchfield

Speculations about NNUE development

Post by Madeleine Birchfield » Wed Nov 11, 2020 10:51 pm

[Moderation] This and the following postings were split off from the "New engine releases 2020" thread, because they don't report or discuss any actual engine releases and thus should be considered off-topic.
OliverBr wrote:
Wed Nov 11, 2020 3:39 pm
In my ultra-bullet tests, 8.1 looks 141 Elo stronger than 8:

Code: Select all

   # PLAYER            :  RATING  ERROR  POINTS  PLAYED   (%)     W    D     L  D(%)  CFS(%)
   1 Halogen 8.1       :       0   ----  1331.0    2000  66.5  1128  406   466  20.3     100
   2 OliThink 5.9.0    :    -121     14   669.0    2000  33.5   466  406  1128  20.3     ---

White advantage = 12.99 +/- 7.45
Draw rate (equal opponents) = 21.89 % +/- 0.97

Code: Select all

   # PLAYER            :  RATING  ERROR  POINTS  PLAYED   (%)    W    D    L  D(%)  CFS(%)
   1 OliThink 5.9.0    :      20     14  1058.0    2000  52.9  851  414  735  20.7     100
   2 Halogen 8         :       0   ----   942.0    2000  47.1  735  414  851  20.7     ---

White advantage = 12.67 +/- 6.93
Draw rate (equal opponents) = 20.76 % +/- 0.92
Fascinating how almost everybody seems to surpass 3000 quite easily. Others need over 20 years and are still not there.
With the advent of strong and fast neural network based evaluation functions, I think the 3000 elo limit is too low, and the new limit should be 3200 elo or something.

Kieren Pearson
Posts: 59
Joined: Tue Dec 31, 2019 1:52 am
Full name: Kieren Pearson

Re: New engine releases 2020

Post by Kieren Pearson » Thu Nov 12, 2020 2:22 am

OliverBr wrote:
Wed Nov 11, 2020 3:39 pm
Kieren Pearson wrote:
Wed Nov 11, 2020 10:08 am
New Halogen release

https://github.com/KierenP/Halogen/releases/tag/v8.1

Should be on CCRL 40/15 about 3050 elo
In my ultra-bullet tests, 8.1 looks 141 Elo stronger than 8:

Code: Select all

   # PLAYER            :  RATING  ERROR  POINTS  PLAYED   (%)     W    D     L  D(%)  CFS(%)
   1 Halogen 8.1       :       0   ----  1331.0    2000  66.5  1128  406   466  20.3     100
   2 OliThink 5.9.0    :    -121     14   669.0    2000  33.5   466  406  1128  20.3     ---

White advantage = 12.99 +/- 7.45
Draw rate (equal opponents) = 21.89 % +/- 0.97

Code: Select all

   # PLAYER            :  RATING  ERROR  POINTS  PLAYED   (%)    W    D    L  D(%)  CFS(%)
   1 OliThink 5.9.0    :      20     14  1058.0    2000  52.9  851  414  735  20.7     100
   2 Halogen 8         :       0   ----   942.0    2000  47.1  735  414  851  20.7     ---

White advantage = 12.67 +/- 6.93
Draw rate (equal opponents) = 20.76 % +/- 0.92
Fascinating how almost everybody seems to surpass 3000 quite easily. Others need over 20 years and are still not there.
Thanks Oliver for the very quick initial test results. Halogen in self play was around 160 elo stronger so it seems like most of that gain is going to translate against other opponents which is always a great sign!

Kieren Pearson
Posts: 59
Joined: Tue Dec 31, 2019 1:52 am
Full name: Kieren Pearson

Re: New engine releases 2020

Post by Kieren Pearson » Thu Nov 12, 2020 2:27 am

Madeleine Birchfield wrote:
Wed Nov 11, 2020 10:51 pm
OliverBr wrote:
Wed Nov 11, 2020 3:39 pm
In my ultra-bullet tests, 8.1 looks 141 Elo stronger than 8:

Code: Select all

   # PLAYER            :  RATING  ERROR  POINTS  PLAYED   (%)     W    D     L  D(%)  CFS(%)
   1 Halogen 8.1       :       0   ----  1331.0    2000  66.5  1128  406   466  20.3     100
   2 OliThink 5.9.0    :    -121     14   669.0    2000  33.5   466  406  1128  20.3     ---

White advantage = 12.99 +/- 7.45
Draw rate (equal opponents) = 21.89 % +/- 0.97

Code: Select all

   # PLAYER            :  RATING  ERROR  POINTS  PLAYED   (%)    W    D    L  D(%)  CFS(%)
   1 OliThink 5.9.0    :      20     14  1058.0    2000  52.9  851  414  735  20.7     100
   2 Halogen 8         :       0   ----   942.0    2000  47.1  735  414  851  20.7     ---

White advantage = 12.67 +/- 6.93
Draw rate (equal opponents) = 20.76 % +/- 0.92
Fascinating how almost everybody seems to surpass 3000 quite easily. Others need over 20 years and are still not there.
With the advent of strong and fast neural network based evaluation functions, I think the 3000 elo limit is too low, and the new limit should be 3200 elo or something.
As technology advances people are always going to be surpassing what people did before them with less effort. My elo gains would definitely not have been possible without the massive compute power on OB I was able to use.

I think NN evaluations will continue to be developed and in 10 years will be way ahead of where they are now. Right now not many engines are above 3200, but I wouldn't be surprised if in 10 years people are surpassing 3400 at the same rate they're passing 3000 now.

OliverBr
Posts: 668
Joined: Tue Dec 18, 2007 8:38 pm
Location: Munich, Germany
Full name: Dr. Oliver Brausch
Contact:

Re: New engine releases 2020

Post by OliverBr » Thu Nov 12, 2020 4:59 am

Madeleine Birchfield wrote:
Wed Nov 11, 2020 10:51 pm
With the advent of strong and fast neural network based evaluation functions, I think the 3000 elo limit is too low, and the new limit should be 3200 elo or something.
I wonder what the "true limit" would be, when everybody was only using his own code based on his own ideas.
Is there anybody who wrote his own NNUE code and uses his own nets?
Chess Engine OliThink: http://brausch.org/home/chess
OliThink GitHub: https://github.com/olithink

Madeleine Birchfield
Posts: 278
Joined: Tue Sep 29, 2020 2:29 pm
Location: Dublin, Ireland
Full name: Madeleine Birchfield

Re: New engine releases 2020

Post by Madeleine Birchfield » Thu Nov 12, 2020 5:35 am

OliverBr wrote:
Thu Nov 12, 2020 4:59 am
I wonder what the "true limit" would be, when everybody was only using his own code based on his own ideas.
Is there anybody who wrote his own NNUE code and uses his own nets?
The three engines so far with their own NNUE architecture code are Seer, Halogen (which its author Kieren Pearson has tested to have reached 3050 elo), and Dragon by Komodo Chess. Everybody else so far has been tinkering with a copy of the NNUE code from Hisayori Noda's Stockfish fork grafted onto their engine, including the official Stockfish team themselves.

Edit: I forgot Minic, which since Minic 3 also uses its own NNUE code.
Last edited by Madeleine Birchfield on Thu Nov 12, 2020 5:41 am, edited 1 time in total.

Madeleine Birchfield
Posts: 278
Joined: Tue Sep 29, 2020 2:29 pm
Location: Dublin, Ireland
Full name: Madeleine Birchfield

Re: New engine releases 2020

Post by Madeleine Birchfield » Thu Nov 12, 2020 5:40 am

Kieren Pearson wrote:
Wed Nov 11, 2020 10:08 am
New Halogen release

https://github.com/KierenP/Halogen/releases/tag/v8.1

Should be on CCRL 40/15 about 3050 elo
I hope we can see Halogen at TCEC and CCCC soon.

connor_mcmonigle
Posts: 79
Joined: Sun Sep 06, 2020 2:40 am
Full name: Connor McMonigle

Re: New engine releases 2020

Post by connor_mcmonigle » Thu Nov 12, 2020 8:22 am

Madeleine Birchfield wrote:
Thu Nov 12, 2020 5:35 am
OliverBr wrote:
Thu Nov 12, 2020 4:59 am
I wonder what the "true limit" would be, when everybody was only using his own code based on his own ideas.
Is there anybody who wrote his own NNUE code and uses his own nets?
The three engines so far with their own NNUE architecture code are Seer, Halogen (which its author Kieren Pearson has tested to have reached 3050 elo), and Dragon by Komodo Chess. Everybody else so far has been tinkering with a copy of the NNUE code from Hisayori Noda's Stockfish fork grafted onto their engine, including the official Stockfish team themselves.

Edit: I forgot Minic, which since Minic 3 also uses its own NNUE code.
FYI, we don't know for sure what the Komodo team is doing, but, in all likelihood, it's using the exact same training code from SF and network architecture (if someone from the Komodo team can correct me here, please do). This means, to avoid GPL, they likely rewrote just the inference code much like many top FOSS engines have done (see Vajolet and RubiChess for examples).

Currently Minic is actually using Seer's NN implementation, but Vivien is taking it in his own direction from there :D Regardless, I'm about to make some pretty serious changes to my network architecture anyway (moving away from halfkp features altogether to what I'm dubbing adjacent-piece-piece features, which I believe to be superior for chess).
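[Editor's note] For readers unfamiliar with the "halfkp" features mentioned above: each input feature is a (own king square, non-king piece, piece square) triple, so the whole first-layer input is indexed by those three values. The sketch below is a deliberately simplified illustration; the real Stockfish/YaneuraOu code also mirrors squares for the black perspective and offsets indices by one (the BONA_PIECE convention), both of which are omitted here.

```cpp
// Simplified HalfKP-style feature index: one input feature per
// (own king square, non-king piece id, piece square) triple.
// Orientation/mirroring and the +1 BONA_PIECE offset used by the
// real implementations are intentionally left out of this sketch.
constexpr int kNumSquares  = 64;
constexpr int kNumPieceIds = 10; // 5 non-king piece types x 2 colours

constexpr int halfkp_index(int king_sq, int piece_id, int piece_sq) {
    return (king_sq * kNumPieceIds + piece_id) * kNumSquares + piece_sq;
}

// Total feature count under this simplification: 64 * 10 * 64 = 40960.
constexpr int kNumFeatures = kNumSquares * kNumPieceIds * kNumSquares;
```

Because the king square is part of every index, a king move invalidates the whole feature set for that perspective, which is why king moves force a full accumulator refresh in HalfKP-based engines.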

Congrats to Kieren! It looks like I have a lot of catch-up work to do if I want to surpass Halogen again.

Madeleine Birchfield
Posts: 278
Joined: Tue Sep 29, 2020 2:29 pm
Location: Dublin, Ireland
Full name: Madeleine Birchfield

Re: New engine releases 2020

Post by Madeleine Birchfield » Thu Nov 12, 2020 9:04 am

connor_mcmonigle wrote:
Thu Nov 12, 2020 8:22 am
FYI, we don't know for sure what the Komodo team is doing, but, in all likelihood, it's using the exact same training code from SF and network architecture (if someone from the Komodo team can correct me here, please do). This means, to avoid GPL, they likely rewrote just the inference code much like many top FOSS engines have done (see Vajolet and RubiChess for examples).
Huh, the Komodo team said
lkaufman wrote:
Mon Nov 02, 2020 5:08 pm
We are also announcing our new "Dragon" version of Komodo, which is now playing in the chess.com CCC tournament as "Mystery" and which we expect to release soon. It uses the new NNUE technology that was developed for the game of shogi, but not the NNUE code. The search is Komodo search (with some parameters tuned), and the nets we use are all trained on Komodo games and Komodo evals. The net is embedded so the user need not do anything special to use it (though it can be turned off).
which I interpreted to be referring to the search used in training Dragon (Stockfish's trainer uses Stockfish search), but it could just refer to the fact that Dragon uses the same search as Komodo.

And btw, Vajolet hasn't been updated in a year, so perhaps you are thinking of a different engine?

Guenther
Posts: 3694
Joined: Wed Oct 01, 2008 4:33 am
Location: Regensburg, Germany
Full name: Guenther Simon
Contact:

Re: New engine releases 2020

Post by Guenther » Thu Nov 12, 2020 10:38 am

Madeleine Birchfield wrote:
Thu Nov 12, 2020 9:04 am
connor_mcmonigle wrote:
Thu Nov 12, 2020 8:22 am
FYI, we don't know for sure what the Komodo team is doing, but, in all likelihood, it's using the exact same training code from SF and network architecture (if someone from the Komodo team can correct me here, please do). This means, to avoid GPL, they likely rewrote just the inference code much like many top FOSS engines have done (see Vajolet and RubiChess for examples).
...

And btw, Vajolet hasn't been updated in an year so perhaps you are thinking of a different engine?
You don't need to release something while doing ('new') things...
https://github.com/elcabesa/vajolet/branches

connor_mcmonigle
Posts: 79
Joined: Sun Sep 06, 2020 2:40 am
Full name: Connor McMonigle

Re: New engine releases 2020

Post by connor_mcmonigle » Thu Nov 12, 2020 4:09 pm

Madeleine Birchfield wrote:
Thu Nov 12, 2020 9:04 am
...which I interpreted to be referring to the search used in training Dragon (Stockfish's trainer uses Stockfish search), but it could just refer to the fact that Dragon uses the same search as Komodo.
Yes. They claim to be, and very likely are, using Komodo's games as training data, but this doesn't mean they implemented new training code or made improvements/changes to the network architecture. That is exceedingly improbable imho.

Likely, what they did for training is the same as DKappe has been doing for a while which involves converting separate data obtained from self play games of a different engine into the packed fen format used by the SF trainer. It seems rather likely they didn't even bother swapping out the SF qsearch code used by the trainer.

To then actually run the networks produced by this process in their engine, they presumably got someone to exactly rewrite just the inference code so they could circumvent the GPL restrictions. If this is the case, I would personally like to see the computer Shogi developers who invested a lot of effort into writing the incredibly optimized and clever training code added to the Dragon authors list. They are responsible for the large majority of the work involved in the increase in strength.

Both Halogen and Seer are, by comparison, entirely original. Both just happen to rely on the "efficiently updatable" idea. They probably shouldn't be lumped into the same category as Komodo+NNUE.
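[Editor's note] The "efficiently updatable" idea shared by all these engines is worth spelling out: after a move, only a handful of input features change, so instead of recomputing the full first-layer matrix-vector product, the engine keeps an accumulator and adds/subtracts just the weight columns of the flipped features. A minimal sketch, with illustrative sizes that don't match any particular engine's real architecture:

```cpp
#include <array>
#include <cstdint>
#include <vector>

// Hidden-layer width is illustrative, not any engine's real value.
constexpr int kHidden = 256;

// Running first-layer pre-activations for one perspective.
struct Accumulator {
    std::array<int16_t, kHidden> v{};
};

// weights[f] is the column of first-layer weights for input feature f.
// A move flips only a few features, so we touch only those columns
// instead of redoing the whole matrix-vector product.
void update(Accumulator& acc,
            const std::vector<std::array<int16_t, kHidden>>& weights,
            const std::vector<int>& removed,
            const std::vector<int>& added) {
    for (int f : removed)
        for (int i = 0; i < kHidden; ++i) acc.v[i] -= weights[f][i];
    for (int f : added)
        for (int i = 0; i < kHidden; ++i) acc.v[i] += weights[f][i];
}
```

A quiet move removes one (piece, from-square) feature and adds one (piece, to-square) feature, so the cost per node is a couple of column additions rather than a full forward pass through the first layer.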

(Also see Vajolet's NNUE branch)
