Speculations about NNUE development

Discussion of anything and everything relating to chess playing software and machines.

Moderators: Harvey Williamson, Dann Corbit, hgm

Madeleine Birchfield
Posts: 211
Joined: Tue Sep 29, 2020 2:29 pm
Location: Dublin, Ireland
Full name: Madeleine Birchfield

Re: New engine releases 2020

Post by Madeleine Birchfield » Sat Nov 21, 2020 8:29 am

Guenther wrote:
Sat Nov 21, 2020 7:50 am
It seems this thread was hijacked for speculations about NNUE
(especially Komodo ones, which were never a matter in this thread before and shouldn't be,
as I never announce commercial releases).
I suggest splitting that part off from the original thread.

Somehow it started with 'Madeleine' dropping in.

Guenther
It was Dietrich Kappe restarting a 9-day-old conversation.

dkappe
Posts: 573
Joined: Tue Aug 21, 2018 5:52 pm
Full name: Dietrich Kappe

Re: New engine releases 2020

Post by dkappe » Sat Nov 21, 2020 3:08 pm

AndrewGrant wrote:
Sat Nov 21, 2020 7:19 am
dkappe wrote:
Sat Nov 21, 2020 6:52 am
AndrewGrant wrote:
Sat Nov 21, 2020 6:49 am
I'll note that you failed to deny the claims.
You mean the baseless speculations? Note what you like, Andrew, but your rage posts are somewhat tiring.
Rage? What? Also, interesting phrase, "baseless speculations". "Baseless accusations" is a thing, but baseless speculations? That is new.
As if something from the newspapers of 1957 is new.

Tord
Posts: 13
Joined: Tue Feb 27, 2018 10:29 am

Re: New engine releases 2020

Post by Tord » Wed Nov 25, 2020 2:04 pm

dkappe wrote:
Sat Nov 21, 2020 6:34 am
P.S. On a more useful note, I’ve started using Tord Romstad’s excellent Chess.jl library (https://github.com/romstad/Chess.jl), though it has one major castling bug that I’m working to fix. Pretty speedy for stuff like qsearch. :D
I'm glad you like it!

Could you please let me know what that castling bug is? Since I'm presumably more familiar with the code than you are, I think I could fix it quite easily.

Edit: Nevermind, I just saw that there's a GitHub issue on it. I'll have a look.
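(The quiescence search dkappe mentions timing above can be sketched in a few lines. This is a generic illustration of the algorithm only, using a toy stand-in position type rather than Chess.jl's actual API; the `Pos` class and its fields are invented for the example.)

```python
# Minimal quiescence-search sketch (negamax form).
# A "position" here is just a static eval (from the side to move's
# point of view) plus the positions reachable by capture moves.
# This stands in for a real board library like Chess.jl.

class Pos:
    def __init__(self, static_eval, captures=()):
        self.static_eval = static_eval  # score for the side to move
        self.captures = captures        # child positions after each capture

def qsearch(pos, alpha, beta):
    """Search only captures until the position is quiet."""
    stand_pat = pos.static_eval
    if stand_pat >= beta:          # already good enough: fail-hard cutoff
        return beta
    alpha = max(alpha, stand_pat)  # we may always decline to capture
    for child in pos.captures:
        score = -qsearch(child, -beta, -alpha)
        if score >= beta:
            return beta
        alpha = max(alpha, score)
    return alpha

INF = 10**9

# Capturing a hanging piece worth 300 is kept...
free_piece = Pos(0, (Pos(-300),))
# ...but capturing a defended pawn (opponent recaptures back to 0)
# gains nothing over standing pat.
defended_pawn = Pos(0, (Pos(-100, (Pos(0),)),))

print(qsearch(free_piece, -INF, INF))     # 300
print(qsearch(defended_pawn, -INF, INF))  # 0
```

Because only captures are expanded, the tree is tiny compared to a full search, which is why a fast move generator makes qsearch-style workloads a good speed benchmark.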

dkappe
Posts: 573
Joined: Tue Aug 21, 2018 5:52 pm
Full name: Dietrich Kappe

Re: New engine releases 2020

Post by dkappe » Wed Nov 25, 2020 5:26 pm

Tord wrote:
Wed Nov 25, 2020 2:04 pm
dkappe wrote:
Sat Nov 21, 2020 6:34 am
P.S. On a more useful note, I’ve started using Tord Romstad’s excellent Chess.jl library (https://github.com/romstad/Chess.jl), though it has one major castling bug that I’m working to fix. Pretty speedy for stuff like qsearch. :D
I'm glad you like it!

Could you please let me know what that castling bug is? Since I'm presumably more familiar with the code than you are, I think I could fix it quite easily.

Edit: Nevermind, I just saw that there's a GitHub issue on it. I'll have a look.
Just as I created a pull request. :) Thanks for fixing it.

syzygy
Posts: 4821
Joined: Tue Feb 28, 2012 10:56 pm

Re: New engine releases 2020

Post by syzygy » Wed Nov 25, 2020 11:49 pm

OliverBr wrote:
Thu Nov 12, 2020 4:59 am
Madeleine Birchfield wrote:
Wed Nov 11, 2020 10:51 pm
With the advent of strong and fast neural network based evaluation functions, I think the 3000 elo limit is too low, and the new limit should be 3200 elo or something.
I wonder what the "true limit" would be if everybody were only using their own code based on their own ideas.
Chess wouldn't have been invented since people would have been busy all day hunting small game. (No way you can kill a mammoth only using your own muscle power and your own ideas.)

ChickenLogic
Posts: 70
Joined: Sun Jan 20, 2019 10:23 am
Full name: Julian Willemer

Re: Speculations about NNUE development

Post by ChickenLogic » Mon Nov 30, 2020 7:04 pm

Once we figure out how to beat the SF master net with the tools we have available, there will be a "multinet" which replaces the NNUE+classical hybrid approach. I've shown that it is easy to beat the classical eval with NNUE node-for-node while also being faster than the classical eval. Note that there are currently conflicts between the initial implementation and the current optimisations. It has a 256+x input slice plus y hidden layers. Then we decide whether to use the 256 input slice or the smaller input slice (e.g. 64) with a 16x ReLU layer.
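(A rough sketch of the head-selection idea described above: one shared accumulator feeding either a full-width head or a cheaper head that reads only a small slice of it. All dimensions, weights, and the selection rule are placeholder assumptions for illustration, not the actual Stockfish multinet code; `x` and `y` from the post are left unspecified there, so concrete numbers here are invented.)

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy sizes: real NNUE accumulators are far wider and the weights
# are trained and quantised, not random.
N_FEATURES, BIG, SMALL, HIDDEN = 64, 256, 64, 32

W_acc = rng.standard_normal((N_FEATURES, BIG)) * 0.1  # shared accumulator

# Two heads: a full one over all 256 accumulator outputs, and a
# cheap one over only the first 64 (the "smaller input slice").
W_big, w_out_big = rng.standard_normal((BIG, HIDDEN)) * 0.1, rng.standard_normal(HIDDEN)
W_small, w_out_small = rng.standard_normal((SMALL, HIDDEN)) * 0.1, rng.standard_normal(HIDDEN)

def relu(v):
    return np.maximum(v, 0.0)

def evaluate(features, use_small):
    """Per position, pick the cheap slice head or the full head."""
    acc = relu(features @ W_acc)  # 256-wide shared accumulator
    if use_small:
        hidden = relu(acc[:SMALL] @ W_small)
        return float(hidden @ w_out_small)
    hidden = relu(acc @ W_big)
    return float(hidden @ w_out_big)

features = (rng.random(N_FEATURES) < 0.2).astype(float)  # sparse toy input
cheap = evaluate(features, use_small=True)
full = evaluate(features, use_small=False)
```

The speed argument rests on the small head touching only a quarter of the accumulator, so most of the per-move incremental-update cost is shared between the two heads.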

In case you want to take a look at the code here it is: https://github.com/Sopel97/Stockfish/tree/multinet3
We don't have a way to train both parts of the net at once (yet). As I said, it needs to be ported to nodchip-master, as without the current optimisations it loses much more speed than it gains. We have checked with a "fake multinet" on up-to-date code and it reached 99.x% of NNUE speed, so I'm fairly confident it will work.
Currently working on SFNN.
