Is this SF NN almost like 20 MB book?

Discussion of anything and everything relating to chess playing software and machines.

Moderators: Harvey Williamson, Dann Corbit, hgm

Jouni
Posts: 2241
Joined: Wed Mar 08, 2006 7:15 pm

Is this SF NN almost like 20 MB book?

Post by Jouni » Tue Aug 04, 2020 5:26 pm

I ran a short 100-game bullet-level test: SF dev with a Fritz tournament book against SF NNUE with no book at all. NNUE scored 59%, i.e. +63 Elo. Sometimes the NN played 18 theory moves at bullet level - stunning! There is a UCI parameter BookMoves with a default value of 16. Is that a hint?
Jouni

Nay Lin Tun
Posts: 651
Joined: Mon Jan 16, 2012 5:34 am

Re: Is this SF NN almost like 20 MB book?

Post by Nay Lin Tun » Tue Aug 04, 2020 5:37 pm

People should start learning how NNs work.

Otherwise your question will be laughed at, like asking, "Is the earth flat?"

Gian-Carlo Pascutto
Posts: 1204
Joined: Sat Dec 13, 2008 6:00 pm
Contact:

Re: Is this SF NN almost like 20 MB book?

Post by Gian-Carlo Pascutto » Tue Aug 04, 2020 5:51 pm

Nay Lin Tun wrote:
Tue Aug 04, 2020 5:37 pm
People should start learning how NNs work.
If you give a NN a ton of opening positions to learn from, it will definitely learn to remember them.

But for NNUE it is more complicated. The output does not contain move recommendations, just the evaluation of the position. I don't think the current learning process tries to match move outcomes, just game outcomes. So there would be no opportunity for such learning. It could remember which book positions are 'bad' and consequently play towards the 'good' ones, though.
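(A toy sketch of that point, with a made-up tiny network in place of the real NNUE architecture: the net maps a position to one scalar evaluation and nothing else; any "move choice" comes from the search comparing the evals of child positions.)

```python
import numpy as np

rng = np.random.default_rng(0)

# Stand-in for an NNUE-style evaluation net: position features in,
# one scalar score out. No move recommendations anywhere.
W1 = rng.normal(size=(8, 32)); b1 = np.zeros(32)
W2 = rng.normal(size=(32, 1)); b2 = np.zeros(1)

def evaluate(features):
    """Return a single scalar evaluation for one position."""
    h = np.maximum(features @ W1 + b1, 0.0)   # hidden layer, ReLU
    return float(h @ W2 + b2)                 # scalar output only

# "Move selection" happens outside the net: the search evaluates the
# child positions and picks whichever eval is best for the side to move.
children = {m: rng.normal(size=8) for m in ["e4", "d4", "Nf3"]}
best_move = max(children, key=lambda m: evaluate(children[m]))
print(best_move)
```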

The whole "books or no books" discussion is silly. If one engine can use 100 MB of data files, then so should the others. It doesn't matter what is contained in them.

MikeB
Posts: 4487
Joined: Thu Mar 09, 2006 5:34 am
Location: Pen Argyl, Pennsylvania

Re: Is this SF NN almost like 20 MB book?

Post by MikeB » Tue Aug 04, 2020 5:53 pm

Jouni wrote:
Tue Aug 04, 2020 5:26 pm
I ran a short 100-game bullet-level test: SF dev with a Fritz tournament book against SF NNUE with no book at all. NNUE scored 59%, i.e. +63 Elo. Sometimes the NN played 18 theory moves at bullet level - stunning! There is a UCI parameter BookMoves with a default value of 16. Is that a hint?
Not at all. It is driven by AI and statistics. It may seem like an opening book at times, but that is not the science behind it. It is more similar to pattern recognition at a very high level, combined with a search function, so certain patterns that might be ignored or pruned away by the classical Stockfish engine are no longer pruned if they look interesting based on statistics from prior games. No different from how baseball teams now position their fielders.

Edit: It is interesting how well the NN plays openings with NO BOOK! Scary - in a good way, of course.

Dann Corbit
Posts: 11699
Joined: Wed Mar 08, 2006 7:57 pm
Location: Redmond, WA USA
Contact:

Re: Is this SF NN almost like 20 MB book?

Post by Dann Corbit » Tue Aug 04, 2020 5:59 pm

I don't think the question "Is this SF NN almost like 20 MB book?" was meant literally.
Jouni was simply noticing that sf nnue plays openings very well.

IOW, "Look, it plays the openings so well, we can throw away the books." is what I think he was saying.

LC0 also plays openings very well. I suspect that NN approaches work very well for the initial, quiet part of the game.
Taking ideas is not a vice, it is a virtue. We have another word for this. It is called learning.
But sharing ideas is an even greater virtue. We have another word for this. It is called teaching.

corres
Posts: 3600
Joined: Wed Nov 18, 2015 10:41 am
Location: hungary

Re: Is this SF NN almost like 20 MB book?

Post by corres » Tue Aug 04, 2020 6:02 pm

Gian-Carlo Pascutto wrote:
Tue Aug 04, 2020 5:51 pm
...
If one engine can use 100 MB of data files, then so should the others. It doesn't matter what is contained in them.
??

Gian-Carlo Pascutto
Posts: 1204
Joined: Sat Dec 13, 2008 6:00 pm
Contact:

Re: Is this SF NN almost like 20 MB book?

Post by Gian-Carlo Pascutto » Tue Aug 04, 2020 6:03 pm

corres wrote:
Tue Aug 04, 2020 6:02 pm
Gian-Carlo Pascutto wrote:
Tue Aug 04, 2020 5:51 pm
...
If one engine can use 100 MB of data files, then so should the others. It doesn't matter what is contained in them.
??
Some tournaments disallow "books" but allow "neural networks", even if this distinction does not exist in reality, because you can train a neural network to remember openings.

corres
Posts: 3600
Joined: Wed Nov 18, 2015 10:41 am
Location: hungary

Re: Is this SF NN almost like 20 MB book?

Post by corres » Tue Aug 04, 2020 6:21 pm

Gian-Carlo Pascutto wrote:
Tue Aug 04, 2020 6:03 pm
...
Some tournaments disallow "books" but allow "neural networks", even if this distinction does not exist in reality, because you can train a neural network to remember openings.
But most chess engines cannot "read" a neural net, so they need a common opening book.

Alayan
Posts: 489
Joined: Tue Nov 19, 2019 7:48 pm
Full name: Alayan Feh

Re: Is this SF NN almost like 20 MB book?

Post by Alayan » Tue Aug 04, 2020 6:30 pm

It's theoretically possible to train a NN to learn and match a classical opening book, then have a hybrid engine that uses the NN move/eval as long as the NN claims the position is in book, and switches to something else afterwards.

This is just adding a layer of obfuscation and work (retraining the NN once the book gets a significant enough update) to produce the same end result.

Except this obfuscation scheme wouldn't be banned in tournaments that ban classical book.

Banning books ensures the focus is on how well search and evaluation work for classical engines, rather than on a book war; but when you mix in NN engines that are trained to produce move suggestions (e.g. the so-called "policy head" of Leela), the line gets blurry.
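(A toy illustration of the distinction being drawn here, with made-up shapes and random weights standing in for real networks: a Leela-style policy head outputs a probability per move, so the net itself can suggest book moves, while an NNUE-style value net only scores positions and leaves move choice to the search.)

```python
import numpy as np

rng = np.random.default_rng(2)

features = rng.normal(size=8)          # toy stand-in for position features
moves = ["e4", "d4", "Nf3", "c4"]

# Leela-style: a policy head maps the position directly to a probability
# distribution over moves, so the net itself recommends moves.
policy_logits = rng.normal(size=(len(moves), 8)) @ features
policy = np.exp(policy_logits) / np.exp(policy_logits).sum()
policy_move = moves[int(np.argmax(policy))]

# NNUE-style: the net only returns a scalar score for a position; the
# search must generate child positions and compare their evals.
w_value = rng.normal(size=8)

def value(pos_features):
    """Scalar evaluation of one position; no move information."""
    return float(pos_features @ w_value)

children = {m: rng.normal(size=8) for m in moves}  # pretend child positions
eval_move = max(children, key=lambda m: value(children[m]))

print(policy_move, eval_move)
```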

dkappe
Posts: 566
Joined: Tue Aug 21, 2018 5:52 pm
Full name: Dietrich Kappe

Re: Is this SF NN almost like 20 MB book?

Post by dkappe » Tue Aug 04, 2020 7:44 pm

Oh boy. Most NNUE nets are trained from the start without regard to game outcome, often at depth 8. Most of the positions they see are maybe 18 to 19 plies into the game or later. They are, in essence, an approximation of an engine eval at depth 8.

You can look at my Toga and Night Nurse (based on Bad Gyal) nets, compare them to each other and to the many Stockfish nets, and then explain to me how they are memorizing openings.
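(A minimal sketch of the training target described above, with toy data and a linear "net" standing in for the real NNUE architecture and the real depth-8 labels; everything here is invented for illustration. The point is that the loss is regression against a search score, with no game result or move label anywhere.)

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical data: position features paired with the score a
# fixed-depth (e.g. depth-8) engine search assigned to each position.
X = rng.normal(size=(256, 8))       # 256 positions, 8 toy features
target = X @ rng.normal(size=8)     # pretend "depth-8 eval" labels

# One linear "net" fitted by plain gradient descent on squared error:
# the target is the search score, not the game outcome.
w = np.zeros(8)
for _ in range(500):
    pred = X @ w
    grad = X.T @ (pred - target) / len(X)
    w -= 0.1 * grad

mse = float(np.mean((X @ w - target) ** 2))
print(mse)  # mean squared error against the search-score labels
```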
