Is this SF NN almost like 20 MB book?
Moderators: hgm, Rebel, chrisw
-
- Posts: 3291
- Joined: Wed Mar 08, 2006 8:15 pm
Is this SF NN almost like 20 MB book?
I made a short 100-game test at bullet level: SF dev with the Fritz tournament book against SF NNUE with no book at all. NNUE scored 59%, i.e. +63 Elo. Sometimes the NN played 18 theory moves at bullet level - stunning! There is a UCI parameter BookMoves with a default value of 16. Is it a hint?
Jouni
-
- Posts: 708
- Joined: Mon Jan 16, 2012 6:34 am
Re: Is this SF NN almost like 20 MB book?
People should start learning how NNs work.
Otherwise your question will be laughed at, as if you were asking something like, "Is the earth flat?"
-
- Posts: 1243
- Joined: Sat Dec 13, 2008 7:00 pm
Re: Is this SF NN almost like 20 MB book?
If you give a NN a ton of opening positions to learn, it will definitely learn to remember them.
But for NNUE it is more complicated. The output does not contain move recommendations, just the evaluation of the position. I don't think the current learning process tries to match move outcomes, just game outcomes. So there would be no opportunity for such learning. It could remember which book positions are 'bad' and consequently play towards the 'good' ones, though.
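To make the point above concrete: since an NNUE-style net outputs only a score for a position and no move recommendation, any "move" it plays has to come from search over evaluations. The following is a toy sketch of that idea; `nnue_eval`, the position encoding, and the move representation are all invented stand-ins, not Stockfish's actual code.

```python
def nnue_eval(position):
    # Stand-in for the real network: the net maps a position to one
    # scalar score. Here it is just a sum of toy "material" values.
    return sum(position.values())

def make_move(position, move):
    # Toy move representation: (square, delta) adjusts one value.
    new_pos = dict(position)
    sq, delta = move
    new_pos[sq] = new_pos.get(sq, 0) + delta
    return new_pos

def pick_move(position, legal_moves):
    # A real engine negates child scores for the side to move (negamax)
    # and searches many plies deep; this toy 1-ply version simply
    # maximizes the evaluation of the resulting position.
    return max(legal_moves, key=lambda m: nnue_eval(make_move(position, m)))
```

The move selection is entirely a by-product of the evaluation, which is why there is no direct slot in the training process for memorizing book moves as such.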
The whole book or "no books" discussion is silly. If one engine can use 100M of data files, then so should the others. Doesn't matter what is contained in them.
-
- Posts: 4889
- Joined: Thu Mar 09, 2006 6:34 am
- Location: Pen Argyl, Pennsylvania
Re: Is this SF NN almost like 20 MB book?
Jouni wrote: ↑Tue Aug 04, 2020 7:26 pm I made short 100 game bullet level test. SF dev with Fritz tournament book against SF NNUE no book at all. NNUE scored 59% e.g. +63 ELO. Sometimes NN played 18 theory moves at bullet level - stunning! There is UCI parameter BookMoves with default value 16. Is it a hint?
Not at all. It is driven by AI and statistics. It may seem like an opening book at times, but that is not the science behind it. It is more similar to pattern recognition at a very high level, combined with a search function: certain patterns that might be ignored or pruned away by the classical Stockfish engine are no longer pruned if they look interesting based on statistics from prior games. No different than how baseball teams now position their fielders.
Edit: It is interesting how well the NN plays openings with NO BOOK! Scary - in a good way, of course.
-
- Posts: 12541
- Joined: Wed Mar 08, 2006 8:57 pm
- Location: Redmond, WA USA
Re: Is this SF NN almost like 20 MB book?
I don't think the question "Is this SF NN almost like 20 MB book?" was meant literally.
Jouni was simply noticing that sf nnue plays openings very well.
IOW, "Look, it plays the openings so well, we can throw away the books." is what I think he was saying.
LC0 also plays openings very well. I suspect that NN approaches work very well for the initial, quiet part of the game.
Taking ideas is not a vice, it is a virtue. We have another word for this. It is called learning.
But sharing ideas is an even greater virtue. We have another word for this. It is called teaching.
-
- Posts: 3657
- Joined: Wed Nov 18, 2015 11:41 am
- Location: hungary
Re: Is this SF NN almost like 20 MB book?
Gian-Carlo Pascutto wrote: ↑Tue Aug 04, 2020 7:51 pm ...
If one engine can use 100M of data files, then so should the others. Doesn't matter what is contained in them.
??
-
- Posts: 1243
- Joined: Sat Dec 13, 2008 7:00 pm
Re: Is this SF NN almost like 20 MB book?
corres wrote: ↑Tue Aug 04, 2020 8:02 pm ??
Gian-Carlo Pascutto wrote: ↑Tue Aug 04, 2020 7:51 pm ...
If one engine can use 100M of data files, then so should the others. Doesn't matter what is contained in them.
Some tournaments disallow "books" but allow "neural networks", even though this distinction does not exist in reality, because you can train a neural network to remember openings.
-
- Posts: 3657
- Joined: Wed Nov 18, 2015 11:41 am
- Location: hungary
Re: Is this SF NN almost like 20 MB book?
Gian-Carlo Pascutto wrote: ↑Tue Aug 04, 2020 8:03 pm ...
Some tournaments disallow "books" but allow "neural networks", even if this distinction does not exist in reality, because you can train a neural network to remember openings.
But most chess engines cannot "read" a neural net, so they need a common opening book.
-
- Posts: 550
- Joined: Tue Nov 19, 2019 8:48 pm
- Full name: Alayan Feh
Re: Is this SF NN almost like 20 MB book?
It's theoretically possible to train a NN to learn and match a classical opening book, then have a hybrid engine that uses the NN move/eval as long as the NN claims the position is in book, then switches to something else.
This is just adding a layer of obfuscation and work (retraining the NN once the book gets a significant enough update) to produce the same end result.
Except this obfuscation scheme wouldn't be banned in tournaments that ban classical book.
Banning books to ensure the focus is on search and evaluation, rather than on a book war, works well for classical engines; but when mixing in NN engines that are trained to use move suggestions (e.g. the so-called "policy head" of Leela), it gets blurry.
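The hybrid scheme described above can be sketched in a few lines. Everything here is hypothetical: the lookup table stands in for what a net trained on book lines would encode implicitly, and `search_best_move` is a placeholder for the regular search, not any real engine's API.

```python
# Stand-in for the lines a net could be trained to reproduce.
# A real net would encode this implicitly in its weights, not as a table.
MEMORIZED_LINES = {
    "startpos": "e2e4",
    "startpos e2e4": "c7c5",
}

def net_book_probe(position_key):
    # Returns (claims_in_book, move): the net's claim that the position
    # is "in book" plus its memorized move suggestion.
    move = MEMORIZED_LINES.get(position_key)
    return (move is not None, move)

def search_best_move(position_key):
    # Placeholder for the engine's normal search once out of book.
    return "g1f3"

def choose_move(position_key):
    in_book, move = net_book_probe(position_key)
    return move if in_book else search_best_move(position_key)
```

The end result is indistinguishable from a classical book plus an engine, which is exactly why a "no books, but NNs allowed" rule is hard to make meaningful.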
-
- Posts: 1631
- Joined: Tue Aug 21, 2018 7:52 pm
- Full name: Dietrich Kappe
Re: Is this SF NN almost like 20 MB book?
Oh boy. Most NNUE nets are initially trained without regard to game outcome, often against evaluations at depth 8. Most of the positions they see are maybe 18 to 19 plies into the game or later. They are in essence an approximation of an engine eval at depth 8.
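The training idea described here is regression against a teacher engine's shallow-search evaluations, not memorization of moves or results. Below is a rough sketch under that assumption; a tiny linear model stands in for the real network, and the "teacher depth-8 evals" are synthetic data, not output from any actual engine.

```python
import random

random.seed(0)
N_FEATURES = 4
weights = [0.0] * N_FEATURES  # the "net" is just a linear model here

def predict(features):
    # Stand-in for the network's forward pass.
    return sum(w * f for w, f in zip(weights, features))

# Synthetic training pairs: (feature vector, teacher eval at "depth 8").
# In real NNUE training these targets come from a shallow engine search.
true_w = [1.0, -0.5, 0.25, 2.0]
data = []
for _ in range(200):
    f = [random.uniform(-1, 1) for _ in range(N_FEATURES)]
    data.append((f, sum(w * x for w, x in zip(true_w, f))))

# Plain SGD on squared error between net output and teacher eval.
lr = 0.1
for epoch in range(50):
    for f, target in data:
        err = predict(f) - target
        for i in range(N_FEATURES):
            weights[i] -= lr * err * f[i]
```

Because the loss only measures agreement with the teacher eval, any "opening knowledge" the net picks up is whatever generalizes from those positions, not a stored list of book lines.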
You can look at my Toga and Night Nurse (based on Bad Gyal) nets and compare them to each other and the many stockfish nets, then explain to me how they are memorizing openings.
Fat Titz by Stockfish, the engine with the bodaciously big net. Remember: size matters. If you want to learn more about this engine just google for "Fat Titz".