Is this SF NN almost like 20 MB book?

Discussion of anything and everything relating to chess playing software and machines.

Moderators: hgm, Rebel, chrisw

dkappe
Posts: 1631
Joined: Tue Aug 21, 2018 7:52 pm
Full name: Dietrich Kappe

Re: Is this SF NN almost like 20 MB book?

Post by dkappe »

Milos wrote: Tue Aug 04, 2020 11:40 pm
How bad the NNUE eval really is can be seen by testing SF at depth 5 with a minimal eval (material + mobility + PST) against SF-NNUE at depth 1. It's a slaughterhouse.
I’m shocked! Shocked! Thing A at depth 5 slaughtered thing B at depth 1? Hard to believe.
Fat Titz by Stockfish, the engine with the bodaciously big net. Remember: size matters. If you want to learn more about this engine just google for "Fat Titz".
Milos
Posts: 4190
Joined: Wed Nov 25, 2009 1:47 am

Re: Is this SF NN almost like 20 MB book?

Post by Milos »

dkappe wrote: Tue Aug 04, 2020 11:48 pm
Milos wrote: Tue Aug 04, 2020 11:40 pm
How bad the NNUE eval really is can be seen by testing SF at depth 5 with a minimal eval (material + mobility + PST) against SF-NNUE at depth 1. It's a slaughterhouse.
I’m shocked! Shocked! Thing A at depth 5 slaughtered thing B at depth 1? Hard to believe.
Well, thing B has the approximate eval of depth 12 or 16 or 18 or whatever stored in 20 MB of data. What would the score be if thing B had a 20 MB book instead?
All you actually managed to demonstrate with your dull essay about NNUE analysis is that NNUE is quite a crappy book. Not that it isn't one.
dkappe
Posts: 1631
Joined: Tue Aug 21, 2018 7:52 pm
Full name: Dietrich Kappe

Re: Is this SF NN almost like 20 MB book?

Post by dkappe »

Milos wrote: Wed Aug 05, 2020 12:01 am
dkappe wrote: Tue Aug 04, 2020 11:48 pm
Milos wrote: Tue Aug 04, 2020 11:40 pm
How bad the NNUE eval really is can be seen by testing SF at depth 5 with a minimal eval (material + mobility + PST) against SF-NNUE at depth 1. It's a slaughterhouse.
I’m shocked! Shocked! Thing A at depth 5 slaughtered thing B at depth 1? Hard to believe.
Well, thing B has the approximate eval of depth 12 or 16 or 18 or whatever stored in 20 MB of data. What would the score be if thing B had a 20 MB book instead?
All you actually managed to demonstrate with your dull essay about NNUE analysis is that NNUE is quite a crappy book. Not that it isn't one.
Well, do the following test: 1 & 5 you already have, now do 6 & 10, then 11 & 15. See a pattern?
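For what it's worth, a fixed-depth handicap match like the one suggested here can be scripted directly over UCI. The following is a rough sketch only, not a real test harness: the engine paths are placeholders, there is no result adjudication, and in practice a tool like cutechess-cli with per-engine depth limits would be the sane choice.

```python
import subprocess

def parse_bestmove(line):
    # A UCI engine answers "go depth N" with e.g. "bestmove e2e4 ponder e7e5".
    parts = line.split()
    return parts[1] if len(parts) >= 2 and parts[0] == "bestmove" else None

def _send(proc, command):
    proc.stdin.write(command + "\n")
    proc.stdin.flush()

def _read_until(proc, token):
    # Consume engine output until a line starting with `token` appears.
    for line in iter(proc.stdout.readline, ""):
        if line.startswith(token):
            return line.strip()
    raise RuntimeError("engine terminated unexpectedly")

def play_fixed_depth_game(cmd_white, depth_white, cmd_black, depth_black,
                          max_plies=120):
    """Play one game between two UCI engines at fixed, unequal depths.

    cmd_white/cmd_black are engine binary paths (placeholders like
    "./sf-classical"). Returns the move list; a real harness would also
    adjudicate the result, which needs a chess-rules library.
    """
    procs = []
    for cmd in (cmd_white, cmd_black):
        p = subprocess.Popen([cmd], stdin=subprocess.PIPE,
                             stdout=subprocess.PIPE, text=True)
        _send(p, "uci")
        _read_until(p, "uciok")
        procs.append(p)
    depths = (depth_white, depth_black)
    moves = []
    for ply in range(max_plies):
        side = ply % 2
        position = "position startpos"
        if moves:
            position += " moves " + " ".join(moves)
        _send(procs[side], position)
        _send(procs[side], "go depth %d" % depths[side])
        move = parse_bestmove(_read_until(procs[side], "bestmove"))
        if move is None or move == "(none)":  # mate or stalemate
            break
        moves.append(move)
    for p in procs:
        _send(p, "quit")
    return moves
```

With whatever binaries are being compared, the 1 & 5 / 6 & 10 / 11 & 15 pairs would then be e.g. `play_fixed_depth_game("./sf-classical", 10, "./sf-nnue", 6)`.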
Milos
Posts: 4190
Joined: Wed Nov 25, 2009 1:47 am

Re: Is this SF NN almost like 20 MB book?

Post by Milos »

dkappe wrote: Wed Aug 05, 2020 12:06 am
Milos wrote: Wed Aug 05, 2020 12:01 am
dkappe wrote: Tue Aug 04, 2020 11:48 pm
Milos wrote: Tue Aug 04, 2020 11:40 pm
How bad the NNUE eval really is can be seen by testing SF at depth 5 with a minimal eval (material + mobility + PST) against SF-NNUE at depth 1. It's a slaughterhouse.
I’m shocked! Shocked! Thing A at depth 5 slaughtered thing B at depth 1? Hard to believe.
Well, thing B has the approximate eval of depth 12 or 16 or 18 or whatever stored in 20 MB of data. What would the score be if thing B had a 20 MB book instead?
All you actually managed to demonstrate with your dull essay about NNUE analysis is that NNUE is quite a crappy book. Not that it isn't one.
Well, do the following test: 1 & 5 you already have, now do 6 & 10, then 11 & 15. See a pattern?
The difference would actually shrink slightly from 1 vs 5 to 6 vs 10, and then it would increase back to the original. But again, this only tells us about the search; it tells us nothing about the evaluation.
Regarding the book, its impact is significantly reduced once you go to higher depths. But that is only the case with general books like Cerebellum; with a targeted book that is of course not the case. My point is that using a general book generated by the engine itself is not much different (fairness-wise) from using an internal eval trained by the same engine.
dkappe
Posts: 1631
Joined: Tue Aug 21, 2018 7:52 pm
Full name: Dietrich Kappe

Re: Is this SF NN almost like 20 MB book?

Post by dkappe »

Milos wrote: Wed Aug 05, 2020 1:32 am
dkappe wrote: Wed Aug 05, 2020 12:06 am
Well, do the following test: 1 & 5 you already have, now do 6 & 10, then 11 & 15. See a pattern?
The difference would actually shrink slightly from 1 vs 5 to 6 vs 10, and then it would increase back to the original. But again, this only tells us about the search; it tells us nothing about the evaluation.
Regarding the book, its impact is significantly reduced once you go to higher depths. But that is only the case with general books like Cerebellum; with a targeted book that is of course not the case. My point is that using a general book generated by the engine itself is not much different (fairness-wise) from using an internal eval trained by the same engine.
Have you actually run the test, or are you just speculating?
cucumber
Posts: 144
Joined: Sun Oct 14, 2018 8:21 pm
Full name: JSmith

Re: Is this SF NN almost like 20 MB book?

Post by cucumber »

Milos wrote: Wed Aug 05, 2020 12:01 am
dkappe wrote: Tue Aug 04, 2020 11:48 pm
Milos wrote: Tue Aug 04, 2020 11:40 pm
How bad the NNUE eval really is can be seen by testing SF at depth 5 with a minimal eval (material + mobility + PST) against SF-NNUE at depth 1. It's a slaughterhouse.
I’m shocked! Shocked! Thing A at depth 5 slaughtered thing B at depth 1? Hard to believe.
Well, thing B has the approximate eval of depth 12 or 16 or 18 or whatever stored in 20 MB of data. What would the score be if thing B had a 20 MB book instead?
All you actually managed to demonstrate with your dull essay about NNUE analysis is that NNUE is quite a crappy book. Not that it isn't one.
Isn’t that true of literally every evaluation function? Let your evaluation function be “return rand();” and you can use it to generate very crappy opening evals, too. Yet nobody would consider this an opening book.
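The "return rand();" point can be made concrete with a toy sketch. Everything here is made up for illustration (the candidate move list, the function names); it is not a real engine, just a random eval driving a depth-1 pick:

```python
import random

# Toy illustration of a "return rand();" evaluation -- not a real engine.
OPENING_MOVES = ["e4", "d4", "c4", "Nf3", "g3"]  # hypothetical candidates

def rand_eval(_move):
    # The evaluation function the post jokes about: pure noise.
    return random.random()

def pick_opening(seed):
    # A depth-1 "search": score every candidate move and take the best.
    random.seed(seed)
    return max(OPENING_MOVES, key=rand_eval)

# Each run (seed) produces a confident-looking "opening preference",
# and a different one each time -- yet nobody would call this a book.
print(pick_opening(1), pick_opening(2))
```

The output is deterministic per seed but varies across seeds, which is exactly the sense in which any eval "generates opening evals" without thereby being an opening book.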

The evaluation definitely doesn’t provide any sort of “good” opening book without search, since its opening preference varies with depth. The combination of search plus evaluation might encode a book by some silly definition, but the argument can be made that that’s also a side effect of SPSA tuning of traditional Stockfish. SF’s search and midgame evaluation parameters have been tuned to play out of book as well as possible.

This ends up being an argument over the semantics of “opening book,” where you try to stretch “opening book” into a definition that nobody would otherwise use.
cucumber
Posts: 144
Joined: Sun Oct 14, 2018 8:21 pm
Full name: JSmith

Re: Is this SF NN almost like 20 MB book?

Post by cucumber »

Twipply wrote: Tue Aug 04, 2020 11:48 pm
dkappe wrote: Tue Aug 04, 2020 11:32 pm I’m sorry I hurt your feelings. :D
I reacted strongly not because of feelings, but because I think this topic has basically invalidated some of the more recent TCEC Superfinal results, and the admins there should stop ignoring it. However, even if my feelings were hurt, that would not invalidate what I've said nor would it validate your post.
dkappe wrote: Tue Aug 04, 2020 11:32 pm (BTW, I found your engine to be an excellent sparring partner during the development of a0lite.)
Thanks. I'm glad it worked well for you.
Superfinal results haven’t been statistically significant for a while, they’re played on hardware that nobody would consider accessible, the engines constantly update and the results could theoretically be outdated not long after the sufi begins, etc. Getting “valid” Superfinal results is incredibly challenging. Attributing that challenge solely to NN opening behavior is absurd. It’s just a drop in the ocean of other issues.
jp
Posts: 1470
Joined: Mon Apr 23, 2018 7:54 am

Re: Is this SF NN almost like 20 MB book?

Post by jp »

Dann Corbit wrote: Tue Aug 04, 2020 7:59 pm I suspect that NN approaches work very well for the initial, quiet part of the game.
The initial part of the game is not quiet. You are conflating two separate suspicions. (The first suspicion probably has a lot more evidence for it than the second.)
Ovyron
Posts: 4556
Joined: Tue Jul 03, 2007 4:30 am

Re: Is this SF NN almost like 20 MB book?

Post by Ovyron »

Jouni wrote: Tue Aug 04, 2020 7:26 pm Is this SF NN almost like 20 MB book?
It's quite the opposite! NNUE isn't learning what opening moves are good. It's learning what moves are good against the openings it plays.

It figures that if you trained a net using the Cerebellum library, it would not end up playing like it; it’d end up playing the best moves that defeated it. An anti-Brainfish net. But it’s unknown whether it’d be any good against other opponents.
Twipply
Posts: 9
Joined: Fri Dec 02, 2016 8:55 pm

Re: Is this SF NN almost like 20 MB book?

Post by Twipply »

cucumber wrote: Wed Aug 05, 2020 9:17 am
Twipply wrote: Tue Aug 04, 2020 11:48 pm I reacted strongly not because of feelings, but because I think this topic has basically invalidated some of the more recent TCEC Superfinal results, and the admins there should stop ignoring it.
Superfinal results haven’t been statistically significant for a while, they’re played on hardware that nobody would consider accessible, the engines constantly update and the results could theoretically be outdated not long after the sufi begins, etc. Getting “valid” Superfinal results is incredibly challenging. Attributing that challenge solely to NN opening behavior is absurd. It’s just a drop in the ocean of other issues.
My mistake, I didn't mean to suggest that the "NN = book" idea is my only issue with the validity of the TCEC Superfinals. When I said they're invalid, I meant in the sense that if it's not a fair fight, then I don't care about the result - not unless the underdog manages to win despite the handicap. Of course, any engine author, myself included, should realise that Superfinal results are not likely to be statistically significant.
Engine Programming on Discord -- https://discord.gg/invite/YctB2p4