I’m shocked! Shocked! Thing A at depth 5 slaughtered thing B at depth 1? Hard to believe.
Is this SF NN almost like 20 MB book?
Moderators: hgm, Rebel, chrisw
-
- Posts: 1631
- Joined: Tue Aug 21, 2018 7:52 pm
- Full name: Dietrich Kappe
Re: Is this SF NN almost like 20 MB book?
Fat Titz by Stockfish, the engine with the bodaciously big net. Remember: size matters. If you want to learn more about this engine just google for "Fat Titz".
-
- Posts: 4190
- Joined: Wed Nov 25, 2009 1:47 am
Re: Is this SF NN almost like 20 MB book?
Well, thing B has an approximate eval of depth 12 or 16 or 18 or whatever stored in 20 MB of data. What would the score be if thing B had a 20 MB book instead?
All you actually managed to demonstrate with your dull essay about NNUE analysis is that NNUE is quite a crappy book. Not that it isn't one.
-
- Posts: 1631
- Joined: Tue Aug 21, 2018 7:52 pm
- Full name: Dietrich Kappe
Re: Is this SF NN almost like 20 MB book?
Well, do the following test: 1 & 5 you already have; now do 6 & 10, then 11 & 15. See a pattern?

Milos wrote: ↑Wed Aug 05, 2020 12:01 am
Well, thing B has an approximate eval of depth 12 or 16 or 18 or whatever stored in 20 MB of data. What would the score be if thing B had a 20 MB book instead?
All you actually managed to demonstrate with your dull essay about NNUE analysis is that NNUE is quite a crappy book. Not that it isn't one.
-
- Posts: 4190
- Joined: Wed Nov 25, 2009 1:47 am
Re: Is this SF NN almost like 20 MB book?
The difference would actually shrink slightly from 1 vs 5 to 6 vs 10, and then increase back to the original. But again, this only tells us about the search; it tells us nothing about the evaluation.

dkappe wrote: ↑Wed Aug 05, 2020 12:06 am
Well, do the following test: 1 & 5 you already have; now do 6 & 10, then 11 & 15. See a pattern?

Milos wrote: ↑Wed Aug 05, 2020 12:01 am
Well, thing B has an approximate eval of depth 12 or 16 or 18 or whatever stored in 20 MB of data. What would the score be if thing B had a 20 MB book instead?
All you actually managed to demonstrate with your dull essay about NNUE analysis is that NNUE is quite a crappy book. Not that it isn't one.
Regarding the book, its impact is significantly reduced once you go to higher depths. But that is only the case with general books like Cerebellum; with a targeted book that is of course not the case. My point is that using a general book generated by the engine itself is not much different (fairness-wise) from using an internal eval trained by the same engine.
-
- Posts: 1631
- Joined: Tue Aug 21, 2018 7:52 pm
- Full name: Dietrich Kappe
Re: Is this SF NN almost like 20 MB book?
Have you actually run the test, or are you just speculating?

Milos wrote: ↑Wed Aug 05, 2020 1:32 am
The difference would actually shrink slightly from 1 vs 5 to 6 vs 10, and then increase back to the original. But again, this only tells us about the search; it tells us nothing about the evaluation.
Regarding the book, its impact is significantly reduced once you go to higher depths. But that is only the case with general books like Cerebellum; with a targeted book that is of course not the case. My point is that using a general book generated by the engine itself is not much different (fairness-wise) from using an internal eval trained by the same engine.
-
- Posts: 144
- Joined: Sun Oct 14, 2018 8:21 pm
- Full name: JSmith
Re: Is this SF NN almost like 20 MB book?
Isn’t that true of literally every evaluation function? Let your evaluation function be “return rand();” and you can use it to generate very crappy opening evals, too. Yet nobody would consider this an opening book.

Milos wrote: ↑Wed Aug 05, 2020 12:01 am
Well, thing B has an approximate eval of depth 12 or 16 or 18 or whatever stored in 20 MB of data. What would the score be if thing B had a 20 MB book instead?
All you actually managed to demonstrate with your dull essay about NNUE analysis is that NNUE is quite a crappy book. Not that it isn't one.

The evaluation definitely doesn’t provide any sort of “good” opening book without search, since its opening preferences vary with depth. The combination of search plus evaluation might encode a book by some silly definition, but the argument can be made that that’s also a side effect of SPSA tuning of traditional Stockfish: SF’s search and middlegame evaluation parameters have been tuned to play out of book as well as possible.
This ends up being an argument over the semantics of “opening book,” where you try to stretch “opening book” into a definition that nobody would otherwise use.
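To make the “return rand();” point concrete, here is a minimal sketch (the function name and score range are hypothetical, purely for illustration, not any engine's actual code):

```cpp
#include <cstdlib>

// A deliberately useless evaluation: a random score in centipawns.
// Plugged into any search, it will still "produce" opening evals --
// which is the point: being able to produce opening evals does not
// make something an opening book.
int evaluate() {
    return (std::rand() % 2001) - 1000;  // score somewhere in [-1000, +1000]
}
```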
-
- Posts: 144
- Joined: Sun Oct 14, 2018 8:21 pm
- Full name: JSmith
Re: Is this SF NN almost like 20 MB book?
Superfinal results haven’t been statistically significant for a while; it’s on hardware that nobody would consider accessible; the engines constantly update, so the results could theoretically be outdated not long after the sufi begins; etc. Getting “valid” Superfinal results is incredibly challenging. Attributing that challenge solely to NN opening behavior is absurd. It’s just a drop in the ocean of other issues.

Twipply wrote: ↑Tue Aug 04, 2020 11:48 pm
I reacted strongly not because of feelings, but because I think this topic has basically invalidated some of the more recent TCEC Superfinal results, and the admins there should stop ignoring it. However, even if my feelings were hurt, that would not invalidate what I've said, nor would it validate your post.
Thanks. I'm glad it worked well for you.
-
- Posts: 1470
- Joined: Mon Apr 23, 2018 7:54 am
Re: Is this SF NN almost like 20 MB book?
The initial part of the game is not quiet. You are conflating two separate suspicions. (The first suspicion probably has a lot more evidence for it than the second.)

Dann Corbit wrote: ↑Tue Aug 04, 2020 7:59 pm
I suspect that NN approaches work very well for the initial, quiet part of the game.
-
- Posts: 4556
- Joined: Tue Jul 03, 2007 4:30 am
Re: Is this SF NN almost like 20 MB book?
It's quite the opposite! NNUE isn't learning which opening moves are good; it's learning which moves are good against the openings it plays.
It figures that if you trained a net using the Cerebellum library, it would not end up playing like it; it would end up playing the moves that defeat it. An anti-Brainfish net. But it's unknown whether it would be any good against other opponents.
-
- Posts: 9
- Joined: Fri Dec 02, 2016 8:55 pm
Re: Is this SF NN almost like 20 MB book?
My mistake, I didn't mean to suggest that the "NN = book" idea is my only issue with the validity of the TCEC Superfinals. When I said they're invalid, I meant in the sense that if it's not a fair fight then I don't care about the result, unless the underdog manages to win despite the handicap. Of course, any engine author should realise that the Superfinal results are not likely to be statistically significant, myself included.

cucumber wrote: ↑Wed Aug 05, 2020 9:17 am
Superfinal results haven’t been statistically significant for a while; it’s on hardware that nobody would consider accessible; the engines constantly update, so the results could theoretically be outdated not long after the sufi begins; etc. Getting “valid” Superfinal results is incredibly challenging. Attributing that challenge solely to NN opening behavior is absurd. It’s just a drop in the ocean of other issues.
Engine Programming on Discord -- https://discord.gg/invite/YctB2p4