Black crushing white, weird ?

Discussion of chess software programming and technical issues.

Moderators: hgm, Rebel, chrisw

xr_a_y
Posts: 1871
Joined: Sat Nov 25, 2017 2:28 pm
Location: France

Black crushing white, weird ?

Post by xr_a_y »

While generating data for NNUE RL training with the "nascent nutrient" net, I got this result using the 2moves_LT_1000 opening book:

Code:

Score of minic_2.51_nascent_nutrient vs minic_2.51_nascent_nutrient2: 347 - 327 - 702 [0.507]
...      minic_2.51_nascent_nutrient playing White: 111 - 213 - 364  [0.426] 688
...      minic_2.51_nascent_nutrient playing Black: 236 - 114 - 338  [0.589] 688
...      White vs Black: 225 - 449 - 702  [0.419] 1376
Elo difference: 5.1 +/- 12.8, LOS: 77.9 %, DrawRatio: 51.0 %
Black is totally crushing White! Is it a book bias? A net bias? Something wrong with contempt, or with tempo?

This is a fixed-depth search (depth 10)!

Investigation in progress ... ideas are welcome!
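As a side note, the Elo and LOS figures in the cutechess-cli summary above can be cross-checked from the raw W/L/D counts. A minimal sketch (the LOS formula here is the usual normal approximation over decisive games, assumed rather than read from cutechess's source):

```python
import math

def elo_from_score(p):
    """Logistic Elo model: p = 1 / (1 + 10^(-elo/400))."""
    return -400.0 * math.log10(1.0 / p - 1.0)

def match_stats(wins, losses, draws):
    n = wins + losses + draws
    p = (wins + 0.5 * draws) / n            # score fraction
    elo = elo_from_score(p)
    # LOS: probability the true strength difference is positive,
    # normal approximation over decisive games only.
    los = 0.5 * (1.0 + math.erf((wins - losses) / math.sqrt(2.0 * (wins + losses))))
    return p, elo, los

# Figures from the match above: 347 - 327 - 702 over 1376 games.
p, elo, los = match_stats(347, 327, 702)
print(f"score={p:.3f} elo={elo:+.1f} LOS={100*los:.1f}%")  # → score=0.507 elo=+5.1 LOS=77.9%
```

This reproduces the "Elo difference: 5.1 ... LOS: 77.9 %" line, which confirms the overall match is close to even while the per-color splits are not.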
xr_a_y

Re: Black crushing white, weird ?

Post by xr_a_y »

Same thing with the Hert500 opening book => not a book bias

Same thing at 11 plies instead of 10 => not an even/odd depth effect

NOT the same thing (White clearly winning) with "napping nexus" instead of "nascent nutrient" => this is a net bias!!

Conclusion: nascent nutrient is good playing Black ... less good playing White ...

This is still strange ...
mar
Posts: 2554
Joined: Fri Nov 26, 2010 2:00 pm
Location: Czech Republic
Full name: Martin Sedlak

Re: Black crushing white, weird ?

Post by mar »

fixed depth doesn't mean much, and neither do 1400 games
Martin Sedlak
mar

Re: Black crushing white, weird ?

Post by mar »

xr_a_y wrote: Wed Oct 14, 2020 2:52 pm NOT the same thing (White clearly winning) with "napping nexus" instead of "nascent nutrient" => this is a net bias!!

Conclusion: nascent nutrient is good playing Black ... less good playing White ...

This is still strange ...
really? I thought it was common to flip the board to the side-to-move's point of view and then simply evaluate; I don't see how there could be any bias in that case,
but I haven't studied NNUE at all.

I always want a symmetric eval in my engine anyway, for debugging purposes.

EDIT: of course, this would make the "UE" part harder; the question is whether a symmetric eval is any good
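For what it's worth, the symmetry check described above can be sketched on a toy material eval (a hypothetical minimal board model, nothing Minic-specific): mirroring the position and swapping colors and side to move must leave a side-to-move-relative eval unchanged.

```python
# Toy color-flip symmetry check: an stm-relative eval must return the
# same score for a position and its color-mirrored twin.
# (Hypothetical minimal board model; values in centipawns.)

VALUES = {'P': 100, 'N': 320, 'B': 330, 'R': 500, 'Q': 900, 'K': 0}

def evaluate(board, stm):
    """Material from the side-to-move's point of view.
    board: dict mapping (file, rank) -> (piece_letter, color), color 'w'/'b'."""
    score = 0
    for piece, color in board.values():
        score += VALUES[piece] if color == stm else -VALUES[piece]
    return score

def mirror(board, stm):
    """Flip ranks (rank r -> 7 - r) and swap piece colors and side to move."""
    flipped = {(f, 7 - r): (p, 'b' if c == 'w' else 'w')
               for (f, r), (p, c) in board.items()}
    return flipped, ('b' if stm == 'w' else 'w')

# A small asymmetric position: White is up a knight.
pos = {(4, 0): ('K', 'w'), (4, 7): ('K', 'b'),
       (1, 0): ('N', 'w'), (0, 1): ('P', 'w'), (0, 6): ('P', 'b')}

m, mstm = mirror(pos, 'w')
assert evaluate(pos, 'w') == evaluate(m, mstm)  # both sides see +320
print(evaluate(pos, 'w'))  # → 320
```

Running this assertion over a large set of positions is a cheap regression test for the kind of color bias discussed in this thread.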

why do you complain anyway? you jumped hundreds of Elo for free :)
xr_a_y

Re: Black crushing white, weird ?

Post by xr_a_y »

mar wrote: Wed Oct 14, 2020 3:31 pm really? I thought it was common to flip the board to the side-to-move's point of view and then simply evaluate; I don't see how there could be any bias in that case,
but I haven't studied NNUE at all.

I always want a symmetric eval in my engine anyway, for debugging purposes.

EDIT: of course, this would make the "UE" part harder; the question is whether a symmetric eval is any good
This does not happen with "napping nexus" (i.e. with the SF learner). There may be something wrong in Minic's merged learner. I'll dig into that.
mar wrote: Wed Oct 14, 2020 3:31 pm why do you complain anyway? you jumped hundreds of Elo for free :)
That is wrong! MinicNNUE is NOT official Minic, so Minic's rating didn't jump at all. What jumped is my little knowledge about NNs, and that is already good news :D
mar

Re: Black crushing white, weird ?

Post by mar »

xr_a_y wrote: Wed Oct 14, 2020 3:57 pm That is wrong! MinicNNUE is NOT official Minic, so Minic's rating didn't jump at all. What jumped is my little knowledge about NNs, and that is already good news :D
if that's so, then how come people are testing it, assuming you haven't released it to the public? :)

I'm new to NNs as well; I only started last Friday (actually 14 days ago), but my interest isn't related to chess. I started with the MNIST dataset (tiny network, small sample set, handwritten digits); I learned a couple of things the hard way, but it was certainly useful and fun.
I've learned that I can train a pretty good network with a purely stochastic approach: no GD, no partial derivatives, no backprop, but it's slow as hell.
I have an idea to speed this up quite a lot, but still...

Such nets + datasets are super tiny compared to NNUE/LC0 (I only assume so, as I don't know how big the LC0 nets actually are), but that was how I wanted to start anyway: with something very simple
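The purely stochastic training described above (no gradients, no backprop: propose a random weight perturbation, keep it only if the loss does not get worse) can be illustrated on a toy task. This is a sketch of the general technique, not the poster's actual code:

```python
import random

random.seed(1)

# A single thresholded neuron learning AND by random-perturbation hill climbing.
DATA = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]

def forward(w, x):
    s = w[0] * x[0] + w[1] * x[1] + w[2]   # w[2] is the bias
    return 1.0 if s > 0 else 0.0

def loss(w):
    return sum((forward(w, x) - y) ** 2 for x, y in DATA)

w = [0.0, 0.0, 0.0]
best = loss(w)
for _ in range(2000):
    cand = [wi + random.gauss(0, 0.1) for wi in w]   # random tweak
    l = loss(cand)
    if l <= best:                                    # keep only non-worsening moves
        w, best = cand, l

print(best)  # typically reaches 0: the neuron has learned AND
```

This matches the trade-off mentioned above: no derivatives are ever computed, but each proposal costs a full forward pass over the data, which is why the approach is so slow on anything larger.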
xr_a_y

Re: Black crushing white, weird ?

Post by xr_a_y »

mar wrote: Wed Oct 14, 2020 4:03 pm
xr_a_y wrote: Wed Oct 14, 2020 3:57 pm That is wrong! MinicNNUE is NOT official Minic, so Minic's rating didn't jump at all. What jumped is my little knowledge about NNs, and that is already good news :D
if that's so, then how come people are testing it, assuming you haven't released it to the public? :)
Thanks for the question.

MinicNNUE is released and testable, but I don't want it to "replace" Minic in the various rating lists (for the reasons already explained).
So I make a distinction between "released to the public" and "official in rating lists".
Vocabulary matters indeed; you are right.

And good luck in your NN journey.
mar

Re: Black crushing white, weird ?

Post by mar »

xr_a_y wrote: Wed Oct 14, 2020 4:13 pm Thanks for the question.

MinicNNUE is released and testable, but I don't want it to "replace" Minic in the various rating lists (for the reasons already explained).
So I make a distinction between "released to the public" and "official in rating lists".
Vocabulary matters indeed; you are right.
I see, thanks for clearing up the confusion.
xr_a_y wrote: Wed Oct 14, 2020 4:13 pm And good luck in your NN journey.
good luck in yours as well :)