I don't know either of these engines, but according to the CPW, Schooner 2 has "a simpler evaluation inspired by Xiphos", so the high similarity probably isn't coincidental; it is most likely caused by Xiphos's influence on Schooner rather than the reverse.
Minic version 2
Moderators: hgm, Rebel, chrisw
- Posts: 31
- Joined: Tue Feb 27, 2018 11:29 am
Re: Minic version 2
- Posts: 725
- Joined: Tue Dec 18, 2007 9:38 pm
- Location: Munich, Germany
- Full name: Dr. Oliver Brausch
Re: Minic version 2
Of course, this is an explanation for the similarity.
One thing I do not understand: reviewing Xiphos's code, I did not find anything extraordinary in it. From the code review alone I would have guessed about Glaurung 2.2 strength, i.e. roughly 2900 Elo. But Xiphos is actually 400 Elo stronger.
What exactly makes the difference?
- Posts: 4468
- Joined: Fri Apr 21, 2006 4:19 pm
- Location: IASI - the historical capital of MOLDOVA
- Full name: SilvianR
- Posts: 31
- Joined: Tue Feb 27, 2018 11:29 am
Re: Minic version 2
I don't know, as I'm not familiar with Xiphos (you're making me curious, though -- perhaps I should have a look), but I'm not really that surprised. Back in the Glaurung 2.2 days, LMR was still in its infancy. There have been many refinements since then. Furthermore, evaluation tuning has improved significantly, partly because of improvements in methodology, and partly because hardware improvements (multi-core CPUs in particular) have made thorough testing possible. If I remember right, almost all the development and testing of Glaurung was done on a single dual-core computer. Everything was hand-tuned, and I don't think I ever tested any change with more than about a hundred blitz games.

OliverBr wrote: ↑Wed Oct 07, 2020 6:10 pm
One thing I do not understand: reviewing Xiphos's code, I did not find anything extraordinary in it. From the code review alone I would have guessed about Glaurung 2.2 strength, i.e. roughly 2900 Elo. But Xiphos is actually 400 Elo stronger.
What exactly makes the difference?
If you take Glaurung 2.2, add the most important LMR refinements from current Stockfish, modernise the parallel search, and do some logistic regression tuning of the evaluation weights, I believe it could be hundreds of rating points stronger. Not close to current Stockfish level, of course, but I don't think Xiphos level (300 points behind SF 12 on the CCRL blitz list) would be unrealistic.
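The "logistic regression tuning of the evaluation weights" mentioned above is essentially what is now known as Texel-style tuning. A minimal sketch, with an invented one-feature evaluation and made-up sample data (the scaling constant `k`, learning rate, and feature values are illustrative, not from any engine): a sigmoid maps centipawn evaluations to expected scores, and gradient descent fits the weight to actual game results.

```python
import math

def sigmoid(cp, k=1.0):
    # Map a centipawn evaluation to an expected score in [0, 1].
    return 1.0 / (1.0 + 10.0 ** (-k * cp / 400.0))

def tune(samples, w=50.0, k=1.0, lr=500.0, iters=2000):
    # samples: (feature_value, game_result) pairs, result in {0, 0.5, 1}.
    # Gradient descent on the mean squared error between the sigmoid of
    # the evaluation (w * feature) and the actual game results.
    for _ in range(iters):
        grad = 0.0
        for f, r in samples:
            p = sigmoid(w * f, k)
            # d/dw of (p - r)^2, with dp/d(cp) = ln(10) * k / 400 * p * (1 - p)
            grad += 2.0 * (p - r) * p * (1.0 - p) * (math.log(10.0) * k / 400.0) * f
        w -= lr * grad / len(samples)
    return w

# Winning with the feature present pushes the weight up; losing pushes it down.
samples = [(1, 1.0), (2, 1.0), (-1, 0.0), (0, 0.5), (-2, 0.0)]
print(round(tune(samples)))
```

With enough games and features this is exactly the kind of automated weight fitting that hand-tuning on a hundred blitz games could never match.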
- Posts: 216
- Joined: Sun Jan 22, 2017 8:30 pm
- Location: Russia
Re: Minic version 2
Ed admitted the error in EO:

OliverBr wrote: ↑Wed Oct 07, 2020 1:48 pm
Furthermore, they say on http://rebel13.nl/misc/sim2019.html:
"Note the extreme high similarity of 83.57% between Schooner and Xiphos 0.3"
83.57% is quite a number. I don't know what to think about it.

He simply forgot to update the affected video.
- Posts: 1871
- Joined: Sat Nov 25, 2017 2:28 pm
- Location: France
Re: Minic version 2
Some news about Minic.
I recently discovered and fixed a very bad design in the Minic NNUE implementation.
I don't want to dig too much into the details, but as a copy/make engine, Minic does not store position-related data in a separate "state" object (often used later as a sort of stack in other engines). This, together with a recursive header-include nightmare, led to a bad design with too much heap allocation and deallocation when porting NNUE to Minic. I fixed this yesterday, and the result is a 100% speedup of MinicNNUE (now running at around 78% of the speed of the standard evaluation, just as in other engines that ported NNUE). This is of course a big Elo boost, something like +100 at short TC.
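The copy/make pattern described here can be sketched abstractly. In this hedged toy (the `Position` fields, move format, and accumulator update are invented for illustration), the whole position, including its NNUE-style accumulator, is cloned at each node, so there is no separate undo stack and, in C++ terms, no per-node heap-allocated "state" object of the kind the post describes:

```python
import copy
from dataclasses import dataclass, field

@dataclass
class Position:
    # Illustrative fields only; a real engine stores bitboards etc.
    board: list
    side_to_move: int = 0
    accumulator: list = field(default_factory=lambda: [0.0] * 8)  # NNUE-style cache, stored inline

def make_move(pos, move):
    # Copy/make: clone the whole position, mutate the clone, return it.
    # The parent position is never touched, so no undo is needed.
    child = copy.deepcopy(pos)
    frm, to = move
    child.board[to], child.board[frm] = child.board[frm], None
    child.side_to_move ^= 1
    # Incrementally update the accumulator instead of recomputing it.
    child.accumulator[to % 8] += 1.0
    return child

def search(pos, depth):
    # Trivial full-width search counting visited nodes.
    if depth == 0:
        return 1
    nodes = 1
    for move in [(0, 1), (1, 2)]:  # dummy move generation
        nodes += search(make_move(pos, move), depth - 1)
    return nodes
```

Because the accumulator lives inside the position object, the copy is one flat clone per node rather than an allocation/deallocation pair per node.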
Here are some results at 10s+0.1, using various nets.
Code: Select all
Rank Name Elo +/- Games Score Draw
1 stockfish.11 243 36 316 80.2% 27.5%
2 stockfish.10 216 35 315 77.6% 30.8%
3 stockfish.9 147 32 316 69.9% 33.5%
4 minic_dev_nnue_nn-97f742aaefcd 144 33 316 69.6% 31.6%
5 stockfish.8 114 32 315 65.9% 34.0%
6 Ethereal 101 32 315 64.1% 34.3%
7 minic_dev_nnue_napping_nexus 62 30 315 58.9% 39.7%
8 stockfish.7 36 31 317 55.2% 36.6%
9 minic_2.50_nnue_nn-97f742aaefcd 7 31 315 51.0% 36.5%
10 minic_dev_nnue_nascent_nutrient -35 30 315 44.9% 40.3%
11 minic_2.50_nnue_napping_nexus -57 31 315 41.9% 36.8%
12 RubiChess -60 33 314 41.4% 26.1%
13 Defenchess_2.2 -79 32 314 38.9% 31.2%
14 minic_2.50_nnue_nascent_nutrient -127 32 316 32.4% 33.9%
15 texel -204 39 315 23.7% 18.1%
16 minic_dev -267 40 317 17.7% 21.5%
17 minic_2.50 -278 43 316 16.8% 17.7%
So with the current Minic 2.50 NNUE, this gives:

Code: Select all
Minic2.50 >>> +150 >>> Minic2.50NNUE+nascent_nutrient
Minic2.50NNUE+nascent_nutrient >>> +70 >>> Minic2.50NNUE+napping_nexus
Minic2.50NNUE+napping_nexus >>> +50 >>> Minic2.50NNUE+nn-97f742aaefcd
With the next release (Minic 2.51) it will be something like:

Code: Select all
Minic2.51 >>> +230 >>> Minic2.51NNUE+nascent_nutrient
Minic2.51NNUE+nascent_nutrient >>> +90 >>> Minic2.51NNUE+napping_nexus
Minic2.51NNUE+napping_nexus >>> +80 >>> Minic2.51NNUE+nn-97f742aaefcd
As a reminder:
- nascent_nutrient is a "pure" Minic net, based on Minic data and trained with the learner inside Minic
- napping_nexus is a homebrew SF net, based on SF data and trained with the learner inside Stockfish (the Nodchip repo)
- nn-97f742aaefcd is a very strong SV net

So in this test against a bunch of very strong engines, NNUE technology with a "pure" Minic homebrew net is already worth +230 Elo.
With one of the best nets available it is +400 Elo, putting MinicNNUE with SF nets near SF9 level at short TC.
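As a side note, Elo gains like these map directly to expected match scores under the usual logistic Elo model, which is a handy sanity check against the tables above. A small sketch (the specific values are only illustrative):

```python
import math

def expected_score(elo_diff):
    # Expected score of the stronger side under the logistic Elo model.
    return 1.0 / (1.0 + 10.0 ** (-elo_diff / 400.0))

def elo_from_score(score):
    # Inverse mapping: a match score in (0, 1) back to an Elo difference.
    return -400.0 * math.log10(1.0 / score - 1.0)

# A +230 Elo edge corresponds to scoring roughly 79% in a long match.
print(round(expected_score(230), 3))
```

This is the same formula rating tools apply to the Score column of the tables in this thread.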
- Posts: 1871
- Joined: Sat Nov 25, 2017 2:28 pm
- Location: France
Re: Minic version 2
Confirmed at longer TC, here 4min+1s, versus SF9
Code: Select all
Rank Name Elo +/- Games Score Draw
0 stockfish.9
1 minic_dev_nnue_nn-97f742aaefcd -3 38 130 49.6% 59.2%
2 minic_dev_nnue_napping_nexus -73 35 130 39.6% 63.8%
3 minic_dev_nnue_nascent_nutrient -184 48 130 25.8% 39.2%
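The +/- column in these tables is a confidence interval, which matters a lot at only 130 games. A hedged sketch of how such an error bar can be derived from a W/D/L record (the counts below are invented, not taken from the run above):

```python
import math

def elo(score):
    # Match score in (0, 1) -> Elo difference under the logistic model.
    return -400.0 * math.log10(1.0 / score - 1.0)

def elo_interval(wins, draws, losses, z=1.96):
    # 95% confidence interval for the Elo difference, treating each game's
    # score as a draw from a trinomial {0, 0.5, 1} distribution.
    n = wins + draws + losses
    score = (wins + 0.5 * draws) / n
    # Variance of a single game's score around the mean.
    var = (wins * (1.0 - score) ** 2
           + draws * (0.5 - score) ** 2
           + losses * (0.0 - score) ** 2) / n
    se = math.sqrt(var / n)  # standard error of the mean score
    return elo(score - z * se), elo(score), elo(score + z * se)

lo, mid, hi = elo_interval(wins=40, draws=60, losses=30)
```

Note that a higher draw ratio shrinks the variance, which is why the error bars differ between lines with the same number of games.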
- Posts: 133
- Joined: Wed Aug 15, 2007 12:18 pm
- Location: Munich
Re: Minic version 2
This shows the potential of NNUE. "Only" a change of the training method of the NNUE, and your nascent_nutrient has the potential for an increase of almost 200 Elo!

xr_a_y wrote: ↑Mon Oct 12, 2020 7:54 am
Confirmed at longer TC, here 4min+1s, versus SF9

Code: Select all
0 stockfish.9
1 minic_dev_nnue_nn-97f742aaefcd    -3  38  130  49.6%  59.2%
2 minic_dev_nnue_napping_nexus     -73  35  130  39.6%  63.8%
3 minic_dev_nnue_nascent_nutrient -184  48  130  25.8%  39.2%
Have you already tried to re-evaluate your Minic-based data with Minic2.51NNUE+nascent_nutrient and then use this data as a basis for a new NNUE training? The question is, can an NNUE be the teacher for a new NNUE? And if so, how often can this process be repeated successfully?
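The teacher/student loop asked about here can be sketched abstractly. In this toy (everything is hypothetical: a one-weight linear "net" and a least-squares "trainer" stand in for NNUE and its learner), relabelling a fixed data set with the current net and retraining just reproduces the teacher; in practice each generation only gains strength when the new labels come from a search guided by the net, which injects new information:

```python
def train(labelled):
    # Stand-in trainer: least-squares fit of score = w * feature.
    num = sum(f * target for f, target in labelled)
    den = sum(f * f for f, _ in labelled)
    w = num / den
    return lambda f: w * f

def retrain_generation(positions, teacher):
    # Step 1: re-evaluate the data set with the current net (the teacher).
    labelled = [(f, teacher(f)) for f in positions]
    # Step 2: fit the next-generation net on the relabelled data.
    return train(labelled)

positions = [-2.0, -1.0, 1.0, 2.0]
net = lambda f: 0.8 * f  # generation 0, e.g. trained on search scores
for _ in range(3):
    net = retrain_generation(positions, net)
# With a fixed data set and no search, the student converges to the teacher.
```

So the answer to "how often can it be repeated" hinges on where the relabelled evaluations come from, not on the loop itself.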
- Posts: 1871
- Joined: Sat Nov 25, 2017 2:28 pm
- Location: France
Re: Minic version 2
This is reinforcement learning, I think. It is in progress.

Dokterchen wrote: ↑Mon Oct 12, 2020 2:16 pm
Have you already tried to re-evaluate your Minic-based data with Minic2.51NNUE+nascent_nutrient and then use this data as a basis for a new NNUE training? The question is, can an NNUE be the teacher for a new NNUE? And if so, how often can this process be repeated successfully?
- Posts: 1871
- Joined: Sat Nov 25, 2017 2:28 pm
- Location: France
Re: Minic version 2
And threading seems ok.
SF11 gauntlet at 10s+0.1
MinicNNUE using nn-97f742aaefcd
Code: Select all
Rank Name Elo +/- Games Score Draw
0 stockfish.11
1 minic_dev_uci_nnue_8t 136 38 196 68.6% 42.3%
2 minic_dev_uci_nnue_4t 52 35 197 57.4% 47.7%
3 minic_dev_uci_nnue_2t -44 33 197 43.7% 53.8%
4 stockfish.10 -44 30 197 43.7% 61.9%
5 minic_dev_uci_nnue -90 38 197 37.3% 39.1%
6 stockfish.9 -133 32 197 31.7% 53.3%