Author: Linmiao Xu
Date: Sun Jan 7 21:15:52 2024 +0100
Timestamp: 1704658552
Dual NNUE with L1-128 smallnet
Credit goes to @mstembera for:
- writing the code enabling dual NNUE:
https://github.com/official-stockfish/S ... /pull/4898
- the idea of trying L1-128 trained exclusively on high simple eval
positions
The L1-128 smallnet is:
- epoch 399 of a single-stage training from scratch
- trained only on positions from filtered data with high material
difference
- defined by abs(simple_eval) > 1000
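As a rough illustration, the filter predicate can be sketched in plain C++. The piece values and the signature of `simple_eval` below are assumptions made for the sketch, not Stockfish's actual implementation; only the `abs(simple_eval) > 1000` condition comes from the commit:

```cpp
#include <cstdlib>

// Hypothetical material-only eval sketch; piece values are the classical
// 1/3/3/5/9 in centipawns, not Stockfish's exact simple_eval weights.
// Counts are (white minus black) piece-count differences.
int simple_eval(int pawns, int knights, int bishops, int rooks, int queens) {
    return 100 * pawns + 300 * knights + 300 * bishops
         + 500 * rooks + 900 * queens;
}

// The filter from the commit: keep only positions whose material
// difference exceeds 1000 centipawns in either direction.
bool high_simple_eval(int eval_cp) { return std::abs(eval_cp) > 1000; }
```

Under these illustrative values, being up a full queen (900 cp) would not pass the filter, while a queen plus a minor piece would.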
```yaml
experiment-name: 128--S1-only-hse-v2
training-dataset:
- /data/hse/S3/dfrc99-16tb7p-eval-filt-v2.min.high-simple-eval-1k.binpack
- /data/hse/S3/leela96-filt-v2.min.high-simple-eval-1k.binpack
- /data/hse/S3/test80-apr2022-16tb7p.min.high-simple-eval-1k.binpack
- /data/hse/S7/test60-2020-2tb7p.v6-3072.high-simple-eval-1k.binpack
- /data/hse/S7/test60-novdec2021-12tb7p-filter-v6-dd.min-mar2023.unmin.high-simple-eval-1k.binpack
- /data/hse/S7/test77-nov2021-2tb7p.v6-3072.min.high-simple-eval-1k.binpack
- /data/hse/S7/test77-dec2021-16tb7p-filter-v6-dd.min-mar2023.unmin.high-simple-eval-1k.binpack
- /data/hse/S7/test77-jan2022-2tb7p.high-simple-eval-1k.binpack
- /data/hse/S7/test78-jantomay2022-16tb7p-filter-v6-dd.min-mar2023.unmin.high-simple-eval-1k.binpack
- /data/hse/S7/test78-juntosep2022-16tb7p-filter-v6-dd.min-mar2023.unmin.high-simple-eval-1k.binpack
- /data/hse/S7/test79-apr2022-16tb7p-filter-v6-dd.min-mar2023.unmin.high-simple-eval-1k.binpack
- /data/hse/S7/test79-may2022-16tb7p-filter-v6-dd.min-mar2023.unmin.high-simple-eval-1k.binpack
# T80 2022
- /data/hse/S7/test80-may2022-16tb7p.high-simple-eval-1k.binpack
- /data/hse/S7/test80-jun2022-16tb7p-filter-v6-dd.min-mar2023.unmin.high-simple-eval-1k.binpack
- /data/hse/S7/test80-jul2022-16tb7p.v6-dd.min.high-simple-eval-1k.binpack
- /data/hse/S7/test80-aug2022-16tb7p-filter-v6-dd.min-mar2023.unmin.high-simple-eval-1k.binpack
- /data/hse/S7/test80-sep2022-16tb7p-filter-v6-dd.min-mar2023.unmin.high-simple-eval-1k.binpack
- /data/hse/S7/test80-oct2022-16tb7p.v6-dd.high-simple-eval-1k.binpack
- /data/hse/S7/test80-nov2022-16tb7p-v6-dd.min.high-simple-eval-1k.binpack
# T80 2023
- /data/hse/S7/test80-jan2023-3of3-16tb7p-filter-v6-dd.min-mar2023.unmin.high-simple-eval-1k.binpack
- /data/hse/S7/test80-feb2023-16tb7p-filter-v6-dd.min-mar2023.unmin.high-simple-eval-1k.binpack
- /data/hse/S7/test80-mar2023-2tb7p.v6-sk16.min.high-simple-eval-1k.binpack
- /data/hse/S7/test80-apr2023-2tb7p-filter-v6-sk16.min.high-simple-eval-1k.binpack
- /data/hse/S7/test80-may2023-2tb7p.v6.min.high-simple-eval-1k.binpack
- /data/hse/S7/test80-jun2023-2tb7p.v6-3072.min.high-simple-eval-1k.binpack
- /data/hse/S7/test80-jul2023-2tb7p.v6-3072.min.high-simple-eval-1k.binpack
- /data/hse/S7/test80-aug2023-2tb7p.v6.min.high-simple-eval-1k.binpack
- /data/hse/S7/test80-sep2023-2tb7p.high-simple-eval-1k.binpack
- /data/hse/S7/test80-oct2023-2tb7p.high-simple-eval-1k.binpack
start-from-engine-test-net: False
nnue-pytorch-branch: linrock/nnue-pytorch/L1-128
engine-test-branch: linrock/Stockfish/L1-128-nolazy
engine-base-branch: linrock/Stockfish/L1-128
num-epochs: 500
lambda: 1.0
```
Experiment yaml configs converted to easy_train.sh commands with:
https://github.com/linrock/nnue-tools/b ... y_train.py
Binpacks interleaved at training time with:
https://github.com/official-stockfish/n ... h/pull/259
Data filtered for high simple eval positions with:
https://github.com/linrock/nnue-data/bl ... l_plain.py
https://github.com/linrock/Stockfish/bl ... #L626-L655
Training data can be found at:
https://robotmoon.com/nnue-training-data/
Local elo at 25k nodes per move of
L1-128 smallnet (nnue-only eval) vs. L1-128 trained on standard S1 data:
nn-epoch399.nnue : -318.1 +/- 2.1
Passed STC:
https://tests.stockfishchess.org/tests/ ... a1fcd49e3b
LLR: 2.93 (-2.94,2.94) <0.00,2.00>
Total: 62432 W: 15875 L: 15521 D: 31036 Elo +1.97
Ptnml(0-2): 177, 7331, 15872, 7633, 203
Passed LTC:
https://tests.stockfishchess.org/tests/ ... cf40aaac6e
LLR: 2.94 (-2.94,2.94) <0.50,2.50>
Total: 64830 W: 16118 L: 15738 D: 32974 Elo +2.04
Ptnml(0-2): 43, 7129, 17697, 7497, 49
closes https://github.com/official-stockfish/Stockfish/pulls
Bench: 1330050
Co-Authored-By: mstembera <5421953+>
see source
Dual NNUE-new leaf in Stockfish dev
Moderators: hgm, Rebel, chrisw
-
- Posts: 74
- Joined: Wed Dec 04, 2019 11:25 am
- Full name: Prasanna Bandihole
-
- Posts: 58
- Joined: Mon Mar 27, 2023 8:29 pm
- Full name: Dmitry Frosty
Re: Dual NNUE-new leaf in Stockfish dev
New, serious step in chess engine development. But I still prefer one network to two.
bmp1974 wrote: ↑Mon Jan 08, 2024 5:04 pm [full commit message quoted in the opening post above]
-
- Posts: 3318
- Joined: Wed Mar 08, 2006 8:15 pm
Re: Dual NNUE-new leaf in Stockfish dev
This dual-net version is interesting. There is no speed difference in bench, but in won positions nps suddenly increases by 200-300%!
Jouni
-
- Posts: 1224
- Joined: Wed Mar 08, 2006 8:28 pm
- Location: Florida, USA
Re: Dual NNUE-new leaf in Stockfish dev
If it’s only used when the absolute difference in simple material >1000, then I don’t see the interest in this approach. Surely the game score will never change. Is it just for resolving mate scores quicker?
What am I missing?
— Steve
http://www.chessprogramming.net - Maverick Chess Engine
-
- Posts: 4583
- Joined: Sun Mar 12, 2006 2:40 am
- Full name:
Re: Dual NNUE-new leaf in Stockfish dev
We can probably find the answer somewhere on GitHub for Stockfish, or on Discord. Pure on-the-fly speculation, since I haven't searched those: if an engine spends most of its time in qsearch, a faster net pays for itself by speeding up qsearch. A simpler HCE, or even a shortcut eval when the material difference is big, could do the same, but Stockfish no longer has those; they were simplified away, so why would this work? An alternative speculation is that normal NNUE keeps getting bigger because opening knowledge is so important. This small net could carry some knowledge that is missing from the bigger nets, but in the bigger nets the opening knowledge simply matters more.
There are eval cases, though, where you need knowledge about material imbalance, and those are really interesting. The Stockfish material imbalance table from the days of Tord and Joona Kiiski was, to my knowledge, tuned for HCE, but is possibly a bit out of tune now after all the NNUE changes since. This is all just speculation, but NNUE would be perfect for a new material imbalance table. (I don't even know if that material imbalance table is still in Stockfish; the PSQT tables are gone, I think, sorry for not being up to date. I can't find anything about material.cpp in the source anymore, so I think it is long gone...)
Debugging is twice as hard as writing the code in the first
place. Therefore, if you write the code as cleverly as possible, you
are, by definition, not smart enough to debug it.
-- Brian W. Kernighan
-
- Posts: 122
- Joined: Tue Oct 29, 2019 4:14 pm
- Location: Canada
- Full name: Ron Doughie
Re: Dual NNUE-new leaf in Stockfish dev
Eelco de Groot wrote: ↑Mon Jan 08, 2024 9:41 pm . . . I can't find anything about material.cpp anymore in the source so it is I think long gone...)
material.cpp was removed back in July of 2023 when classical evaluation was discontinued.
https://github.com/official-stockfish/S ... a15a902395
-
- Posts: 3318
- Joined: Wed Mar 08, 2006 8:15 pm
Re: Dual NNUE-new leaf in Stockfish dev
Obviously it is this code in evaluate.cpp that speeds up search?
No change in search.cpp.
Code:
```cpp
smallNet = std::abs(simpleEval) > 1100;
int nnueComplexity;
Value nnue = smallNet ? NNUE::evaluate<NNUE::Small>(pos, true, &nnueComplexity)
                      : NNUE::evaluate<NNUE::Big>(pos, true, &nnueComplexity);
```
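Read as standalone C++, that dispatch is just a threshold check. The evaluator stubs below are hypothetical stand-ins for the two networks (the real calls run full network inference); only the threshold logic mirrors the quoted snippet:

```cpp
#include <cstdlib>

// Hypothetical stand-ins for NNUE::evaluate<NNUE::Small> and
// NNUE::evaluate<NNUE::Big>; return values here are arbitrary placeholders.
int evaluate_small(int simpleEval) { return simpleEval; }      // cheap net
int evaluate_big(int simpleEval)   { return simpleEval + 10; } // expensive net

// Dispatch mirroring the quoted evaluate.cpp code: positions with a large
// material imbalance (|simpleEval| > 1100) go to the small, faster net.
int evaluate(int simpleEval) {
    const bool smallNet = std::abs(simpleEval) > 1100;
    return smallNet ? evaluate_small(simpleEval) : evaluate_big(simpleEval);
}
```

In a clearly won position nearly every node clears the threshold, so almost all evals hit the much cheaper small net, which would account for the large nps jump reported above.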
Jouni
-
- Posts: 3318
- Joined: Wed Mar 08, 2006 8:15 pm
Re: Dual NNUE-new leaf in Stockfish dev
To my surprise, this dual-net SF is worse than SF 16 in all test suites I tried! And it is also a worse mate solver. The 31.12. version was a very good solver.
Jouni
-
- Posts: 74
- Joined: Wed Dec 04, 2019 11:25 am
- Full name: Prasanna Bandihole
Re: Dual NNUE-new leaf in Stockfish dev
It may serve better if they tune the small net as a problem solver (at least for lesser-material situations) rather than just to speed up the inevitable win process.
-
- Posts: 128
- Joined: Sun Oct 30, 2022 5:26 pm
- Full name: Conor Anstey
Re: Dual NNUE-new leaf in Stockfish dev
could it be that test suite performance is completely irrelevant to actual engine dev? shocked
Eelco de Groot wrote: ↑Mon Jan 08, 2024 9:41 pm (I don't even know if that material imbalance table is still in Stockfish, the PSQT tables I think have gone, sorry for not being up to date with it. I can't find anything about material.cpp anymore in the source so it is I think long gone...)
All classical eval in Stockfish has been removed; it's NNUE all the way.