NNUE one year retrospective

Discussion of anything and everything relating to chess playing software and machines.

Moderator: Ras

Madeleine Birchfield
Posts: 512
Joined: Tue Sep 29, 2020 4:29 pm
Location: Dublin, Ireland
Full name: Madeleine Birchfield

NNUE one year retrospective

Post by Madeleine Birchfield »

It has been a little over one year since the computer chess community at talkchess started experimenting with nodchip's NNUE architecture:

forum3/viewtopic.php?f=2&t=74059

Looking back, what are some of the major changes in computer chess and in the computer chess community between a year ago and now?
smatovic
Posts: 3493
Joined: Wed Mar 10, 2010 10:18 pm
Location: Hamburg, Germany
Full name: Srdja Matovic

Re: NNUE one year retrospective

Post by smatovic »

To be honest, if we look at this neural network stuff as a paradigm shift in computer chess, the community really has not handled it that well.

I remember the heated discussions over the A0 paper and the SF match, I remember the GPU-hardware-advantage stuff with Lc0 and the narrative of an original engine vs. an original network; now, with NNUE present for CPU AB-engines, all developers are affected, and here we go with the heat again...

--
Srdja
smatovic
Posts: 3493
Joined: Wed Mar 10, 2010 10:18 pm
Location: Hamburg, Germany
Full name: Srdja Matovic

Re: NNUE one year retrospective

Post by smatovic »

One more thought...

we already had the division between chess advisor and chess programmer; now there are additionally the roles of NN-trainer and NN-coder (training/inference).

I cannot remember anybody calling someone's engine not original because he took a chess advisor into development; AFAIK Lc0 and SF have no issue with splitting these kinds of roles across their contributors, and of course there are authors who combine all these roles in one person...

--
Srdja
willmorton
Posts: 30
Joined: Thu Sep 17, 2020 9:19 pm
Full name: William Morton

Re: NNUE one year retrospective

Post by willmorton »

smatovic wrote: Thu Jul 15, 2021 8:19 am One more thought...

we already had the division between chess advisor and chess programmer; now there are additionally the roles of NN-trainer and NN-coder (training/inference).

I cannot remember anybody calling someone's engine not original because he took a chess advisor into development; AFAIK Lc0 and SF have no issue with splitting these kinds of roles across their contributors, and of course there are authors who combine all these roles in one person...

--
Srdja
Maybe computer chess tournaments will soon become just like computer shogi tournaments, where AFAIK everyone starts with Stockfish and whoever makes the best search tweaks / nets wins. Maybe chess programmers and chess advisors won't be needed at all anymore and it will just be a contest among NN-trainers/NN-coders.
smatovic
Posts: 3493
Joined: Wed Mar 10, 2010 10:18 pm
Location: Hamburg, Germany
Full name: Srdja Matovic

Re: NNUE one year retrospective

Post by smatovic »

willmorton wrote: Thu Jul 15, 2021 6:44 pm
smatovic wrote: Thu Jul 15, 2021 8:19 am One more thought...

we already had the division between chess advisor and chess programmer; now there are additionally the roles of NN-trainer and NN-coder (training/inference).

I cannot remember anybody calling someone's engine not original because he took a chess advisor into development; AFAIK Lc0 and SF have no issue with splitting these kinds of roles across their contributors, and of course there are authors who combine all these roles in one person...

--
Srdja
Maybe computer chess tournaments will soon become just like computer shogi tournaments, where AFAIK everyone starts with Stockfish and whoever makes the best search tweaks / nets wins. Maybe chess programmers and chess advisors won't be needed at all anymore and it will just be a contest among NN-trainers/NN-coders.
That would be an interesting kind of tournament, same engine - different net*. But I guess in the long run neural networks will also find a place replacing HCSH (hand-crafted search heuristics) for things like pruning, move selection, reductions and extensions in the engine's search part. It is just a matter of the extra compute time needed for NN inference during search: multiple/bigger nets vs. deeper search, a tradeoff, and our hardware is going to be more and more optimized for running neural network inference in the future...
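Just to make that tradeoff concrete, here is a minimal sketch (hypothetical names, placeholder weights, not any engine's actual code) of a hand-crafted reduction formula next to a tiny net that could stand in for it; the extra per-move arithmetic of the second function is exactly the inference cost mentioned above:

// Illustrative only: HCSH reduction vs. a tiny "net" with made-up weights.
#include <array>
#include <cmath>
#include <algorithm>

// Classic hand-crafted heuristic: log-based late-move-reduction formula
// (depth >= 1, moveNumber >= 1).
int hcshReduction(int depth, int moveNumber) {
    return static_cast<int>(std::log(depth) * std::log(moveNumber) / 2.0);
}

// Hypothetical tiny net replacing the formula. The weights here are
// placeholders standing in for trained ones; inputs are a few move features.
int nnReduction(int depth, int moveNumber, bool isCapture) {
    const std::array<float, 3> w = {0.15f, 0.25f, -1.0f};  // placeholder weights
    float x = w[0] * depth + w[1] * moveNumber + w[2] * (isCapture ? 1.f : 0.f);
    float r = std::max(0.f, x - 1.f);                       // one ReLU "layer"
    return std::clamp(static_cast<int>(r), 0, depth - 1);   // keep reduction sane
}

The point is only that the second version has to be evaluated for every move considered in search, so a bigger or slower net directly trades against search depth.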

***edit***
* It seems Lc0 and SF already run such a 'tournament' during their internal development process.

--
Srdja