"it was a dead end"
And it is dead.

"Time to rethink?"
If it was just a translation into Dutch, it would've been stronger.
smatovic wrote: ↑Sat Jul 03, 2021 11:48 am
Yo, into wasps' nest. I am not into the details, but I read that Stockfish now uses Lc0 data for training and has doubled its net size. Time to rethink what Albert Silver has done? As far as I got it, he used FatFritz 1 (Lc0 derivative) data for training and doubled the net size of the FatFritz 2 (Stockfish derivative) network.
--
Srdja
PS: not interested in discussing the marketing of ChessBase for FF2.

Sopel wrote: ↑Sat Jul 03, 2021 1:44 pm
1. He used leela (FF1) data because he had ... A miracle! It just so happens that we independently discovered that lc0 data works well in our case.
2. Trying to increase the net size is a no-brainer once the training and testing procedure is established and demonstrated by someone else first.

Albert Silver wrote: ↑Sun Jul 04, 2021 12:21 am
... the idea of converting it into NNUE-usable data to train a network. He then rented 10 2080ti GPUs on Vast.ai for several months (at a cost of thousands of dollars) to generate the data needed to do this, since of course the FF1 data he had was completely insufficient: maybe only 300 thousand games were from the final net's full strength, and the rest was much weaker, and therefore of no interest except to test the concept.
Quite true, and his result was some 50 Elo weaker. As a result, the popular belief was that it was a dead end, as was repeatedly told to anyone asking about larger net sizes in the SF Discord. Vondele's attempt to reproduce my result was also much weaker, but of course I was using higher-quality data. This was pretty much known from the beginning, as jjosh was training larger nets well before most people even knew about NNUE. Still, as I mentioned elsewhere, I felt it would inspire others to try and of course improve on my ideas, and the proof is there. I'm genuinely glad.

How was Vondele's attempt "much weaker" when it produced nigh-identical results in H2H matches vs SFdev?
Albert Silver wrote: ↑Sun Jul 04, 2021 12:21 am
(quoted above)

Are you really acting like there was some big movement of people going "No! It's not possible! You could not possibly increase the size of the network", and then you, the bold, brave, defiant Alberto showed the world what they refused to believe? Delusional.