Somebody under the user name dkappe on this forum, from Rio de Janeiro, decided to experiment with Komodo and NNUE, and the result is not bad at all: very promising with the NNUE evaluation. It just needs at least 1000 more games to train it properly.
dkappe wrote: ↑Wed Jul 15, 2020 10:08 pm
Just for fun I decided to train a net with Komodo 14 evals at depth 8. Only 4 million positions. I was expecting something pretty weak, but it's not half bad. Here it is with 30 threads vs SF10 (also with 30 threads). So far +3=9-0.
Chessqueen wrote: ↑Thu Jul 16, 2020 12:35 am
Somebody under the user name dkappe on this forum, from Rio de Janeiro, decided to experiment with Komodo and NNUE, and the result is not bad at all: very promising with the NNUE evaluation. It just needs at least 1000 more games to train it properly.
A number of people verified for me that I wasn’t imagining things. This was run by a friend of mine. I only wish I lived in Rio.
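For reference, here is a quick way to turn a match score like the +3=9-0 quoted above into a rough logistic Elo estimate. This is a sketch only; 12 games carry a very large error bar, so the number should not be taken seriously.

```python
import math

def elo_diff(wins, draws, losses):
    """Rough logistic Elo estimate from a match score."""
    score = (wins + 0.5 * draws) / (wins + draws + losses)
    return 400 * math.log10(score / (1 - score))

# +3 =9 -0 over 12 games is a 62.5% score, roughly +89 Elo.
print(round(elo_diff(3, 9, 0)))
```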
Fat Titz by Stockfish, the engine with the bodaciously big net. Remember: size matters. If you want to learn more about this engine just google for "Fat Titz".
Ask your friend to join this forum and provide more data about his experiment with Komodo-NNUE. He can write in Portuguese and that is fine; we can translate it using https://translate.google.com/
You misunderstand. I trained the net. He ran it on his machine to make sure I wasn’t imagining things.
Here is another game, this time running on one of my home machines on only 1 thread.
It's running about even with the latest sfdev, but the score, nps, and pv are very different, so it's not accidentally using the sf eval.
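One way to make that kind of check concrete is to query both engines on the same position and compare score, speed, and principal variation. A minimal sketch assuming python-chess; both binary paths are hypothetical placeholders, not actual releases.

```python
import chess
import chess.engine

# Compare eval, nps, and pv from two UCI engines on the same position.
board = chess.Board()  # starting position; any FEN works

for path in ("./stockfish-dev", "./lizardfish"):
    with chess.engine.SimpleEngine.popen_uci(path) as engine:
        info = engine.analyse(board, chess.engine.Limit(depth=20))
        pv = " ".join(move.uci() for move in info["pv"][:5])
        print(path, info["score"].white(), info.get("nps"), pv)
```

If the two engines report clearly different scores, node rates, and principal variations on the same positions, they are unlikely to share an evaluation.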
I thought that you used Komodo 14 with the NNUE net. If that is the case and you did NOT use the Stockfish evaluation, WHY do you call it LizardFish and NOT Lizard-NNUE?
I generated training data with Komodo 14 (and a modest amount of Python). I used a recent NNUE binary to train a net on that data, and I am running that net with that binary (as are my helpful friends). So this is an approximation of the Komodo 14 eval at depth 8 running on an sf-nnue binary. The name? Nothing serious. It's a marriage of Komodo and Stockfish — LizardFish.
I'll train it up some more, but I have mixed feelings about distributing a stronger version. It seems almost like a theft of Komodo's intellectual property. The same sort of cloning (and I think this is much more "cloning" than the usual name-calling on this forum) could be done with any UCI engine and a modest amount of CPU.
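To make the pipeline described here concrete, below is a minimal sketch of the data-generation step: score positions from existing games with a UCI engine at a fixed depth and dump (FEN, eval) pairs for later net training. It assumes python-chess and a UCI Komodo 14 binary; the engine path, PGN source, and CSV output format are illustrative, not dkappe's actual tooling.

```python
import chess
import chess.engine
import chess.pgn

ENGINE_PATH = "./komodo-14"  # hypothetical path to a UCI Komodo binary
DEPTH = 8                    # fixed-depth evals, as described above

def label_positions(pgn_path, out_path, limit=4_000_000):
    """Write 'FEN,centipawns' lines for positions taken from a PGN file."""
    engine = chess.engine.SimpleEngine.popen_uci(ENGINE_PATH)
    count = 0
    with open(pgn_path) as pgn, open(out_path, "w") as out:
        while count < limit:
            game = chess.pgn.read_game(pgn)
            if game is None:
                break
            board = game.board()
            for move in game.mainline_moves():
                board.push(move)
                info = engine.analyse(board, chess.engine.Limit(depth=DEPTH))
                # Score from White's point of view, in centipawns.
                cp = info["score"].white().score(mate_score=32000)
                out.write(f"{board.fen()},{cp}\n")
                count += 1
                if count >= limit:
                    break
    engine.quit()

label_positions("games.pgn", "komodo_d8_labels.csv")
```

An actual NNUE training run would then convert these labels into the trainer's binary input format, which is beyond this sketch.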
It really is not, since while the NN is trained from games played by Komodo, it is still an NN. If studying Kasparov's games and trying to emulate him makes me his clone then..... my dreams have all come true!!
I am the mysterious tester (this is all dkappe's work), and I ran it for 35 games before calling it quits. It was only 35 games (not 1000, sorry), with 30 threads each, for roughly 30+ million nps for SF10 and 14-15 million nps for Lizard. I would have played a later version of SF, but I was told not to be too optimistic, so SF10 was chosen to try to keep the match competitive. A case of underestimating one's own creation if ever there was one.
I consider Albert a good friend, but I must disagree a bit. Training a NN to match the eval and search output of a single program seems to me to be a way to clone that program. We might not understand exactly how the NN works compared with, say, an assembly dump of a program's eval and search functions, but it is a direct attempt to duplicate the program. Training on many sources (programs, human games, self-play) is not trying to specifically duplicate another program's search and eval, so I think that would be allowed. Training for personal use is fine. I am just saying that training against a program (especially a commercial engine) and then releasing the NN without permission is wrong. I assume testing groups and tournaments would agree, but I would like to hear more opinions.
This is a new world, but the old cloning rules would still apply.
Mark
Well, as long as Larry Kaufman agrees to take advantage of the NNUE net, and the advantages of using an NNUE net benefit Komodo to the point that it becomes stronger than the current Komodo, that would NOT be any different from using StockfiNN to advance Stockfish's search with a more efficient evaluation.
I agree. It is fine to use your own program to make a better version of itself, just as in tuning a program.