lczero rating

Discussion of anything and everything relating to chess playing software and machines.

Moderators: hgm, Dann Corbit, Harvey Williamson

George Tsavdaris
Posts: 1627
Joined: Thu Mar 09, 2006 11:35 am

Re: lczero rating

Post by George Tsavdaris » Mon Apr 02, 2018 9:34 pm

stavros wrote:
George Tsavdaris wrote:
stavros wrote:Correct me if I am wrong, but even Google's AlphaZero progress saturated after 700,000 steps.
I can't imagine LCZero matching the latest top engines.
The latest SF dev + Cerebellum book is already close to AlphaZero.
What are "steps"?
From:

"We trained a separate instance of AlphaZero for each game. Training proceeded for 700,000 steps (mini-batches of size 4,096)."
So how do these "steps"/"mini-batches" compare to games?
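For scale, a quick back-of-envelope (my own arithmetic; only the 700,000 steps and batch size 4,096 come from the quoted paper): the number of position samples drawn during training is steps times mini-batch size.

```python
# Back-of-envelope: position samples consumed by AlphaZero-style training.
# The two figures below are from the quoted paper; the arithmetic is mine.
steps = 700_000
batch_size = 4_096
samples = steps * batch_size
print(f"{samples:,}")  # 2,867,200,000 position samples
```

Note that steps don't map one-to-one to games: each self-play game contributes many positions, and positions are sampled with replacement from a replay buffer, so the same position can appear in several mini-batches.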
After his son's birth they've asked him:
"Is it a boy or girl?"
YES! He replied.....

jkiliani
Posts: 143
Joined: Wed Jan 17, 2018 12:26 pm

Re: lczero rating

Post by jkiliani » Tue Apr 03, 2018 6:16 am

George Tsavdaris wrote:
jkiliani wrote: It will not be necessary to start from zero once the network stalls. Instead, a larger neural net can simply be trained from the existing self-play games; afterward, the net can continue to improve.
What is the ratio of the time spent generating self-play games to the time spent training from those games? If it is 10:1, for example, then creating a bigger NN and training it again does no harm, since you already have the self-play games.

BUT since these self-play games were played by a smaller (and weaker) NN, doesn't training a bigger NN from them create a suboptimal procedure?
The ratio of computation power going into self-play versus training is much larger than 10:1; more like 50:1, I think.
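Taking that 50:1 estimate at face value (it is an estimate, not a measured figure), retraining a new net from the already-stored games re-spends only the training share of the compute:

```python
# If self-play vs. training compute is ~50:1 (the estimate above),
# retraining from stored games costs only the training share of the total.
selfplay, training = 50, 1
retrain_fraction = training / (selfplay + training)
print(f"{retrain_fraction:.1%}")  # ~2.0% of the original total compute
```

So even if the bigger net had to be retrained from scratch several times, the self-play games remain the dominant (and reusable) investment.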

Bootstrapping a larger neural net from a smaller one has been tested with Leela Zero, and has been very successful there. So there's little cause for concern that this would negatively impact the network in any way.
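As an illustration only (a toy NumPy sketch, not the Leela Zero or LCZero code; all names and sizes here are invented): the idea is that data generated while a small model was in charge can be reused to fit a larger model from scratch on the same stored examples.

```python
import numpy as np

# Toy sketch of "bootstrapping" a larger net from data produced under a
# smaller one. A fixed (X, y) dataset stands in for the stored self-play
# games; hidden-layer width stands in for network size.
rng = np.random.default_rng(0)
X = rng.normal(size=(512, 8))            # stand-in for encoded positions
y = np.tanh(X @ rng.normal(size=8))      # stand-in for game outcomes

def train_mlp(hidden, epochs=200, lr=0.1):
    """One-hidden-layer MLP fit by full-batch gradient descent on MSE."""
    W1 = rng.normal(scale=0.1, size=(8, hidden))
    W2 = rng.normal(scale=0.1, size=(hidden, 1))
    for _ in range(epochs):
        h = np.tanh(X @ W1)              # hidden activations
        err = h @ W2 - y[:, None]        # prediction error
        gW2 = h.T @ err / len(X)         # backprop through output layer
        gh = err @ W2.T * (1 - h**2)     # backprop through tanh
        gW1 = X.T @ gh / len(X)
        W1 -= lr * gW1
        W2 -= lr * gW2
    return float(np.mean((np.tanh(X @ W1) @ W2 - y[:, None]) ** 2))

small_loss = train_mlp(hidden=4)         # "small net" era
big_loss = train_mlp(hidden=32)          # larger net, same stored data
print(small_loss, big_loss)
```

The same stored dataset trains both models; nothing about the data restricts the capacity of the model fit to it, which is why reusing the small net's games for a bigger net is not wasted work.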
