Something goes wrong with lc0 since yesterday?


yanquis1972
Posts: 1766
Joined: Wed Jun 03, 2009 12:14 am

Re: Something goes wrong with lc0 since yesterday?

Post by yanquis1972 »

Laskos wrote: Wed Jul 11, 2018 12:23 pm
yanquis1972 wrote: Wed Jul 11, 2018 11:07 am Did you have 395 at over 3350 (CCRL 40/4 scale)? I have 482 looking scarily close to SF8 at the moment, but nowhere near 200 games.
I had ID395 at about 3300 CCRL 40/4' conditions. I am usually testing at shorter than 40/4' time control, and then extrapolate to 40/4'. What time control are you testing at? And still, 200 games each gauntlet is not much as Elo errors go, especially with low draw rate.
40/2' or equivalent, though against a single-core opponent. And it's regressed quite sharply since then; still, I have Leela at ~3370.

I assume you know this, by the way, but for anyone who doesn't: CCRL tests at a significantly faster pace; its 40/4' is equivalent to 40/4 on an Athlon... something. But Athlon should give you some idea.
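Laskos's point above about 200-game gauntlets and low draw rates is easy to quantify. A rough sketch of the standard logistic Elo model follows (my own illustration with a made-up 55% score, not anything from CCRL's or Laskos's methodology):

Code: Select all

import math

def score_to_elo(score):
    # Standard logistic Elo model: expected score -> Elo difference
    return 400 * math.log10(score / (1 - score))

def elo_error_bar(games, score, draw_rate, z=1.96):
    # Approximate 95% half-width of the Elo estimate from a match of `games`
    # games scored 1 / 0.5 / 0, given the overall score and draw rate.
    win_rate = score - draw_rate / 2
    variance = win_rate + 0.25 * draw_rate - score ** 2
    se = math.sqrt(variance / games)
    return (score_to_elo(score + z * se) - score_to_elo(score - z * se)) / 2

# Made-up 55% score over 200 games:
print(round(elo_error_bar(200, 0.55, draw_rate=0.70)))  # roughly +/- 26 Elo
print(round(elo_error_bar(200, 0.55, draw_rate=0.20)))  # roughly +/- 43 Elo; a low draw rate widens the bar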
yanquis1972
Posts: 1766
Joined: Wed Jun 03, 2009 12:14 am

Re: Something goes wrong with lc0 since yesterday?

Post by yanquis1972 »

crem wrote: Wed Jul 11, 2018 12:29 pm
Laskos wrote: Wed Jul 11, 2018 8:27 am
As for the level of the bignet 10060s nets, the improvement against AB engines seems very slow. The 10060s are about 400 Elo points weaker than the smallnet 9060s nets, at about the 2800 CCRL 40/4' Elo level. And the improvement against AB engines seems to be only about 200 Elo points compared to the early bignet ID10017. At this pace, I am very worried about the potential of the current bignet run.
Unlike previous training attempts, where the learning rate was reduced frequently (every 20 nets or so) and slowly (by dividing by 3), test10 tries to replicate what DeepMind is believed to have done (change the LR only 2 times in total, each time dividing by 10). With that in mind, test10 has not reduced the LR at all yet.

The first LR reduction will happen around network id10098 (though testing will probably restart or be moved to the main server before that). After the LR change, progress should be fast again.

In general, squeezing everything out of one LR before switching to the next is known to improve the final quality of the next stage (at the cost of training speed).
Just because I'm curious: is that known from previous test runs, or from someone else's work in the field?

Is the final idea (I assume it is, since you say this run may not be reset) to start at 256? One thing I didn't get about the test runs was the lack of experimentation with the appropriate time to promote to a larger net... was the idea that the 64x6 learning parameters could be carried over to the big nets?
nabildanial
Posts: 126
Joined: Thu Jun 05, 2014 5:29 am
Location: Malaysia

Re: Something goes wrong with lc0 since yesterday?

Post by nabildanial »

yanquis1972 wrote: Wed Jul 11, 2018 4:41 pm
crem wrote: Wed Jul 11, 2018 12:29 pm
Laskos wrote: Wed Jul 11, 2018 8:27 am
As for the level of the bignet 10060s nets, the improvement against AB engines seems very slow. The 10060s are about 400 Elo points weaker than the smallnet 9060s nets, at about the 2800 CCRL 40/4' Elo level. And the improvement against AB engines seems to be only about 200 Elo points compared to the early bignet ID10017. At this pace, I am very worried about the potential of the current bignet run.
Unlike previous training attempts, where the learning rate was reduced frequently (every 20 nets or so) and slowly (by dividing by 3), test10 tries to replicate what DeepMind is believed to have done (change the LR only 2 times in total, each time dividing by 10). With that in mind, test10 has not reduced the LR at all yet.

The first LR reduction will happen around network id10098 (though testing will probably restart or be moved to the main server before that). After the LR change, progress should be fast again.

In general, squeezing everything out of one LR before switching to the next is known to improve the final quality of the next stage (at the cost of training speed).
Just because I'm curious: is that known from previous test runs, or from someone else's work in the field?

Is the final idea (I assume it is, since you say this run may not be reset) to start at 256? One thing I didn't get about the test runs was the lack of experimentation with the appropriate time to promote to a larger net... was the idea that the 64x6 learning parameters could be carried over to the big nets?
It is logical why they have done it this way, though. If they lower the LR very late, the only cost is slow progress for a while. But if they lower the LR too early, they risk trapping the net in a local optimum, which can be detrimental to long-term growth.
yanquis1972
Posts: 1766
Joined: Wed Jun 03, 2009 12:14 am

Re: Something goes wrong with lc0 since yesterday?

Post by yanquis1972 »

I'm not questioning his logic, I'm genuinely curious where it came from. A frustrating aspect of the DeepMind paper is the hyper-selective amount of information provided: "here are some match results from 12 start positions, but we're not giving you any info about them except win/draw/loss with each color & opponent ID"; "here are our learning rates at various points, but we're not telling you how we decided when to lower them".

[Tangent, but something I haven't seen anyone mention: in addition to the extreme draw rate, why was AZ +145 Elo with the white pieces (40% win rate & 59% draws), but only +17 Elo with black (8% win rate; 89% draws)?]
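For what it's worth, those Elo figures follow directly from the quoted percentages via the usual logistic conversion. A quick sketch (the percentages are the ones quoted above; the helper name is just mine):

Code: Select all

import math

def elo_from_wdl(win, draw, loss):
    # Expected score (1 / 0.5 / 0 scoring) -> Elo difference, standard logistic model
    assert abs(win + draw + loss - 1) < 1e-9
    score = win + 0.5 * draw
    return 400 * math.log10(score / (1 - score))

print(round(elo_from_wdl(0.40, 0.59, 0.01)))  # AZ with white: ~ +143, close to the quoted +145
print(round(elo_from_wdl(0.08, 0.89, 0.03)))  # AZ with black: ~ +17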
Laskos
Posts: 10948
Joined: Wed Jul 26, 2006 10:21 pm
Full name: Kai Laskos

Re: Something goes wrong with lc0 since yesterday?

Post by Laskos »

yanquis1972 wrote: Wed Jul 11, 2018 4:37 pm
Laskos wrote: Wed Jul 11, 2018 12:23 pm
yanquis1972 wrote: Wed Jul 11, 2018 11:07 am Did you have 395 at over 3350 (CCRL 40/4 scale)? I have 482 looking scarily close to SF8 at the moment, but nowhere near 200 games.
I had ID395 at about 3300 CCRL 40/4' conditions. I am usually testing at shorter than 40/4' time control, and then extrapolate to 40/4'. What time control are you testing at? And still, 200 games each gauntlet is not much as Elo errors go, especially with low draw rate.
40/2' or equivalent, though against a single-core opponent. And it's regressed quite sharply since then; still, I have Leela at ~3370.

I assume you know this, by the way, but for anyone who doesn't: CCRL tests at a significantly faster pace; its 40/4' is equivalent to 40/4 on an Athlon... something. But Athlon should give you some idea.
40/2' against a single core is fine, but the most strictly correct simulation of CCRL conditions with lc0 on GPU would probably be to leave lc0 at the full 40/4', since lc0's speed on GPU depends only weakly on the CPU (2 threads), and to use 40/2' on a reasonable i7 core for the AB engines. Anyway, we get a picture; I use the same 40/2' benchmark for both lc0 and the AB engines as you do. My GPU is an Nvidia 1060 6GB.

If you are using the same GPU, I would be curious about your 3370 CCRL 40/4' result for a recent net; ID395 was probably a bit weaker.
yanquis1972
Posts: 1766
Joined: Wed Jun 03, 2009 12:14 am

Re: Something goes wrong with lc0 since yesterday?

Post by yanquis1972 »

I use a 1080, but I don't believe there's as much difference as one might think, especially per dollar. I don't know what the benchmark says, though. I've never tested 390/395; I do wonder if it was as brutalized in the Sicilian as the current nets are. Possibly more so, if it's tactically worse.

The 3370 mark came from three 100-game matches against SF6, SF7, and SF8, using a progressive combination of nets (late 460s to early 470s). I haven't finished a full match since; performance ratings were ~3360-3380 in those three matches. I use the 50-position Silver suite (I might move to the large one, but this one is attractive for tracking evolution), which is arguably not flattering to Leela. I intend to do at least a short run at a longer TC with the 12 DeepMind openings, just to get an idea of how Leela does there relative not only to AZ but also to a traditional set of openings; bizarrely, I've never seen anyone do this (???).

FWIW, at the moment single-core SF7 is a near-perfect foil; Leela is generally stronger but often unable to prove the win against it (the lingering issue with seriously misevaluated endgames notwithstanding).
crem
Posts: 177
Joined: Wed May 23, 2018 9:29 pm

Re: Something goes wrong with lc0 since yesterday?

Post by crem »

yanquis1972 wrote: Wed Jul 11, 2018 4:41 pm
crem wrote: Wed Jul 11, 2018 12:29 pm
Unlike previous training attempts, where the learning rate was reduced frequently (every 20 nets or so) and slowly (by dividing by 3), test10 tries to replicate what DeepMind is believed to have done (change the LR only 2 times in total, each time dividing by 10). With that in mind, test10 has not reduced the LR at all yet.

The first LR reduction will happen around network id10098 (though testing will probably restart or be moved to the main server before that). After the LR change, progress should be fast again.

In general, squeezing everything out of one LR before switching to the next is known to improve the final quality of the next stage (at the cost of training speed).
Just because I'm curious: is that known from previous test runs, or from someone else's work in the field?

Is the final idea (I assume it is, since you say this run may not be reset) to start at 256? One thing I didn't get about the test runs was the lack of experimentation with the appropriate time to promote to a larger net... was the idea that the 64x6 learning parameters could be carried over to the big nets?
That's fairly common NN training knowledge; e.g. there are arXiv papers about it.
As we never finished any of the test runs, we cannot really compare final results. Actually, intuition kind of suggests the opposite ("Why change stepwise if a constant, smooth lowering of the LR seems more natural?", "Every time learning slows down, we reduce the LR and it helps, so we should keep doing that.").
But it seems to be an area where intuition is wrong.

As for the final idea, I guess the current plan is to start generating games at 256 blocks, but to train networks of other sizes (64, 128 and 196) in parallel from the same games.
Most likely there will be a reset, as many things will change (cpuct in training games will be 1.7 rather than 1.2, the new network file format will have more metadata, and there are some changes to the training process which will make it more similar to AlphaZero's training).
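To make the contrast between the two schedules concrete, here is a toy sketch. It is my own illustration, not the project's training code: the base LR, the 20-net spacing and the second drop point are placeholders; only the first drop near id10076 is taken from this thread.

Code: Select all

def lr_earlier_runs(net_index, base_lr=0.1):
    # Earlier test runs: reduce the LR often (every ~20 nets) and gently (divide by 3)
    return base_lr / (3 ** (net_index // 20))

def lr_test10(net_index, base_lr=0.1, drop_points=(76, 200)):
    # test10 / AlphaZero-style: only two reductions in the whole run, each by 10x.
    # 76 matches the first drop mentioned in this thread (id10076); the second point is a placeholder.
    drops = sum(1 for p in drop_points if net_index >= p)
    return base_lr * 0.1 ** drops

for i in (0, 20, 40, 76, 200):
    print(i, lr_earlier_runs(i), lr_test10(i))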
crem
Posts: 177
Joined: Wed May 23, 2018 9:29 pm

Re: Something goes wrong with lc0 since yesterday?

Post by crem »

crem wrote: Wed Jul 11, 2018 12:29 pm The first LR reduction will happen around network id10098 (though testing will probably restart or be moved to the main server before that).
It turns out I miscalculated, and it will happen at id10076 (i.e. most likely tomorrow).
Henk
Posts: 7221
Joined: Mon May 27, 2013 10:31 am

Re: Something goes wrong with lc0 since yesterday?

Post by Henk »

One month, 50 (real) Elo, and 4 million training games later. One training game would keep my PC busy for a whole day, so 4 million games would take about 11,000 years. If your PC ran 100 times faster, you would only need 110 years to get this fantastic result of 50 Elo points.
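Spelling out that arithmetic (the one-game-per-day figure is Henk's assumption, not a measurement):

Code: Select all

games = 4_000_000        # training games behind the ~50 Elo gain
days_per_game = 1        # assumed: one self-play game per day on a single PC
years = games * days_per_game / 365
print(round(years))          # ~10959, i.e. roughly 11000 years
print(round(years / 100))    # ~110 years on a machine 100x faster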
CMCanavessi
Posts: 1142
Joined: Thu Dec 28, 2017 4:06 pm
Location: Argentina

Re: Something goes wrong with lc0 since yesterday?

Post by CMCanavessi »

Henk wrote: Fri Jul 13, 2018 3:25 pm One month, 50 (real) Elo, and 4 million training games later. One training game would keep my PC busy for a whole day, so 4 million games would take about 11,000 years. If your PC ran 100 times faster, you would only need 110 years to get this fantastic result of 50 Elo points.
Still better than fishtest :mrgreen:
Follow my tournament and some Leela gauntlets live at http://twitch.tv/ccls