LCZero: Progress and Scaling. Relation to CCRL Elo

Discussion of anything and everything relating to chess playing software and machines.

Moderator: Ras

kasinp
Posts: 259
Joined: Sat Dec 02, 2006 10:47 pm
Location: Toronto
Full name: Peter Kasinski

Re: LCZero: Progress and Scaling. Relation to CCRL Elo

Post by kasinp »

Just finished a match between Deep Sjeng WC2008 x64 and LCZero v0.7. Conditions:

128 games from 64 opening positions, covering a broad range of openings (A00-E99)
Blitz 5+3, ponder OFF
5-man TB position adjudication
otherwise no draw or resignation adjudication by the GUI

Deep Sjeng WC2008 x64 used 8 CPUs on a Xeon E5-2690 3.00 GHz, 1 GB hash
Leela 0.7 ran on a tuned 1080 Ti, network 208
Sjeng used 5-man Nalimov TBs on a fast SSD

Deep Sjeng WC2008 - LCZero v0.7 70:58 (+50, =40, -38), a 33 Elo difference.

The Deep Sjeng version used is the 2008 Blitz World Champion (finishing ahead of cluster Rybka). A 4-CPU version of this program is rated 2941 by CCRL at 40/4.

It is probably fair to estimate this performance of Leela at around 2950 CCRL.
I ran the full tuner on it; during the match it used 5 GPU threads and typically also 3 CPUs on the Xeon.
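For reference, the 33 Elo figure above follows from the standard logistic Elo model, which maps a match scoring fraction to a rating difference; a minimal Python sketch using the match totals above:

```python
import math

def elo_diff(wins, draws, losses):
    """Elo difference implied by a match score under the logistic model."""
    games = wins + draws + losses
    score = (wins + 0.5 * draws) / games  # scoring fraction, 0..1
    return 400 * math.log10(score / (1 - score))

# Deep Sjeng - LCZero: +50 =40 -38 over 128 games
print(round(elo_diff(50, 40, 38)))  # -> 33
```

The error bars on 128 games are wide (roughly +/-50 Elo at two sigma), so a single match like this only gives a ballpark figure.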

PK
Werewolf
Posts: 2006
Joined: Thu Sep 18, 2008 10:24 pm

Re: LCZero: Progress and Scaling. Relation to CCRL Elo

Post by Werewolf »

nabildanial wrote:
Werewolf wrote:
Milos wrote:
The 980M is just a tad slower than the regular 980, which is a much more powerful card than the 1060, especially for ML.
Are you basing that on their GFLOPS processing power?

The 1060 is roughly 3800 GFLOPS and the 980 is 4600 GFLOPS.
I have both a 970 and a 1060, and their performance is the same for gaming, video rendering, and for running Leela. The 980 is around 20-30% faster than a 970, and it should hold the same edge over the 1060 too.
That's interesting, but what I was asking Milos is what makes the 980 faster.

Is it processing power measured in GFLOPS? Is that the yardstick we should use to estimate LCZero performance before buying a card, when we can't yet test nps ourselves?
Nay Lin Tun
Posts: 710
Joined: Mon Jan 16, 2012 6:34 am

Re: LCZero: Progress and Scaling. Relation to CCRL Elo

Post by Nay Lin Tun »

Leela milestones and future goals
(It is time to celebrate and hype up Leela)


Leela has hit a lot of milestones in just two months, far more than most people expected, and its future looks brighter than that of the five-year-old distributed Stockfish testing project (the initial Stockfish 1.3 is now 10 years old).

Goals!

1. To reach the level of A0 (an estimated 3300 rating on a GTX 1060)

2. To reach the level of the latest Stockfish (an estimated 3550+ rating on an 8-core desktop) on a GTX 1060

3. To surpass the rating of Stockfish on a GTX 1060 (reaching 3600+)



Achieved milestones

1. Finished 10 million games

2. Reached 2800+, super-GM level, on a GTX 1060

3. The GoFundMe has collected €5000+ in donations for the project



Future Leela

1. The network will soon be expanded to 15x192 and may be further expanded to 20x256

(exactly as A0)

2. A cuDNN or TensorFlow implementation will increase the speed/Elo of Leela on NVIDIA cards (50%? 100%? 200%?), though that leaves AMD cards behind.

3. Syzygy tablebases

4. Auto-resign will speed up training by up to 30%.

Good luck Leela.
Jhoravi
Posts: 291
Joined: Wed May 08, 2013 6:49 am

Re: LCZero: Progress and Scaling. Relation to CCRL Elo

Post by Jhoravi »

Nay Lin Tun wrote: 4. Auto-resign will speed up training by up to 30%.

Good luck Leela.
How about disabling the time-wasting threefold-repetition checks during thinking?
Leo
Posts: 1104
Joined: Fri Sep 16, 2016 6:55 pm
Location: USA/Minnesota
Full name: Leo Anger

Re: LCZero: Progress and Scaling. Relation to CCRL Elo

Post by Leo »

Albert Silver wrote:
Leo wrote:It looks like LCzero is at 2500 Elo.
On your rig?
Stefan Pohl ran a match vs a 2500 Elo engine.
Advanced Micro Devices fan.
Leo
Posts: 1104
Joined: Fri Sep 16, 2016 6:55 pm
Location: USA/Minnesota
Full name: Leo Anger

Re: LCZero: Progress and Scaling. Relation to CCRL Elo

Post by Leo »

It's just growing pains. It probably hasn't trained on those situations yet. Who is the genius who wrote the learning algorithm, I wonder? (I am not being sarcastic.)
Advanced Micro Devices fan.
Albert Silver
Posts: 3026
Joined: Wed Mar 08, 2006 9:57 pm
Location: Rio de Janeiro, Brazil

Re: LCZero: Progress and Scaling. Relation to CCRL Elo

Post by Albert Silver »

Leo wrote: It's just growing pains. It probably hasn't trained on those situations yet. Who is the genius who wrote the learning algorithm, I wonder? (I am not being sarcastic.)
The DeepMind team. There is no one name. It is in their paper.
"Tactics are the bricks and sticks that make up a game, but positional play is the architectural blueprint."
Albert Silver
Posts: 3026
Joined: Wed Mar 08, 2006 9:57 pm
Location: Rio de Janeiro, Brazil

Re: LCZero: Progress and Scaling. Relation to CCRL Elo

Post by Albert Silver »

Leo wrote:
Albert Silver wrote:
Leo wrote:It looks like LCzero is at 2500 Elo.
On your rig?
Stefan Pohl ran a match vs a 2500 Elo engine.
Well, on my desktop, an i5-2500K with a GTX 1060 (6GB), Leela NN202 plays at around 2900-2950 CCRL. I have not tested the new larger NNs yet.
"Tactics are the bricks and sticks that make up a game, but positional play is the architectural blueprint."
Milos
Posts: 4190
Joined: Wed Nov 25, 2009 1:47 am

Re: LCZero: Progress and Scaling. Relation to CCRL Elo

Post by Milos »

Werewolf wrote:
nabildanial wrote:
Werewolf wrote:
Milos wrote:
The 980M is just a tad slower than the regular 980, which is a much more powerful card than the 1060, especially for ML.
Are you basing that on their GFLOPS processing power?

The 1060 is roughly 3800 GFLOPS and the 980 is 4600 GFLOPS.
I have both a 970 and a 1060, and their performance is the same for gaming, video rendering, and for running Leela. The 980 is around 20-30% faster than a 970, and it should hold the same edge over the 1060 too.
That's interesting, but what I was asking Milos is what makes the 980 faster.

Is it processing power measured in GFLOPS? Is that the yardstick we should use to estimate LCZero performance before buying a card, when we can't yet test nps ourselves?
GFLOPS or TFLOPS are a good indication, but not a fully reliable benchmark. Performance also depends on memory size, memory bandwidth, etc. But for LC0 at the moment, memory is not an issue, since the net is still very small compared to total GPU memory.
User avatar
CMCanavessi
Posts: 1142
Joined: Thu Dec 28, 2017 4:06 pm
Location: Argentina

Re: LCZero: Progress and Scaling. Relation to CCRL Elo

Post by CMCanavessi »

Werewolf wrote:
nabildanial wrote:
Werewolf wrote:
Milos wrote:
The 980M is just a tad slower than the regular 980, which is a much more powerful card than the 1060, especially for ML.
Are you basing that on their GFLOPS processing power?

The 1060 is roughly 3800 GFLOPS and the 980 is 4600 GFLOPS.
I have both a 970 and a 1060, and their performance is the same for gaming, video rendering, and for running Leela. The 980 is around 20-30% faster than a 970, and it should hold the same edge over the 1060 too.
That's interesting, but what I was asking Milos is what makes the 980 faster.

Is it processing power measured in GFLOPS? Is that the yardstick we should use to estimate LCZero performance before buying a card, when we can't yet test nps ourselves?
What really matters for Leela (and deep learning in general) is the number of CUDA cores. The 970 has 1664, and the 980 has 2048. There's the performance difference.


Here's a table with all NVIDIA GPU specs:
https://www.studio1productions.com/Arti ... -Chart.htm
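The CUDA-core comparison above can be made slightly more precise: peak FP32 throughput is roughly 2 FLOPs (one fused multiply-add) per CUDA core per clock cycle, so clock speed matters alongside core count. A small sketch, using base-clock and core figures from public reference-card specs (assumed values, not measurements), which roughly reproduces the ~3800/~4600 GFLOPS numbers quoted earlier in the thread:

```python
def peak_fp32_gflops(cuda_cores, clock_mhz):
    """Peak single-precision throughput: 2 FLOPs (one FMA) per core per cycle."""
    return 2 * cuda_cores * clock_mhz / 1000

# Reference-card CUDA core counts and base clocks (MHz)
cards = {"GTX 970": (1664, 1050), "GTX 980": (2048, 1126), "GTX 1060": (1280, 1506)}
for name, (cores, mhz) in cards.items():
    print(f"{name}: {peak_fp32_gflops(cores, mhz):.0f} GFLOPS")
# GTX 970: 3494, GTX 980: 4612, GTX 1060: 3855
```

Note how the 1060's higher clock largely offsets its lower core count, which is why core counts alone can mislead when comparing across GPU generations.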
Follow my tournament and some Leela gauntlets live at http://twitch.tv/ccls