Official: Lc0 is the strongest engine :)

Discussion of anything and everything relating to chess playing software and machines.

Moderators: hgm, Rebel, chrisw

megamau
Posts: 37
Joined: Wed Feb 10, 2016 6:20 am
Location: Singapore

Re: Official: Lc0 is the strongest engine :)

Post by megamau »

Milos wrote: Mon Oct 15, 2018 10:32 pm
Werewolf wrote: Mon Oct 15, 2018 12:22 pm
Milos wrote: Sun Oct 14, 2018 10:55 pm
Werewolf wrote: Sun Oct 14, 2018 10:35 pm GPUs have been improving much faster than CPUs. If this continues, it'll get better and better for the GPU engines...
Of course they were improving much faster when the 9xx series was on 28nm, while Intel has already spent 4 generations on 14nm (10nm Cannon Lake is a total failure since Intel can't make a reliable and commercially viable 10nm CPU process, and TSMC can't either).
Now that the 20xx series has reached 12nm, you are going to witness the same thing that happened to Intel four generations ago. Stagnation...
The performance improvement from Pascal (16nm) to Turing (12nm) was about 15% IIRC. Probably the next shrink will yield less.

But with GPUs they seem to be freer than CPUs to try things if it's A.I.-related. So for Leela the change from Pascal to Turing is HUGE. If, for example, Nvidia produced a card which did away with CUDA cores and focused exclusively on things related to A.I., we could see at least one more decent jump up, IMO.
There is even less than those 15% between the Pascal and Turing architectures thanks to scaling. Tensor cores are not very useful and the gain from them is minimal, at least for inference. The actual speed-up beyond those 15% comes only from enabling FP16 in Turing, which was intentionally crippled in Pascal and earlier architectures.
But this is a one-trick pony. There is almost nothing more that Nvidia can bring architecturally in hardware. And as things look now, we will wait at least 3 more years for a working 10nm node.
I know people are dreamers, but in this case it is totally unsound.
Milos,
after your arrogant and *WRONG* prediction on the Turing architecture (which was only a month away, and which you never admitted was wrong), your far-future predictions are far less interesting.
megamau wrote: Fri Sep 21, 2018 2:25 pm
Milos wrote: Sun Sep 16, 2018 7:22 am
ankan wrote: Sun Sep 16, 2018 6:09 am
Milos wrote: Sun Sep 16, 2018 1:24 am Since FP16 is not enabled in the 20xx cards, the same as it wasn't enabled in the 10xx cards, the only gain is those 15% from extra CUDA cores and higher frequency. Therefore the 2080Ti will be faster than the 1080Ti by exactly those 15%. Anyone who believes in some other magical speed-up is, frankly speaking, just daydreaming.
This is definitely not true. The fp16 path of lc0 uses tensor cores on Volta and they do help 3x3 convolutions. The reason you see only about a 3x speedup at best (compared to 8x if you compare peak fp16 tensor math against regular fp32 throughput) is that the fp32 path uses the Winograd algorithm, which is 2-3x faster than the regular implicit GEMM algorithm used by the fp16 path. As you said, tensor cores just give you 4x4 matrix multiplications, and making them work with the Winograd algorithm is hard.

2080Ti should be almost as fast as a TitanV for lc0 (or ~3X faster than 1080Ti when using fp16 mode).
Well you might be one of thousands of other Indian guys writing drivers for Nvidia, but what you are writing is definitively false.
Titan V has FP16 working in CUDA, 2080Ti doesn't. ....
Since the 2080Ti doesn't have FP16 working in its CUDA cores, the 2080Ti's additional speed-up can only be about 5% thanks to Tensor cores (plus around 15% thanks to more CUDA cores). Your stories about a 3x speed-up for the 2080Ti compared to the 1080Ti are nothing but marketing for your company. You are simply biased since you have a vested interest.
So Milos, now that the cards and the benchmarks are available, is it time to admit you were wrong?
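
For anyone who would rather measure than argue: below is a minimal sketch of the kind of comparison being discussed, assuming PyTorch and a CUDA-capable card (Lc0 itself talks to cuDNN/cuBLAS directly, so this is only illustrative, and the exact numbers will differ between a 1080Ti, a 2080Ti and a Titan V). It times large FP32 and FP16 matrix multiplications, the operation the net's convolutions ultimately reduce to; on Volta/Turing the FP16 path can run on tensor cores, which is where the extra speed comes from.

Code: Select all

# Minimal sketch (assumes PyTorch and a CUDA-capable GPU; numbers are only
# illustrative). Times large matrix multiplications in FP32 and FP16.
import time
import torch

def bench(dtype, n=4096, iters=50):
    # Two random n x n matrices on the GPU in the requested precision.
    a = torch.randn(n, n, device="cuda", dtype=dtype)
    b = torch.randn(n, n, device="cuda", dtype=dtype)
    torch.cuda.synchronize()
    start = time.time()
    for _ in range(iters):
        _ = a @ b
    torch.cuda.synchronize()
    elapsed = time.time() - start
    # Each matmul is roughly 2*n^3 floating-point operations.
    return 2 * n ** 3 * iters / elapsed / 1e12

if __name__ == "__main__":
    print("FP32: %.1f TFLOPS" % bench(torch.float32))
    print("FP16: %.1f TFLOPS" % bench(torch.float16))

On a Pascal consumer card the FP16 number should be barely better than FP32 (or worse, since FP16 is crippled there); on Turing or Volta it should be several times higher, which is the whole point of the argument above.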
Astatos
Posts: 18
Joined: Thu Apr 10, 2014 5:20 pm

Re: Official: Lc0 is the strongest engine :)

Post by Astatos »

What's the problem anyway? You can have something like Tesla.
jp
Posts: 1470
Joined: Mon Apr 23, 2018 7:54 am

Re: Official: Lc0 is the strongest engine :)

Post by jp »

Astatos wrote: Tue Oct 16, 2018 10:33 pm What's the problem anyway? You can have something like Tesla.
Meaning what? The discussion is over how much they'll improve in the future.
tomitank
Posts: 276
Joined: Sat Mar 04, 2017 12:24 pm
Location: Hungary

Re: Official: Lc0 is the strongest engine :)

Post by tomitank »

"Official: Lc0 is the strongest engine :)"

I still don't trust Lc0:
http://legacy-tcec.chessdom.com/archive ... &di=4&ga=5
megamau
Posts: 37
Joined: Wed Feb 10, 2016 6:20 am
Location: Singapore

Re: Official: Lc0 is the strongest engine :)

Post by megamau »

While Lc0 is clearly not the strongest engine (that title currently belongs to Stockfish), I don't understand why we shouldn't "trust" an engine which in 7 months became a member of the "big 4".
tomitank
Posts: 276
Joined: Sat Mar 04, 2017 12:24 pm
Location: Hungary

Re: Official: Lc0 is the strongest engine :)

Post by tomitank »

megamau wrote: Sat Oct 20, 2018 4:31 pm While Lc0 is clearly not the strongest engine (that title currently belongs to Stockfish), I don't understand why we shouldn't "trust" an engine which in 7 months became a member of the "big 4".
In some positions, Leela makes huge mistakes.
I do not think that a top engine should make such mistakes.
E.g., would you use it to analyze a professional match?

I do not trust Leela now. Later, it will certainly improve, but it is not the best (now).
marsell
Posts: 106
Joined: Tue Feb 07, 2012 11:14 am

Re: Official: Lc0 is the strongest engine :)

Post by marsell »

The Leela fanbase claims Lc0 is number one every day, but the evidence is missing :roll:
Laskos
Posts: 10948
Joined: Wed Jul 26, 2006 10:21 pm
Full name: Kai Laskos

Re: Official: Lc0 is the strongest engine :)

Post by Laskos »

marsell wrote: Sun Oct 21, 2018 6:35 pm The Leela fanbase claims Lc0 is number one every day, but the evidence is missing :roll:
What are you all blabbering about? Can't you read the OP and the correction post? Under certain price-fair conditions (not entirely fair, as my CPU is a bit older than my GPU), Lc0 at blitz, and probably at longer TC, is the strongest engine in OPENINGS and MIDDLEGAMES (not too-late middlegames), especially if one corrects Lc0, say with Houdini Tactical, for crass tactical blunders, which are obvious to Houdini Tactical in a very short amount of time. That is useful information for those using a combination of engines.
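
Laskos's "combination of engines" idea is straightforward to prototype. Here is a rough sketch using python-chess; the engine paths, the time limits and the 150-centipawn blunder margin are arbitrary placeholders, not anything Laskos specified. Lc0 picks the move, and a conventional A/B engine is allowed to veto it if the resulting position scores far worse than its own best line.

Code: Select all

# Rough sketch (python-chess; engine paths and margins are placeholders):
# Lc0 chooses the move, a tactical A/B engine vetoes obvious blunders.
import chess
import chess.engine

LC0_PATH = "./lc0"          # placeholder path
CHECKER_PATH = "./checker"  # placeholder path to any tactical A/B engine

def combined_move(board, lc0, checker, blunder_margin=150):
    # Ask Lc0 for its preferred move.
    lc0_move = lc0.play(board, chess.engine.Limit(time=5.0)).move
    # Score the position after Lc0's move, from the mover's point of view.
    board.push(lc0_move)
    after = checker.analyse(board, chess.engine.Limit(time=1.0))
    score_after = after["score"].pov(not board.turn).score(mate_score=100000)
    board.pop()
    # Score the checker's own best line in the same position.
    best = checker.analyse(board, chess.engine.Limit(time=1.0))
    score_best = best["score"].pov(board.turn).score(mate_score=100000)
    # If Lc0's choice looks tactically much worse, override it.
    if score_best - score_after > blunder_margin:
        return checker.play(board, chess.engine.Limit(time=1.0)).move
    return lc0_move

if __name__ == "__main__":
    board = chess.Board()
    with chess.engine.SimpleEngine.popen_uci(LC0_PATH) as lc0, \
         chess.engine.SimpleEngine.popen_uci(CHECKER_PATH) as checker:
        print(combined_move(board, lc0, checker))

This is obviously cruder than what the GUI-based engine-combination setups do, but it shows the shape of the idea.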
marsell
Posts: 106
Joined: Tue Feb 07, 2012 11:14 am

Re: Official: Lc0 is the strongest engine :)

Post by marsell »

My answer was about the thread title ("Official: Lc0 is the strongest engine :)"), not your statement.
The game does not end after the opening or after the middlegame; it ends when it ends. The best engine doesn't make tactical mistakes like Lc0 does.
cucumber
Posts: 144
Joined: Sun Oct 14, 2018 8:21 pm
Full name: JSmith

Re: Official: Lc0 is the strongest engine :)

Post by cucumber »

Werewolf wrote: Mon Oct 15, 2018 12:22 pm
Milos wrote: Sun Oct 14, 2018 10:55 pm
Werewolf wrote: Sun Oct 14, 2018 10:35 pm GPUs have been improving much faster than CPUs. If this continues, it'll get better and better for the GPU engines...
Of course they were improving much faster when the 9xx series was on 28nm, while Intel has already spent 4 generations on 14nm (10nm Cannon Lake is a total failure since Intel can't make a reliable and commercially viable 10nm CPU process, and TSMC can't either).
Now that the 20xx series has reached 12nm, you are going to witness the same thing that happened to Intel four generations ago. Stagnation...
The performance improvement from Pascal (16nm) to Turing (12nm) was about 15% IIRC. Probably the next shrink will yield less.

But with GPUs they seem to be freer than CPUs to try things if it's A.I.-related. So for Leela the change from Pascal to Turing is HUGE. If, for example, Nvidia produced a card which did away with CUDA cores and focused exclusively on things related to A.I., we could see at least one more decent jump up, IMO.
Tensor cores, as they are, are just matmul ASICs. Matrix multiplication makes up a large part of convolutions, but it is entirely possible to get even more application-specific, should you want another large jump. Currently, matmul ASICs are limited by data movement, which puts a floor on latency, and latency is a big Leela killer. Reducing that is probably what would benefit Leela the most. Whether or not that will be enough to fix Leela in calculation-heavy endgames is open for debate.
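
The point that matrix multiplication makes up most of a convolution can be made concrete with a toy example: a 3x3 convolution over the board planes can be rewritten as a single matmul via im2col, which is exactly the shape of work a matmul ASIC like a tensor core accelerates. Below is a small NumPy sketch with made-up sizes (8 input planes on an 8x8 board; real Lc0 nets use far more filters).

Code: Select all

# Small sketch (NumPy, made-up sizes): a 3x3 "same"-padded convolution
# rewritten as one matrix multiplication via im2col.
import numpy as np

def conv3x3_as_matmul(x, w):
    """x: (C_in, H, W) input planes, w: (C_out, C_in, 3, 3) filters."""
    c_in, h, width = x.shape
    c_out = w.shape[0]
    xp = np.pad(x, ((0, 0), (1, 1), (1, 1)))
    # im2col: every 3x3 patch becomes one column of length C_in*9.
    cols = np.empty((c_in * 9, h * width))
    for i in range(h):
        for j in range(width):
            cols[:, i * width + j] = xp[:, i:i + 3, j:j + 3].ravel()
    # The convolution itself is now one (C_out, C_in*9) x (C_in*9, H*W) matmul.
    out = w.reshape(c_out, -1) @ cols
    return out.reshape(c_out, h, width)

if __name__ == "__main__":
    x = np.random.randn(8, 8, 8)        # 8 input planes on an 8x8 board
    w = np.random.randn(16, 8, 3, 3)    # 16 output filters
    print(conv3x3_as_matmul(x, w).shape)  # (16, 8, 8)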