Milos wrote: ↑Wed Jun 06, 2018 11:34 pm
> Running 2x 1060 even 3GB version is at least 20% stronger than running 1080ti and 2x1060 3GB cost at least 30% less than single 1080Ti.

That's a possibility I didn't realise lc0 could exploit - is it hard to get it to run on 2 GPUs? Presumably if money was no object one could run 2 x 1080ti and get something faster than a Titan V (without its Tensor cores)?
Latest lc0-win-20180604-cuda92-cudnn714-experimental EXCELLENT
Moderators: hgm, Rebel, chrisw
- Posts: 1796
- Joined: Thu Sep 18, 2008 10:24 pm
Re: Latest lc0-win-20180604-cuda92-cudnn714-experimental EXCELLENT
- Posts: 1339
- Joined: Fri Nov 02, 2012 9:43 am
- Location: New Delhi, India
Re: Latest lc0-win-20180604-cuda92-cudnn714-experimental EXCELLENT
Almost certainly. If not now, within 1-2 weeks.
One problem is that even at 1 minute/move, it very rarely uses the full minute, making its move within 25-30s on average.
i7 5960X @ 4.1 Ghz, 64 GB G.Skill RipJaws RAM, Twin Asus ROG Strix OC 11 GB Geforce 2080 Tis
- Posts: 1339
- Joined: Fri Nov 02, 2012 9:43 am
- Location: New Delhi, India
Re: Latest lc0-win-20180604-cuda92-cudnn714-experimental EXCELLENT
I'd simply plonk in another 1080 Ti when Albert enables multi-GPU support and still be ahead of you. Didn't that occur to a smart guy like you, Milos?
i7 5960X @ 4.1 Ghz, 64 GB G.Skill RipJaws RAM, Twin Asus ROG Strix OC 11 GB Geforce 2080 Tis
- Posts: 4190
- Joined: Wed Nov 25, 2009 1:47 am
Re: Latest lc0-win-20180604-cuda92-cudnn714-experimental EXCELLENT
Werewolf wrote: ↑Thu Jun 07, 2018 9:26 am
> That's a possibility I didn't realise lc0 could exploit - is it hard to get it to run on 2 GPUs? Presumably if money was no object one could run 2 x 1080ti and get something faster than a Titan V (without its Tensor cores)?

It's fairly trivial. You just need the multiplexing backend, i.e. something like:
--backend=multiplexing "--backend-opts=(backend=cudnn,gpu=0,threads=2),(backend=cudnn,gpu=1,threads=2)"
Btw. Titan V is already using Tensor cores, but the thing is that Tensor cores are not remotely as efficient as NVIDIA advertises them. In some workloads (mainly training) one can get 2.5x 1080Ti performance; in others, like Lc0-cudnn inference, it is only 1.3x.
- Posts: 1535
- Joined: Sun Oct 25, 2009 2:30 am
Re: Latest lc0-win-20180604-cuda92-cudnn714-experimental EXCELLENT
Werewolf wrote: ↑Thu Jun 07, 2018 9:24 am
> Interesting! That contradicts this report
> https://www.pcgamer.com/nvidia-ceo-says ... orce-gpus/
> I actually watched a video where he says "it'll be *gestures with his hand in a dismissive way* a long time away"

I guess they have leftovers to get rid of, from the crypto-bubble.
- Posts: 4190
- Joined: Wed Nov 25, 2009 1:47 am
Re: Latest lc0-win-20180604-cuda92-cudnn714-experimental EXCELLENT
Your trollish behaviour is really stepping over the line. So you are going to my ignore list.
This is not a pissing contest about who has a bigger one. It seems Indian ppl are really full of complexes related to material stuff. I use a quite old GTX770 card and it is perfectly fine for my needs. If I need to (for work) I have access to a cluster with 10 TitanVs and 50 1080Tis. And if I wanted to I could buy 20 1080Tis just from a monthly salary since [moderation: removed remark insulting for Indians in general]
- Posts: 1796
- Joined: Thu Sep 18, 2008 10:24 pm
Re: Latest lc0-win-20180604-cuda92-cudnn714-experimental EXCELLENT
Milos wrote: ↑Thu Jun 07, 2018 11:18 am
> It's fairly trivial. You just need multiplexing backend, i.e. something like:
> --backend=multiplexing "--backend-opts=(backend=cudnn,gpu=0,threads=2),(backend=cudnn,gpu=1,threads=2)"
> Btw. Titan V is already using Tensor cores, but the thing is that Tensor cores are not even remotely efficient as NVIDIA advertises them. In some workloads (mainly training) one can get 2.5x 1080Ti performance, in others like Lc0-cudnn inference it is only 1.3x.

Brilliant, thanks.
- Posts: 3019
- Joined: Wed Mar 08, 2006 9:57 pm
- Location: Rio de Janeiro, Brazil
Re: Latest lc0-win-20180604-cuda92-cudnn714-experimental EXCELLENT
Ozymandias wrote: ↑Thu Jun 07, 2018 11:25 am
> Werewolf wrote: ↑Thu Jun 07, 2018 9:24 am
> > Interesting! That contradicts this report
> > https://www.pcgamer.com/nvidia-ceo-says ... orce-gpus/
> > I actually watched a video where he says "it'll be *gestures with his hand in a dismissive way* a long time away"
> I guess they have leftovers to get rid of, from the crypto-bubble.

I doubt it. They refused to ramp up production to meet the demand, which is what led to the absurd price gouging many retailers began charging.
"Tactics are the bricks and sticks that make up a game, but positional play is the architectural blueprint."
- Posts: 1796
- Joined: Thu Sep 18, 2008 10:24 pm
Re: Latest lc0-win-20180604-cuda92-cudnn714-experimental EXCELLENT
I'm trying to understand the improvement lc0 is making when the weights here
http://lczero.org/networks
don't seem to be making any progress at all. Is it that the network is saturated but other settings to do with CUDA still offer room for improvement?
- Posts: 565
- Joined: Thu Nov 13, 2014 12:03 pm
Re: Latest lc0-win-20180604-cuda92-cudnn714-experimental EXCELLENT
A quick linear regression from ID 323 to ID 379 shows a 0.551 average Elo gain per ID (p-value ~0%, but R^2 = 0.35 only...)
evidence hints at some climbing still going on, but a lot of noise as well.
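For what it's worth, the slope/R^2 figures above come from an ordinary least-squares fit of rating against network ID, which is easy to redo yourself. A minimal sketch (the (id, elo) pairs below are made-up placeholders, not the actual lczero.org ratings):

```python
# Ordinary least-squares fit of self-play Elo vs. network ID.
# The data points are illustrative placeholders, not real lczero.org figures.

def ols_slope(points):
    """Return (slope, r_squared) for a list of (x, y) pairs."""
    n = len(points)
    mx = sum(x for x, _ in points) / n
    my = sum(y for _, y in points) / n
    sxx = sum((x - mx) ** 2 for x, _ in points)
    sxy = sum((x - mx) * (y - my) for x, y in points)
    syy = sum((y - my) ** 2 for _, y in points)
    slope = sxy / sxx          # average Elo gain per network ID
    r2 = sxy * sxy / (sxx * syy) if syy else 1.0  # fraction of variance explained
    return slope, r2

# Placeholder data: (network ID, self-play Elo).
data = [(323, 5800), (340, 5812), (355, 5815), (370, 5830), (379, 5831)]
slope, r2 = ols_slope(data)
print(f"avg Elo gain per ID: {slope:.3f}, R^2: {r2:.2f}")
```

A low R^2 with a clearly positive slope, as reported above, is exactly the "still climbing, but noisy" picture: individual network IDs bounce around, yet the trend over dozens of IDs remains upward.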