corres wrote: ↑Sun Apr 21, 2019 7:15 pm
corres wrote: ↑Sat Apr 20, 2019 1:06 am
Based on the common test data above, we can compile a list of RTX 2000-series GPUs.
The common parameters are:
NET: 11250
Backend: cudnn-fp16
Minibatchsize: 512
NNcachesize: 2000000
Other parameters are at their defaults
The list:
RTX 2060 OC max nps = 28646 (corres)
RTX 2070 non-OC max nps = 29357 (Laskos)
RTX 2080 max nps = 36300 (Albert Silver)
RTX 2080 Ti max nps = 43297 (Albert Silver)
DUAL RTX 2060 OC max nps = 53789 (corres)
RTX 2080 Ti + RTX 2080 max nps = 77435 (Albert Silver)
You didn't specify the time or node counts at which those figures were measured, and I don't remember using an NN cache of 2000000. So it's probably another useless list for Leela, as are most circulating around. Again, could you specify all the parameters, as I have posted since the start of this thread:
setoption name Backend value cudnn-fp16
setoption name MinibatchSize value 512
setoption name NNCacheSize value 10000000
setoption name WeightsFile value .\11250.txt.gz
go
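For comparison, recent lc0 builds also ship a built-in benchmark mode that takes the same engine options as command-line flags, which makes runs easier to reproduce than hand-typed UCI sessions. This is a sketch only; flag names and defaults may vary between lc0 versions, and the node count shown is chosen to match the runs discussed here:

```
lc0 benchmark --backend=cudnn-fp16 --minibatch-size=512 \
    --nncache=10000000 --weights=./11250.txt.gz --nodes=10000000
```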
My now-overclocked RTX 2070, with 2 threads on a 3.8 GHz i7 CPU, gives:
info depth 14 seldepth 43 time 283565 nodes 10050124 score cp 23 hashfull 426 nps 35442 tbhits 0 pv d2d4
after 10 million nodes
info depth 16 seldepth 50 time 409072 nodes 15183130 score cp 25 hashfull 606 nps 37116 tbhits 0 pv d2d4
after 15 million nodes
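As a sanity check on numbers like these, the reported nps is simply nodes divided by elapsed search time, where the UCI `time` field is in milliseconds. A minimal Python check against the two info lines above:

```python
def nps(nodes: int, time_ms: int) -> int:
    """Nodes per second from a UCI info line (time is in milliseconds)."""
    return round(nodes / (time_ms / 1000))

# The two runs reported above:
print(nps(10050124, 283565))  # -> 35442, matching the reported nps
print(nps(15183130, 409072))  # -> 37116, matching the reported nps
```

Any listing that omits the node count or search time can't be cross-checked this way, which is the objection being made here.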
A long time control, say 5 minutes, is probably better than very short runs, and hashfull should ideally stay around half. An even longer TC (say 15 minutes) would be good for checking thermal throttling. The TCEC 14 Leela machine (an i5) seemed to me to suffer from it, and I guess many setups have problems over long runs.