Milos wrote: 4 hours my ass (pardon my French). Try training it on a state-of-the-art 1080.
A fully trained network requires 12h on 5000 gen1 TPUs for the self-play games, plus 64 gen2 TPUs for the training itself.
A gen1 TPU is like 30x a K80, which is like 5x a 1080 in performance.
So you'd need like 375k training days with a 1080, which is like 1000 years!!!
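To spell out the arithmetic behind that claim (using the ratios stated above, gen1 TPU ≈ 30x K80 and K80 ≈ 5x 1080, both of which are disputed later in the thread):

```python
# Back-of-envelope under the ratios claimed in the post above (both are disputed below).
tpus = 5000            # gen1 TPUs used for self-play
tpu_per_k80 = 30       # claimed gen1 TPU : K80 ratio
k80_per_1080 = 5       # claimed K80 : GTX 1080 ratio
hours = 12             # wall-clock hours of self-play

gpu_hours = tpus * tpu_per_k80 * k80_per_1080 * hours  # 1080-equivalent hours
days = gpu_hours / 24
years = days / 365
print(days, years)     # 375000.0 days, ~1027 years
```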
Your math is wrong. I think it is doable with a distributed effort smaller than what was used for Stockfish.
Care to elaborate, or add any substance beyond your childish one-liner reply?
5000*5*12/(24*365) = 34.25 years
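The formula above reads the original claim as a gen1 TPU being only ~5x a GTX 1080; as a quick check of the same numbers:

```python
# 5000 TPUs, each assumed ~5x a GTX 1080, running 12 hours of self-play.
gpu_hours = 5000 * 5 * 12         # 1080-equivalent hours
years = gpu_hours / (24 * 365)
print(round(years, 2))            # 34.25
```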
So you claim a single gen1 TPU is only like 5x stronger than an Nvidia 1080 GPU????
Gee, you really have no clue about hardware, do you?
A TPU is like 92 TOPS, a 1080 is 0.3 TFLOPS. Please educate yourself before trying to run discussions with one-liner replies.
This really seems revolutionary!
Beating Stockfish 10-0 is no joke.
I wonder when such a program will be made available to customers at a reasonable price?
A 1st gen TPU is 92 TOPS, where an OP is an 8-bit int multiplication.
Let's cut this crap of comparing apples and oranges. Please take a look at: https://arxiv.org/abs/1704.04760
You can see the actual comparison (not the apples-and-oranges stuff you mention) in Table 6, where typical ML applications (MLP and CNN) are compared.
The factor between a first-gen TPU and a K80 (which is 3-5x faster for ML than a 1080) is between 15x and 60x, averaging around 25x.
The GTX 1080 should be faster than a K80. For instance, here is a deep learning benchmark where it is 4x faster: https://medium.com/initialized-capital/ ... bd85fe5d58
They have roughly the same number of cores, but the clock speed of the 1080 is 3x that of the K80 (16 nm vs 28 nm technology). The 1080 is definitely faster.
The reason I used 5x in my initial formula is that I believed your message meant a 1080 is 5x slower than a TPU (5x slower than a K80 cannot be correct).
Anyway, whether a TPU is 5x or 10x faster than a 1080 does not change much: DeepMind's experiment could be replicated in a few months of distributed computation with ~100 participants, which should be less than the effort that has gone into Stockfish so far.
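For what it's worth, the few-months claim follows arithmetically from the 34-year figure above, if each of ~100 participants keeps one 1080-class GPU busy full-time (an idealized assumption; real volunteer throughput would vary):

```python
total_gpu_years = 34.25     # 1080-equivalent years, from the 5x-per-TPU estimate above
participants = 100          # each assumed to run one 1080-class GPU continuously
months = total_gpu_years / participants * 12
print(round(months, 1))     # ~4.1 months
```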
Rémi Coulom wrote: Anyway, whether a TPU is 5x or 10x faster than a 1080 does not change much: DeepMind's experiment could be replicated in a few months of distributed computation with ~100 participants, which should be less than the effort that has gone into Stockfish so far.
It took Leela Zero 1 month, with a constant ~1000 volunteers, to generate the number of games AG0 got in 3 hours.
What makes you think you could do the same in chess with only 100 participants?
The minimum time to train a network to SF8 level would be at least a year with a constant 100 volunteers.
And in terms of power burned, I really don't think it would be anywhere near fishtest; it would be much higher. The power per core of a modern CPU is 10-15W; a 1080 is like 250W.
Most people in fishtest donate just a few cores, and most don't have 10-series GTX cards but older ones, which are far less powerful and far more power hungry.
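As a rough illustration of the power point (using the 250 W per 1080 and 10-15 W per core figures quoted above; purely back-of-envelope):

```python
gpu_watts = 250             # quoted draw of one GTX 1080
core_watts = 12.5           # midpoint of the 10-15 W per CPU core quoted above
gpus = 100                  # the proposed number of distributed participants

# How many CPU cores would draw the same total power as 100 GPUs?
equivalent_cores = gpus * gpu_watts / core_watts
print(equivalent_cores)     # 2000.0 cores
```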
shrapnel wrote: This really seems revolutionary!
Beating Stockfish 10-0 is no joke.
I wonder when such a program will be made available to customers at a reasonable price?
Never.
It will go the Deep Blue path, serving just commercial interests.
And its strength is then comparable to Deep Blue's, that is, far from the top.
I don't know why it is so difficult to understand that it is all hardware.
The only thing is, they are still tuning at a much lower level, quite probably around 2900 Elo or even lower.
It will not be that easy going forward, as the optimal lines get subtler and subtler.
Stockfish also averaged around 150 Elo of improvement in its first year.
At that level it is easy; let's see what they do from now on, and my prediction is: very little.
Far fewer transistors and joules were used to train AlphaZero than have been used to train Stockfish. You will soon be able to rent those TPUs on Google's cloud, or apply for free access now, so stop complaining. Furthermore, it's an experimental project in its early days and performance is obviously not optimal, so all the "but-but-but 30 Elo because they used SF 8 instead of SF 8.00194" sounds really dumb.
The days of alpha-beta engines have come to an abrupt end.
-Carl
Oops, aren't they doing alpha-beta too?
There is a single approach to playing chess: picking the best move. Whether you call it alpha-beta, Monte Carlo, or Las Vegas does not matter at all.