
Re: So Alpha Zero was a hoax?

Posted: Fri Mar 16, 2018 10:08 pm
by Vizvezdenec
Anyone working in anything remotely close to science knows that you would never write something like "it took ONLY 3 hours to..." in a scientific paper unless you want to get fired.
So seeing statements like that used constantly in the preprint, along with the use of an old Stockfish, cherry-picked games, etc., leads any reasonable person who is not a bandwagoner to see the A0 paper for what it really is - a good PR project.
Sure, NNs are incredible, but will they revolutionize chess engine development? My guess is no, at least at the current level of NNs. And by the time NNs can be trained in a reasonable time on pure community enthusiasm, IMHO, alpha-beta engines will be playing perfect chess anyway.

Re: So Alpha Zero was a hoax?

Posted: Fri Mar 16, 2018 10:24 pm
by jhellis3
I'll take that bet. How much?

Re: So Alpha Zero was a hoax?

Posted: Fri Mar 16, 2018 11:13 pm
by David Xu
The issue with that interpretation is that DeepMind is not a PR company. How, concretely, does publishing a misleading preprint benefit them? They are a research group, and their funding relies on shareholder approval, not public opinion; do you think that making claims they can't back up is a sustainable long-term strategy?

As far as the efficacy of machine learning techniques is concerned, no one knows for sure which tasks they work well on and which they don't, which is precisely why experimentation is necessary. It's not clear to me why you (Michael), Bojun, and so many others seem to spurn said experimentation, to the point of postulating what essentially amounts to a conspiracy theory.

At this point it isn't even about AlphaZero. I'm honestly curious: is it that inconceivable to you that a reinforcement learning based approach could outstrip the decades-old approach of conventional chess engines? I'm honestly not seeing where you and Bojun are pulling all of this confidence from; it seems entirely unfounded to me.

EDIT: I see that Bojun mentioned something about FineArt and overfitting in the LCZero thread; I'm replying to that here in order to condense things. Overfitting is a known issue in machine learning of all types, not just in this specific case, and is generally addressable by tuning the training hyperparameters until the net no longer overfits. I'm not sure why Bojun is touting this as some kind of evidence against the effectiveness of neural networks.
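As a concrete illustration of the hyperparameter point above, here is a minimal, hypothetical sketch (unrelated to AlphaZero's actual training setup, and with entirely synthetic data): in ridge regression, a single regularization hyperparameter `lam` shrinks the coefficients of an over-flexible model, the same lever that weight decay provides for neural networks.

```python
import numpy as np

# Synthetic data: 12 noisy samples of a sine wave, purely for illustration.
rng = np.random.default_rng(0)
x = np.linspace(0.0, 1.0, 12)
y = np.sin(2 * np.pi * x) + 0.1 * rng.standard_normal(12)

# Degree-9 polynomial features: far too flexible for 12 points, so an
# unpenalized fit will chase the noise (overfit).
X = np.vander(x, 10)

def ridge_fit(lam):
    """Closed-form ridge solution: (X^T X + lam*I) w = X^T y."""
    n = X.shape[1]
    return np.linalg.solve(X.T @ X + lam * np.eye(n), X.T @ y)

w_loose = ridge_fit(1e-6)  # nearly unpenalized: large, wiggly coefficients
w_tight = ridge_fit(1e-1)  # stronger penalty: much smaller coefficients

# A larger penalty shrinks the coefficient norm, giving a smoother,
# less overfit curve - that is the "tune the hyperparameter" knob.
print(np.linalg.norm(w_loose) > np.linalg.norm(w_tight))  # True
```

Whether this fully fixes a given case depends on data quantity and the model, which is exactly the sample-size caveat raised elsewhere in the thread.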

Re: So Alpha Zero was a hoax?

Posted: Sat Mar 17, 2018 12:40 am
by CheckersGuy
Vizvezdenec wrote:Anyone working in anything remotely close to science knows that you would never write something like "it took ONLY 3 hours to..." in a scientific paper unless you want to get fired.
So seeing statements like that used constantly in the preprint, along with the use of an old Stockfish, cherry-picked games, etc., leads any reasonable person who is not a bandwagoner to see the A0 paper for what it really is - a good PR project.
Sure, NNs are incredible, but will they revolutionize chess engine development? My guess is no, at least at the current level of NNs. And by the time NNs can be trained in a reasonable time on pure community enthusiasm, IMHO, alpha-beta engines will be playing perfect chess anyway.
So if it really did take 3 hours, they should have lied about it? :D

Re: So Alpha Zero was a hoax?

Posted: Sat Mar 17, 2018 1:02 am
by Dann Corbit
CheckersGuy wrote:
Vizvezdenec wrote:Anyone working in anything remotely close to science knows that you would never write something like "it took ONLY 3 hours to..." in a scientific paper unless you want to get fired.
So seeing statements like that used constantly in the preprint, along with the use of an old Stockfish, cherry-picked games, etc., leads any reasonable person who is not a bandwagoner to see the A0 paper for what it really is - a good PR project.
Sure, NNs are incredible, but will they revolutionize chess engine development? My guess is no, at least at the current level of NNs. And by the time NNs can be trained in a reasonable time on pure community enthusiasm, IMHO, alpha-beta engines will be playing perfect chess anyway.
So if it really did take 3 hours, they should have lied about it? :D
For highly parallel tasks, one first-generation TPU does 92 teraops/sec.
They used 4 TPUs.
92 x 10^12 ops/sec * 4 TPUs * 3600 seconds/hour * 3 hours = 3,974,400,000,000,000,000 operations.
If they had spent five times that long, the count of operations would overflow an unsigned long long. That's a pretty big number.

So, given 3 hours and 4 TPUs, you can do a lot of calculating.
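The arithmetic above is easy to sanity-check in a few lines of Python, whose integers do not overflow; the throughput figures are simply the ones quoted in the post:

```python
# Total operations: 92 teraops/sec per TPU, 4 TPUs, 3 hours.
ops = 92 * 10**12 * 4 * 3 * 3600
print(ops)  # 3974400000000000000 (~3.97 * 10^18)

# How many runs of that length fit in an unsigned 64-bit counter?
U64_MAX = 2**64 - 1  # the value of ULLONG_MAX on typical platforms
print(U64_MAX // ops)  # 4 -> a fifth run of that length would overflow it
```

So four such runs still fit in an unsigned 64-bit integer, and the fifth is the one that overflows.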

Re: So Alpha Zero was a hoax?

Posted: Sat Mar 17, 2018 1:09 am
by jhellis3
They used considerably more than 4 TPUs: 5,000 first-generation TPUs for playing the training games, and 64 second-generation TPUs for training the network.

The final version of A0 then ran on 4 second-generation TPUs for its match vs. SF.

Re: So Alpha Zero was a hoax?

Posted: Sat Mar 17, 2018 1:11 am
by Cardoso
I believe in results against strong opponents, and in this case SF8 is a very strong opponent.
The games made available speak for themselves.
Sure, I would like an independent and more refined test match, but for now those 10 games and the technical paper are all we have.
Yet even as little as this is, the data is still very convincing to me.

Re: So Alpha Zero was a hoax?

Posted: Sat Mar 17, 2018 1:15 am
by Dann Corbit
jhellis3 wrote:They used considerably more than 4 TPUs: 5,000 first-generation TPUs for playing the training games, and 64 second-generation TPUs for training the network.

The final version of A0 then ran on 4 second-generation TPUs for its match vs. SF.
Well then, no wonder it kicked the stuffing out of SF.
I would love to see the formulas that the training produced.

It seems to me that you could use the data produced by the TPU NN on an ordinary GPU to play a whale of a game of chess.

Re: So Alpha Zero was a hoax?

Posted: Sat Mar 17, 2018 1:30 am
by jhellis3
Yeah, 1 Titan V should get you at least 75% of the performance of 1 second-generation TPU.

Currently, consumers can buy up to two Titan Vs at $3k a pop directly from Nvidia.

Or, I think, they will sell you a workstation with four slightly better versions of the card for ~$50-70k.

Still pretty prohibitive for the average consumer, but if I were Magnus or the winner of the Candidates, I would be hitting them up for sure. Even if A0 is not available, I imagine LCZ will be strong enough by the time of the WC that its opinion will definitely merit consideration.

Re: So Alpha Zero was a hoax?

Posted: Sat Mar 17, 2018 1:43 am
by noobpwnftw
David Xu wrote:The issue with that interpretation is that DeepMind is not a PR company. How, concretely, does publishing a misleading preprint benefit them? They are a research group, and their funding relies on shareholder approval, not public opinion; do you think that making claims they can't back up is a sustainable long-term strategy?

As far as the efficacy of machine learning techniques is concerned, no one knows for sure which tasks they work well on and which they don't, which is precisely why experimentation is necessary. It's not clear to me why you (Michael), Bojun, and so many others seem to spurn said experimentation, to the point of postulating what essentially amounts to a conspiracy theory.

At this point it isn't even about AlphaZero. I'm honestly curious: is it that inconceivable to you that a reinforcement learning based approach could outstrip the decades-old approach of conventional chess engines? I'm honestly not seeing where you and Bojun are pulling all of this confidence from; it seems entirely unfounded to me.

EDIT: I see that Bojun mentioned something about FineArt and overfitting in the LCZero thread; I'm replying to that here in order to condense things. Overfitting is a known issue in machine learning of all types, not just in this specific case, and is generally addressable by tuning the training hyperparameters until the net no longer overfits. I'm not sure why Bojun is touting this as some kind of evidence against the effectiveness of neural networks.
I fail to see why I should care whether this particular preprint from the research group is more of a PR exercise to please their shareholders in the first place.

And I don't see any conspiracy theory here. If they want to do the experiments, then do the experiments; but based on what the preprint said, I find it hard to be convinced that the comparison between the outcome of their experiment and SF was properly measured.

To properly prove that NNs perform better than SF in general, which is the fundamental point of the experiments, more matches should be played. They would not necessarily have to publish all the details; the statistics alone would be enough. But a book needs to be used on the SF side to introduce diversity. For such a research group, I find it disturbing to see such claims being made so unscientifically.

I had kept saying that NNs have potential in certain parts of chess programs in general even before A-whatsoever; you can search my posts here if you want. But how do you define "conventional"? As far as I can tell, automatic parameter tuning is more or less training on a fixed model.

Anyone who knows anything about programming probably wouldn't draw such a line and label these techniques "decades old". If you are just picking on the PVS search algorithm, care to tell me the age of the MCTS used in your new reinforcement-learning-based approach?

I brought up the topic of overfitting because it is a common issue with NNs when they don't get the zillions of samples they need, and I named a particular case with evidence, while you wave it away with "it can be solved in general". If it really were that easy to solve, why has it become a "known issue in machine learning of all types"?

Also, are you suggesting that we should ignore all the problems NNs may have just because they are more "effective"?