syzygy wrote:You are spouting baseless claims, but what is new. I'm not sure why you are not simply stating that they faked all their games, which would be easy enough to do.

Why would I state something that is highly improbable? Are you capable of having an argument that is not a straw man?
They simply presented the results intentionally in a way (a false one) that would give them the best PR impact, because they were not writing a scientific paper but an advertising leaflet for the power of their TPU cloud, which they offered as a service shortly afterwards. The PR is quite perfidious and targets the scientific community already working with NNs. Hence all the theatrics around that quasi-scientific preprint ("leaking" it during the London Chess Classic to get plenty of feedback from GMs, press coverage, etc.).
syzygy wrote:It is not easy at all to throw massively parallel hardware at a task. Alpha-beta does not seem to scale at all beyond 64 threads, to give just one example.

Another straw man. The scaling of alpha-beta or MCTS search is totally irrelevant to the argument about training time when using reinforcement learning.
Generating self-play games scales perfectly no matter what hardware or chess program you use: if you can run 10 games per minute on a single machine, you can run 100 games per minute on 10 machines, whether the self-playing engine is A0, SF, or some nameless random mover that doesn't even perform a search.
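To spell it out with a minimal sketch (Python; play_one_game is a hypothetical stand-in for whatever engine actually generates the games): the games share no state, so total throughput is simply games-per-worker times the number of workers, whether those workers sit on one machine or on ten.

Code:
import multiprocessing as mp
import random

def play_one_game(seed):
    """Stand-in for one self-play game (A0, SF, or a random mover).
    Each game is completely independent of every other game."""
    rng = random.Random(seed)
    moves = 0
    # Fake game loop: at most 80 moves, may end early at random.
    while moves < 80 and rng.random() > 0.01:
        moves += 1
    return moves  # in reality: the game record / training samples

if __name__ == "__main__":
    # Doubling the workers (or the machines) doubles the games per
    # minute, because no game ever waits on another.
    with mp.Pool(mp.cpu_count()) as pool:
        games = pool.map(play_one_game, range(1000))
    print(f"played {len(games)} games on {mp.cpu_count()} workers")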
This is such a trivial thing that it is simply impossible that you don't understand it. You seem to be just playing dumb for the sake of trolling.