Well, probably they should have given the same FLOPS budget to both; that seems like the fairest you can get, given the inefficiency of switching hardware for either side.
Here is a scary thought, though: what would AlphaZero's performance be if they used more of its training cluster for execution?
I don't know how well it scales across more TPUs, and it might need tuning for that, but throwing even more processors at it could make it even more competitive with Stockfish, if not put it above.
They could possibly enter the WCCC with that.
--Jon
As far as I know, the first-generation TPUs, which were used for training, are for training only. The second-generation TPUs can do both training and inference.
However, Google/DeepMind probably has enough hardware to use many, many more second-generation TPUs. What I am interested in is when the AI stops improving in the training process, since they only trained for 4 hours.
[b]Equal budget[/b] would be a fairer comparison since AlphaZero and Stockfish take advantage of different types of hardware (GPU vs CPU).
If you look at the scaling graph of thinking time vs performance, it suggests that Stockfish is still ahead at fast time controls but that at longer time controls AlphaZero dominates. It would be interesting to see this graph as a function of money resources.
[/quote]
I am afraid that Stockfish would not get significant help from hardware much more expensive than that used in the AlphaZero demonstration.
Supposing the 64 cores they used are physical cores and not logical cores, increasing the core count to 128, 256, ... gives only some ten Elo.
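To illustrate the point, here is a toy model (not from any paper, the per-doubling figure is assumed): if each doubling of core count buys only a roughly constant, small number of Elo, then going from 64 to 256 cores barely moves the needle.

```python
import math

# Purely illustrative model: assume each doubling of core count
# buys a roughly constant, small number of Elo.
ELO_PER_DOUBLING = 10  # assumed, matching the "~ten Elo" figure above

def elo_gain(base_cores, new_cores, per_doubling=ELO_PER_DOUBLING):
    """Elo gained by scaling from base_cores to new_cores under this model."""
    return per_doubling * math.log2(new_cores / base_cores)

print(elo_gain(64, 128))  # 10.0: one doubling
print(elo_gain(64, 256))  # 20.0: two doublings, still a modest gain
```

Under this sketch, even a fourfold hardware budget yields a gain well within normal match noise.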
Rein Halbersma wrote:
Equal budget would be a fairer comparison since AlphaZero and Stockfish take advantage of different types of hardware (GPU vs CPU).
If you look at the scaling graph of thinking time vs performance, it suggests that Stockfish is still ahead at fast time controls but that at longer time controls AlphaZero dominates. It would be interesting to see this graph as a function of money resources.
I am afraid that Stockfish would not get significant help from hardware much more expensive than that used in the AlphaZero demonstration.
Supposing the 64 cores they used are physical cores and not logical cores, increasing the core count to 128, 256, ... gives only some ten Elo.
I think that this is a limit of the alpha-beta algorithm, not a limit of Stockfish itself. The limit of alpha-beta derives from the nature of the game, of course, which grows exponentially with every ply. A smarter approach that uses neural networks and/or other AI algorithms could potentially give better performance than alpha-beta (as AlphaZero seems to "demonstrate"). Of course, AI requires more computational power than alpha-beta-based algorithms, but it could eventually scale better as time/power increases.
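The exponential growth can be made concrete with a quick back-of-the-envelope calculation (the branching factor is an assumption, the often-quoted average for chess):

```python
import math

# A game tree with branching factor b has ~b**d leaves at depth d;
# even perfect alpha-beta still visits ~b**(d/2) nodes, which remains
# exponential in the depth d.
b = 35  # commonly quoted average branching factor for chess (assumption)

def alphabeta_nodes(depth):
    """Best-case alpha-beta node count with perfect move ordering."""
    return b ** (depth / 2)

# Depth gained by doubling the node budget under alpha-beta:
# solve b**((d + x)/2) = 2 * b**(d/2)  =>  x = 2*log(2)/log(b)
extra_plies = 2 * math.log(2) / math.log(b)
print(round(extra_plies, 2))  # ~0.39 ply per doubling of compute
```

So doubling the hardware buys well under half a ply, which is consistent with the small Elo gains mentioned above.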
I think working on an engine similar to AlphaZero would be really interesting. One will obviously not get the same performance as AlphaZero, but it would still be interesting to see how well the algorithm scales (with additional hardware and time) compared to current state-of-the-art engines.
Have there been any attempts to use MCTS just for tuning of the eval weights of a "regular" chess engine?
Tuning the eval to predict the outcome of an alpha-beta search is a bit hopeless because of all the tactics that can't be encoded in a typical evaluation function. MCTS might average out the tactics sufficiently that its results can directly be used for tuning. The engine would then use alpha-beta for playing games.
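A minimal sketch of that suggestion, with everything hypothetical: use MCTS-derived win probabilities as training targets and fit a linear eval's weights by logistic regression, in the spirit of Texel tuning but with MCTS results standing in for game outcomes. The MCTS part is stubbed out with a noisy oracle here; a real version would run playouts from each position.

```python
import math
import random

random.seed(1)

def eval_score(weights, features):
    """Toy linear eval: score = w . features(position)."""
    return sum(w * f for w, f in zip(weights, features))

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def mcts_win_prob(features):
    """Stand-in for 'run MCTS here and return a win probability'.
    Fakes a noisy estimate from a hidden ground-truth eval."""
    true_w = [1.0, -0.5, 0.25]
    return sigmoid(eval_score(true_w, features) + random.gauss(0, 0.1))

def tune(positions, steps=2000, lr=0.5):
    """Fit eval weights by gradient descent on the cross-entropy
    between sigmoid(eval) and the MCTS-derived targets."""
    w = [0.0, 0.0, 0.0]
    targets = [mcts_win_prob(f) for f in positions]
    for _ in range(steps):
        for f, t in zip(positions, targets):
            g = sigmoid(eval_score(w, f)) - t  # d(loss)/d(score)
            for i in range(len(w)):
                w[i] -= lr * g * f[i] / len(positions)
    return w

positions = [[random.uniform(-1, 1) for _ in range(3)] for _ in range(200)]
w = tune(positions)
```

The tuned weights recover the signs (and roughly the magnitudes) of the hidden eval; the hope expressed above is that MCTS targets would be smooth enough for this to work where raw alpha-beta scores are too tactics-dominated.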
syzygy wrote:Have there been any attempts to use MCTS just for tuning of the eval weights of a "regular" chess engine?
Tuning the eval to predict the outcome of an alpha-beta search is a bit hopeless because of all the tactics that can't be encoded in a typical evaluation function. MCTS might average out the tactics sufficiently that its results can directly be used for tuning. The engine would then use alpha-beta for playing games.
I don't think that would work particularly well. No matter what optimization algorithm one uses, we only have a linear function of evaluation parameters, which I doubt can encode tactics (very well).
Neural networks aren't like that and can even learn non-linear functions well.
As for tuning the "regular" evaluation parameters, I don't quite understand what you mean by "using MCTS to train eval parameters".
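The linear-vs-nonlinear point can be made concrete with a toy example (features and values invented): no linear function of two features can capture an XOR-style interaction, e.g. a bonus that applies only when exactly one of two conditions holds, while a single product term already can.

```python
# A linear eval w1*a + w2*b + c cannot represent XOR on two binary
# features; one nonlinear (product) term suffices.
xor_data = [((0, 0), 0), ((0, 1), 1), ((1, 0), 1), ((1, 1), 0)]

def with_interaction(a, b):
    # a + b - 2ab equals XOR exactly on {0,1} inputs
    return a + b - 2 * a * b

assert all(with_interaction(a, b) == y for (a, b), y in xor_data)

# The best any linear fit can do on these four points is the constant
# 0.5, leaving a squared error of 0.25 on every point.
```

This is the kind of feature interaction a neural network learns automatically but a hand-tuned linear eval has to be given explicitly.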