syzygy wrote: The innovation in MCTS, at least to me, lay solely in the random playouts (which should not work very well for chess).

CheckersGuy wrote: You seem to forget that the "innovation" is mostly the neural network they use. There isn't anything special about MCTS, but apparently using a neural network + reinforcement learning instead of random playouts seems to work quite well.

No, I simply meant the innovation in MCTS when MCTS was introduced (which seems to have been in the 1970s, but that does not matter here). The core element of MCTS is the random element; without it, it makes no sense, to me, to still call it a Monte Carlo method. It is then a best-first tree search algorithm, not a Monte Carlo tree search algorithm.

Taking the MC out of MCTS was probably necessary to make the AlphaZero approach work for chess.
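To make the distinction concrete, this is roughly what the "MC" in classical MCTS amounts to: a leaf is evaluated by playing random moves to the end of the game and scoring the result. The game interface below (`legal_moves`, `make_move`, `result`) is a hypothetical stand-in, not any particular engine's API; the point is only that the evaluation is random, which is what AlphaZero replaces with a network value.

```python
import random

# A minimal sketch of a classical random playout, the "MC" element of MCTS.
# legal_moves, make_move and result are assumed game-interface functions.
def random_playout(position, legal_moves, make_move, result, max_plies=200):
    for _ in range(max_plies):
        moves = legal_moves(position)
        if not moves:                 # terminal position: return the game result
            return result(position)
        # The defining feature: the move is chosen uniformly at random.
        position = make_move(position, random.choice(moves))
    return 0.0                        # cut off very long playouts as a draw
```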

I am still surprised that it works.

The neural network obviously gives quite good evaluations for the type of positions on which it was trained, but it seems it really needs enough tree search to do well against Stockfish. So the tree search may be part of the trick. Did Google simply discover that a book-building type algorithm for controlling the top of the tree outperforms alpha-beta when running on a highly parallel system? This is something that we can test. (The idea is not new, Marcel Kervinck mentioned it to me some years ago, but I don't know if anyone has tried it out on big enough hardware.)
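For what it's worth, the "book-building-style control of the top of the tree" can be sketched as a best-first loop: repeatedly walk down from the root by a selection rule, expand the leaf, evaluate it statically, and back the value up. The sketch below uses an AlphaZero-style PUCT selection rule with uniform priors and a generic `eval_fn` in place of the network; the game interface is again a hypothetical stand-in, and this is an illustration of the idea, not the published algorithm.

```python
import math

class Node:
    # One node of the best-first tree.
    def __init__(self, prior=1.0):
        self.children = {}        # move -> Node
        self.visits = 0
        self.value_sum = 0.0      # total backed-up value, side-to-move view
        self.prior = prior        # policy prior; uniform here

    def value(self):
        return self.value_sum / self.visits if self.visits else 0.0

def select_child(node, c_puct=1.5):
    # PUCT: pick the child maximizing Q + U, where the exploration term U
    # grows with the parent's visits and shrinks with the child's.
    total = math.sqrt(node.visits)
    def score(item):
        move, child = item
        q = -child.value()        # child's value is the opponent's view
        u = c_puct * child.prior * total / (1 + child.visits)
        return q + u
    return max(node.children.items(), key=score)

def simulate(root, position, legal_moves, make_move, eval_fn):
    # One iteration: select down to a leaf, expand it, evaluate it with a
    # deterministic eval_fn (no random playout), and back the value up.
    path = [root]
    node = root
    while node.children:
        move, node = select_child(node)
        position = make_move(position, move)
        path.append(node)
    for move in legal_moves(position):
        node.children[move] = Node()
    value = eval_fn(position)
    for n in reversed(path):
        n.visits += 1
        n.value_sum += value
        value = -value            # flip perspective at each ply
```

Run enough iterations and the visit counts at the root concentrate on the best line, which is exactly how a book builder spends its effort.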

Another idea is to try to get more out of self-play. Would it make sense to tune eval parameters such that, say, the results of a 10-ply search get closer to the results of a 15-ply search? Maybe this is only feasible with a ridiculously complex function (which then needs Google's hardware to be of any use).
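Stripped to its simplest form, that tuning idea is a regression: adjust the eval parameters so the shallow result moves toward the deep result on a set of positions. The toy sketch below assumes a linear evaluation over feature vectors and precomputed deep-search targets, so the loop reduces to least-squares fitting by gradient descent; a real engine eval would of course be far less tractable, which is the point about needing a ridiculously complex function.

```python
# Toy sketch: fit eval weights so the cheap evaluation approximates
# precomputed deeper-search scores. Everything here (linear eval,
# feature vectors, targets) is an illustrative assumption.

def eval_linear(weights, features):
    return sum(w * f for w, f in zip(weights, features))

def tune(weights, positions, deep_results, lr=0.01, epochs=200):
    # positions: list of feature vectors; deep_results: target scores
    # obtained once from a deeper search.
    w = list(weights)
    for _ in range(epochs):
        for feats, target in zip(positions, deep_results):
            err = eval_linear(w, feats) - target
            # Gradient of 0.5 * err**2 w.r.t. w[i] is err * feats[i].
            for i, f in enumerate(feats):
                w[i] -= lr * err * f
    return w
```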