Google's AlphaGo team has been working on chess


AlvaroBegue
Posts: 931
Joined: Tue Mar 09, 2010 3:46 pm
Location: New York
Full name: Álvaro Begué (RuyDos)

Re: Google's AlphaGo team has been working on chess

Post by AlvaroBegue »

syzygy wrote:[...] And the randomness of the selection is also quite limited (if not completely absent - the paper states that the edge is selected that maximizes an "upper confidence bound").
Randomness has never been a requirement in the tree part of the search. The earliest description I remember of this type of algorithm is MoGo's UCT search, which picked the move that maximizes a formula called UCB1 ("UCB" stands for "Upper Confidence Bound", just like in the recent paper). Actually, I imagine most implementations don't have any random element in the tree part of the playout.
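
To make the determinism concrete, here is a minimal sketch of UCB1 selection; the node fields (visits, total_reward) are hypothetical names for illustration, not anything from the papers:

Code:
import math

def ucb1_select(children, parent_visits, c=math.sqrt(2)):
    # Deterministically pick the child maximizing the UCB1 upper confidence bound.
    def ucb1(child):
        if child.visits == 0:
            return float('inf')  # unvisited moves are tried first
        mean = child.total_reward / child.visits                       # exploitation term
        bonus = c * math.sqrt(math.log(parent_visits) / child.visits)  # exploration term
        return mean + bonus
    return max(children, key=ucb1)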

AlphaGo initially combined the results of playouts with the evaluation at the leaf of the tree (that was probably its biggest innovation), and then AlphaGo Zero and AlphaZero did away with the playout altogether, using only an evaluation function. Still, the whole mechanism for growing the tree and for backing up values is the same, so it feels like the MCTS label fits.
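
As a rough sketch of that shared grow-and-back-up mechanism (all names here are hypothetical), note that only the evaluate step differs between the variants:

Code:
def simulate(root, select_child, evaluate):
    # One MCTS iteration: descend, expand, evaluate, back up.
    path, node = [root], root
    while node.children:        # selection (e.g. UCB1/PUCT, no randomness needed)
        node = select_child(node)
        path.append(node)
    node.expand()               # grow the tree by one node
    value = evaluate(node)      # random playout (classic MCTS) or NN eval (AlphaZero)
    for n in reversed(path):    # back the value up from leaf to root
        n.visits += 1
        n.value_sum += value
        value = -value          # flip perspective at each ply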
pilgrimdan
Posts: 405
Joined: Sat Jul 02, 2011 10:49 pm

Re: Google's AlphaGo team has been working on chess

Post by pilgrimdan »

AlvaroBegue wrote:
syzygy wrote:[...] And the randomness of the selection is also quite limited (if not completely absent - the paper states that the edge is selected that maximizes an "upper confidence bound").
Randomness has never been a requirement in the tree part of the search. The earliest description I remember of this type of algorithm is MoGo's UCT search, which picked the move that maximizes a formula called UCB1 ("UCB" stands for "Upper Confidence Bound", just like in the recent paper). Actually, I imagine most implementations don't have any random element in the tree part of the playout.

AlphaGo initially combined the results of playouts with the evaluation at the leaf of the tree (that was probably its biggest innovation), and then AlphaGo Zero and AlphaZero did away with the playout altogether, using only an evaluation function. Still, the whole mechanism for growing the tree and for backing up values is the same, so it feels like the MCTS label fits.
I found this as something of an explanation ...

QML, 35 days ago:
What makes this different from a minimax algorithm with alpha-beta pruning?

gwern, 35 days ago:

Aside from MCTS being a different tree search method, there is no 'closing of the loop'. In regular MCTS, it is far from unheard of to replace the random playouts with some 'heavier' heuristic to make the playouts slightly better estimators of the node value, but the heavy playouts do not do any kind of learning; the heuristic you start with is the heuristic you end with. What makes this analogous to policy iteration (hence the names 'tree iteration' or 'expert iteration' for the Zero algorithm) is that the refined estimates from the multiple heavy playouts are then used to improve the heavy playout heuristic (i.e. a NN which can be optimized via backpropagation in a supervised learning of board position -> value). Then, in a self-play setting, the MCTS continually refines its heavy heuristic (the NN) until it is so good that the NN+MCTS is superhuman. At play time you can drop the MCTS entirely and just use the heavy heuristic to do a very simple tree search to choose a move (which I think might actually be a minimax with a fixed depth, but I forget).

https://news.ycombinator.com/item?id=15627340
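
In pseudo-Python, the loop gwern describes might look roughly like this; every name here (play_one_game, net.train, and so on) is a hypothetical stand-in, not DeepMind's actual code:

Code:
def expert_iteration(net, rounds, games_per_round):
    # 'Closing the loop': search output becomes the training signal for the net.
    for _ in range(rounds):
        examples = []
        for _ in range(games_per_round):
            # A self-play game guided by net+MCTS, yielding for each move the
            # position, the MCTS visit distribution, and the final game outcome.
            for position, visit_dist, outcome in play_one_game(net):
                examples.append((position, visit_dist, outcome))
        # Supervised step: fit the policy head to visit distributions
        # and the value head to game outcomes.
        net.train(examples)
    return net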
Henk
Posts: 7218
Joined: Mon May 27, 2013 10:31 am

Re: Google's AlphaGo team has been working on chess

Post by Henk »

brianr wrote:
Henk wrote: That's fast. Show us some games.
??

If this was meant for me, it would take thousands of CPU/GPU hours (both) to even just begin to train any reasonable NN. It has been estimated that AlphaGo Zero would take something like 1,700 years on typical consumer hardware. For chess, I only got it to start running, and am still trying to understand it. He also did a Connect4 game, which is a simpler starting point. The author is considering a distributed approach, I think.
If the AlphaZero solution only works on hardware that will be current, say, thirty years from now, then it is not worth much to us. Then I don't understand what all the fuss is about.
CheckersGuy
Posts: 273
Joined: Wed Aug 24, 2016 9:49 pm

Re: Google's AlphaGo team has been working on chess

Post by CheckersGuy »

Henk wrote:
brianr wrote:
Henk wrote: That's fast. Show us some games.
??

If this was meant for me, it would take thousands of CPU/GPU hours (both) to even just begin to train any reasonable NN. It has been estimated that AlphaGo Zero would take something like 1,700 years on typical consumer hardware. For chess, I only got it to start running, and am still trying to understand it. He also did a Connect4 game, which is a simpler starting point. The author is considering a distributed approach, I think.
If the AlphaZero solution only works on hardware that will be current, say, thirty years from now, then it is not worth much to us. Then I don't understand what all the fuss is about.
30 years? :lol: I bet you a lot of money that we will have similar performance (4 TPUs) within 5 years.
Henk
Posts: 7218
Joined: Mon May 27, 2013 10:31 am

Re: Google's AlphaGo team has been working on chess

Post by Henk »

CheckersGuy wrote:
Henk wrote:
brianr wrote:
Henk wrote: That's fast. Show us some games.
??

If this was meant for me, it would take thousands of CPU/GPU hours (both) to even just begin to train any reasonable NN. It has been estimated that AlphaGo Zero would take something like 1,700 years on typical consumer hardware. For chess, I only got it to start running, and am still trying to understand it. He also did a Connect4 game, which is a simpler starting point. The author is considering a distributed approach, I think.
If the AlphaZero solution only works on hardware that will be current, say, thirty years from now, then it is not worth much to us. Then I don't understand what all the fuss is about.
30 years? :lol: I bet you a lot of money that we will have similar performance (4 TPUs) within 5 years.
He talked about "1,700 years on typical consumer hardware". [At this moment I don't know the difference between CPU/GPU/TPU.]
syzygy
Posts: 5566
Joined: Tue Feb 28, 2012 11:56 pm

Re: Google's AlphaGo team has been working on chess

Post by syzygy »

AlvaroBegue wrote:
syzygy wrote:[...] And the randomness of the selection is also quite limited (if not completely absent - the paper states that the edge is selected that maximizes an "upper confidence bound").
Randomness has never been a requirement in the tree part of the search.
Yes, it seems the randomness refers just to the random playouts. But without any randomness left I don't see what the M and the C are still doing in AlphaZero's MCTS.
The earliest description I remember of this type of algorithm is MoGo's UCT search, which picked the move that maximizes a formula called UCB1 ("UCB" stands for "Upper Confidence Bound", just like in the recent paper). Actually, I imagine most implementations don't have any random element in the tree part of the playout.
The "move probabilities" term in the paper initially suggested to me that the selection of the next node followed those probabilities. Apparently it does not, so why they are called probabilities is still rather unclear to me. (Probably it has something to do with the "confidence" in UCB.)
AlphaGo initially combined the results of playouts with the evaluation at the leaf of the tree (that was probably its biggest innovation), and then AlphaGo Zero and AlphaZero did away with the playout altogether, using only an evaluation function. Still, the whole mechanism for growing the tree and for backing up values is the same, so it feels like the MCTS label fits.
To me it feels more like the tree expansion of proof-number search or of book-building algorithms. The innovation in MCTS, at least to me, lay solely in the random playouts (which should not work very well for chess).
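
For reference, the paper's deterministic selection rule uses the network's move probabilities as priors that scale the exploration bonus. A sketch, with hypothetical node field names:

Code:
import math

def puct_select(node, c_puct=1.0):
    # AlphaZero-style selection: Q plus a prior-weighted exploration bonus.
    sqrt_parent = math.sqrt(node.visits)
    def score(child):
        q = child.value_sum / child.visits if child.visits else 0.0  # mean value so far
        u = c_puct * child.prior * sqrt_parent / (1 + child.visits)  # prior-scaled bonus
        return q + u
    return max(node.children, key=score)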
CheckersGuy
Posts: 273
Joined: Wed Aug 24, 2016 9:49 pm

Re: Google's AlphaGo team has been working on chess

Post by CheckersGuy »

syzygy wrote:
AlvaroBegue wrote:
syzygy wrote:[...] And the randomness of the selection is also quite limited (if not completely absent - the paper states that the edge is selected that maximizes an "upper confidence bound").
Randomness has never been a requirement in the tree part of the search.
Yes, it seems the randomness refers just to the random playouts. But without any randomness left I don't see what the M and the C are still doing in AlphaZero's MCTS.
The earliest description I remember of this type of algorithm is MoGo's UCT search, which picked the move that maximizes a formula called UCB1 ("UCB" stands for "Upper Confidence Bound", just like in the recent paper). Actually, I imagine most implementations don't have any random element in the tree part of the playout.
The "move probabilities" term in the paper initially suggested to me that the selection of the next node followed those probabilities. Apparently it does not, so why they are called probabilities is still rather unclear to me. (Probably it has something to do with the "confidence" in UCB.)
AlphaGo initially combined the results of playouts with the evaluation at the leaf of the tree (that was probably its biggest innovation), and then AlphaGo Zero and AlphaZero did away with the playout altogether, using only an evaluation function. Still, the whole mechanism for growing the tree and for backing up values is the same, so it feels like the MCTS label fits.
To me it feels more like the tree expansion of proof-number search or of book-building algorithms. The innovation in MCTS, at least to me, lay solely in the random playouts (which should not work very well for chess).
You seem to forget that the "innovation" is mostly the neural network they use. There isn't anything special about MCTS, but apparently using a neural network + reinforcement learning instead of random playouts works quite well.
Standard MCTS doesn't really work for chess because random playouts aren't any good. The move probabilities (or whatever you wanna call them) from the NN provide a "soft pruning": completely stupid moves will have a very small probability and therefore won't be visited very often, contrary to standard MCTS, which may visit those nodes way more often.
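
A toy run illustrates the soft pruning: with all values equal, PUCT-style visit counts come out roughly proportional to the priors, so the low-prior move is barely searched (a sketch, not the paper's exact formula or constants):

Code:
import math

priors = [0.60, 0.39, 0.01]   # NN move probabilities for three moves
visits = [0, 0, 0]
for _ in range(1000):
    total = sum(visits) + 1
    # Exploration bonus only (all Q equal): prior * sqrt(N) / (1 + n)
    scores = [p * math.sqrt(total) / (1 + n) for p, n in zip(priors, visits)]
    visits[scores.index(max(scores))] += 1
print(visits)                 # roughly [600, 390, 10]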
syzygy
Posts: 5566
Joined: Tue Feb 28, 2012 11:56 pm

Re: Google's AlphaGo team has been working on chess

Post by syzygy »

CheckersGuy wrote:
syzygy wrote: The innovation in MCTS, at least to me, lay solely in the random playouts (which should not work very well for chess).
You seem to forget that the "innovation" is mostly the neural network they use.
No, I simply meant the innovation in MCTS when MCTS was introduced (which seems to have been in the 1970s, but that does not matter here). The core element of MCTS is the random element. Without it, it makes no sense to me to still call it a Monte Carlo method. It is a best-first tree search algorithm, not a Monte Carlo tree search algorithm.
There isn't anything special about MCTS, but apparently using a neural network + reinforcement learning instead of random playouts works quite well.
Taking the MC out of MCTS was probably necessary to make the AlphaZero approach work for chess.

I am still surprised that it works.
The neural network obviously gives quite good evaluations for the type of positions on which it was trained, but it seems it really needs enough tree search to do well against Stockfish. So the tree search may be part of the trick. Did Google simply discover that a book-building type algorithm for controlling the top of the tree outperforms alpha-beta when running on a highly parallel system? This is something that we can test. (The idea is not new; Marcel Kervinck mentioned it to me some years ago, but I don't know if anyone has tried it out on big enough hardware.)

Another idea is to try to get more out of self-play. Would it make sense to tune eval parameters such that, say, the results of a 10-ply search get closer to the results of a 15-ply search? Maybe this is only feasible with a ridiculously complex function (which then needs Google's hardware to be of any use).
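
A sketch of what such a tuning objective could look like, where search() is a hypothetical engine hook and params are the eval weights being tuned:

Code:
def tuning_loss(params, positions):
    # Penalize disagreement between a shallow search and a deeper one:
    # tune params so the 10-ply score moves toward the 15-ply score.
    total = 0.0
    for pos in positions:
        shallow = search(pos, depth=10, params=params)
        deep = search(pos, depth=15, params=params)   # the 'teacher' score
        total += (shallow - deep) ** 2
    return total / len(positions)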
Jesse Gersenson
Posts: 593
Joined: Sat Aug 20, 2011 9:43 am

Re: Google's AlphaGo team has been working on chess

Post by Jesse Gersenson »

If I'm overlooking something simple, please point it out. My layman's calculation is:

part of the computer time Google used, translated into a single core of an Intel Xeon 2697 v4 (35 GFLOPS):

5000 1st-gen TPUs running 9 hours = 5000 * 96 teraflops * 9 hours = 4,320,000 teraflop-hours

4,320,000 teraflop-hours * 1000 = 4,320,000,000 gigaflop-hours

4,320,000,000 gigaflop-hours / 35 gigaflops = 123,428,571 hours
123,428,571 hours / 24 = 5,142,857 days
5,142,857 days / 365 = 14,090 years
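
The same arithmetic, restated in a few lines of Python for checking:

Code:
tpus, tflops, hours = 5000, 96, 9
gflop_hours = tpus * tflops * hours * 1000   # 4,320,000,000 gigaflop-hours
years = gflop_hours / 35 / 24 / 365          # 35 GFLOPS for one Xeon core
print(round(years))                          # ~14090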
Joost Buijs
Posts: 1563
Joined: Thu Jul 16, 2009 10:47 am
Location: Almere, The Netherlands

Re: Google's AlphaGo team has been working on chess

Post by Joost Buijs »

I don't think it will take that long. Google is talking about 8-bit teraops, which is not the same as teraflops. According to Google, their 1st-gen TPU runs approximately 35 times faster than the Xeon E5-2699 v3 (using all cores?).
5000 * 9 * 35 = 1,575,000 hours. 1,575,000 / 24 / 365 ≈ 180 years.
Still a considerable amount of time though.
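
And the same estimate redone with the claimed ~35x per-TPU speedup instead of raw flops:

Code:
xeon_hours = 5000 * 9 * 35           # 1,575,000 Xeon-hours
print(round(xeon_hours / 24 / 365))  # ~180 years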