Google's AlphaGo team has been working on chess

Discussion of chess software programming and technical issues.

Moderators: hgm, Rebel, chrisw

CheckersGuy
Posts: 273
Joined: Wed Aug 24, 2016 9:49 pm

Re: Google's AlphaGo team has been working on chess

Post by CheckersGuy »

hgm wrote:
syzygy wrote:They are not playing out any games "to the very end". And the randomness of the selection is also quite limited (if not completely absent - the paper states that the edge is selected that maximizes an "upper confidence bound").
Where do you read that? When I was glancing through the paper, I got the impression that what they do is normal MCTS, with the caveat that 'random' in the move selection for the playouts does not mean 'homogeneously distributed probabilities', but probabilities according to the policy of their NN. In the absence of scoring at the game ends (e.g. declaring every game a draw) that would make the MCTS a self-fulfilling prophecy, generating a tree that has exactly the same statistics as the NN prediction. But the actual game results will steer the MCTS away from poor moves, and will make it focus on good moves. And the NN will then be tuned to already produce such focusing on good moves for the initial visiting probabilities, etc.

So you get progressively more realistic 'random' playouts, which will define more and more narrow MCTS trees in every position of the test games.
They don't do random playouts, which means that when you are at a leaf node you give the position to the NN to get a win probability, instead of doing a random playout to the very end of the game.
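Roughly, the difference looks like this (a minimal sketch; the position API and net.evaluate are made-up names, not the paper's):

    import random

    # Classic MCTS leaf evaluation: play random moves to the very end.
    def rollout_value(position):
        while not position.is_terminal():
            position = position.play(random.choice(position.legal_moves()))
        return position.result()  # -1, 0 or +1 by the rules

    # AlphaGo Zero style: a single call to the value head replaces the rollout.
    def leaf_value(position, net):
        move_priors, value = net.evaluate(position)  # value is in [-1, 1]
        return value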
User avatar
hgm
Posts: 27787
Joined: Fri Mar 10, 2006 10:06 am
Location: Amsterdam
Full name: H G Muller

Re: Google's AlphaGo team has been working on chess

Post by hgm »

So what do they mean by this?
At the end of the game, the terminal position s_T is scored according to the rules of the game to compute the game outcome z: −1 for a loss, 0 for a draw, and +1 for a win.
Albert Silver
Posts: 3019
Joined: Wed Mar 08, 2006 9:57 pm
Location: Rio de Janeiro, Brazil

Re: Google's AlphaGo team has been working on chess

Post by Albert Silver »

Milos wrote:
hgm wrote:
syzygy wrote:They are not playing out any games "to the very end". And the randomness of the selection is also quite limited (if not completely absent - the paper states that the edge is selected that maximizes an "upper confidence bound").
Where do you read that? When I was glancing through the paper, I got the impression that what they do is normal MCTS, with the caveat that 'random' in the move selection for the playouts does not mean 'homogeneously distributed probabilities', but probabilities according to the policy of their NN. In the absence of scoring at the game ends (e.g. declaring every game a draw) that would make the MCTS a self-fulfilling prophecy, generating a tree that has exactly the same statistics as the NN prediction. But the actual game results will steer the MCTS away from poor moves, and will make it focus on good moves. And the NN will then be tuned to already produce such focusing on good moves for the initial visiting probabilities, etc.

So you get progressively more realistic 'random' playouts, which will define more and more narrow MCTS trees in every position of the test games.
There are no playouts. Did you even read the AGZ Nature paper???
Compared to the MCTS in AlphaGo Fan and AlphaGo Lee, the principal differences are that AlphaGo Zero does not use any rollouts; it uses a single neural network instead of separate policy and value networks; leaf nodes are always expanded, rather than using dynamic expansion; each search thread simply waits for the neural network evaluation, rather than performing evaluation and backup asynchronously; and there is no tree policy.
Here is a link to the Nature article (given at DeepMind site):

https://goo.gl/4SbJh1
"Tactics are the bricks and sticks that make up a game, but positional play is the architectural blueprint."
Michel
Posts: 2272
Joined: Mon Sep 29, 2008 1:50 am

Re: Google's AlphaGo team has been working on chess

Post by Michel »

hgm wrote:So what do they mean by this?
At the end of the game, the terminal position s_T is scored according to the rules of the game to compute the game outcome z: −1 for a loss, 0 for a draw, and +1 for a win.
Maybe just that they obviously don't use their NN to score positions where the game has ended?
Ideas=science. Simplification=engineering.
Without ideas there is nothing to simplify.
CheckersGuy
Posts: 273
Joined: Wed Aug 24, 2016 9:49 pm

Re: Google's AlphaGo team has been working on chess

Post by CheckersGuy »

Michel wrote:
hgm wrote:So what do they mean by this?
At the end of the game, the terminal position s_T is scored according to the rules of the game to compute the game outcome z: −1 for a loss, 0 for a draw, and +1 for a win.
Maybe just that they obviously don't use their NN to score positions where the game has ended?
This. Maybe HG forgot that this paper isn't only about chess but about board games in general. Therefore terminal positions are scored according to the rules of the game. This quote was about the self-play games, which obviously need to be scored!!!
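Concretely, the rules only enter when the finished self-play game is scored and its outcome is attached to every position as a training target. A hedged sketch (the position methods here are invented):

    # Score a finished self-play game by the rules, then label every
    # position in it with the outcome z from the side to move's view.
    def label_game(positions, final_position):
        z_white = final_position.result_for_white()   # -1 / 0 / +1 by the rules
        return [(pos, z_white if pos.white_to_move else -z_white)
                for pos in positions]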
Sven
Posts: 4052
Joined: Thu May 15, 2008 9:57 pm
Location: Berlin, Germany
Full name: Sven Schüle

Re: Google's AlphaGo team has been working on chess

Post by Sven »

CheckersGuy wrote:
Michel wrote:
hgm wrote:So what do they mean by this?
At the end of the game, the terminal position s_T is scored according to the rules of the game to compute the game outcome z: −1 for a loss, 0 for a draw, and +1 for a win.
Maybe just that they obviously don't use their NN to score positions where the game has ended?
This. Maybe HG forgot that this paper isn't only about chess but about board games in general. Therefore terminal positions are scored according to the rules of the game. This quote was about the self-play games, which obviously need to be scored!!!
Strictly speaking, the Nature article is only about Go. But apart from that nitpicking you are right. The phrase quoted by HG is clearly related to terminal positions of the self-play games, not to leaf nodes of MCTS search. It is crucial to understand the difference: training was done by playing a huge number of self-play games ("iterations"), and at each position of those games MCTS was used to calculate move probabilities which in turn were used to improve the NN. According to the section "Methods" an MCTS leaf node is reached after L "time-steps" which sounds like some fixed search depth. AlphaGo Zero (and thus also AlphaZero for chess) does not use Monte Carlo "playouts" or "rollouts". So even though they call their method MCTS-based, it is not like a standard MCTS due to the complete lack of random playouts.
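As a rough sketch of that loop (all names here are hypothetical, with run_mcts standing in for the fixed-size search, and the perspective flipping of z per side to move omitted for brevity):

    # One training iteration: self-play with MCTS, then fit the NN so that
    # its policy head matches the MCTS move probabilities pi and its value
    # head matches the final game outcome z.
    def training_iteration(net, games_per_iteration):
        examples = []
        for _ in range(games_per_iteration):
            position, history = start_position(), []
            while not position.is_terminal():
                pi = run_mcts(position, net)      # improved move probabilities
                history.append((position, pi))
                position = position.play(sample_move(pi))
            z = position.result()                 # scored by the rules of the game
            examples += [(pos, pi, z) for pos, pi in history]
        return train(net, examples)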
CheckersGuy
Posts: 273
Joined: Wed Aug 24, 2016 9:49 pm

Re: Google's AlphaGo team has been working on chess

Post by CheckersGuy »

Sven wrote:
CheckersGuy wrote:
Michel wrote:
hgm wrote:So what do they mean by this?
At the end of the game, the terminal position s_T is scored according to the rules of the game to compute the game outcome z: −1 for a loss, 0 for a draw, and +1 for a win.
Maybe just that they obviously don't use their NN to score positions where the game has ended?
This. Maybe HG forgot that this paper isn't only about chess but about board games in general. Therefore terminal positions are scored according to the rules of the game. This quote was about the self-play games, which obviously need to be scored!!!
Strictly speaking, the Nature article is only about Go. But apart from that nitpicking you are right. The phrase quoted by HG is clearly related to terminal positions of the self-play games, not to leaf nodes of MCTS search. It is crucial to understand the difference: training was done by playing a huge number of self-play games ("iterations"), and at each position of those games MCTS was used to calculate move probabilities which in turn were used to improve the NN. According to the section "Methods" an MCTS leaf node is reached after L "time-steps" which sounds like some fixed search depth. AlphaGo Zero (and thus also AlphaZero for chess) does not use Monte Carlo "playouts" or "rollouts". So even though they call their method MCTS-based, it is not like a standard MCTS due to the complete lack of random playouts.
Yes.
I meant the AlphaZero paper and not the AlphaGo Zero Nature paper. I guess we will find out more about how A0 works once they publish the entire paper.
syzygy
Posts: 5557
Joined: Tue Feb 28, 2012 11:56 pm

Re: Google's AlphaGo team has been working on chess

Post by syzygy »

Sven wrote:According to the section "Methods" an MCTS leaf node is reached after L "time-steps" which sounds like some fixed search depth.
As I understand it, they basically start with a tree consisting of just the root node. This node is at the same time the single leaf node and is expanded by running it through the NN. The move probabilities returned by the NN are assigned to the edges from the root node to the new leaf nodes. In the next step, one of these edges is chosen by applying the UCB1 strategy. We are then in a leaf node, which is expanded, etc. After every expansion, the move probabilities of the new leaf nodes are backed up to the root. So with 800 "simulations" (but you skip the simulations) you end up with a tree of 801 nodes. The most promising root move is then chosen and played.

One of the papers (probably the Nature paper) explains that the subtree for the move played is carried over to the next move, which obviously makes sense.

So there is no fixed search depth, but there is a fixed tree size (at least in self-play games for training).
Sven wrote:AlphaGo Zero (and thus also AlphaZero for chess) does not use Monte Carlo "playouts" or "rollouts". So even though they call their method MCTS-based, it is not like a standard MCTS due to the complete lack of random playouts.
It is MCTS without MC...
User avatar
hgm
Posts: 27787
Joined: Fri Mar 10, 2006 10:06 am
Location: Amsterdam
Full name: H G Muller

Re: Google's AlphaGo team has been working on chess

Post by hgm »

OK, I hadn't seen the Nature paper; just the one on AlphaZero, and they seem to have left this part out of there. This is why I asked.

I guess it makes sense: in fact the whole collection of training games at one setting of the NN parameters can be considered a sampling of a giant MCTS. The test games themselves are the random playouts of that search.

For playing a game with the trained network they just simulate the MCTS using the sampling probabilities. This should not have any detrimental effects; it just spares you the sampling noise you would otherwise have. Nodes a few ply below the leaves will average their score over so many leaves that replacing an actual playout by the average outcome of such a playout is justified, even though the individual outcome predictions in the leaves will be pretty unreliable. (The NN will have difficulty seeing tactics; its only task is to indicate what is useful to look at, because it has sufficient potential to be good.)

This whole exercise proves that a search strategy that is highly selective (but takes great care in what it selects) can be competitive with brute-force search. (Which we of course already knew, because it is how humans play Chess.) The current research gives an upper bound on the complexity of a NN that can achieve the required accuracy.

Now we know that neural networks are a quite inefficient way to solve problems. Their only virtue is that they can be trained to do the job, which is very helpful when the programmer doesn't know how to do the job himself. Usually you need a large overkill in terms of neurons to make it reasonably sure that the connections you turn out to need will accidentally be present in the network. So it is very likely that the resulting neural network can eventually be simplified by a few orders of magnitude, by explicitly programming the calculation of intermediate results that it seems to use but now derives only at great cost. (Imagine how many neurons you would need to recognize that a Knight attacks a Queen somewhere on the board and to suggest that making the capture is a good idea, and how many operations it would take to 'run' such a network, compared to how long a program written to do just that would need.)

So it is quite conceivable that such a highly selective search could in fact run on a normal PC, provided the NN can be replaced by dedicated code doing the same thing with much less effort, now that the training has revealed what it actually is that needs to be done.
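To make that last example concrete, here is a throwaway sketch of how little code the "Knight attacks Queen" detector needs (the board representation is invented here):

    # Detect "a Knight attacks the enemy Queen somewhere on the board".
    # board: dict mapping (file, rank) -> piece letter, uppercase = White.
    KNIGHT_JUMPS = [(1, 2), (2, 1), (2, -1), (1, -2),
                    (-1, -2), (-2, -1), (-2, 1), (-1, 2)]

    def knight_attacks_queen(board):
        for (f, r), piece in board.items():
            if piece not in ('N', 'n'):
                continue
            enemy_queen = 'q' if piece == 'N' else 'Q'
            for df, dr in KNIGHT_JUMPS:
                if board.get((f + df, r + dr)) == enemy_queen:
                    return (f, r), (f + df, r + dr)   # attacker, target
        return None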
CheckersGuy
Posts: 273
Joined: Wed Aug 24, 2016 9:49 pm

Re: Google's AlphaGo team has been working on chess

Post by CheckersGuy »

syzygy wrote:
Sven wrote:According to the section "Methods" an MCTS leaf node is reached after L "time-steps" which sounds like some fixed search depth.
As I understand it, they basically start with a tree consisting of just the root node. This node is at the same time the single leaf node and is expanded by running it through the NN. The move probabilities returned by the NN are assigned to the edges from the root node to the new leaf nodes. In the next step, one of these edges is chosen by applying the UCB1 strategy. We are then in a leaf node, which is expanded, etc. After every expansion, the move probabilities of the new leaf nodes are backed up to the root. So with 800 "simulations" (but you skip the simulations) you end up with a tree of 801 nodes. The most promising root move is then chosen and played.

One of the papers (probably the Nature paper) explains that the subtree for the move played is carried over to the next move, which obviously makes sense.

So there is no fixed search depth, but there is a fixed tree size (at least in self-play games for training).
Sven wrote:AlphaGo Zero (and thus also AlphaZero for chess) does not use Monte Carlo "playouts" or "rollouts". So even though they call their method MCTS-based, it is not like a standard MCTS due to the complete lack of random playouts.
It is MCTS without MC...
It's not the move probabilities that are backed up but the winning probability of the leaf node according to the neural network.
The NN has two outputs: if you are in a state S, it outputs a probability distribution over all moves a available in S. Additionally, the NN also outputs a winning probability for the state S itself!!!
The winning probability takes the place of the random playout in normal MCTS and is therefore propagated up to the root. The move probabilities provide a bias in the selection phase of the MCTS.
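Putting syzygy's description and this correction together, one "simulation" could look roughly like this (a hedged sketch: the PUCT-style selection term follows the AlphaGo Zero paper, but the node fields and net.evaluate are invented here, and terminal positions are omitted):

    import math

    C_PUCT = 1.0  # exploration constant; the actual value here is a guess

    def simulate(node, net):
        if node.children:
            # Selection: maximize Q + U, where the policy prior P biases
            # exploration towards moves the NN likes.
            best = max(node.children.values(), key=lambda ch:
                       ch.Q + C_PUCT * ch.P * math.sqrt(node.N) / (1 + ch.N))
            value = -simulate(best, net)       # sign flips with side to move
        else:
            # Expansion + evaluation: one NN call instead of a random playout.
            priors, value = net.evaluate(node.position)
            node.expand(priors)                # children get P from the policy head
        # Backup: the value estimate, not the move probabilities, goes up.
        node.N += 1
        node.Q += (value - node.Q) / node.N
        return value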