
A Simple Alpha(Go) Zero Tutorial

Posted: Sat Dec 30, 2017 1:31 am
by BeyondCritics

Re: A Simple Alpha(Go) Zero Tutorial

Posted: Sat Dec 30, 2017 2:27 pm
by brianr
Great find. Thank you.

For follow-up, the links (from above):
https://github.com/suragnair/alpha-zero-general

Downloadable paper:
https://github.com/suragnair/alpha-zero ... riteup.pdf

Re: A Simple Alpha(Go) Zero Tutorial

Posted: Sun Dec 31, 2017 6:03 pm
by TommyTC
"It assumes basic familiarity with machine learning and reinforcement learning concepts, and should be accessible if you understand neural network basics and Monte Carlo Tree Search. "

I guess "simple" is in the mind of the beholder :)

Re: A Simple Alpha(Go) Zero Tutorial

Posted: Wed Jan 03, 2018 4:51 pm
by Henk
I'm using -1 + 2 / (1 + Exp(-sum)) in the output layer to get v(s) values in [-1, 1], but Exp is now consuming most of the processing time.

Are there faster alternatives?
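(Side note: -1 + 2/(1 + exp(-x)) = (1 - exp(-x))/(1 + exp(-x)) = tanh(x/2), so this output layer is just tanh with the input halved.)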

Re: A Simple Alpha(Go) Zero Tutorial

Posted: Wed Jan 03, 2018 4:56 pm
by Daniel Shawul
ReLU is used after the convolution steps -- which leads to faster convergence and also faster computation -- but you are bound to sigmoid or tanh in the fully connected layers.
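A rough sketch of the two activation choices in C (the function names are mine, just for illustration):

Code:
#include <math.h>

/* ReLU, used after the convolution steps: cheap and converges fast. */
double relu(double x) { return x > 0.0 ? x : 0.0; }

/* tanh, the usual squashing choice for the fully connected value output. */
double value_activation(double sum) { return tanh(sum); }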

Re: A Simple Alpha(Go) Zero Tutorial

Posted: Wed Jan 03, 2018 4:59 pm
by hgm
Henk wrote:I'm using -1 + 2 / (1 + Exp(-sum)) in the output layer to get v(s) values in [-1, 1], but Exp is now consuming most of the processing time.

Are there faster alternatives?
Just tabulate the function, so that it requires only an array access.
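A minimal sketch of the tabulation idea in C (the table size and input range are my own choices; inputs outside the range are clamped, since the function is essentially flat there):

Code:
#include <math.h>

#define TABLE_SIZE 4096
#define INPUT_MAX  8.0   /* clamp inputs to [-8, 8] */

static double sig_tab[TABLE_SIZE];

/* Fill the table once with f(x) = -1 + 2/(1 + exp(-x)). */
void init_sig_tab(void) {
    for (int i = 0; i < TABLE_SIZE; i++) {
        double x = -INPUT_MAX + 2.0 * INPUT_MAX * i / (TABLE_SIZE - 1);
        sig_tab[i] = -1.0 + 2.0 / (1.0 + exp(-x));
    }
}

/* One clamp, one scale, one array access -- no exp() at run time. */
double fast_sig(double x) {
    if (x <= -INPUT_MAX) return -1.0;
    if (x >=  INPUT_MAX) return  1.0;
    int i = (int)((x + INPUT_MAX) * (TABLE_SIZE - 1) / (2.0 * INPUT_MAX));
    return sig_tab[i];
}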

Re: A Simple Alpha(Go) Zero Tutorial

Posted: Wed Jan 03, 2018 5:05 pm
by Henk
Daniel Shawul wrote:ReLU is used after the convolution steps -- which leads to faster convergence and also faster computation -- but you are bound to sigmoid or tanh in the fully connected layers.
I haven't started implementing the convolution steps yet. Those layers might make it much slower still, so better not to optimize yet.

Re: A Simple Alpha(Go) Zero Tutorial

Posted: Wed Jan 03, 2018 5:07 pm
by Henk
hgm wrote:
Henk wrote:I'm using -1 + 2 / (1 + Exp(-sum)) in the output layer to get v(s) values in [-1, 1], but Exp is now consuming most of the processing time.

Are there faster alternatives?
Just tabulate the function, so that it requires only an array access.
The argument is a double. Or do you mean make it discrete: first convert it to an integer, then do a lookup to get an approximation?

Re: A Simple Alpha(Go) Zero Tutorial

Posted: Wed Jan 03, 2018 5:52 pm
by hgm
For inference, 8-bit integers seem to be enough for cell outputs; this at least is what the Google gen-1 TPUs use. Only for the back-propagation during training is better precision needed. Of course this means that the weight × output products can be 16 bit, and a number of those will be summed to act as input to the sigmoid layer.
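A sketch of that arithmetic in C (types only; nothing here is TPU-specific):

Code:
#include <stdint.h>

/* 8-bit cell outputs times 8-bit weights give 16-bit products;
   accumulate them in a wider register and feed the sum to the sigmoid. */
int32_t dot_i8(const int8_t *w, const int8_t *x, int n) {
    int32_t acc = 0;
    for (int i = 0; i < n; i++)
        acc += (int16_t)(w[i] * x[i]);   /* product of two int8 fits in 16 bits */
    return acc;
}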

But it should not cause any problems to do a piece-wise linear approximation of the sigmoid. E.g. quantize the input into 256 intervals, and tabulate both the function and its derivative in each interval.

The coarsest approximation of the sigmoid would be to just clip f(x) = x at -1 and +1. Even that might work.
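A sketch of both ideas in C (the 256 intervals are from the post above; the input range is my own guess):

Code:
#include <math.h>

#define N_IV  256
#define RANGE 8.0                 /* quantize inputs over [-8, 8] */

static double f_tab[N_IV + 1];    /* function value at each grid point */
static double d_tab[N_IV];        /* change across each interval       */

void init_piecewise(void) {
    for (int i = 0; i <= N_IV; i++) {
        double x = -RANGE + 2.0 * RANGE * i / N_IV;
        f_tab[i] = -1.0 + 2.0 / (1.0 + exp(-x));
    }
    for (int i = 0; i < N_IV; i++)
        d_tab[i] = f_tab[i + 1] - f_tab[i];
}

/* Piece-wise linear sigmoid: one lookup plus one multiply-add. */
double sig_pl(double x) {
    if (x <= -RANGE) return -1.0;
    if (x >=  RANGE) return  1.0;
    double t = (x + RANGE) * N_IV / (2.0 * RANGE);
    int i = (int)t;
    return f_tab[i] + d_tab[i] * (t - i);
}

/* The coarsest version: clip f(x) = x at -1 and +1. */
double sig_clip(double x) {
    return x < -1.0 ? -1.0 : (x > 1.0 ? 1.0 : x);
}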

Re: A Simple Alpha(Go) Zero Tutorial

Posted: Fri Jan 05, 2018 4:22 pm
by Henk
In Monte Carlo Tree Search I'm using PUCT, but I don't know what would be a reasonable value for the exploration constant C (the degree of exploration). Would it be more like 0.9, or 0.1, or something else?
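For reference, the PUCT selection rule as given in the AlphaGo Zero paper, with C as the only tunable knob (the struct layout is just for illustration):

Code:
#include <math.h>

/* Per-edge statistics, as in the AlphaGo Zero paper. */
typedef struct {
    double P;  /* prior probability from the policy head */
    double Q;  /* mean value of simulations through this edge */
    int    N;  /* visit count */
} Edge;

/* Select the child maximizing Q(s,a) + C * P(s,a) * sqrt(N(s)) / (1 + N(s,a)),
   where N(s) is the total visit count of the parent node. */
int select_child(const Edge *edges, int n_children, int parent_visits, double C) {
    int best = 0;
    double best_score = -1e30;
    double sqrt_n = sqrt((double)parent_visits);
    for (int a = 0; a < n_children; a++) {
        double u = C * edges[a].P * sqrt_n / (1.0 + edges[a].N);
        if (edges[a].Q + u > best_score) { best_score = edges[a].Q + u; best = a; }
    }
    return best;
}

As far as I can tell the paper only says C determines the level of exploration and does not publish a number, so it is a tuning parameter: larger C weights the prior-driven exploration term more heavily against Q, and the right value interacts with how many visits you do per move.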