A Simple Alpha(Go) Zero Tutorial

Discussion of chess software programming and technical issues.

Moderators: hgm, Rebel, chrisw

BeyondCritics
Posts: 396
Joined: Sat May 05, 2012 2:48 pm
Full name: Oliver Roese

A Simple Alpha(Go) Zero Tutorial

Post by BeyondCritics »

brianr
Posts: 536
Joined: Thu Mar 09, 2006 3:01 pm

Re: A Simple Alpha(Go) Zero Tutorial

Post by brianr »

Great find. Thank you.

For follow-up, note the links (from above):
https://github.com/suragnair/alpha-zero-general

Downloadable paper:
https://github.com/suragnair/alpha-zero ... riteup.pdf
TommyTC
Posts: 38
Joined: Thu Mar 30, 2017 8:52 am

Re: A Simple Alpha(Go) Zero Tutorial

Post by TommyTC »

"It assumes basic familiarity with machine learning and reinforcement learning concepts, and should be accessible if you understand neural network basics and Monte Carlo Tree Search. "

I guess "simple" is in the mind of the beholder :)
Henk
Posts: 7216
Joined: Mon May 27, 2013 10:31 am

Re: A Simple Alpha(Go) Zero Tutorial

Post by Henk »

I'm using -1 + 2 / (1 + Exp(-sum)) in the output layer to get v(s) values in [-1, 1], but Exp is now consuming most of the processing time.

Are there faster alternatives?
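
(As an aside, algebra shows this expression is exactly tanh(sum/2): -1 + 2/(1 + exp(-sum)) = (exp(sum) - 1)/(exp(sum) + 1) = tanh(sum/2). A minimal C sketch of the equivalent one-call form; the function name is illustrative:

#include <math.h>

/* -1 + 2/(1 + exp(-sum)) is algebraically identical to tanh(sum/2). */
double v_of_s(double sum) { return tanh(0.5 * sum); }

So any fast tanh substitute applies directly.)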
Daniel Shawul
Posts: 4185
Joined: Tue Mar 14, 2006 11:34 am
Location: Ethiopia

Re: A Simple Alpha(Go) Zero Tutorial

Post by Daniel Shawul »

ReLU is used after the convolution steps -- which leads to faster convergence and also faster computation -- but you are bound to sigmoid or tanh in the fully connected layers.
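
A minimal C sketch of the two activations being contrasted (names are illustrative):

#include <math.h>

/* ReLU: cheap to compute, used after the convolution layers. */
static inline float relu(float x) { return x > 0.0f ? x : 0.0f; }

/* tanh squashes the value-head output into (-1, 1). */
static inline float value_out(float sum) { return tanhf(sum); }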
hgm
Posts: 27788
Joined: Fri Mar 10, 2006 10:06 am
Location: Amsterdam
Full name: H G Muller

Re: A Simple Alpha(Go) Zero Tutorial

Post by hgm »

Henk wrote:I'm using -1 + 2 / (1 + Exp(-sum)) in the output layer to get v(s) values in [-1, 1], but Exp is now consuming most of the processing time.

Are there faster alternatives?
Just tabulate the function, so that it requires only an array access.
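
A minimal sketch of that in C; the table size and input range here are arbitrary choices, and inputs outside the range are treated as saturated:

#include <math.h>

#define TBL_SIZE 4096
#define IN_MIN  (-8.0)
#define IN_MAX    8.0

static double tbl[TBL_SIZE];

/* Fill the table once at startup. */
void init_tbl(void) {
    for (int i = 0; i < TBL_SIZE; i++) {
        double x = IN_MIN + (IN_MAX - IN_MIN) * i / (TBL_SIZE - 1);
        tbl[i] = -1.0 + 2.0 / (1.0 + exp(-x));
    }
}

/* Replace the exp() call with one array access. */
double fast_v(double sum) {
    if (sum <= IN_MIN) return -1.0;  /* saturated below */
    if (sum >= IN_MAX) return  1.0;  /* saturated above (approximately) */
    int i = (int)((sum - IN_MIN) * (TBL_SIZE - 1) / (IN_MAX - IN_MIN));
    return tbl[i];
}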
Henk
Posts: 7216
Joined: Mon May 27, 2013 10:31 am

Re: A Simple Alpha(Go) Zero Tutorial

Post by Henk »

Daniel Shawul wrote:ReLU is used after the convolution steps -- which leads to faster convergence and also faster computation -- but you are bound to sigmoid or tanh in the fully connected layers.
I haven't started implementing the convolution steps yet. Those layers might make it much slower still, so better not to optimize yet.
Henk
Posts: 7216
Joined: Mon May 27, 2013 10:31 am

Re: A Simple Alpha(Go) Zero Tutorial

Post by Henk »

hgm wrote:
Henk wrote:I'm using -1 + 2 / (1 + Exp(-sum)) in the output layer to get v(s) values in [-1, 1], but Exp is now consuming most of the processing time.

Are there faster alternatives?
Just tabulate the function, so that it requires only an array access.
The argument is a double. Or do you mean to make it discrete: first convert it into an integer, and then do a lookup to get an approximation?
hgm
Posts: 27788
Joined: Fri Mar 10, 2006 10:06 am
Location: Amsterdam
Full name: H G Muller

Re: A Simple Alpha(Go) Zero Tutorial

Post by hgm »

For inference, 8-bit integers seem to be enough for cell outputs; this at least is what the Google gen-1 TPUs use. Only for the back-propagation during training is better precision needed. Of course this means that the weight x output products can be 16 bit, and a number of those will be summed to act as input to the sigmoid layer.
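
A sketch of such an accumulation in C (sizes and names are illustrative, not what a TPU actually does):

#include <stdint.h>

/* Dot product with 8-bit quantized weights and activations.
   Each int8 x int8 product fits in 16 bits; a 32-bit accumulator
   holds the sum, which then feeds the activation function. */
int32_t dot_q8(const int8_t *w, const int8_t *a, int n) {
    int32_t acc = 0;
    for (int i = 0; i < n; i++)
        acc += (int16_t)w[i] * (int16_t)a[i];
    return acc;
}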

But it should not cause any problems to do a piece-wise linear approximation of the sigmoid. E.g. quantize the input into 256 intervals, and tabulate both the function and its derivative in each interval.

The coarsest approximation of the sigmoid would be to just clip f(x) = x at -1 and +1. Even that might work.
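
A sketch of both ideas in C, assuming a fixed input range (the range, like the 256, is a tunable choice):

#include <math.h>

#define N_INT 256
#define X_MIN (-8.0f)
#define X_MAX   8.0f

static float f_tbl[N_INT];   /* function value at each interval start */
static float d_tbl[N_INT];   /* slope within each interval */

void init_sigmoid_tbl(void) {
    float step = (X_MAX - X_MIN) / N_INT;
    for (int i = 0; i < N_INT; i++) {
        float x0 = X_MIN + i * step;
        float y0 = -1.0f + 2.0f / (1.0f + expf(-x0));
        float y1 = -1.0f + 2.0f / (1.0f + expf(-(x0 + step)));
        f_tbl[i] = y0;
        d_tbl[i] = (y1 - y0) / step;   /* finite-difference slope */
    }
}

/* Piece-wise linear approximation: one lookup plus one multiply-add. */
float sigmoid_pwl(float x) {
    if (x <= X_MIN) return -1.0f;
    if (x >= X_MAX) return  1.0f;
    int i = (int)((x - X_MIN) * (N_INT / (X_MAX - X_MIN)));
    float x0 = X_MIN + i * ((X_MAX - X_MIN) / N_INT);
    return f_tbl[i] + d_tbl[i] * (x - x0);
}

/* Coarsest version: just clip the identity function at -1 and +1. */
float sigmoid_clip(float x) { return x < -1.0f ? -1.0f : (x > 1.0f ? 1.0f : x); }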
Henk
Posts: 7216
Joined: Mon May 27, 2013 10:31 am

Re: A Simple Alpha(Go) Zero Tutorial

Post by Henk »

In Monte Carlo Tree Search I'm using PUCT, but I don't know what would be a reasonable value for the exploration constant C (degree of exploration). Would it be more like 0.9, or 0.1, or something else?
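
For reference, the PUCT selection rule from the AlphaGo Zero paper, sketched in C with C left as a tunable parameter; the data layout and names are illustrative. Commonly reported settings are in the low single digits, but the constant interacts with the number of simulations, so it needs tuning:

#include <math.h>

#define MAX_MOVES 256

typedef struct {
    float P[MAX_MOVES];   /* prior probability from the policy network */
    float Q[MAX_MOVES];   /* mean value of each child */
    int   N[MAX_MOVES];   /* visit count of each child */
    int   n_moves;
    int   n_total;        /* total visits = sum of N[] */
} Node;

/* PUCT: pick argmax over Q(s,a) + C * P(s,a) * sqrt(N(s)) / (1 + N(s,a)).
   Larger C weights the prior-driven exploration term more heavily. */
int select_move(const Node *s, float C) {
    int best = 0;
    float best_score = -INFINITY;
    float sqrt_total = sqrtf((float)s->n_total);
    for (int a = 0; a < s->n_moves; a++) {
        float u = C * s->P[a] * sqrt_total / (1.0f + s->N[a]);
        float score = s->Q[a] + u;
        if (score > best_score) { best_score = score; best = a; }
    }
    return best;
}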