
ELF OpenGo: An Open Reimplementation of AlphaZero

Posted: Wed Feb 13, 2019 5:04 pm
by clumma
What is Leela doing wrong?

https://arxiv.org/abs/1902.04522

-Carl

Re: ELF OpenGo: An Open Reimplementation of AlphaZero

Posted: Wed Feb 13, 2019 6:19 pm
by Steppenwolf
Great, there is already a binary available: https://facebook.ai/developers/tools/elf-opengo
Waiting for a port of ELF to chess...

Re: ELF OpenGo: An Open Reimplementation of AlphaZero

Posted: Wed Feb 13, 2019 6:29 pm
by Guenther
Steppenwolf wrote:
Wed Feb 13, 2019 6:19 pm
Great, there is already a binary available: https://facebook.ai/developers/tools/elf-opengo
Waiting for a port of ELF to chess...
This is even more interesting for the programmers section
https://github.com/pytorch/ELF

Re: ELF OpenGo: An Open Reimplementation of AlphaZero

Posted: Wed Feb 13, 2019 6:45 pm
by Steppenwolf
Now volunteers are needed for ELF OpenChess!

I just found: https://github.com/pytorch/ELF/issues/8

Re: ELF OpenGo: An Open Reimplementation of AlphaZero

Posted: Thu Feb 14, 2019 4:03 am
by Daniel Shawul
Notes I took from glancing at the paper:

a) CPUCT = 1.5

b) Virtual loss = 1

c) Ladders (tactical sequences in Go) are hard to learn

d) Batch normalization moment staleness. A technical issue I don't fully understand, but for which they provide a PyTorch plugin

e) A value head only, which is something I used to do, gives a weak engine. They found this out accidentally when they set the policy weight to 1/362
by mistake. The full quote:
Dominating value gradients: We performed an unintentional ablation study in which we set the cross entropy coefficient to 1/362 during backpropagation. This change will train the value network much faster than the policy network. We observe that ELF OpenGo can still achieve a strength of around amateur dan level. Further progress is extremely slow, likely due to the minimal gradient from policy network. This suggests that any MCTS augmented with only a value heuristic has a relatively low skill ceiling in Go.
f) Resigning lost games during selfplay training is important. It focuses the net on learning the opening/middlegame (the most important parts of the game) faster
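To make points a) and b) concrete, here is a minimal sketch of PUCT child selection using the constants reported in the paper (cpuct = 1.5, virtual loss = 1). All class and function names are illustrative, not taken from the ELF OpenGo codebase:

```python
import math
from dataclasses import dataclass

# Hypothetical node statistics; field names are assumptions, not ELF's.
@dataclass
class Node:
    prior: float          # P(s, a) from the policy head
    visits: int = 0       # N(s, a)
    value_sum: float = 0.0  # W(s, a)
    virtual_loss: int = 0   # pending losses from in-flight simulations

    def q(self) -> float:
        n = self.visits + self.virtual_loss
        if n == 0:
            return 0.0
        # Each virtual loss counts as a lost playout (-1), so parallel
        # searchers are discouraged from descending the same branch.
        return (self.value_sum - self.virtual_loss) / n

CPUCT = 1.5       # exploration constant reported in the paper
VIRTUAL_LOSS = 1  # virtual loss reported in the paper

def select_child(children: dict) -> str:
    """Pick the child maximizing Q(s,a) + cpuct * P(s,a) * sqrt(N) / (1 + n)."""
    total_n = sum(c.visits + c.virtual_loss for c in children.values())

    def puct(c: Node) -> float:
        u = CPUCT * c.prior * math.sqrt(total_n) / (1 + c.visits + c.virtual_loss)
        return c.q() + u

    move = max(children, key=lambda m: puct(children[m]))
    # Apply virtual loss before descending; it is reverted on backup.
    children[move].virtual_loss += VIRTUAL_LOSS
    return move
```

Calling `select_child` twice on the same node shows the effect of the virtual loss: the first call picks the high-prior child, and the second call is steered toward a different branch because the chosen child now looks like a loss in progress.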

Re: ELF OpenGo: An Open Reimplementation of AlphaZero

Posted: Thu Feb 14, 2019 9:41 am
by vijadhav321
Great information

Re: ELF OpenGo: An Open Reimplementation of AlphaZero

Posted: Mon Mar 04, 2019 7:57 am
by tocnaza
Nice post!