Today is a big day in computer chess:
https://arxiv.org/abs/1712.01815
https://arxiv.org/pdf/1712.01815.pdf
Google's AlphaGo team has been working on chess
Moderators: hgm, Rebel, chrisw
-
- Posts: 38
- Joined: Thu Mar 09, 2006 2:19 am
-
- Posts: 3550
- Joined: Thu Jun 07, 2012 11:02 pm
Re: Google's AlphaGo team has been working on chess
Incredible:
In chess, AlphaZero outperformed Stockfish after just 4 hours
-
- Posts: 127
- Joined: Sat Jan 22, 2011 7:14 pm
- Location: Lille, France
Re: Google's AlphaGo team has been working on chess
Modern Times wrote:
Incredible:
In chess, AlphaZero outperformed Stockfish after just 4 hours

Time is misleading in DeepMind's papers, as they use thousands of "computers" (not even commercially available). Money would be a better measure.
-
- Posts: 2
- Joined: Tue Aug 25, 2015 6:05 pm
Re: Google's AlphaGo team has been working on chess
"Money would be a better measure."
The AlphaZero training system cost around $4 million in hardware (figure given for AlphaGo Zero; I don't have the source at hand).
-
- Posts: 2658
- Joined: Wed Mar 10, 2010 10:18 pm
- Location: Hamburg, Germany
- Full name: Srdja Matovic
Re: Google's AlphaGo team has been working on chess
"Money would be a better measure."
Or maybe games used for training...
NeuroChess: 120 000
Giraffe (est.): 10 000 000
AlphaZero Chess: 44 000 000
--
Srdja
-
- Posts: 24
- Joined: Wed Nov 05, 2014 11:28 am
- Location: Italy
Re: Google's AlphaGo team has been working on chess
Evaluation speed (positions per second):
AlphaZero: 80K
Stockfish: 70,000K
What?!
-
- Posts: 395
- Joined: Fri Aug 12, 2016 8:43 pm
Re: Google's AlphaGo team has been working on chess
pkappler wrote:
Today is a big day in computer chess:
https://arxiv.org/abs/1712.01815
https://arxiv.org/pdf/1712.01815.pdf

"Instead of a handcrafted evaluation function and move ordering heuristics, AlphaZero utilises a deep neural network (p,v) = fθ(s) with parameters θ. This neural network takes the board position s as an input and outputs a vector of move probabilities p with components pa = Pr(a|s) for each action a, and a scalar value v estimating the expected outcome z from position s"
This seems normal to me.
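For readers unfamiliar with the (p, v) = fθ(s) notation: it is a single network with two output heads, one giving a probability for each move and one giving a scalar position evaluation. Here is a minimal NumPy sketch of that interface; the input encoding, layer sizes, and single hidden layer are illustrative assumptions, not the paper's actual architecture (which is a deep residual network over board planes):

```python
import numpy as np

def softmax(x):
    # Numerically stable softmax over the move logits.
    e = np.exp(x - x.max())
    return e / e.sum()

class PolicyValueNet:
    """Toy stand-in for f_theta(s): one hidden layer, two heads.

    Sizes are placeholders; a real board encoding would use
    piece-plane features rather than a flat 64-vector.
    """
    def __init__(self, n_features=64, n_moves=128, n_hidden=32, seed=0):
        rng = np.random.default_rng(seed)
        self.W_h = rng.normal(0, 0.1, (n_hidden, n_features))
        self.W_p = rng.normal(0, 0.1, (n_moves, n_hidden))   # policy head
        self.W_v = rng.normal(0, 0.1, (1, n_hidden))         # value head

    def forward(self, s):
        h = np.tanh(self.W_h @ s)
        p = softmax(self.W_p @ h)          # move probabilities p_a = Pr(a|s)
        v = float(np.tanh(self.W_v @ h))   # scalar value v in [-1, 1]
        return p, v

net = PolicyValueNet()
p, v = net.forward(np.zeros(64))
```

The point of the shared body is that both heads are trained from the same self-play data: p toward the search's visit distribution, v toward the game outcome z.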
"Instead of an alpha-beta search with domain-specific enhancements, AlphaZero uses a general-purpose Monte-Carlo tree search (MCTS) algorithm. Each search consists of a series of simulated games of self-play that traverse a tree from root to leaf. Each simulation proceeds by selecting in each state a move with low visit count, high move probability and high value" [emphasis mine]
This is interesting. If I understand it correctly, it basically goes deeper only after reaching a high level of hash table hits.
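The selection rule the paper describes (prefer moves with high prior probability, high value, and low visit count) matches the PUCT formula from the earlier AlphaGo work, rather than anything based on hash-table hits. A hedged sketch, assuming a flat dictionary per node and an illustrative exploration constant c_puct:

```python
import math

def select_move(priors, values, visits, c_puct=1.5):
    """Pick the child maximizing Q + U, where U decays with visit count.

    priors[a]: network move probability p_a for move a
    values[a]: total value accumulated for move a over simulations
    visits[a]: how many simulations explored move a
    """
    total = sum(visits.values())
    best, best_score = None, -float("inf")
    for a in priors:
        q = values[a] / visits[a] if visits[a] else 0.0   # mean value
        u = c_puct * priors[a] * math.sqrt(total) / (1 + visits[a])
        if q + u > best_score:
            best, best_score = a, q + u
    return best

priors = {"e4": 0.6, "d4": 0.4}
values = {"e4": 5.0, "d4": 0.5}
visits = {"e4": 10, "d4": 1}
```

With these numbers both moves have the same mean value Q = 0.5, but the barely-visited d4 gets a much larger exploration bonus U, so it is selected next; as its visit count grows, the bonus shrinks and the search concentrates on the moves with the best mean value.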
"AlphaZero vs Stockfish: 25 win for AlphaZero, 25 draw, 0 loss (each program was given 1 minute of thinking time per move, strongest skill level using 64 threads and a hash size of 1GB)"
This is sci-fi. I do not have a 64-core machine, but on my PC Stockfish does not sacrifice a knight for two pawns:
1.e4 e5 2.Nf3 Nc6 3.Bb5 Nf6 4.d3 Bc5 5.Bxc6 dxc6 6.O-O Nd7 7.Nbd2 O-O 8.Qe1 f6 9.Nc4 Rf7 10.a4 Bf8 11.Kh1 Nc5 12.a5 Ne6 13.Ncxe5?
-
- Posts: 4607
- Joined: Wed Oct 01, 2008 6:33 am
- Location: Regensburg, Germany
- Full name: Guenther Simon
Re: Google's AlphaGo team has been working on chess
Fulvio wrote:
This is sci-fi. I do not have a 64 core machine but on my pc Stockfish do not sacrifice a Knight for 2 pawns [...]

The paper is very interesting. Nevertheless, selecting only wins and stripping all game info from the PGN might do for non-chess scientists, but here it is quite useless and remains doubtful.
I hope there is more to come, with more details on the games and the setup.
https://rwbc-chess.de
trollwatch:
Talkchess nowadays is a joke - it is full of trolls/idiots/people stuck in the pleistocene > 80% of the posts fall into this category...
-
- Posts: 2559
- Joined: Fri Nov 26, 2010 2:00 pm
- Location: Czech Republic
- Full name: Martin Sedlak
Re: Google's AlphaGo team has been working on chess
While this is indeed incredible, show me how it beats SF dev with a good book and Syzygy tablebases on equal hardware in a 1000-game match.
Alternatively, winning the next TCEC should do it.
-
- Posts: 4185
- Joined: Tue Mar 14, 2006 11:34 am
- Location: Ethiopia
Re: Google's AlphaGo team has been working on chess
Most of us here suspected that this could happen once Giraffe showed it can beat Stockfish's eval.
Just the fact that the new approach to chess programming worked incredibly well is fantastic, even if it didn't beat the best.
Daniel