Google's AlphaGo team has been working on chess

Discussion of chess software programming and technical issues.

Moderators: hgm, Rebel, chrisw

mar
Posts: 2554
Joined: Fri Nov 26, 2010 2:00 pm
Location: Czech Republic
Full name: Martin Sedlak

Re: Google's AlphaGo team has been working on chess

Post by mar »

EvgeniyZh wrote:
mar wrote:While this is indeed incredible, show me how it beats SF dev with a good book and syzygy on equal hardware in a 1,000-game match.

Alternatively, winning the next TCEC should do :wink:
Are you supposed to run Stockfish on a GPU?)
mar wrote:They are scientists so it would be nice to compare apples to apples.
AlphaZero used neither a book nor syzygy, and neither did Stockfish. That sounds like apples to apples.
Obviously I'd like to see AlphaZero running on a CPU (because running SF on a TPU won't happen) and still beating SF, while allowing SF to use every means to play its best chess, leaving zero doubt.

I wonder if they could do it, maybe not at the moment but probably soon.

Considering the hardware at their disposal, a 100-game match seems rather short.

I'm shocked by what they could accomplish without alpha-beta, though.
EvgeniyZh
Posts: 43
Joined: Fri Sep 19, 2014 4:54 pm
Location: Israel

Re: Google's AlphaGo team has been working on chess

Post by EvgeniyZh »

mar wrote:
EvgeniyZh wrote:
mar wrote:While this is indeed incredible, show me how it beats SF dev with a good book and syzygy on equal hardware in a 1,000-game match.

Alternatively, winning the next TCEC should do :wink:
Are you supposed to run Stockfish on a GPU?)
mar wrote:They are scientists so it would be nice to compare apples to apples.
AlphaZero used neither a book nor syzygy, and neither did Stockfish. That sounds like apples to apples.
Obviously I'd like to see AlphaZero running on a CPU (because running SF on a TPU won't happen) and still beating SF, while allowing SF to use every means to play its best chess, leaving zero doubt.

I wonder if they could do it, maybe not at the moment but probably soon.

Considering the hardware at their disposal, a 100-game match seems rather short.

I'm shocked by what they could accomplish without alpha-beta, though.
Well, they probably should have given the same FLOPS budget to both; that seems like the fairest comparison you can get, given the inefficiency of switching hardware for either side.

Winning against the latest Stockfish with an opening book and endgame tables would definitely be even more impressive.
jorose
Posts: 358
Joined: Thu Jan 22, 2015 3:21 pm
Location: Zurich, Switzerland
Full name: Jonathan Rosenthal

Re: Google's AlphaGo team has been working on chess

Post by jorose »

Very cool! I am especially surprised they still relied on an MCTS approach in chess. I don't think anybody can actually reproduce these results at the moment with hardware at home, but this certainly marks a significant shift in how computer chess will develop.

I am curious what kind of performance their program would be able to achieve on sub-2k off-the-shelf commercial hardware. Considering the power of their TPUs, I imagine the penalty would be pretty huge. Regardless, commercial hardware is a question of when, not if. Perhaps someone will improve their approach specifically for chess in some way?

I am curious whether the same number of people will work on the tinkering form of chess programming.
Rémi Coulom
Posts: 438
Joined: Mon Apr 24, 2006 8:06 pm

Re: Google's AlphaGo team has been working on chess

Post by Rémi Coulom »

xcombelle wrote:
Money would be a better measure.
The AlphaZero training system cost about $4 million in hardware. (Figures given for AlphaGo Zero; I don't have the source at hand.)
The paper says they used 5,000 first-generation TPUs and 64 second-generation TPUs. Such hardware is not available for sale, but might be similar to a V100 in terms of computing power. A single PCIe V100 costs about 10,000 euros in Europe. But if you buy 5,000, you can certainly get a much cheaper price. Of course you also need the computers that host them, and the power supply (250 W × 5,000 = 1.25 MW).
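The arithmetic above can be sketched in a few lines; note that the per-card price and wattage are the rough V100 figures quoted here, not official TPU numbers:

```python
# Back-of-the-envelope estimate using the rough figures from this post.
# Price and power draw are assumptions (PCIe V100 retail), not TPU specs.
NUM_UNITS = 5_000             # first-generation TPUs used for self-play
PRICE_PER_UNIT_EUR = 10_000   # single PCIe V100, rough European retail price
WATTS_PER_UNIT = 250          # per-card power draw

hardware_cost_eur = NUM_UNITS * PRICE_PER_UNIT_EUR   # at retail, no bulk discount
power_draw_mw = NUM_UNITS * WATTS_PER_UNIT / 1e6     # total draw in megawatts

print(f"retail hardware: {hardware_cost_eur:,} EUR")  # 50,000,000 EUR
print(f"power draw: {power_draw_mw} MW")              # 1.25 MW
```

At retail that is 50 million euros of cards alone, which is why the bulk-discount caveat matters.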

This being said, I would not be surprised if their trained network could still beat Stockfish on ordinary hardware. And I expect deep-learning hardware will become much cheaper and commonplace in the future. Even cell-phones are starting to have deep-learning hardware now.

A distributed open-source effort might be enough to produce a super-strong network in a few months. This is what Gian-Carlo has started with Leela in Go. Maybe he'll do it for chess and shogi, too.
mcostalba
Posts: 2684
Joined: Sat Jun 14, 2008 9:17 pm

Re: Google's AlphaGo team has been working on chess

Post by mcostalba »

I have read the paper: the result is impressive!

Honestly, I didn't think it was possible, because my understanding was that chess is more "computer friendly" than Go... I was wrong.

It is true that SF is not meant to play at its best without a book, and the fixed 1 minute per move especially cuts out the whole time management; it would be more natural to play under tournament conditions. Nevertheless, I think these are secondary aspects. What has been accomplished is huge.
Michel
Posts: 2272
Joined: Mon Sep 29, 2008 1:50 am

Re: Google's AlphaGo team has been working on chess

Post by Michel »

Rémi Coulom wrote:
xcombelle wrote:
Money would be a better measure.
The AlphaZero training system cost about $4 million in hardware. (Figures given for AlphaGo Zero; I don't have the source at hand.)
The paper says they used 5,000 first-generation TPUs and 64 second-generation TPUs. Such hardware is not available for sale, but might be similar to a V100 in terms of computing power. A single PCIe V100 costs about 10,000 euros in Europe. But if you buy 5,000, you can certainly get a much cheaper price. Of course you also need the computers that host them, and the power supply (250 W × 5,000 = 1.25 MW).

This being said, I would not be surprised if their trained network could still beat Stockfish on ordinary hardware. And I expect deep-learning hardware will become much cheaper and commonplace in the future. Even cell-phones are starting to have deep-learning hardware now.

A distributed open-source effort might be enough to produce a super-strong network in a few months. This is what Gian-Carlo has started with Leela in Go. Maybe he'll do it for chess and shogi, too.
I have a question that perhaps you can answer right away.

Almost 1,000 CPU-years have gone into tuning SF to date...

Would you say that the training of AlphaZero required more or fewer resources than this?
Ideas=science. Simplification=engineering.
Without ideas there is nothing to simplify.
Rémi Coulom
Posts: 438
Joined: Mon Apr 24, 2006 8:06 pm

Re: Google's AlphaGo team has been working on chess

Post by Rémi Coulom »

Michel wrote:I have a question that perhaps you can answer right away.

Almost 1,000 CPU-years have gone into tuning SF to date...

Would you say that the training of AlphaZero required more or fewer resources than this?
According to the paper, they trained for 9 hours on 5,000 TPUs.

5000 * 9 / 24 = 1875 TPU-days

A TPU is a bit like a super-powerful GPU. A very rough estimate is that 10 GTX 1080 Ti cards may have the power of one TPU. So if you get 100 people volunteering their GPUs full time, it would take about 6 months. That looks doable.
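The estimate above, spelled out; the GPU-per-TPU ratio and the volunteer count are the rough assumptions stated in the post, not measurements:

```python
# Volunteer-compute estimate using this post's rough figures.
# GPUS_PER_TPU and VOLUNTEER_GPUS are assumptions, not benchmarks.
TPUS = 5_000
TRAINING_HOURS = 9
GPUS_PER_TPU = 10       # "10 GTX 1080 Ti may have the power of a TPU"
VOLUNTEER_GPUS = 100    # people donating one GPU each, full time

tpu_days = TPUS * TRAINING_HOURS / 24      # 1875 TPU-days
gpu_days = tpu_days * GPUS_PER_TPU         # 18750 GPU-days
months = gpu_days / VOLUNTEER_GPUS / 30    # calendar time for 100 GPUs

print(f"{tpu_days:.0f} TPU-days ≈ {gpu_days:.0f} GPU-days "
      f"≈ {months:.1f} months on {VOLUNTEER_GPUS} GPUs")  # ≈ 6.2 months
```

187.5 days of wall-clock time, i.e. the "about 6 months" quoted.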
jdart
Posts: 4366
Joined: Fri Mar 10, 2006 5:23 am
Location: http://www.arasanchess.org

Re: Google's AlphaGo team has been working on chess

Post by jdart »

Stockfish does such heavy pruning that it throws away most of the nodes in its search trees. But the ones it does search, it searches very deeply. I see a lot of high-level computer games won by tactics or by endgame play that requires deep search. Shannon Type II (selective search) has never worked well in any of the past 5-6 decades. But maybe this effort is showing that eval is more important than has been thought, and search less important.

--Jon
Henk
Posts: 7217
Joined: Mon May 27, 2013 10:31 am

Re: Google's AlphaGo team has been working on chess

Post by Henk »

What I understand is that the neural network predicts the winning probability for each valid move in a position.

I don't understand how these predictions can be good if it doesn't do a search, only simulation.

So how is it possible that Monte Carlo simulation is better than an alpha-beta search?
clumma
Posts: 186
Joined: Fri Oct 10, 2014 10:05 pm
Location: Berkeley, CA

Re: Google's AlphaGo team has been working on chess

Post by clumma »

Henk wrote:What I understand is that the neural network predicts the winning probability for each valid move in a position.

I don't understand how these predictions can be good if it doesn't do a search, only simulation.

So how is it possible that Monte Carlo simulation is better than an alpha-beta search?
The trick is to stop thinking in terms of tactics and search, and start thinking in terms of learning a really complex evaluation function. As the paper explains, alpha-beta can amplify any error in the evaluation function, whereas MCTS (plus a little noise) averages it out. So tuning alpha-beta is, in a sense, harder.
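A toy illustration of that averaging-vs-maximizing point (not the paper's method, just the statistical effect): give every child move a true value of 0 and add zero-mean noise to the evaluation. A max-style backup, as in minimax, picks the most optimistic error and is systematically biased upward; an averaging backup, as in MCTS value estimates, lets the errors cancel.

```python
# Toy demo: max-backup amplifies evaluation noise, averaging cancels it.
# Every child's true value is 0; the "eval" adds zero-mean Gaussian noise.
import random

random.seed(1)
CHILDREN, TRIALS = 30, 2000

max_bias = avg_bias = 0.0
for _ in range(TRIALS):
    noisy_evals = [random.gauss(0.0, 1.0) for _ in range(CHILDREN)]
    max_bias += max(noisy_evals)               # minimax-style backup
    avg_bias += sum(noisy_evals) / CHILDREN    # MCTS-style averaging backup

max_bias /= TRIALS
avg_bias /= TRIALS
print(f"max backup bias: {max_bias:+.2f}")   # clearly positive (around +2)
print(f"avg backup bias: {avg_bias:+.2f}")   # near zero
```

The max backup reports a clearly positive value for a position whose true value is zero, while the average stays near zero; with more plies of max/min the effect compounds, which is why a noisy eval hurts alpha-beta more.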

-Carl