Lee Sedol vs. AlphaGo [link to live feed]

Discussion of anything and everything relating to chess playing software and machines.

Moderators: hgm, Dann Corbit, Harvey Williamson

Daniel Shawul
Posts: 4103
Joined: Tue Mar 14, 2006 10:34 am
Location: Ethiopia
Contact:

Re: Lee Sedol vs. AlphaGo [link to live feed]

Post by Daniel Shawul » Thu Mar 17, 2016 10:37 pm

I think the advantage that AlphaGo has over CrazyStone is the deep neural networks (just stating the obvious here), which are used to guide both the in-tree and rollout parts of MCTS, and to correct the rollout evaluation with the value network. I have not seen the latter used since MCTS was introduced in 2006 for evaluation; many have instead been using hand-crafted rules or pattern matching to guide the search in both parts of the tree. The value network brings about 500 Elo according to Figure 4b of their paper!

Reading tactics is difficult in MCTS in general, so I can not categorically say AlphaGo has an advantage in tactics -- it definitely has better intuition for the game, though. For improving tactics, some have used a tactic reader with alpha-beta at the root, or an MCTS search adapted for evaluating capturing sequences. For example, if you give high priority to moves that put the opponent in atari, even the rollout part of MCTS can detect ladders. When the tactical sequence is long, you might not be able to expand the tree part of MCTS quickly enough to see the result there. The deep neural networks probably help to detect tactics better, but in general MCTS is slow at reading tactics, hence I won't be surprised if AlphaGo misevaluates some long tactics too.
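[Editor's note: the atari-priority rollout idea above can be sketched roughly as follows. This is a toy illustration of the heuristic, not any engine's actual implementation; the board representation and helper names are invented for the example.]

```python
# Toy sketch of a rollout-policy heuristic: prefer moves that put an
# opposing group in atari (one liberty left), so even near-random
# playouts tend to follow ladder-like capturing sequences.
N = 9  # small board for illustration; board is a dict {(x, y): 'B' or 'W'}

def neighbors(p):
    x, y = p
    for nx, ny in ((x - 1, y), (x + 1, y), (x, y - 1), (x, y + 1)):
        if 0 <= nx < N and 0 <= ny < N:
            yield (nx, ny)

def group_liberties(board, p):
    """Flood-fill the group containing p; return its set of liberties."""
    color = board[p]
    stack, seen, libs = [p], {p}, set()
    while stack:
        q = stack.pop()
        for n in neighbors(q):
            if n not in board:
                libs.add(n)          # empty point adjacent to the group
            elif board[n] == color and n not in seen:
                seen.add(n)
                stack.append(n)
    return libs

def puts_in_atari(board, move, color):
    """True if playing `move` leaves an adjacent enemy group with one liberty."""
    b = dict(board)
    b[move] = color
    enemy = 'W' if color == 'B' else 'B'
    return any(b.get(n) == enemy and len(group_liberties(b, n)) == 1
               for n in neighbors(move))

def order_playout_moves(board, moves, color):
    """Atari-creating moves first, as a cheap rollout-policy prior."""
    return sorted(moves, key=lambda m: not puts_in_atari(board, m, color))
```

A real playout policy would also handle captures, suicide, and ko, but the ordering trick is the same: tactical "forcing" moves get sampled first.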

Zenmastur
Posts: 919
Joined: Sat May 31, 2014 6:28 am

Re: Lee Sedol vs. AlphaGo [link to live feed]

Post by Zenmastur » Fri Mar 18, 2016 8:42 pm

bob wrote:
matthewlai wrote:
bob wrote: The only downside is that Go "patterns" are far simpler than chess. black/white/empty is far easier to pattern-match and train with than the plethora of different piece types that interact in different ways that chess has. So maybe it will work, and maybe it won't. We will get to wait and see.

I've seen good results here on projects that are trying to analyze radar images, photographs, and such. But how many layers are needed to handle chess? To handle it well? That's an open question.
I believe you are the only person I know who thinks Go patterns are simpler than chess. Do you play Go by any chance?

Yes, input encoding is a bit more complicated for chess, but there are still many ways to map chess positions to smooth feature spaces. I explored one in Giraffe (using piece lists and coordinates instead of bitmaps), and got good results. That took a few hours of thinking to come up with, and anyone skilled in machine learning should have been able to do that as easily. That's really the easy part.

The hard part is finding actual useful patterns on top of your input encoding.

Compared to Go, patterns in chess are trivial.

No one has been able to write a reasonably good Go evaluation function by hand, while we have had reasonable chess evaluation functions for decades already.

In chess, if you have material + pcsq, you already have a quite reasonable evaluation function. There is nothing anywhere near that simple in Go. Patterns are larger, and within those regions, you have to determine if small differences make the pattern invalid or not. Then you have to look at interactions between those patterns on the whole board. Material and pcsq mean absolutely nothing in Go. There is no simple heuristic that gets you 70% of the way there (the way material does in chess).
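[Editor's note: as a toy illustration of how little it takes, a material + pcsq evaluation fits in a few lines. This is not any engine's actual evaluation; the knight table below is a made-up example.]

```python
# Conventional centipawn values; kings have no material term.
PIECE_VALUE = {'P': 100, 'N': 320, 'B': 330, 'R': 500, 'Q': 900, 'K': 0}

def centrality(sq):
    """Distance from the board edge: 0 on the rim, 3 in the four centre squares."""
    f, r = sq % 8, sq // 8
    return min(f, 7 - f, r, 7 - r)

def psqt(piece, sq):
    """Tiny piece-square term: knights like the centre; other pieces flat here."""
    return 10 * centrality(sq) if piece == 'N' else 0

def evaluate(board):
    """board: dict square(0..63) -> (piece, 'w' or 'b'); score from White's view."""
    score = 0
    for sq, (piece, color) in board.items():
        term = PIECE_VALUE[piece] + psqt(piece, sq)
        score += term if color == 'w' else -term
    return score
```

Crude as it is, this kind of function already plays recognizable chess when bolted onto a search; there is no comparably simple starting point for go.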

Machine learning hasn't revolutionized chess yet not because chess is too difficult to learn. The evaluation function in Giraffe is already close to state of the art.

It's because chess is too easy. So easy that even humans can hand-write reasonably good evaluation functions, and handwritten functions are almost always much faster, so they beat learned functions even if learned functions produce a bit better quality values.

It makes a lot of sense to do machine learning in Go because patterns in Go are very difficult. It makes no sense to do machine learning in tic-tac-toe because patterns in tic-tac-toe are very easy. Chess is somewhere in-between.
My comment about "simple" has to do with pattern-matching specifically. black/white/empty is a simpler pattern to recognize than the 13 states a square can have in chess (actually more than that thanks to castling, 50 moves, etc). Didn't say go was a simpler game. Just that the patterns are easier to match. And there is little difference between squares, which is also unlike chess.

As to whether I have played go? Yes. Am I very good at it? Absolutely not. There are simply things in chess that don't matter in go. I.e. you can't mirror a pattern in chess easily because the kingside and queenside are different, whereas in go that's not the case. Ditto other forms of symmetry that are valid in go but not in chess (pawns only move in one direction, for example). To me this makes ANN analysis quite reasonable in go, where it has not done so well in chess.

The patterns in go are much more complex. Think of it this way, if you group 4 go points into a single chess square equivalent, you end up with approximately 90 equivalent squares each of which can take on 3^4= 81 different values. 81 ^ 90 is greater than 13 ^ 64 by about 100 orders of magnitude.

I think you're totally wrong about the difference in squares. At the start of the game some points may have the same values due to board symmetry, i.e. there are only 55 distinct points. But once symmetry is broken they will all have different values. Not only that, but the values to each player may be different, so to calculate a point's "true" game value you calculate its value to each player and then take the difference. The moves are then ranked by these values to give the most profitable order of play for each side. The trick is to get accurate evaluations of ALL available points, including those occupied by stones that can be captured. Determining whether a stone or group of stones can be captured is a hard problem in itself.
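[Editor's note: the orders-of-magnitude claim above checks out:]

```python
import math

# Back-of-envelope check of the state-count comparison:
# 81^90 (go, grouping 4 points per "square") vs 13^64 (chess squares).
go_like = 90 * math.log10(81)       # log10 of 81^90, about 171.8
chess_like = 64 * math.log10(13)    # log10 of 13^64, about 71.3
print(round(go_like - chess_like))  # → 100 orders of magnitude
```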

Regards,

Zen
Only 2 defining forces have ever offered to die for you.....Jesus Christ and the American Soldier. One died for your soul, the other for your freedom.

Isaac
Posts: 265
Joined: Sat Feb 22, 2014 7:37 pm

Re: Lee Sedol vs. AlphaGo [link to live feed]

Post by Isaac » Fri Mar 18, 2016 11:38 pm

Crazystone is now using Deep Learning and reached 7 dan level on kgs. Probably going to be commercialized in 2016 although Rémi doesn't know when exactly.

Dirt
Posts: 2851
Joined: Wed Mar 08, 2006 9:01 pm
Location: Irvine, CA, USA

Re: Lee Sedol vs. AlphaGo [link to live feed]

Post by Dirt » Sat Mar 19, 2016 3:54 pm

Isaac wrote:Crazystone is now using Deep Learning and reached 7 dan level on kgs. Probably going to be commercialized in 2016 although Rémi doesn't know when exactly.
Crazy Stone has been commercial for a while. You must mean a new release is coming.
Deasil is the right way to go.

Milos
Posts: 4064
Joined: Wed Nov 25, 2009 12:47 am

Re: Lee Sedol vs. AlphaGo [link to live feed]

Post by Milos » Sat Mar 19, 2016 11:35 pm

Dirt wrote:
Isaac wrote:Crazystone is now using Deep Learning and reached 7 dan level on kgs. Probably going to be commercialized in 2016 although Rémi doesn't know when exactly.
Crazy Stone has been commercial for awhile. You must mean a new release is coming.
What he meant is that the version with the NN will be commercialized in 2016.

IanO
Posts: 487
Joined: Wed Mar 08, 2006 8:45 pm
Location: Portland, OR
Contact:

Re: Lee Sedol vs. AlphaGo [link to live feed]

Post by IanO » Mon Mar 21, 2016 5:11 pm

Isaac wrote:Crazystone is now using Deep Learning and reached 7 dan level on kgs. Probably going to be commercialized in 2016 although Rémi doesn't know when exactly.
Indeed! Now there are three engines in the top 100 on KGS: Crazystone 7d, Zen19X 7d also with NN, and the previous Zen19 6d.

http://www.gokgs.com/top100.jsp

This deep learning on convolutional networks is proving to be as great an advance for computer Go as Monte Carlo search from a decade ago. And just as the previous step was enabled by commodity SMP systems and clusters, this step (neural net training and processing) seems to be enabled by the massive parallelism found in modern graphics card architectures.

Daniel Shawul
Posts: 4103
Joined: Tue Mar 14, 2006 10:34 am
Location: Ethiopia
Contact:

Re: Lee Sedol vs. AlphaGo [link to live feed]

Post by Daniel Shawul » Mon Mar 21, 2016 5:56 pm

IanO wrote:
Isaac wrote:Crazystone is now using Deep Learning and reached 7 dan level on kgs. Probably going to be commercialized in 2016 although Rémi doesn't know when exactly.
Indeed! Now there are three engines in the top 100 on KGS: Crazystone 7d, Zen19X 7d also with NN, and the previous Zen19 6d.

http://www.gokgs.com/top100.jsp

This deep learning on convolutional networks is proving to be as great an advance for computer Go as Monte Carlo search from a decade ago. And just as the previous step was enabled by commodity SMP systems and clusters, this step (neural net training and processing) seems to be enabled by the massive parallelism found in modern graphics card architectures.
These engines don't use GPUs yet, i.e. the DCNNs are evaluated on the CPU. Some people (David Fotland?) have reported no benefit from a DCNN when run entirely on the CPU. Even AlphaGo does not use the GPUs for conducting the parallel Monte Carlo tree search itself, but just for evaluating the neural networks, which can be orders of magnitude slower than a conventional evaluation (3ms for the policy network IIRC). There is a long way to go before these engines reach 9p on standard hardware.

Isaac
Posts: 265
Joined: Sat Feb 22, 2014 7:37 pm

Re: Lee Sedol vs. AlphaGo [link to live feed]

Post by Isaac » Tue Mar 22, 2016 1:40 am

IanO wrote:
Isaac wrote:Crazystone is now using Deep Learning and reached 7 dan level on kgs. Probably going to be commercialized in 2016 although Rémi doesn't know when exactly.
Indeed! Now there are three engines in the top 100 on KGS: Crazystone 7d, Zen19X 7d also with NN, and the previous Zen19 6d.

http://www.gokgs.com/top100.jsp

This deep learning on convolutional networks is proving to be as great an advance for computer Go as Monte Carlo search from a decade ago. And just as the previous step was enabled by commodity SMP systems and clusters, this step (neural net training and processing) seems to be enabled by the massive parallelism found in modern graphics card architectures.
I asked Zen's author (Yamato) a day ago whether Zen uses deep learning, and his answer was no. So it reaches 7 dan level without any deep learning.

IanO
Posts: 487
Joined: Wed Mar 08, 2006 8:45 pm
Location: Portland, OR
Contact:

Re: Lee Sedol vs. AlphaGo [link to live feed]

Post by IanO » Wed Mar 23, 2016 4:27 pm

Isaac wrote:
IanO wrote:
Isaac wrote:Crazystone is now using Deep Learning and reached 7 dan level on kgs. Probably going to be commercialized in 2016 although Rémi doesn't know when exactly.
Indeed! Now there are three engines in the top 100 on KGS: Crazystone 7d, Zen19X 7d also with NN, and the previous Zen19 6d.

http://www.gokgs.com/top100.jsp

This deep learning on convolutional networks is proving to be as great an advance for computer Go as Monte Carlo search from a decade ago. And just as the previous step was enabled by commodity SMP systems and clusters, this step (neural net training and processing) seems to be enabled by the massive parallelism found in modern graphics card architectures.
I asked Zen's author (Yamato), one day ago whether Zen uses deep learning and his answer was no. So it reaches 7 dan level without any deep learning.
Even more impressive that it got to 7-dan on KGS!

Zen also just won the UEC Cup ahead of Darkforest (Facebook's deep learning Go project). They each won the right to a challenge match against veteran professional Koichi Kobayashi at a three-stone handicap. Darkforest lost a slightly passive game, but Zen won in fine attacking style!

To show the influence of deep learning: in this 32-entrant tournament, eight of the programs were using deep learning, and six of them made it into the top eight.

Werewolf
Posts: 1351
Joined: Thu Sep 18, 2008 8:24 pm

Re: Lee Sedol vs. AlphaGo [link to live feed]

Post by Werewolf » Sun Mar 27, 2016 12:39 pm

This might help Alpha Go & all neural nets:

http://www.tomshardware.co.uk/ibm-chip- ... 52670.html
