Lee Sedol vs. AlphaGo [link to live feed]


syzygy
Posts: 5566
Joined: Tue Feb 28, 2012 11:56 pm

Re: Lee Sedol vs. AlphaGo [link to live feed]

Post by syzygy »

peter wrote:Hi!
Lee Sedol has won his first game in the match.
3-1
Regression! 8-) 8-)
Leto
Posts: 2071
Joined: Thu May 04, 2006 3:40 am
Location: Dune

Re: Lee Sedol vs. AlphaGo [link to live feed]

Post by Leto »

syzygy wrote:
peter wrote:Hi!
Lee Sedol has won his first game in the match.
3-1
Regression! 8-) 8-)
Lee Sedol has never lost four games in a row. He has lost three in a row only three times: once in 2002, once in 2012, and of course in his current match with AlphaGo.

Unfortunately AlphaGo was not able to break that record. If Lee Sedol wins game 5 as well, it calls into question how strong AlphaGo really is and could indicate that Lee Sedol was able to adapt to it; in that case I think a rematch would be a good idea. However, if Lee Sedol loses game 5, it might indicate that he did not figure AlphaGo out, and a match with the highest-rated player, the 19-year-old Chinese phenom Ke Jie, would probably make more sense for AlphaGo.
George Tsavdaris
Posts: 1627
Joined: Thu Mar 09, 2006 12:35 pm

Re: Lee Sedol vs. AlphaGo [link to live feed]

Post by George Tsavdaris »

matthewlai wrote: Yes, that's exactly what I meant. Handwritten evaluation functions in chess are good enough that switching to a neural network (slightly better outputs but much slower) is a net negative.

I really hope someone skilled in ML steps in, takes it as far as it can go, and sees what happens.

Even if the AlphaGo/Giraffe approach doesn't work, there are still many possible machine learning approaches that can be explored.

The situation, as I see it, is simple:

In Go, the branching factor and the number of moves in an average game are so huge that no computer or human can improve their play much by searching deeper. I.e. improving search in Go achieves little compared to improving the evaluation function.
So evaluation is by far THE most crucial factor!
Humans reached a solid standard of quality at this.
Computers were hopeless, as there is no way to hand-write, step by step, a good evaluation function.
Monte Carlo helped to search deeper and, in a way, to create an artificial evaluation function, better than what came before, but that was not enough.
Then deep learning with neural networks produced an incredibly good evaluation function, and AlphaGo drew level with humans.
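The Monte-Carlo idea mentioned above can be sketched in a few lines of Python. The game-interface callables (`legal_moves`, `apply_move`, `winner`) are hypothetical placeholders, not any real engine's API:

```python
import random

def rollout_value(position, legal_moves, apply_move, winner, n_rollouts=100):
    """Estimate a position's value as the win rate of random playouts.

    legal_moves/apply_move/winner are hypothetical game-interface
    callables; a real Go engine would use heavier, pattern-guided
    playouts rather than uniformly random moves.
    """
    wins = 0
    for _ in range(n_rollouts):
        pos = position
        while winner(pos) is None:
            pos = apply_move(pos, random.choice(legal_moves(pos)))
        if winner(pos) == "us":
            wins += 1
    return wins / n_rollouts
```

In full MCTS this rollout value is fed back up a selectively grown tree; the snippet only shows why random playouts act as a crude, automatically obtained evaluation function.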


In chess the branching factor is much smaller and the average game is far shorter. Even more importantly, chess frequently contains 5-15-ply tactics that reach a position with a clear verdict: a checkmate, the win or loss of a piece for no compensation, and so on. Such lines can end the game within 10-20 plies, or at least hand one side a significant advantage, and nothing comparable happens in Go within 5-10 moves. So improving the depth a computer looks ahead is extremely crucial to its strength in chess.
The evaluation function is important too, of course, but not at the same level of importance as search techniques. Not even close.
Moreover, writing an evaluation function is far easier in chess than in Go. So even if the current hand-written evaluation functions we have for chess are not optimal, they are very good.
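Some rough arithmetic shows why depth is so much cheaper to buy in chess. Using the commonly quoted average branching factors (about 35 for chess, about 250 for Go), a uniform game tree grows as follows (a sketch that ignores alpha-beta pruning and transpositions):

```python
def tree_nodes(branching_factor: int, depth: int) -> int:
    """Leaf count of a uniform game tree of the given depth (in plies)."""
    return branching_factor ** depth

# Six plies: roughly 1.8e9 leaves in chess vs roughly 2.4e14 in Go.
print(f"chess: {tree_nodes(35, 6):.1e}  go: {tree_nodes(250, 6):.1e}")
```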

So improving the evaluation function will not make that much of a difference.

The breakthrough in computer chess will happen when something like what you said in your old quote happens: when computers start thinking in patterns and recognizing "situations". And I guess machine learning is the obvious way to try to do this.
So I'm holding my breath for your next statement: :D
"I have some pretty concrete ideas on how to do this based on non-public information. A lot of that will become public at ICML in a few months."



I think it's dangerous to say that chess nowadays is close to perfect. It's an easy illusion to have when we don't know any better. People thought the same about Rybka many years ago, and today's top engines are a few hundred Elo points stronger than early Rybka.
It's not dangerous, it's plain wrong. :D
After his son's birth they've asked him:
"Is it a boy or girl?"
YES! He replied.....
Jesse Gersenson
Posts: 593
Joined: Sat Aug 20, 2011 9:43 am

Re: Lee Sedol vs. AlphaGo [link to live feed]

Post by Jesse Gersenson »

Laskos wrote:Anyway, pretty happy to have a tool like Crazy Stone that can give oftentimes the correct analysis, although it analyzes players almost 2000 ELO points stronger.
Kai, CrazyStone couldn't find a lot of the moves played in this game. That is what makes it -2000 Elo. But give it a position and ask it, "Hey, what do you think about this position?", and it returns a reasonable assessment.

Chess engines probably behave the same way.
peter wrote:Hi!
Lee Sedol has won his first game in the match.
3-1
That's awesome!!
bob
Posts: 20943
Joined: Mon Feb 27, 2006 7:30 pm
Location: Birmingham, AL

Re: Lee Sedol vs. AlphaGo [link to live feed]

Post by bob »

matthewlai wrote:
bob wrote: The only downside is that Go "patterns" are far simpler than chess. black/white/empty is far easier to pattern-match and train with than the plethora of different piece types that interact in different ways that chess has. So maybe it will work, and maybe it won't. We will get to wait and see.

I've seen good results here on projects that are trying to analyze radar images, photographs, and such. But how many layers are needed to handle chess? To handle it well? That's an open question.
I believe you are the only person I know who thinks Go patterns are simpler than chess. Do you play Go by any chance?

Yes, input encoding is a bit more complicated for chess, but there are still many ways to map chess positions to smooth feature spaces. I explored one in Giraffe (using piece lists and coordinates instead of bitmaps), and got good results. That took a few hours of thinking to come up with, and anyone skilled in machine learning should have been able to do that as easily. That's really the easy part.

The hard part is finding actual useful patterns on top of your input encoding.

Compared to Go, patterns in chess are trivial.

No one has been able to write a reasonably good Go evaluation function by hand, while we have had reasonable chess evaluation functions for decades already.

In chess, if you have material + pcsq, you already have a quite reasonable evaluation function. There is nothing anywhere near that simple in Go. Patterns are larger, and within those regions, you have to determine if small differences make the pattern invalid or not. Then you have to look at interactions between those patterns on the whole board. Material and pcsq mean absolutely nothing in Go. There is no simple heuristic that gets you 70% of the way there (the way material does in chess).
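As an illustration of how little machinery "material + pcsq" needs, here is a toy version in Python. The piece values are the classical centipawn ones; the single pawn table is made up for the example, and the mirroring of tables for Black is omitted:

```python
# Classical material values in centipawns (the king carries no material).
PIECE_VALUE = {"P": 100, "N": 320, "B": 330, "R": 500, "Q": 900, "K": 0}

# A real engine has one 64-entry table per piece type; this sample pawn
# table (rank-major, a1 = index 0) simply rewards advancing pawns.
PAWN_PCSQ = [0] * 8 + [5] * 8 + [10] * 16 + [20] * 16 + [30] * 8 + [0] * 8

def evaluate(white_pieces, black_pieces):
    """Score from White's point of view; each side is a list of
    (piece_letter, square_index) pairs.  Black's table mirroring is
    omitted here for brevity."""
    def side_score(pieces):
        score = 0
        for piece, square in pieces:
            score += PIECE_VALUE[piece]
            if piece == "P":
                score += PAWN_PCSQ[square]
        return score
    return side_score(white_pieces) - side_score(black_pieces)
```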

Machine learning hasn't revolutionized chess yet not because chess is too difficult to learn. The evaluation function in Giraffe is already close to state of the art.

It's because chess is too easy. So easy that even humans can hand-write reasonably good evaluation functions, and handwritten functions are almost always much faster, so they beat learned functions even if learned functions produce a bit better quality values.

It makes a lot of sense to do machine learning in Go because patterns in Go are very difficult. It makes no sense to do machine learning in tic-tac-toe because patterns in tic-tac-toe are very easy. Chess is somewhere in-between.
My comment about "simple" has to do with pattern-matching specifically. Black/white/empty is a simpler pattern to recognize than the 13 states a square can have in chess (actually more than that, thanks to castling rights, the 50-move counter, etc.). I didn't say Go was a simpler game, just that the patterns are easier to match. And there is little difference between squares, which is also unlike chess.

As to whether I have played Go: yes. Am I very good at it? Absolutely not. There are simply things that matter in chess that don't matter in Go. E.g. you can't mirror a pattern in chess easily because the kingside and queenside are different, whereas in Go that's not the case. Ditto other forms of symmetry that are valid in Go but not in chess (pawns only move in one direction, for example). To me this makes ANN analysis quite reasonable in Go, where it has not done so well in chess.
Werewolf
Posts: 1796
Joined: Thu Sep 18, 2008 10:24 pm

Re: Lee Sedol vs. AlphaGo [link to live feed]

Post by Werewolf »

Could a neural net be used in the search as well as the eval? If so, that could really change things and compensate for the speed loss.
matthewlai
Posts: 793
Joined: Sun Aug 03, 2014 4:48 am
Location: London, UK

Re: Lee Sedol vs. AlphaGo [link to live feed]

Post by matthewlai »

Werewolf wrote:Could a neural net be used in the search as well as the eval? If so, that could really change things and compensate for the speed loss.
Yes. That's what the policy network does in AlphaGo. I tried something similar (but much more primitive) in Giraffe, but didn't get very good results.
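A policy network's role in search can be sketched very simply: it assigns a prior probability to every legal move, and the search orders (or prunes) moves by those priors. The `policy_net` callable below is a hypothetical stand-in for the real network:

```python
def ordered_moves(position, legal_moves, policy_net, keep_top=8):
    """Order moves by a policy network's priors, keeping the best few.

    policy_net is a hypothetical callable returning {move: prior};
    AlphaGo's actual policy network is a deep convolutional net, and
    its priors also steer MCTS node selection, not just move ordering.
    """
    priors = policy_net(position)
    ranked = sorted(legal_moves, key=lambda m: priors.get(m, 0.0), reverse=True)
    return ranked[:keep_top]
```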
Disclosure: I work for DeepMind on the AlphaZero project, but everything I say here is personal opinion and does not reflect the views of DeepMind / Alphabet.
Joost Buijs
Posts: 1563
Joined: Thu Jul 16, 2009 10:47 am
Location: Almere, The Netherlands

Re: Lee Sedol vs. AlphaGo [link to live feed]

Post by Joost Buijs »

Werewolf wrote:Could a neural net be used in the search as well as the eval? If so, that could really change things and compensate for the speed loss.
Several companies are working on neural net chips implemented in hardware; when these chips become available, using a neural net in search will be feasible.

The way I see it, AlphaGo with its 1920 CPUs and 280 GPUs is now at the level that Deep Blue had against Kasparov in 1997, so the best is yet to come.
Albert Silver
Posts: 3019
Joined: Wed Mar 08, 2006 9:57 pm
Location: Rio de Janeiro, Brazil

Re: Lee Sedol vs. AlphaGo [link to live feed]

Post by Albert Silver »

Joost Buijs wrote:
Werewolf wrote:Could a neural net be used in the search as well as the eval? If so, that could really change things and compensate for the speed loss.
Several companies are working on neural net chips implemented in hardware; when these chips become available, using a neural net in search will be feasible.

The way I see it, AlphaGo with its 1920 CPUs and 280 GPUs is now at the level that Deep Blue had against Kasparov in 1997, so the best is yet to come.
Based on AlphaGo's own ratings, with Crazy Stone at roughly 2000 Elo, it is interesting to see that even a single machine AlphaGo with 40 cores and 2 GPUs (not 280) is still estimated at nearly 2800 Elo. In fact, jumping from one GPU to two GPUs was worth over 600 Elo.
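For scale, the standard Elo formula translates rating gaps like these into expected scores:

```python
def expected_score(elo_diff: float) -> float:
    """Expected score of the stronger side under the standard Elo model."""
    return 1.0 / (1.0 + 10.0 ** (-elo_diff / 400.0))

# A 600-point edge corresponds to scoring roughly 97% of the points.
print(f"{expected_score(600):.3f}")
```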
"Tactics are the bricks and sticks that make up a game, but positional play is the architectural blueprint."
Guenther
Posts: 4607
Joined: Wed Oct 01, 2008 6:33 am
Location: Regensburg, Germany
Full name: Guenther Simon

Re: Lee Sedol vs. AlphaGo [link to live feed]

Post by Guenther »

peter wrote:Hi!
Lee Sedol has won his first game in the match.
3-1

An interesting article about game 4 with a lot of inside comments from top players at gogameguru.
https://gogameguru.com/lee-sedol-defeat ... ck-game-4/