Lee Sedol vs. AlphaGo [link to live feed]

Discussion of anything and everything relating to chess playing software and machines.

Moderators: hgm, Rebel, chrisw

Dirt
Posts: 2851
Joined: Wed Mar 08, 2006 10:01 pm
Location: Irvine, CA, USA

Re: Lee Sedol vs. AlphaGo [link to live feed]

Post by Dirt »

Henk wrote:I thought this website was about computer chess. If not, I will soon publish my latest Monopoly games here.
This is also very important for AI, which is popular here. Perhaps this could be posted in CTF, but a lot of interested people don't go there.
Deasil is the right way to go.
User avatar
Harvey Williamson
Posts: 2010
Joined: Sun May 25, 2008 11:12 pm
Location: Whitchurch. Shropshire, UK.
Full name: Harvey Williamson

Re: Lee Sedol vs. AlphaGo [link to live feed]

Post by Harvey Williamson »

Dirt wrote:
Henk wrote:I thought this website was about computer chess. If not, I will soon publish my latest Monopoly games here.
This is also very important for AI, which is popular here. Perhaps this could be posted in CTF, but a lot of interested people don't go there.
I think it is totally on topic and this forum is the right place for it. However, one of my posts here, from one of the major computer chess tournaments in Leiden where I was a participant, was deleted - who knows how the moderators will react!
melajara
Posts: 213
Joined: Thu Dec 16, 2010 4:39 pm

Re: Lee Sedol vs. AlphaGo [link to live feed]

Post by melajara »

This deep learning success is a far more important milestone for AI than IBM's Deep Blue, that kludge of ad hoc hardware, was for the advancement of computer chess.

It is about time, IMHO, to revisit the enhanced brute-force minimax search that has been so successful in chess and steer it towards a more human-like approach to playing the game mechanically, one with mastery of long-range dependencies, as some of AlphaGo's startling moves suggest.

Matthew Lai, working alone on Giraffe, succeeded in producing an original program of IM-level strength in a few months, without even calibrating its evaluation neural network against top-class opponents (e.g. Stockfish).

DeepMind has already published the general architecture of AlphaGo in their Nature paper.

Demis Hassabis's ambitions go way beyond Go. He is actually aiming to produce, ASAP, AI assistants able to perform scientific research by themselves. General mastery of games from first principles (inputs correlated to corresponding "scores", an objective measure of how well the policy used to "move" in the game is doing), steered by reinforcement learning all the way up to devising seemingly superhuman strategies, is reminiscent of Samuel's 1957 seminal approach and initial success with checkers.
It has been very successful at cracking Atari games, and it is up to the task for Go. Soon it will be applied to 3D maze games, as some recent papers suggest, then probably to even more complex (online) games such as StarCraft.
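
To make the Samuel-style idea concrete, here is a toy sketch of learning from self-play outcomes alone. The game (a 21-object subtraction game), the value table and every constant are my own illustration and have nothing to do with AlphaGo's actual training.

Code: Select all

# Toy illustration, not AlphaGo or Samuel's code: learn a value table for the
# "subtraction game" (take 1 or 2 objects; whoever takes the last one wins)
# purely from self-play outcomes.
import random

N = 21                                 # starting pile size
V = {s: 0.5 for s in range(N + 1)}     # V[s]: estimated win chance for the player to move
ALPHA, EPS = 0.1, 0.2                  # learning rate, exploration rate

def legal_moves(s):
    return [m for m in (1, 2) if m <= s]

def pick_move(s):
    # epsilon-greedy on the learned values: a good move leaves the opponent
    # in a state with a low value
    if random.random() < EPS:
        return random.choice(legal_moves(s))
    return min(legal_moves(s), key=lambda m: V[s - m])

for _ in range(20000):
    s, visited = N, []
    while s > 0:
        visited.append(s)
        s -= pick_move(s)
    # the player who took the last object wins; walk the game backwards,
    # alternating the reward between the two players
    outcome = 1.0
    for s in reversed(visited):
        V[s] += ALPHA * (outcome - V[s])
        outcome = 1.0 - outcome

# pile sizes that are multiples of 3 should drift towards low values (lost for
# the player to move), without the program ever being told the theory
print({s: round(V[s], 2) for s in range(1, 10)})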

Then DeepMind will start to move beyond supervised learning and make forays into robotics and the real world toward the envisioned AI scientific research assistants.

Back to the classic board games. By using deep enough neural networks, partitioning the architecture into two complementary networks and supplementing them with classical tree search, the king of board games, Go, has suddenly become tractable for computers.
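
To give a flavour of how the two networks and the search fit together, here is a rough sketch of a PUCT-style selection step (the class, the constant c_puct and the numbers are purely illustrative, not taken from the Nature paper).

Code: Select all

# Illustrative sketch only: combine a policy-network prior and accumulated
# value-network evaluations when choosing which move to explore next.
import math

class Node:
    def __init__(self, prior):
        self.prior = prior        # P(s, a) from the policy network
        self.visits = 0           # N(s, a)
        self.value_sum = 0.0      # W(s, a): sum of value-network evaluations

    def q(self):
        return self.value_sum / self.visits if self.visits else 0.0

def select_child(children, c_puct=1.5):
    """Pick the move maximising Q + U, where U favours moves the policy
    network likes but the search has not visited much yet."""
    total = sum(child.visits for child in children.values())
    def score(item):
        move, child = item
        u = c_puct * child.prior * math.sqrt(total + 1) / (1 + child.visits)
        return child.q() + u
    return max(children.items(), key=score)

# tiny usage example with made-up policy priors for three candidate moves
children = {"a": Node(0.6), "b": Node(0.3), "c": Node(0.1)}
move, node = select_child(children)
node.visits += 1
node.value_sum += 0.55            # pretend the value network returned 0.55 here
print("first move explored:", move)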

I can't wait to see this general-purpose machine learning approach applied back to chess.

I sincerely hope DeepMind will somehow make their AlphaGo code open source, and that someone will translate the methodology back to chess, e.g. starting from Lai's Giraffe program.

Actually, Matthew Lai could do it himself, provided his new employer, DeepMind (such a coincidence ;-), allows him to do so.

AlphaGo is a HUGE milestone because the methodology used is so general and ultimately (with enough training and self play) powerful.

It will spur a flurry of successful applications, e.g. in natural language modelling, giving us better chatbots and language translators, if not ultimately an OS reminiscent of the AI from the movie Her.
Per ardua ad astra
bob
Posts: 20943
Joined: Mon Feb 27, 2006 7:30 pm
Location: Birmingham, AL

Re: Lee Sedol vs. AlphaGo [link to live feed]

Post by bob »

melajara wrote:This deep learning success is a far more important milestone for AI than IBM's Deep Blue, that kludge of ad hoc hardware, was for the advancement of computer chess.

It is about time, IMHO, to revisit the enhanced brute-force minimax search that has been so successful in chess and steer it towards a more human-like approach to playing the game mechanically, one with mastery of long-range dependencies, as some of AlphaGo's startling moves suggest.

Matthew Lai, working alone on Giraffe, succeeded in producing an original program of IM-level strength in a few months, without even calibrating its evaluation neural network against top-class opponents (e.g. Stockfish).

DeepMind has already published the general architecture of AlphaGo in their Nature paper.

Demis Hassabis's ambitions go way beyond Go. He is actually aiming to produce, ASAP, AI assistants able to perform scientific research by themselves. General mastery of games from first principles (inputs correlated to corresponding "scores", an objective measure of how well the policy used to "move" in the game is doing), steered by reinforcement learning all the way up to devising seemingly superhuman strategies, is reminiscent of Samuel's 1957 seminal approach and initial success with checkers.
It has been very successful at cracking Atari games, and it is up to the task for Go. Soon it will be applied to 3D maze games, as some recent papers suggest, then probably to even more complex (online) games such as StarCraft.

Then DeepMind will start to move beyond supervised learning and make forays into robotics and the real world toward the envisioned AI scientific research assistants.

Back to the classic board games. By using deep enough neural networks, partitioning the architecture into two complementary networks and supplementing them with classical tree search, the king of board games, Go, has suddenly become tractable for computers.

I can't wait to see this general-purpose machine learning approach applied back to chess.

I sincerely hope DeepMind will somehow make their AlphaGo code open source, and that someone will translate the methodology back to chess, e.g. starting from Lai's Giraffe program.

Actually, Matthew Lai could do it himself, provided his new employer, DeepMind (such a coincidence ;-), allows him to do so.

AlphaGo is a HUGE milestone because the methodology used is so general and ultimately (with enough training and self play) powerful.

It will spur a flurry of successful applications, e.g. in natural language modelling, giving us better chatbots and language translators, if not ultimately an OS reminiscent of the AI from the movie Her.
The only downside is that Go "patterns" are far simpler than chess patterns: black/white/empty is far easier to pattern-match and train on than the plethora of different piece types, interacting in different ways, that chess has. So maybe it will work, and maybe it won't. We will have to wait and see.

I've seen good results here on projects that are trying to analyze radar images, photographs and the like. But how many layers are needed to handle chess? To handle it well? That's an open question.
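
To illustrate the encoding point, here is a back-of-envelope sketch; both encodings are my own toy versions, not what DeepMind or any chess engine actually uses.

Code: Select all

# Toy input encodings: Go gets by with a couple of stone planes, while chess
# already needs one plane per piece type and colour before adding side to
# move, castling rights, en passant and so on.
import numpy as np

def encode_go(board19):
    """board19: 19x19 integer array, 0 = empty, 1 = black, 2 = white."""
    planes = np.zeros((2, 19, 19), dtype=np.float32)
    planes[0] = (board19 == 1)       # black stones
    planes[1] = (board19 == 2)       # white stones
    return planes                     # empty is implicit: both planes zero

PIECES = "PNBRQKpnbrqk"               # 6 piece types x 2 colours

def encode_chess(board8):
    """board8: 8x8 array of one-character strings, '.' for an empty square."""
    planes = np.zeros((len(PIECES), 8, 8), dtype=np.float32)
    for i, piece in enumerate(PIECES):
        planes[i] = (board8 == piece)
    return planes

go_planes = encode_go(np.zeros((19, 19), dtype=int))
chess_planes = encode_chess(np.full((8, 8), ".", dtype="<U1"))
print(go_planes.shape, chess_planes.shape)   # (2, 19, 19) vs (12, 8, 8)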
User avatar
MikeB
Posts: 4889
Joined: Thu Mar 09, 2006 6:34 am
Location: Pen Argyl, Pennsylvania

Re: Lee Sedol vs. AlphaGo [link to live feed]

Post by MikeB »

Harvey Williamson wrote:
Dirt wrote:
Henk wrote:I thought this website was about computer chess. If not, I will soon publish my latest Monopoly games here.
This is also very important for AI, which is popular here. Perhaps this could be posted in CTF, but a lot of interested people don't go there.
I think it is totally on topic and this forum is the right place for it. However, one of my posts here, from one of the major computer chess tournaments in Leiden where I was a participant, was deleted - who knows how the moderators will react!
That has happened to me as well. Apparently they want chess computer tournaments to be posted in the tournaments forum. It can be a fine line...
User avatar
MikeB
Posts: 4889
Joined: Thu Mar 09, 2006 6:34 am
Location: Pen Argyl, Pennsylvania

Re: Lee Sedol vs. AlphaGo [link to live feed]

Post by MikeB »

Henk wrote:I thought this website was about computer chess. If not, I will soon publish my latest Monopoly games here.
The top human Go player in the world playing a computer - no question it's on topic. Your latest Monopoly games - I don't think so, not even close.
Henk
Posts: 7216
Joined: Mon May 27, 2013 10:31 am

Re: Lee Sedol vs. AlphaGo [link to live feed]

Post by Henk »

MikeB wrote:
Henk wrote:I thought this website was about computer chess. If not, I will soon publish my latest Monopoly games here.
The top human Go player in the world playing a computer - no question it's on topic. Your latest Monopoly games - I don't think so, not even close.
I don't see much resemblance between chess and Go. Otherwise, start a new website, "AI Club". But I would like to keep computer chess in a separate division.
Rochester
Posts: 55
Joined: Sat Feb 20, 2016 6:11 am

Re: Lee Sedol vs. AlphaGo [link to live feed]

Post by Rochester »

They play the same game, like all the world's people under the same God.
User avatar
Laskos
Posts: 10948
Joined: Wed Jul 26, 2006 10:21 pm
Full name: Kai Laskos

Re: Lee Sedol vs. AlphaGo [link to live feed]

Post by Laskos »

Laskos wrote:
Jesse Gersenson wrote:
Uri Blass wrote: The number is the only thing that I can understand because I understand nothing about the game.

I only know the rules.

Not having the number means that watching is not interesting to 99% of the people who do not know go or hardly know the rules of go.
The commentator gives his assessment of the game position once every 10 or so moves.

Surely the assessment of a 9-dan pro is better than the computer eval.
Uri referred to AlphaGo's own eval, which is much more relevant than the opinions given during the game by the two 9p Go commentators I saw in two different live feeds (DeepMind and AGA). The usual live feed, with the American 9p commentator, was especially in need of clearer assessments. But it seems many 9p players are too confused to even say who has a clear advantage until very close to the end of the game.

I observed a curiosity: Crazy Stone and Zenith, which play at strong amateur level, saw a clear advantage for AlphaGo from the middlegame at the latest in these two games with Lee Sedol. They also saw AlphaGo's clear advantage pretty early in the games against Fan Hui, all five of them. For those games I used post-mortem analysis, but based on that, if I can manage it, I will check a bit with real-time analysis using these "weak" tools (they are in any case much better than me). Why these "weak" engines seem to see the outcome better than some 9p pros is a mystery to me.

I expect Lee Sedol to try to make a difference in the opening or early middlegame, as it seems (again from engine analysis, not from human pros) that it's the only stage of the game where he stands a chance. Maybe he will play some weird opening; let's see.
Further, in the press conference the Google representative said they weren't confident in the machine's eval function, which would be another reason not to show the value.

I don't know much about Go either, but watching today's commentary improved my play by 200 points?!
Here is how the games look according to Crazy Stone's evaluation. The vertical axis denotes the probability of a win for Black; the lines are for Black and White respectively. From my experience with this quantity in Crazy Stone, anything above 60% or below 40% shows a clear advantage. It seems that Crazy Stone sees a clear advantage for AlphaGo even in the early stages of the game, especially in the hard-fought opening of the first game. It also spots some mistakes that are clear to Crazy Stone, like Lee Sedol's in the late opening of game 2 (the same errors are spotted by the Zenith Go software). I don't know why 1600 Elo engines like Crazy Stone and Zenith show a plausible advantage fairly early in games between 3500+ Elo opponents, while 9 dan pro commentators in the 3000-3400 Elo range seem so confused. Maybe these "weak" MCTS engines only seem to be correct, but for the wrong reasons.

Game 1: [Crazy Stone evaluation graph]

Game 2: [Crazy Stone evaluation graph]
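
For what it's worth, this is roughly how I read "clear advantage" off those curves. The function and the evaluation trace below are made up for illustration; they are not actual Crazy Stone output.

Code: Select all

# Flag the first move where the Black win probability stays outside the
# 40-60% band for several consecutive moves.
def first_clear_advantage(win_prob_black, hi=0.60, lo=0.40, hold=5):
    """Return (move number, side) of the first point where the eval stays
    above hi or below lo for `hold` consecutive moves, else None."""
    run, side = 0, None
    for i, p in enumerate(win_prob_black, start=1):
        current = "Black" if p > hi else "White" if p < lo else None
        if current and current == side:
            run += 1
        else:
            side, run = current, (1 if current else 0)
        if side and run >= hold:
            return i - hold + 1, side
    return None

# made-up trace drifting towards White (the colour AlphaGo had in game 1)
trace = [0.52, 0.49, 0.47, 0.45, 0.41, 0.38, 0.37, 0.36, 0.35, 0.33, 0.30]
print(first_clear_advantage(trace))   # -> (6, 'White')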
User avatar
towforce
Posts: 11542
Joined: Thu Mar 09, 2006 12:57 am
Location: Birmingham UK

Re: Lee Sedol vs. AlphaGo [link to live feed]

Post by towforce »

Laskos wrote:Here is how the games look according to Crazy Stone's evaluation. The vertical axis denotes the probability of a win for Black; the lines are for Black and White respectively. From my experience with this quantity in Crazy Stone, anything above 60% or below 40% shows a clear advantage. It seems that Crazy Stone sees a clear advantage for AlphaGo even in the early stages of the game, especially in the hard-fought opening of the first game. It also spots some mistakes that are clear to Crazy Stone, like Lee Sedol's in the late opening of game 2 (the same errors are spotted by the Zenith Go software). I don't know why 1600 Elo engines like Crazy Stone and Zenith show a plausible advantage fairly early in games between 3500+ Elo opponents, while 9 dan pro commentators in the 3000-3400 Elo range seem so confused. Maybe these "weak" MCTS engines only seem to be correct, but for the wrong reasons.

Game 1: [Crazy Stone evaluation graph]

Game 2: [Crazy Stone evaluation graph]
My experience of playing human chess masters is that I think I'm doing better than expected, and then suddenly a win for the opponent emerges.

If Crazy Stone genuinely had a good evaluation, it would be able to beat human opponents. Maybe it is weak at evaluating the "frameworks" that will eventually become territory?
Writing is the antidote to confusion.
It's not "how smart you are", it's "how are you smart".
Your brain doesn't work the way you want, so train it!