peter wrote:
Hi!
Lee Sedol has won his first game in the match.
3-1

Regression!


syzygy wrote:
peter wrote:
Hi!
Lee Sedol has won his first game in the match.
3-1

Regression!

Lee Sedol has never lost four times in a row; he has only lost three times in a row on three occasions: once in 2002, once in 2012, and of course in his current match with AlphaGo.
matthewlai wrote:
Yes, that's exactly what I meant. Handwritten evaluation functions in chess are good enough that switching to a neural network (slightly better outputs but much slower) is a net negative.

I really hope someone skilled in ML steps in, takes it as far as it can go, and sees what happens.
Even if the AlphaGo/Giraffe approach doesn't work, there are still many possible machine learning approaches that can be explored.
I think it's dangerous to say that chess nowadays is close to perfect. It's an easy illusion to have when we don't know any better. People thought the same with Rybka many years ago, and nowadays top engines are a few hundred Elo stronger than early Rybka.

It's not dangerous, it's plain wrong.
Laskos wrote:
Anyway, pretty happy to have a tool like Crazy Stone that can oftentimes give the correct analysis, although it is analyzing players almost 2000 Elo points stronger.

Kai, Crazy Stone couldn't find a lot of the moves played in this game. That is what puts it about 2000 Elo lower. But give it a position and ask it, "Hey, what do you think about this position?" and it returns a reasonable assessment.
peter wrote:
Hi!
Lee Sedol has won his first game in the match.
3-1

That's awesome!!
matthewlai wrote:
bob wrote:
The only downside is that Go "patterns" are far simpler than chess. Black/white/empty is far easier to pattern-match and train with than the plethora of different piece types that interact in different ways in chess. So maybe it will work, and maybe it won't. We will get to wait and see.

I believe you are the only person I know who thinks Go patterns are simpler than chess. Do you play Go by any chance?

My comment about "simple" has to do with pattern-matching specifically. Black/white/empty is a simpler pattern to recognize than the 13 states a square can have in chess (actually more than that, thanks to castling, the fifty-move rule, etc.). I didn't say Go was a simpler game, just that its patterns are easier to match. And in Go there is little difference between squares, which is also unlike chess.
I've seen good results here on projects that are trying to analyze radar images, photographs, and such. But how many layers are needed to handle chess? To handle it well? That's an open question.
Yes, input encoding is a bit more complicated for chess, but there are still many ways to map chess positions to smooth feature spaces. I explored one in Giraffe (using piece lists and coordinates instead of bitmaps), and got good results. That took a few hours of thinking to come up with, and anyone skilled in machine learning should have been able to do that as easily. That's really the easy part.
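To make the piece-list idea concrete, here is a minimal sketch in Python using the python-chess library. The feature layout is invented for illustration and far simpler than what Giraffe actually uses; the point is just that each piece owns a fixed slot holding (presence, file, rank), which gives a network a smoother input than a raw bitmap:

Code:
import chess

# Fixed slots per (color, piece type). Promotions beyond the slot count
# are ignored in this toy version.
SLOT_COUNTS = {chess.PAWN: 8, chess.KNIGHT: 2, chess.BISHOP: 2,
               chess.ROOK: 2, chess.QUEEN: 1, chess.KING: 1}

def encode(board: chess.Board):
    """Encode a position as a fixed-length list of floats."""
    features = []
    for color in (chess.WHITE, chess.BLACK):
        for piece_type, count in SLOT_COUNTS.items():
            squares = sorted(board.pieces(piece_type, color))[:count]
            for i in range(count):
                if i < len(squares):
                    sq = squares[i]
                    # Slot occupied: presence flag plus normalized coordinates.
                    features += [1.0,
                                 chess.square_file(sq) / 7.0,
                                 chess.square_rank(sq) / 7.0]
                else:
                    # Slot empty: the piece has been captured.
                    features += [0.0, 0.0, 0.0]
    return features  # 2 sides * 16 slots * 3 values = 96 floats

print(len(encode(chess.Board())))  # 96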
The hard part is finding actual useful patterns on top of your input encoding.
Compared to Go, patterns in chess are trivial.
No one has been able to write a reasonably good Go evaluation function by hand, while we have had reasonable chess evaluation functions for decades already.
In chess, if you have material + pcsq, you already have a quite reasonable evaluation function. There is nothing anywhere near that simple in Go. Patterns are larger, and within those regions, you have to determine if small differences make the pattern invalid or not. Then you have to look at interactions between those patterns on the whole board. Material and pcsq mean absolutely nothing in Go. There is no simple heuristic that gets you 70% of the way there (the way material does in chess).
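To show how little "material + pcsq" actually takes, here is a minimal sketch in Python with python-chess. The material values are the standard centipawn figures; the single pawn piece-square table is invented purely for illustration (a real engine would have a tuned table per piece type):

Code:
import chess

# Standard centipawn material values (the king is never captured).
MATERIAL = {chess.PAWN: 100, chess.KNIGHT: 320, chess.BISHOP: 330,
            chess.ROOK: 500, chess.QUEEN: 900, chess.KING: 0}

# Illustrative pawn table from White's point of view: small bonuses
# for advanced and central pawns. These numbers are made up.
PAWN_PSQ = [0] * 64
for sq in range(64):
    rank, file = chess.square_rank(sq), chess.square_file(sq)
    PAWN_PSQ[sq] = 5 * rank + (10 if file in (3, 4) and 2 <= rank <= 5 else 0)

def evaluate(board: chess.Board) -> int:
    """Material + pawn piece-square score in centipawns, from White's view."""
    score = 0
    for sq, piece in board.piece_map().items():
        value = MATERIAL[piece.piece_type]
        if piece.piece_type == chess.PAWN:
            # Mirror the table vertically for Black's pawns.
            value += PAWN_PSQ[sq if piece.color == chess.WHITE
                              else chess.square_mirror(sq)]
        score += value if piece.color == chess.WHITE else -value
    return score

print(evaluate(chess.Board()))  # 0: the starting position is symmetric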
Machine learning hasn't revolutionized chess yet, but not because chess is too difficult to learn. The evaluation function in Giraffe is already close to the state of the art.
It's because chess is too easy. So easy that even humans can hand-write reasonably good evaluation functions, and handwritten functions are almost always much faster, so they beat learned functions even when the learned functions produce slightly better values.
It makes a lot of sense to do machine learning in Go because patterns in Go are very difficult. It makes no sense to do machine learning in tic-tac-toe because patterns in tic-tac-toe are very easy. Chess is somewhere in-between.
Werewolf wrote:
Could a neural net be used in the search as well as the eval? If so, that could really change things and compensate for the speed loss.

Yes. That's what the policy network does in AlphaGo. I tried something similar (but much more primitive) in Giraffe, but didn't get very good results.
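For readers wondering what "a neural net in the search" looks like in practice, one common form is using a policy model to order moves so alpha-beta cutoffs come earlier. Below is a minimal sketch in Python with python-chess; policy_score is a hand-written stand-in for a real network (captures and checks first), since the point is only where the model plugs into the search:

Code:
import chess

def policy_score(board: chess.Board, move: chess.Move) -> float:
    # Stand-in for a policy network: a real engine would query a learned
    # model here for a prior probability on each move.
    score = 0.0
    if board.is_capture(move):
        score += 1.0
    if board.gives_check(move):
        score += 0.5
    return score

def negamax(board, depth, evaluate,
            alpha=float("-inf"), beta=float("inf")):
    # evaluate() must score the position from the side to move's perspective.
    if depth == 0 or board.is_game_over():
        return evaluate(board)
    # The "neural net in the search": try high-prior moves first so the
    # beta cutoff below triggers sooner.
    moves = sorted(board.legal_moves,
                   key=lambda m: policy_score(board, m), reverse=True)
    best = float("-inf")
    for move in moves:
        board.push(move)
        best = max(best, -negamax(board, depth - 1, evaluate, -beta, -alpha))
        board.pop()
        alpha = max(alpha, best)
        if alpha >= beta:
            break  # cutoff: good ordering makes this happen earlier
    return best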
Werewolf wrote:
Could a neural net be used in the search as well as the eval? If so, that could really change things and compensate for the speed loss.

Several companies are working on neural-net chips implemented in hardware; when these chips become available, using a neural net in the search will be feasible.
Joost Buijs wrote:
Werewolf wrote:
Could a neural net be used in the search as well as the eval? If so, that could really change things and compensate for the speed loss.

Several companies are working on neural-net chips implemented in hardware; when these chips become available, using a neural net in the search will be feasible.

Based on AlphaGo's own ratings, with Crazy Stone at roughly 2000 Elo, it is interesting to see that even a single-machine AlphaGo with 40 cores and 2 GPUs (not 280) is still estimated at nearly 2800 Elo. In fact, jumping from one GPU to two GPUs was worth over 600 Elo.
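To put those numbers in perspective, the standard Elo formula converts a rating gap into an expected score; a quick check in Python:

Code:
# Expected score for a player rated `diff` Elo points above the opponent,
# using the standard logistic Elo model.
def expected_score(diff: float) -> float:
    return 1.0 / (1.0 + 10.0 ** (-diff / 400.0))

print(round(expected_score(600), 3))  # ~0.969: about 97% of the points
print(round(expected_score(800), 3))  # ~0.990: the ~2000 vs ~2800 gap above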
The way I see it, AlphaGo with its 1,920 CPUs and 280 GPUs is now at the level Deep Blue was at against Kasparov in 1997, so the best is yet to come.