
Re: Deep Pink: a chess engine using deep learning

Posted: Mon Feb 06, 2017 6:59 pm
by Karlo Bala
nkg114mc wrote:Just found this blog post about deep learning in chess, by Erik Bernhardsson

http://blog.yhat.com/posts/deep-learning-chess.html

And the source code:

https://github.com/erikbern/deep-pink

It mentions an engine, "Deep Pink", which includes a trained neural network model (as the evaluation function). Maybe someone would be interested.

What do you think about this work?
It is very difficult to extract material from the PST. If, for example, there is no position with a white knight on h8 in the training set, then the evaluation of such positions will be effectively undefined.

Re: Deep Pink: a chess engine using deep learning

Posted: Mon Feb 06, 2017 7:02 pm
by matthewlai
Karlo Bala wrote:
nkg114mc wrote:Just found this blog post about deep learning in chess, by Erik Bernhardsson

http://blog.yhat.com/posts/deep-learning-chess.html

And the source code:

https://github.com/erikbern/deep-pink

It mentions an engine, "Deep Pink", which includes a trained neural network model (as the evaluation function). Maybe someone would be interested.

What do you think about this work?
It is very difficult to extract material from the PST. If, for example, there is no position with a white knight on h8 in the training set, then the evaluation of such positions will be effectively undefined.
Yes, that's a limitation in the feature representation they used. See the Giraffe paper for an alternative representation that doesn't suffer from this.
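To make the two representations concrete, here is a small sketch (my own illustration, not Deep Pink's or Giraffe's actual code) of the difference. In a one-hot piece-square encoding, every (piece, square) pair gets its own weight, so a pair never seen in training keeps an arbitrary untrained weight; a coordinate-style encoding in the spirit of Giraffe feeds file and rank as continuous values, so nearby squares share structure:

```python
def one_hot_features(board):
    """768-dim piece-square encoding: one input per (piece, square) pair.
    A pair absent from the training set keeps an untrained weight, so
    e.g. a white knight on h8 gets a meaningless score.
    board: list of (piece_id 0..11, square 0..63)."""
    feats = [0.0] * (12 * 64)
    for piece, square in board:
        feats[piece * 64 + square] = 1.0
    return feats

def coordinate_features(board):
    """Coordinate encoding: each piece contributes its id plus normalized
    file and rank, so unseen squares still produce sensible inputs and
    the network can generalize across neighbouring squares."""
    feats = []
    for piece, square in board:
        feats.extend([piece, (square % 8) / 7.0, (square // 8) / 7.0])
    return feats
```

Under the first encoding the h8-knight input is a coordinate the model has literally never activated; under the second it is just an extreme but familiar (file, rank) value.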

Re: Deep Pink: a chess engine using deep learning

Posted: Mon Feb 06, 2017 7:59 pm
by Gerd Isenberg
I wonder what is new compared to Bernhardsson's November 2014 blog post?
https://erikbern.com/2014/11/29/deep-le ... for-chess/

Re: Deep Pink: a chess engine using deep learning

Posted: Mon Feb 06, 2017 9:04 pm
by nkg114mc
Hi Gerd, I just realized that it is an old post from 2014. I saw it on Kaggle's Facebook page, where it was reblogged by Yhat three days ago. I guess the only thing "new" might be in the source code, where the author made some minor changes up until last spring.

Re: Deep Pink: a chess engine using deep learning

Posted: Tue Feb 07, 2017 8:31 am
by Gerd Isenberg
Hi Chao,
often revised versions of papers or pages appear - but this one does not seem to be of that sort - only a copy of an old post. Anyway, an interesting topic ...

Giraffe and, despite no engine being available, DeepChess are more promising approaches ...
http://www.cs.tau.ac.il/~wolf/papers/deepchess.pdf

Gerd

Re: Deep Pink: a chess engine using deep learning

Posted: Tue Feb 07, 2017 9:01 pm
by nkg114mc
Hi Gerd, thank you for the suggestions and links! Yes, I agree with you. After making this post I found that more refined work, like Matthew's Giraffe and DeepChess, already exists. The work on Deep Pink is still at an early stage IMO.

Also, thanks for all the comments and discussion above in this post. I will follow up on Matthew's Giraffe project, and try to explore more based on the existing work.

I will come back and update this post if I find something interesting~

Re: Deep Pink: a chess engine using deep learning

Posted: Tue Feb 21, 2017 9:26 pm
by thomasahle
jdart wrote:"Players will choose an optimal or near-optimal move" is a bad assumption for the FICS dataset. Players will choose better than random moves, which is part of the theory.
I think this depends on how robust your model and training are to noise. Say 10% of moves are blunders (more than 50cp off from the optimal move); then robust statistics should be able to ignore those completely as outliers. I think typical deep learning methods have some robustness, but certainly not as much as what the theory predicts is possible.
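As a toy illustration of the robustness point (my own numbers, nothing from Deep Pink): a robust loss such as the Huber loss is quadratic near zero but only linear in the tails, so a blunder-sized label error contributes far less to the total loss than under plain squared error:

```python
def squared_loss(err):
    return err * err

def huber_loss(err, delta=1.0):
    # Quadratic for |err| <= delta, linear beyond: large label errors
    # (blunders) contribute far less gradient than under squared loss.
    a = abs(err)
    return 0.5 * a * a if a <= delta else delta * (a - 0.5 * delta)

# 10% of the labels are way off, standing in for blundered moves.
errors = [0.1] * 9 + [5.0]
sq = sum(squared_loss(e) for e in errors)   # ~25.09, outlier contributes 25
hu = sum(huber_loss(e) for e in errors)     # ~4.545, outlier contributes 4.5
```

The single outlier dominates the squared loss almost entirely, while under the Huber loss it is merely the largest of several comparable terms.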

Deep Pink appears to play pretty well in the opening and middlegame, and then starts to blunder badly in the endgame. It does things like giving away a rook for free. I wonder if this is because endgames have more blunders, and thus more noise, or because the training set just didn't have enough hard cases in this regime.

In any case, since Deep Pink only tries to learn an evaluation function, and then uses standard search on top of that, it should certainly be able to play better than the players whose games it was trained on.

Re: Deep Pink: a chess engine using deep learning

Posted: Fri Mar 03, 2017 9:54 am
by jorose
I've considered doing this. In fact, it might be possible to greatly compress an endgame tablebase: it is quite possible we could fit a network (I feel like saying over-fit, but technically that would be wrong) to the degree that it makes no errors, with its parameters consuming less space than current tablebases.
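A minimal sketch of the idea (entirely my own toy, with a made-up four-entry "tablebase"): deliberately fit a model until it reproduces every stored WDL entry exactly, so the learned parameters act as a lossless, compressed copy of the table. Here plain logistic regression memorizes a tiny separable table with just three parameters:

```python
import math

table = {(0, 0): 0, (0, 1): 1, (1, 0): 1, (1, 1): 1}  # position -> win flag

w = [0.0, 0.0]
b = 0.0
lr = 0.5
for _ in range(2000):                     # plain gradient descent on log-loss
    for (x1, x2), y in table.items():
        p = 1.0 / (1.0 + math.exp(-(w[0] * x1 + w[1] * x2 + b)))
        g = p - y                         # gradient of the log-loss
        w[0] -= lr * g * x1
        w[1] -= lr * g * x2
        b -= lr * g

def probe(x1, x2):
    """Replay the memorized table from the 3 learned parameters."""
    return 1 if w[0] * x1 + w[1] * x2 + b > 0 else 0
```

After training, probe() reproduces all four entries exactly; the real question for a tablebase is whether a network that achieves this on billions of positions ends up smaller than the compressed table itself.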

Re: Deep Pink: a chess engine using deep learning

Posted: Fri Mar 03, 2017 10:08 am
by jorose
matthewlai wrote: Basically we want an evaluation function that returns not just a mean, but also a certainty/confidence.
I've been thinking about this for the past couple of weeks as well, though not for what the evaluation function returns, but for move ordering functions.

Specifically, when searching a null window in an alpha-beta framework, we are not interested in the move which will return the highest score, but rather the move which is most likely to return a value greater than or equal to beta. I have considered training a neural network for move probabilities similar to the one in Giraffe, but including the beta bound as a feature, which you didn't do IIRC.
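A hypothetical sketch of that ordering criterion (the names and the `rough_value` stand-in are mine; in the real thing a trained network would take both the move features and beta as inputs and output the probability directly):

```python
import math

def cutoff_probability(rough_value, beta, scale=100.0):
    # Squash the margin (predicted value - beta, in centipawns) into a
    # probability; `scale` plays the role of the model's uncertainty.
    return 1.0 / (1.0 + math.exp(-(rough_value - beta) / scale))

def order_moves(moves, beta):
    """moves: list of (move, rough_value). Most-likely-cutoff first."""
    return sorted(moves, key=lambda m: cutoff_probability(m[1], beta),
                  reverse=True)

moves = [("Nf3", 20), ("Qxb7", 150), ("a3", -40)]
# With beta = 50, only Qxb7's rough value exceeds beta, so it is
# searched first.
```

The point of feeding beta in is that the same move set can be ordered differently at different bounds, which a bound-independent policy cannot do.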

I have also had several ideas related to integrating UCB algorithms into move ordering, but it is hard, as most UCB algorithms involve visit counts which would not be available in alpha-beta. If I come up with a good idea here, I'm hoping one of my professors will let me do a semester project on it.
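To make the "counts" problem concrete, here is the standard UCB1 rule (constants mine): the exploration bonus needs a per-move visit count and a total count, which plain alpha-beta search does not maintain:

```python
import math

def ucb1(mean_score, visits, total_visits, c=1.4):
    # UCB1: exploit the mean, plus an exploration bonus that grows for
    # rarely-tried moves; unvisited moves are tried first.
    if visits == 0:
        return float("inf")
    return mean_score + c * math.sqrt(math.log(total_visits) / visits)

# A rarely-tried move outranks a slightly better but well-explored one:
# ucb1(0.4, 2, 100) > ucb1(0.5, 90, 100)
```

MCTS engines get these counts for free from the tree; grafting them onto alpha-beta would require extra bookkeeping per move, which is presumably where the difficulty lies.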

Re: Deep Pink: a chess engine using deep learning

Posted: Mon Mar 06, 2017 12:52 am
by noobpwnftw
Fulvio wrote:
jdart wrote: 2. "Players will choose an optimal or near-optimal move" is a bad assumption for the FICS dataset.
I wonder if anyone has ever tried to use tablebases to train a deep learning engine. After all the code to create all the positions and correct moves already exists; it is simply a matter of adding the training algorithm to the process...
Yes, training a NN against a WDL tablebase is possible by exploiting the overfitting effect, with one network per piece combination. But considering that the size of WDL tablebases is generally acceptable, that the results from a NN are less accurate, and that probing a NN is slower than just probing the tablebase, it is not very promising.

However, using a NN for near-root move sieving or reductions looks like the way to go: as the search depth goes higher, we can gradually loosen the threshold.
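A sketch of that depth-dependent sieve (the numbers and the linear schedule are hypothetical, just to show the shape of the idea): a NN-style move probability must clear a threshold to be searched near the root, and the threshold loosens as the iteration depth grows, re-admitting moves that were sieved out earlier:

```python
def keep_move(move_probability, depth, base=0.20, decay=0.03):
    # Threshold shrinks linearly with depth (never below zero), so
    # deeper iterations search moves that shallow ones pruned.
    threshold = max(0.0, base - decay * depth)
    return move_probability >= threshold

# A 10%-probability move is sieved out at depth 1 (threshold 0.17)
# but searched at depth 5 (threshold 0.05).
```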