Deep Pink: a chess engine using deep learning

Discussion of chess software programming and technical issues.

Moderators: hgm, Rebel, chrisw

Karlo Bala
Posts: 373
Joined: Wed Mar 22, 2006 10:17 am
Location: Novi Sad, Serbia
Full name: Karlo Balla

Re: Deep Pink: a chess engine using deep learning

Post by Karlo Bala »

nkg114mc wrote: Just found this blog post about deep learning in chess, by Erik Bernhardsson

http://blog.yhat.com/posts/deep-learning-chess.html

And the source code:

https://github.com/erikbern/deep-pink

It mentions an engine, "Deep-Pink", which includes a trained neural network model (for the evaluation function). Maybe someone would be interested.

What do you think about this work?
It is very difficult to extract material from the PST. If, for example, there is no position with a white knight on h8 in the training set, then the evaluation of such positions will be undefined.
Best Regards,
Karlo Balla Jr.
matthewlai
Posts: 793
Joined: Sun Aug 03, 2014 4:48 am
Location: London, UK

Re: Deep Pink: a chess engine using deep learning

Post by matthewlai »

Karlo Bala wrote:
nkg114mc wrote: Just found this blog post about deep learning in chess, by Erik Bernhardsson

http://blog.yhat.com/posts/deep-learning-chess.html

And the source code:

https://github.com/erikbern/deep-pink

It mentions an engine, "Deep-Pink", which includes a trained neural network model (for the evaluation function). Maybe someone would be interested.

What do you think about this work?
It is very difficult to extract material from the PST. If, for example, there is no position with a white knight on h8 in the training set, then the evaluation of such positions will be undefined.
Yes, that's a limitation in the feature representation they used. See the Giraffe paper for an alternative representation that doesn't suffer from this.
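
Roughly, the difference looks like this (a simplified sketch using the python-chess package; neither function is the actual Deep Pink or Giraffe code): a square-indexed one-hot encoding reserves a dedicated input for "white knight on h8" that may never fire during training, while a piece-centric coordinate encoding describes each piece by where it is, so rare squares are not blind spots.

Code:

# Simplified sketch only, not the actual Deep Pink or Giraffe code.
# Uses the python-chess package ("pip install chess") for board handling.
import numpy as np
import chess

def square_onehot_features(board: chess.Board) -> np.ndarray:
    """768 inputs: one slot per (colour, piece type, square).
    A white knight on h8 activates a slot that may never be non-zero
    in the training data, so its weights are effectively untrained."""
    x = np.zeros((2, 6, 64), dtype=np.float32)
    for sq, piece in board.piece_map().items():
        x[int(piece.color), piece.piece_type - 1, sq] = 1.0
    return x.reshape(-1)

def piece_coordinate_features(board: chess.Board) -> np.ndarray:
    """Piece-centric encoding in the spirit of Giraffe: each piece slot
    stores (present, file, rank), so material is explicit and an unusual
    square is just an unusual coordinate, not an unseen input."""
    slots = {chess.PAWN: 8, chess.KNIGHT: 2, chess.BISHOP: 2,
             chess.ROOK: 2, chess.QUEEN: 1, chess.KING: 1}
    feats = []
    for color in (chess.WHITE, chess.BLACK):
        for ptype, n in slots.items():
            squares = sorted(board.pieces(ptype, color))[:n]
            for i in range(n):
                if i < len(squares):
                    sq = squares[i]
                    feats += [1.0, chess.square_file(sq) / 7.0,
                              chess.square_rank(sq) / 7.0]
                else:
                    feats += [0.0, 0.0, 0.0]   # piece missing / captured
    return np.asarray(feats, dtype=np.float32)
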
Disclosure: I work for DeepMind on the AlphaZero project, but everything I say here is personal opinion and does not reflect the views of DeepMind / Alphabet.
Gerd Isenberg
Posts: 2250
Joined: Wed Mar 08, 2006 8:47 pm
Location: Hattingen, Germany

Re: Deep Pink: a chess engine using deep learning

Post by Gerd Isenberg »

I wonder what is new compared to Bernhardsson's November 2014 blog post?
https://erikbern.com/2014/11/29/deep-le ... for-chess/
nkg114mc
Posts: 74
Joined: Sat Dec 18, 2010 5:19 pm
Location: Tianjin, China
Full name: Chao M.

Re: Deep Pink: a chess engine using deep learning

Post by nkg114mc »

Hi Gerd, I just realized that it is an old post from 2014. I saw it on Kaggle's Facebook page, where it was reblogged by Yhat three days ago. I guess the only thing "new" might be the source code, where the author made some minor changes up until last spring.
Gerd Isenberg
Posts: 2250
Joined: Wed Mar 08, 2006 8:47 pm
Location: Hattingen, Germany

Re: Deep Pink: a chess engine using deep learning

Post by Gerd Isenberg »

Hi Chao,
Often revised versions of papers or pages appear - but this one does not seem to be of that sort, only a copy of the old post. Anyway, an interesting topic ...

Giraffe and, even though no engine is available for it, DeepChess are more promising approaches ...
http://www.cs.tau.ac.il/~wolf/papers/deepchess.pdf

Gerd
nkg114mc
Posts: 74
Joined: Sat Dec 18, 2010 5:19 pm
Location: Tianjin, China
Full name: Chao M.

Re: Deep Pink: a chess engine using deep learning

Post by nkg114mc »

Hi Gerd, thank you for the suggestions and links! Yes, I agree with you. After making this post I found that more refined work, like Matthew's Giraffe and DeepChess, already exists. The work on Deep Pink is still at an early stage IMO.

Also, thanks for all the comments and discussion above. I will follow up on Matthew's Giraffe project and try to explore further based on the existing work.

I will come back and post an update here if I find something interesting.
thomasahle
Posts: 94
Joined: Thu Feb 27, 2014 8:19 pm

Re: Deep Pink: a chess engine using deep learning

Post by thomasahle »

jdart wrote:"Players will choose an optimal or near-optimal move" is a bad assumption for the FICS dataset. Players will choose better than random moves, which is part of the theory.
I think this should depend on how robust your model and training are to noise. Say 10% of moves are blunders (more than 50cp worse than the optimal move); then robust statistics should be able to ignore those entirely as outliers. I think typical deep learning methods have some robustness, but certainly not as much as the theory predicts is possible.
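
For illustration, a tiny numpy toy (my own sketch, nothing to do with Deep Pink's training code) showing how a Huber-style loss caps the pull of blunder positions compared with a plain squared loss:

Code:

# Toy illustration (not Deep Pink's code): the gradient of a squared loss
# grows without bound on outliers (blunder moves), while a Huber-style
# gradient is capped, so a minority of noisy labels distorts the fit less.
import numpy as np

def squared_grad(residual):
    return 2.0 * residual

def huber_grad(residual, delta=50.0):          # delta in centipawns
    return 2.0 * np.clip(residual, -delta, delta)

rng = np.random.default_rng(0)
residuals = rng.normal(0.0, 20.0, size=1000)   # "good" moves, small error
residuals[:100] += 400.0                       # 10% blunders, ~400cp off

print("mean |grad|, squared loss:", np.abs(squared_grad(residuals)).mean())
print("mean |grad|, Huber loss  :", np.abs(huber_grad(residuals)).mean())
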

Deep Pink appears to play pretty well in the opening and middlegame, and then starts to blunder badly in the endgame. It does things like give away a rook for free. I wonder if this is because endgames have more blunders, and thus more noise, or whether the training set just didn't have enough hard cases in this regime.

In any case, since Deep Pink only tries to learn an evaluation function, and then uses standard search on top of that, it should certainly be able to play better than the players whose games it was trained on.
jorose
Posts: 358
Joined: Thu Jan 22, 2015 3:21 pm
Location: Zurich, Switzerland
Full name: Jonathan Rosenthal

Re: Deep Pink: a chess engine using deep learning

Post by jorose »

I've considered doing this. In fact, it might be possible to greatly compress an endgame tablebase, since we could quite possibly fit (I feel like saying over-fit, but technically that would be wrong) a network to the degree that it makes no errors, with its parameters consuming less space than the current databases.
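
Very roughly, the check such a net-as-tablebase would have to pass looks like this (a hypothetical sketch; probe_wdl, net_forward, encode and net_params are placeholders, not real tablebase or engine APIs): enumerate every position of the piece combination, demand zero disagreements with the real table, and compare the parameter bytes against the raw table size.

Code:

# Hypothetical sketch only: probe_wdl, net_forward, encode and net_params
# are placeholders, not real tablebase or engine calls.
import numpy as np

def verify_exact_fit(positions, probe_wdl, encode, net_forward, net_params):
    """positions: list of all legal positions for one piece combination.
    probe_wdl(pos) -> -1 / 0 / +1 from the real table (placeholder).
    net_forward(x) -> raw score from the candidate net (placeholder).
    net_params: list of numpy weight arrays, i.e. what we would store."""
    errors = sum(int(np.sign(net_forward(encode(p)))) != probe_wdl(p)
                 for p in positions)
    net_bytes = sum(w.nbytes for w in net_params)
    table_bytes = len(positions)       # ~1 byte per WDL entry, uncompressed
    return errors, net_bytes, table_bytes
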
jorose
Posts: 358
Joined: Thu Jan 22, 2015 3:21 pm
Location: Zurich, Switzerland
Full name: Jonathan Rosenthal

Re: Deep Pink: a chess engine using deep learning

Post by jorose »

matthewlai wrote: Basically we want an evaluation function that returns not just a mean, but also a certainty/confidence.
I've been thinking about this for the past couple of weeks as well, though not for what the evaluation function returns, but for move-ordering functions.

Specifically, when searching a null window in an alpha-beta framework, we are not interested in the move that will return the highest score, but rather the move that is most likely to return a value greater than or equal to beta. I have considered training a neural network for move probabilities, similar to the one in Giraffe, but including the beta bound as a feature, which you didn't do IIRC.

I have also had several ideas about integrating UCB algorithms into move ordering, but it is hard, as most UCB algorithms rely on visit counts that are not available in alpha-beta. If I come up with a good idea here, I'm hoping one of my professors will let me do a semester project on it.
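
Something like this toy sketch is what I mean by including the bound (plain numpy; the features and weights are made up, not Giraffe's actual network): append beta as one more input and order moves by the estimated probability of a fail high.

Code:

# Toy sketch (not Giraffe's code): score each move by the estimated
# probability that searching it returns a value >= beta, then sort by it.
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def fail_high_probability(move_features, beta, weights, bias):
    """move_features: (n_moves, n_features). Beta is appended as an extra
    input, so the same move can be ordered differently under different
    bounds. weights has shape (n_features + 1,), bias is a scalar."""
    x = np.hstack([move_features,
                   np.full((move_features.shape[0], 1), beta)])
    return sigmoid(x @ weights + bias)

def order_moves(moves, move_features, beta, weights, bias):
    p = fail_high_probability(move_features, beta, weights, bias)
    return [moves[i] for i in np.argsort(-p)]   # likely cutoffs first
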
noobpwnftw
Posts: 560
Joined: Sun Nov 08, 2015 11:10 pm

Re: Deep Pink: a chess engine using deep learning

Post by noobpwnftw »

Fulvio wrote:
jdart wrote: 2. "Players will choose an optimal or near-optimal move" is a bad assumption for the FICS dataset.
I wonder if anyone has ever tried to use tablebases to train a deep learning engine. After all, the code to create all the positions and correct moves already exists; it is simply a matter of adding the training algorithm to the process...
Yes, training a NN against a WDL tablebase is possible by exploiting the over-learning effect, with one set of NN weights per piece combination. But considering that the size of the WDL tablebases is generally acceptable, and that the NN's results are less accurate and slower than just probing the tablebase, it is not very promising to do so.

However, using a NN for near-root move sieving or reductions looks like the way to go; as the search depth grows, we can gradually loosen the threshold.
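
Roughly like this sketch (made-up numbers; policy_score stands in for whatever NN output is used): sieve moves the net dislikes near the root, with a threshold that loosens as depth grows.

Code:

# Sketch only (made-up thresholds; policy_score is a placeholder for a NN):
# near the root, drop or reduce moves the net dislikes, but loosen the
# cutoff as depth increases so deeper searches still examine quiet moves.
def sieve_moves(moves, policy_score, depth,
                base_threshold=0.20, loosen_per_ply=0.02):
    # With the defaults above, at depth 10 the threshold reaches 0.0,
    # so nothing is filtered any more.
    threshold = max(0.0, base_threshold - loosen_per_ply * depth)
    return [m for m in moves if policy_score(m) >= threshold]
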