How far away are we from deep learning Stockfish, Komodo,...

Discussion of chess software programming and technical issues.

Moderators: bob, hgm, Harvey Williamson

Hai
Posts: 455
Joined: Sun Aug 04, 2013 11:19 am

Re: How far away are we from deep learning Stockfish, Komodo

Post by Hai » Mon May 22, 2017 7:47 am

Laskos wrote:
Hai wrote:2r2rk1/1bpR1p2/1pq1pQp1/p3P2p/P1PR3P/5N2/2P2PPK/8 w - - 0 32
In the position above, Stockfish with 6 cores and depth 40 can't find Kg3 :roll:.
Giraffe with only one core found Kg3 at depth 22 :lol:.
Much better result, and much faster.

It looks like no matter how strong Stockfish will be, it may out-calculate everything, but it will never find genius moves based on a correct evaluation.

How can I change the cores in Giraffe?
In my experiments I found Giraffe's eval significantly weaker than that of Fruit 2.1, which is fairly basic. The main strength of Giraffe, paradoxically, is the regular search adopted from other conventional chess engines. I also tried an experiment with the Sungorus engine, which has only broken PST (piece-square tables with fourfold symmetry that made the engine's play really painful to watch) and basic material, combined with the Andscacs search, and it outperformed Giraffe in game play. Sure, the Andscacs search is much stronger than Giraffe's, but still, Giraffe's eval is probably not much above PST+MAT.

In the foreseeable future there is no place for deep learning in chess engines; the eval is already too good in chess even with just PST+MAT, not to mention that of good engines. On the other hand, Giraffe's eval is still better than that of a human patzer, which is the goal of deep learning: to beat humans at human-like tasks. Chess at the 3400 Elo level is not a very human-like task.
The first chess engines were much weaker than Giraffe, and Giraffe is the first deep learning chess engine. I think we are at the beginning of improving and developing deep learning chess engines.

Would it, for example, be possible to take Stockfish's code and change only the evaluation function into a deep learning evaluation function?

And also use the CPU + GPU for different tasks?

Maybe a code mix of Giraffe and Deep Pink, plus some improvements, would make a better deep learning engine?

Hai
Posts: 455
Joined: Sun Aug 04, 2013 11:19 am

Re: How far away are we from deep learning Stockfish, Komodo

Post by Hai » Mon May 22, 2017 7:54 am

noobpwnftw wrote:My idea: if we can use an NN to filter out X% of all possible moves with Y% overall accuracy, while slowing down the engine by no more than Z%, and manage to achieve (1 - X) * Y > Z, that can be an improvement.

I just wonder: isn't that what we've always been working on with null move, probcut, history heuristics, etc.? How much more do you expect from an NN, considering we will effectively interpret the same position more than once per eval, resulting in a big Z?

One other thing is that we could exploit over-fitting and train NNs to replace WDL tablebases, especially the bigger ones. So far they are small in size and probing is fast enough. Let's just trick Ronald de Man into writing a 7+ man generator first. :twisted:
Just install the 7-man tb without pawns :D
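The break-even condition quoted above can be tried with numbers. A minimal sketch of noobpwnftw's stated inequality, with purely hypothetical values for X, Y, and Z:

```python
def filter_pays_off(x, y, z):
    """noobpwnftw's stated break-even test for an NN move filter:
    prune a fraction x of all moves with overall accuracy y, at a
    fractional engine slowdown z; worthwhile when (1 - x) * y > z."""
    return (1.0 - x) * y > z

# Hypothetical numbers: prune 60% of moves at 90% accuracy.
print(filter_pays_off(0.6, 0.9, 0.30))  # True:  0.4 * 0.9 = 0.36 > 0.30
print(filter_pays_off(0.6, 0.9, 0.40))  # False: 0.36 < 0.40
```

This is a direct transcription of the inequality as written in the post, not a derived cost model; the sample values are made up for illustration.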

noobpwnftw
Posts: 360
Joined: Sun Nov 08, 2015 10:10 pm

Re: How far away are we from deep learning Stockfish, Komodo

Post by noobpwnftw » Mon May 22, 2017 8:20 am

Hai wrote: Would it, for example, be possible to take Stockfish's code and change only the evaluation function into a deep learning evaluation function?

And also use the CPU + GPU for different tasks?

Maybe a code mix of Giraffe and Deep Pink, plus some improvements, would make a better deep learning engine?
Why are you guys all addicted to the evaluation function... any NN that is fast and more accurate than null-move or history heuristics would be an improvement, and that is what NNs are best for: heuristics.

Cardoso
Posts: 293
Joined: Thu Mar 16, 2006 6:39 pm

Re: How far away are we from deep learning Stockfish, Komodo

Post by Cardoso » Mon May 22, 2017 8:48 am

About as far as we are from sending a manned mission to Mars.
Sorry, I meant Pluto.

Cardoso
Posts: 293
Joined: Thu Mar 16, 2006 6:39 pm

Re: How far away are we from deep learning Stockfish, Komodo

Post by Cardoso » Mon May 22, 2017 5:38 pm

Hai wrote:2r2rk1/1bpR1p2/1pq1pQp1/p3P2p/P1PR3P/5N2/2P2PPK/8 w - - 0 32
In the position above, Stockfish with 6 cores and depth 40 can't find Kg3 :roll:.
Giraffe with only one core found Kg3 at depth 22 :lol:.
Much better result, and much faster.

It looks like no matter how strong Stockfish will be, it may out-calculate everything, but it will never find genius moves based on a correct evaluation.

How can I change the cores in Giraffe?
I've got a different experiment for you.
How about a match, SF vs Giraffe, 200 games, LTC, using as many cores as available? I think that would clear your doubts. Game results are what matters.
(This is with all due respect to Giraffe's author and his work, which I think should continue.)

Uri Blass
Posts: 8605
Joined: Wed Mar 08, 2006 11:37 pm
Location: Tel-Aviv Israel

Re: How far away are we from deep learning Stockfish, Komodo

Post by Uri Blass » Mon May 22, 2017 8:00 pm

Hai wrote:2r2rk1/1bpR1p2/1pq1pQp1/p3P2p/P1PR3P/5N2/2P2PPK/8 w - - 0 32
In the position above, Stockfish with 6 cores and depth 40 can't find Kg3 :roll:.
Giraffe with only one core found Kg3 at depth 22 :lol:.
Much better result, and much faster.

It looks like no matter how strong Stockfish will be, it may out-calculate everything, but it will never find genius moves based on a correct evaluation.

How can I change the cores in Giraffe?
Stockfish does not need to find Kg3 to win with white.
I believe, based on Stockfish's evaluation, that Qf4 instead of Kg3 is also winning.
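For readers who want to check the board geometry behind the two candidate moves, here is a minimal FEN-placement parser (plain Python, no chess library). It only verifies that the white king stands on h2 and the queen on f6, and that g3 and f4 are empty; it says nothing about which move wins:

```python
def parse_fen_board(fen):
    """Map occupied squares (e.g. 'h2') to piece letters from a FEN string."""
    board = {}
    for rank_idx, row in enumerate(fen.split()[0].split("/")):
        file_idx = 0
        for ch in row:
            if ch.isdigit():
                file_idx += int(ch)  # a digit encodes that many empty squares
            else:
                board["abcdefgh"[file_idx] + str(8 - rank_idx)] = ch
                file_idx += 1
    return board

pos = parse_fen_board("2r2rk1/1bpR1p2/1pq1pQp1/p3P2p/P1PR3P/5N2/2P2PPK/8 w - - 0 32")
print(pos["h2"], pos["f6"])      # K Q: the white king and queen
print("g3" in pos, "f4" in pos)  # False False: both target squares are empty
```

So both Kg3 (king from h2) and Qf4 (queen from f6) move to empty squares in the diagrammed position; evaluating them is of course a job for an engine, not this sketch.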

Dann Corbit
Posts: 10192
Joined: Wed Mar 08, 2006 7:57 pm
Location: Redmond, WA USA
Contact:

Re: How far away are we from deep learning Stockfish, Komodo

Post by Dann Corbit » Mon May 22, 2017 8:22 pm

I don't think anyone knows the answer to that question.

The approach of Giraffe is hard for an ordinary CPU-based system to keep up with, because so much floating-point math is involved.

But, with unified memory, and using a GPU, it might prove exceptionally capable. Nobody has tried that yet, because unified-memory GPU systems are not up to par. Copying data to and from a GPU carries a very high cost, not to mention that GPUs don't handle recursion very well yet and video RAM is limited (though I have seen cards with 13 GB of video RAM).

But I think that the new AMD GPUs may communicate directly with the new AMD CPUs. That might make a really formidable platform for deep-learning-style chess computation.

I also think that even with current GPUs and CPUs, the GPU could be used as a coprocessor for mate search. With a box full of cards, 30-ply mate searches might be feasible, because the move generator on GPUs is so incredibly fast (Perft 11 in one hour, and we would expand the promising nodes first).

Something along the lines of:
"Parallel Depth First Proof Number Search" by Tomoyuki Kaneko
(For example, df-pn+ solves a Shogi mate in 1525 plies in 3 minutes.)

At some point, GPU cards will have full-strength recursion and memory that is transparent to the CPU. When that happens, I think the Giraffe-type approach will not only be feasible, it will be formidable.
Taking ideas is not a vice, it is a virtue. We have another word for this. It is called learning.
But sharing ideas is an even greater virtue. We have another word for this. It is called teaching.

jorose
Posts: 269
Joined: Thu Jan 22, 2015 2:21 pm
Location: Zurich, Switzerland
Full name: Jonathan Rosenthal

Re: How far away are we from deep learning Stockfish, Komodo

Post by jorose » Mon May 22, 2017 11:33 pm

Hai wrote:
Lyudmil Tsvetkov wrote:
ZirconiumX wrote:
Milos wrote:
ZirconiumX wrote:
noobpwnftw wrote:After some garage experiments, I think NN-based eval for chess is pretty much a joke even compared to current PST+MAT eval alone.

At most, it might be good for move pruning and sorting, prove me wrong.
Gladly.

I don't think the answer to "Are neural networks feasible for computer chess?" is "No", I think it's "Not yet".

AlphaGo required specialised hardware to win at Go, remember.
Answer is No and Never. Bojun is 100% correct.
If you took Giraffe and replaced its eval with SF's eval, it would easily gain a few hundred Elo.
I would disagree. The Stockfish evaluation has been carefully tuned in concert with the search, but the individual evaluation scores are very bad. I think Larry Kaufman said that Stockfish has the evaluation knowledge of an 1800 player.

The Achilles heel of Giraffe is that all of the individual multiplications that go through a neural net to get a result have to be done on the CPU. It'd be nice if we had developed a processor that could perform lots of multiplication in parallel. You know, like, a GPU.

But modern GPUs have the problem of all the code needing to be transferred via memory copying. Fortunately, we're working on that.

So the answer is still "not yet".
The similarity between Go and chess is that they are both played on a board. That's exactly where any similarity ends.
Oh, so Go has no tactics? No strategy? No opening theory? They are more similar than you think. I'll accept that Go will never have endgame tablebases, but the two games have a reasonable amount of things in common.
I do not know if and when Larry said that, but SF has the evaluation knowledge of at least a 2300-elo player.
1800-Elo players do not quite understand what outposts are or what piece-square tables are (apart from pushing pieces towards the enemy king and towards the enemy camp), and have just an intuitive notion of candidate passers, and so on. SF, on the other hand, has all this knowledge in its code.

So yes, current top engines have a positional knowledge of around 2300, nothing to boast about, but not rudimentary either.
But this means that the evaluation/positional knowledge is still very bad; it is even weaker than an international master's. But Giraffe is stronger, at least the 2016 version.
No, that is not true: Giraffe's positional knowledge is much weaker than that of an IM, and far more costly to compute than Stockfish's, which is actually a huge problem.
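The cost comparison is easy to make concrete: a Giraffe-style fully connected evaluator spends almost all of its time on dense matrix-vector products, one chain per evaluated position, whereas a hand-written eval is a few hundred integer operations. A rough sketch (the layer sizes here are hypothetical, not Giraffe's actual topology):

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical layer sizes for a small fully connected evaluator:
# 363 input features -> 128 hidden -> 64 hidden -> 1 score.
sizes = [363, 128, 64, 1]
weights = [rng.standard_normal((m, n)) for m, n in zip(sizes, sizes[1:])]

def evaluate(features):
    """One forward pass: a chain of matrix-vector products with tanh."""
    x = features
    for w in weights[:-1]:
        x = np.tanh(x @ w)
    return float(x @ weights[-1])

# Floating-point multiplications per evaluated position, dense layers only:
mults = sum(m * n for m, n in zip(sizes, sizes[1:]))
print(mults)  # 54720 = 363*128 + 128*64 + 64*1

score = evaluate(rng.standard_normal(363))
```

Even this toy network needs tens of thousands of floating-point multiplies per leaf node, which is exactly the per-node overhead being discussed; a GPU can batch these products, but then the copy cost raised earlier in the thread applies.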

Lyudmil Tsvetkov
Posts: 6052
Joined: Tue Jun 12, 2012 10:41 am

Re: How far away are we from deep learning Stockfish, Komodo

Post by Lyudmil Tsvetkov » Tue May 23, 2017 6:09 am

Hai wrote:
Lyudmil Tsvetkov wrote:
ZirconiumX wrote:
Milos wrote:
ZirconiumX wrote:
noobpwnftw wrote:After some garage experiments, I think NN-based eval for chess is pretty much a joke even compared to current PST+MAT eval alone.

At most, it might be good for move pruning and sorting, prove me wrong.
Gladly.

I don't think the answer to "Are neural networks feasible for computer chess?" is "No", I think it's "Not yet".

AlphaGo required specialised hardware to win at Go, remember.
Answer is No and Never. Bojun is 100% correct.
If you took Giraffe and replaced its eval with SF's eval, it would easily gain a few hundred Elo.
I would disagree. The Stockfish evaluation has been carefully tuned in concert with the search, but the individual evaluation scores are very bad. I think Larry Kaufman said that Stockfish has the evaluation knowledge of an 1800 player.

The Achilles heel of Giraffe is that all of the individual multiplications that go through a neural net to get a result have to be done on the CPU. It'd be nice if we had developed a processor that could perform lots of multiplication in parallel. You know, like, a GPU.

But modern GPUs have the problem of all the code needing to be transferred via memory copying. Fortunately, we're working on that.

So the answer is still "not yet".
The similarity between Go and chess is that they are both played on a board. That's exactly where any similarity ends.
Oh, so Go has no tactics? No strategy? No opening theory? They are more similar than you think. I'll accept that Go will never have endgame tablebases, but the two games have a reasonable amount of things in common.
I do not know if and when Larry said that, but SF has the evaluation knowledge of at least a 2300-elo player.
1800-Elo players do not quite understand what outposts are or what piece-square tables are (apart from pushing pieces towards the enemy king and towards the enemy camp), and have just an intuitive notion of candidate passers, and so on. SF, on the other hand, has all this knowledge in its code.

So yes, current top engines have a positional knowledge of around 2300, nothing to boast about, but not rudimentary either.
But this means that the evaluation/positional knowledge is still very bad; it is even weaker than an international master's. But Giraffe is stronger, at least the 2016 version.
Because of tactics, and because it does not commit the obvious shallow mistakes so typical of humans.

Hai
Posts: 455
Joined: Sun Aug 04, 2013 11:19 am

Re: How far away are we from deep learning Stockfish, Komodo

Post by Hai » Wed May 24, 2017 7:37 pm

Cardoso wrote:
Hai wrote:2r2rk1/1bpR1p2/1pq1pQp1/p3P2p/P1PR3P/5N2/2P2PPK/8 w - - 0 32
In the position above, Stockfish with 6 cores and depth 40 can't find Kg3 :roll:.
Giraffe with only one core found Kg3 at depth 22 :lol:.
Much better result, and much faster.

It looks like no matter how strong Stockfish will be, it may out-calculate everything, but it will never find genius moves based on a correct evaluation.

How can I change the cores in Giraffe?
I've got a different experiment for you.
How about a match, SF vs Giraffe, 200 games, LTC, using as many cores as available? I think that would clear your doubts. Game results are what matters.
(This is with all due respect to Giraffe's author and his work, which I think should continue.)
For me it is much more important to see how often Giraffe can get an advantage against Stockfish before it blunders tactically in the middlegame.

Post Reply