How far away are we from deep learning Stockfish, Komodo,...

Discussion of chess software programming and technical issues.

Moderators: hgm, Rebel, chrisw

User avatar
hgm
Posts: 27787
Joined: Fri Mar 10, 2006 10:06 am
Location: Amsterdam
Full name: H G Muller

Re: How far away are we from deep learning Stockfish, Komodo

Post by hgm »

Lyudmil Tsvetkov wrote:go on a chess-like 8x8 board will be solved at least 100 times faster than chess.
As Chess will never be solved in this Universe, that doesn't mean very much.

It also doesn't seem to be based on anything.
smatovic
Posts: 2639
Joined: Wed Mar 10, 2010 10:18 pm
Location: Hamburg, Germany
Full name: Srdja Matovic

Re: How far away are we from deep learning Stockfish, Komodo

Post by smatovic »

As Chess will never be solved in this Universe, that doesn't mean very much.
Applying Moore's Law to quantum computers,
I have hope of seeing an engine with perfect play.

...imagine you have a billion qubits or so.

--
Srdja
syzygy
Posts: 5557
Joined: Tue Feb 28, 2012 11:56 pm

Re: How far away are we from deep learning Stockfish, Komodo

Post by syzygy »

Hai wrote:3, 6, 9, 12 months?

Does it make sense to have a
Stockfish
Asmfish
Pedantfish
...
Deep learning Stockfish
Deep learning Komodo
Deep learning Houdini
...
No, that simply makes no sense at all. It's like asking whether it makes sense to have a combustion engine run on electricity.

The right question to ask is whether a "deep-learning" chess program will ever become stronger than Stockfish. The answer is unknown. It will not be soon.
syzygy
Posts: 5557
Joined: Tue Feb 28, 2012 11:56 pm

Re: How far away are we from deep learning Stockfish, Komodo

Post by syzygy »

Hai wrote:2r2rk1/1bpR1p2/1pq1pQp1/p3P2p/P1PR3P/5N2/2P2PPK/8 w - - 0 32
In the position above, Stockfish with 6 cores at depth 40 can't find Kg3 :roll:.
Giraffe with only one core found Kg3 at depth 22 :lol:.
A much better result, and much faster.
My patzer engine with 1 core finds Kg3 at depth 14 after 2.486 seconds. (If it did not prune at all, it would probably have found it at a lower depth.)

Comparing programs based on a single position is entirely meaningless.
How can I change the cores in Giraffe?
I suppose with vi and gcc.
syzygy
Posts: 5557
Joined: Tue Feb 28, 2012 11:56 pm

Re: How far away are we from deep learning Stockfish, Komodo

Post by syzygy »

smatovic wrote:
As Chess will never be solved in this Universe, that doesn't mean very much.
Applying Moore's Law on quantum computers,
i have hope to see an engine with perfect play.

..imagine you have a billion qubits or so.
Quantum computers help as much as perfect move ordering. Not enough.
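To put rough numbers on that comparison: alpha-beta with perfect move ordering, much like a Grover-style quadratic quantum speedup, cuts the node count from about b^d to about b^(d/2). With the conventional ballpark figures for chess (branching factor about 35, game length about 80 plies), even the sped-up count is hopeless:

```python
# Rough, conventional figures: branching factor ~35, game length ~80 plies.
b, d = 35, 80

minimax_nodes = b ** d            # plain minimax: ~10^123 nodes
perfect_ordering = b ** (d // 2)  # perfect move ordering, or a Grover-style
                                  # quadratic speedup: ~10^62 nodes

# Still hopeless: ~10^62 nodes at 10^9 nodes/sec is on the order of 10^45 years.
print(len(str(perfect_ordering)))  # number of digits -> 62
```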
User avatar
Laskos
Posts: 10948
Joined: Wed Jul 26, 2006 10:21 pm
Full name: Kai Laskos

Re: How far away are we from deep learning Stockfish, Komodo

Post by Laskos »

Hai wrote:2r2rk1/1bpR1p2/1pq1pQp1/p3P2p/P1PR3P/5N2/2P2PPK/8 w - - 0 32
In the position above, Stockfish with 6 cores at depth 40 can't find Kg3 :roll:.
Giraffe with only one core found Kg3 at depth 22 :lol:.
A much better result, and much faster.

It looks like no matter how strong Stockfish becomes, it can outcalculate anything but never find genius moves based on correct evaluation.

How can I change the cores in Giraffe?
In my experiments I found Giraffe's eval significantly weaker than that of Fruit 2.1, which is fairly basic. The main strength of Giraffe, paradoxically, is the regular search it adopted from other conventional chess engines. I also tried the experiment of pairing the Sungorus eval, which has only broken PST (piece-square tables with fourfold symmetry that made the engine's play really painful to watch) and basic material, with the Andscacs search, and it outperformed Giraffe in game play. Sure, the Andscacs search is much stronger than Giraffe's, but still Giraffe's eval is probably not much above PST+MAT.

In the foreseeable future there is no room for deep learning in chess engines: the eval is already too good in chess even with just PST+MAT, to say nothing of the evals of good engines. On the other hand, Giraffe's eval is still better than that of a human patzer, and that is the goal of deep learning: beating humans at human-like tasks. Chess at the 3400 Elo level is not a very human-like task.
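For readers who have not seen one, a PST+MAT evaluator of the kind mentioned here really is tiny. This is a toy sketch (square indices 0..63 with a1 = 0; the centralization bonus stands in for real piece-square tables), not any engine's actual code:

```python
# Toy PST+MAT evaluation: material values plus a centralization bonus.
MATERIAL = {"P": 100, "N": 320, "B": 330, "R": 500, "Q": 900, "K": 0}

def pst(sq):
    """Piece-square bonus: 0 on the board edge, +12 on the four centre squares."""
    f, r = sq % 8, sq // 8
    d = max(abs(f - 3.5), abs(r - 3.5))  # 0.5 (centre) .. 3.5 (edge)
    return int((3.5 - d) * 4)

def evaluate(white, black):
    """white/black: dicts mapping square index -> piece letter.
    Returns a centipawn score, positive meaning good for White."""
    score = 0
    for sq, p in white.items():
        score += MATERIAL[p] + pst(sq)
    for sq, p in black.items():
        score -= MATERIAL[p] + pst(sq)
    return score
```

For example, a white knight on e4 (square 28) against a black knight on a1 (square 0) scores +12: equal material, better placement.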
noobpwnftw
Posts: 560
Joined: Sun Nov 08, 2015 11:10 pm

Re: How far away are we from deep learning Stockfish, Komodo

Post by noobpwnftw »

Let me just re-phrase my one-line sentence into many.

Why NNs are good for Go, as HGM described, is that there is currently no better modeling/scoring method for Go than heuristics from machine learning, plus the nature of the game: the search space gradually shrinks and it is MCTS-friendly. As for the question of whether an NN can beat our current static eval implementations, I don't think so. We already have comprehensive modeling methods for chess.

An NN is also too slow if we need to call the eval a lot (and in chess our search space does not usually shrink). This is due to our tree-search architecture: unlike with MCTS, massively parallel processing in an NN won't benefit us much performance-wise, because we are always waiting on information from the slowest leaf, no matter how you split the tree.

If you look into AlphaGo you will see how it uses its NNs: shallow-depth move ordering (value network) and move pruning (policy network & MCTS fast roll-out).

So, if an NN can beat a shallow A/B search in both speed and precision, we can use it for depth < N move sorting and pruning (e.g. LMR).

That is pretty much all the good I can think of an NN doing for chess.
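A minimal sketch of that use: a policy score drives the sorting and pruning of the move list at shallow depths, while the regular A/B search is left untouched. `policy_score` here is a placeholder heuristic, not a real network:

```python
def policy_score(position, move):
    """Placeholder for an NN policy output in [0, 1); a real engine
    would run a trained network here (ideally batched)."""
    return (hash((position, move)) % 1000) / 1000.0

def order_and_prune(position, moves, keep_fraction=0.7, min_keep=3):
    """Sort moves by policy score, then drop the tail, LMR-style.
    Always keeps at least min_keep moves so a node is never pruned
    down to nothing."""
    ranked = sorted(moves, key=lambda m: policy_score(position, m),
                    reverse=True)
    keep = max(min_keep, int(len(ranked) * keep_fraction))
    return ranked[:keep]
```

Whether something like this pays off depends entirely on the cost of scoring, which is exactly the latency concern raised above.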
gladius
Posts: 568
Joined: Tue Dec 12, 2006 10:10 am
Full name: Gary Linscott

Re: How far away are we from deep learning Stockfish, Komodo

Post by gladius »

ZirconiumX wrote:
Milos wrote:
ZirconiumX wrote:
noobpwnftw wrote:After some garage experiments, I think NN-based eval for chess is pretty much a joke even compared to current PST+MAT eval alone.

At most, it might be good for move pruning and sorting. Prove me wrong.
Gladly.

I don't think the answer to "Are neural networks feasible for computer chess?" is "No", I think it's "Not yet".

AlphaGo required specialised hardware to win at Go, remember.
Answer is No and Never. Bojun is 100% correct.
If you took Giraffe and replaced its eval with SF's eval, it would easily gain a few hundred Elo.
I would disagree. The Stockfish evaluation has been carefully tuned in concert with the search, but the individual evaluation scores are very bad. I think Larry Kaufman said that Stockfish has the evaluation knowledge of an 1800 player.
That may have been true a few years ago, but these days it's actually quite complicated. There is even a wonderful piece of work exploring the different terms that contribute to the eval: https://hxim.github.io/Stockfish-Evaluation-Guide/. Definitely higher than an 1800 player (speaking as a player in that range :).

Of course, the eval and search are tuned together, so the eval doesn't do things the search can do better/cheaper, and the other way around as well.

Deep nets are an incredibly powerful tool, but they are still clearly not competitive with old-school techniques for chess on current computers. As you mention, if the latency issue is solved with GPUs (or TPUs or whatever we end up with) it might be a different story. Even so, I'm doubtful. Chess is just so amenable to relatively cheap heuristics, and 64 square bitboards are almost too perfect a representation for 64 bit CPUs.
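The bitboard point is easy to illustrate: with one 64-bit word per piece set, move generation is a handful of shifts and masks. A standard knight-attack sketch (square 0 = a1, 63 = h8; the file masks stop wrap-around at the board edges):

```python
NOT_A  = 0xFEFEFEFEFEFEFEFE  # clears file a (stops wrap-around)
NOT_H  = 0x7F7F7F7F7F7F7F7F  # clears file h
NOT_AB = 0xFCFCFCFCFCFCFCFC  # clears files a and b
NOT_GH = 0x3F3F3F3F3F3F3F3F  # clears files g and h

def knight_attacks(sq):
    """Bitboard of squares attacked by a knight on `sq` (0 = a1, 63 = h8)."""
    bb = 1 << sq
    return (((bb << 17) & NOT_A)  | ((bb << 15) & NOT_H)  |
            ((bb << 10) & NOT_AB) | ((bb << 6)  & NOT_GH) |
            ((bb >> 17) & NOT_H)  | ((bb >> 15) & NOT_A)  |
            ((bb >> 10) & NOT_GH) | ((bb >> 6)  & NOT_AB))
```

A knight on a1 attacks exactly b3 and c2, and a centralized knight attacks eight squares; on a 64-bit CPU each of these shifts and masks is a single-cycle operation, which is the "almost too perfect" fit described above.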
noobpwnftw
Posts: 560
Joined: Sun Nov 08, 2015 11:10 pm

Re: How far away are we from deep learning Stockfish, Komodo

Post by noobpwnftw »

My idea: if we can use an NN to filter out X% of all possible moves with Y% overall accuracy, while slowing down the engine by no more than Z%, and manage to achieve (1 - X) * Y > Z, that can be an improvement.
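Plugging illustrative numbers into that criterion (these are made up, not measurements): filtering X = 50% of moves at Y = 90% accuracy with a Z = 30% slowdown passes, while the same filter at a 50% slowdown fails:

```python
def net_win(x, y, z):
    """The criterion above, (1 - X) * Y > Z, with X, Y, Z as fractions."""
    return (1 - x) * y > z

print(net_win(0.50, 0.90, 0.30))  # 0.45 > 0.30 -> True
print(net_win(0.50, 0.90, 0.50))  # 0.45 > 0.50 -> False
```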

I just wonder: isn't that what we've always been working on with null move, probcut, history heuristics, and so on? How much more do you expect from an NN, considering we will effectively interpret the same position more than once per eval, resulting in a big value of Z?

One other thing: we can exploit over-fitting and train NNs to replace WDL tablebases, especially the bigger ones. So far those are small in size and probing is fast enough. Let's just trick Ronald de Man into writing a 7+ man generator first. :twisted:
Hai
Posts: 598
Joined: Sun Aug 04, 2013 1:19 pm

Re: How far away are we from deep learning Stockfish, Komodo

Post by Hai »

Lyudmil Tsvetkov wrote:
ZirconiumX wrote:
Milos wrote:
ZirconiumX wrote:
noobpwnftw wrote:After some garage experiments, I think NN-based eval for chess is pretty much a joke even compared to current PST+MAT eval alone.

At most, it might be good for move pruning and sorting. Prove me wrong.
Gladly.

I don't think the answer to "Are neural networks feasible for computer chess?" is "No", I think it's "Not yet".

AlphaGo required specialised hardware to win at Go, remember.
Answer is No and Never. Bojun is 100% correct.
If you took Giraffe and replaced its eval with SF's eval, it would easily gain a few hundred Elo.
I would disagree. The Stockfish evaluation has been carefully tuned in concert with the search, but the individual evaluation scores are very bad. I think Larry Kaufman said that Stockfish has the evaluation knowledge of an 1800 player.

The Achilles' heel of Giraffe is that all of the individual multiplications that go through a neural net to get a result have to be done on the CPU. It'd be nice if we had developed a processor that could perform lots of multiplications in parallel. You know, like a GPU.

But modern GPUs have the problem of all the code needing to be transferred via memory copying. Fortunately, we're working on that.

So the answer is still "not yet".
The similarity between Go and chess is that they are both played on a board. That's exactly where any similarity ends.
Oh, so Go has no tactics? No strategy? No opening theory? They are more similar than you think. I'll accept that Go will never have endgame tablebases, but the two games have a reasonable amount of things in common.
I do not know if and when Larry said that, but SF has the evaluation knowledge of at least a 2300-Elo player.
1800-Elo players do not quite understand what outposts are, or what piece-square tables are (apart from pushing pieces towards the enemy king and the enemy camp), and have just an intuitive notion of candidate passers, and so on. SF, on the other hand, has all this knowledge in its code.

So yes, current top engines have positional knowledge of around 2300 Elo: nothing to boast about, but not rudimentary either.
But this means that the evaluation/positional knowledge is still very bad, even weaker than an international master's. Yet Giraffe is stronger, at least the 2016 version.