How far away are we from deep learning Stockfish, Komodo,...

Discussion of chess software programming and technical issues.

Moderators: hgm, Rebel, chrisw

Dann Corbit
Posts: 12540
Joined: Wed Mar 08, 2006 8:57 pm
Location: Redmond, WA USA

Re: How far away are we from deep learning Stockfish, Komodo

Post by Dann Corbit »

The brain of a honeybee does 10 billion calculations per second (about the same as your retina does).
A whole hive of them represents a pretty staggering calculation potential.
Taking ideas is not a vice, it is a virtue. We have another word for this. It is called learning.
But sharing ideas is an even greater virtue. We have another word for this. It is called teaching.
Lyudmil Tsvetkov
Posts: 6052
Joined: Tue Jun 12, 2012 12:41 pm

Re: How far away are we from deep learning Stockfish, Komodo

Post by Lyudmil Tsvetkov »

Hai wrote:3, 6, 9, 12 months?

Does it make sense to have a
Stockfish
Asmfish
Pedantfish
...
Deep learning Stockfish
Deep learning Komodo
Deep learning Houdini
...
If by deep learning you mean training your eval function based on the outcome of the game (loss, draw, win), that should be more or less similar to just testing your eval terms the usual way: there is no guarantee such training would produce better results, probably quite the opposite.
What is important is that with this technique you can only train already existing terms, and most current engine weaknesses come from not having a sufficiently wide range of useful parameters.

If by deep learning you mean neural networks, increasing the number of interacting terms, that could be achieved both ways, but you still have to specify those additional terms. If SF and other leading engines cannot successfully tune a handful of parameters, with a lot of redundancy necessarily involved, how are they going to manage exponentially more parameters?
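For concreteness, the outcome-based training described above is essentially what engine authors call Texel-style tuning: refit the weights of the terms you already have so that the eval's predicted result matches actual game results. Below is a minimal sketch of the idea, with invented feature vectors standing in for a real game database; it is not code from SF, Giraffe, or any other engine mentioned here.

Code: Select all

// Minimal sketch of outcome-based ("Texel-style") tuning of existing eval
// terms. The feature vectors and results below are made up and stand in for
// a real database of finished games.
#include <cmath>
#include <cstddef>
#include <cstdio>
#include <vector>

struct Sample {
    std::vector<double> features;  // counts of eval terms (material, passers, ...)
    double result;                 // 1 = White win, 0.5 = draw, 0 = Black win
};

// Linear eval in centipawns: dot product of term weights and term counts.
static double evaluate(const std::vector<double>& w, const Sample& s) {
    double e = 0.0;
    for (std::size_t i = 0; i < w.size(); ++i) e += w[i] * s.features[i];
    return e;
}

// Map a centipawn score to an expected game result.
static double expected(double cp, double k) {
    return 1.0 / (1.0 + std::exp(-k * cp));
}

int main() {
    // Tiny synthetic "database": {material, passed pawns, king safety} terms.
    std::vector<Sample> data = {
        {{ 1.0,  0.0,  1.0}, 1.0},
        {{ 0.0,  1.0,  0.0}, 0.5},
        {{-1.0,  0.0, -2.0}, 0.0},
        {{ 2.0, -1.0,  0.0}, 1.0},
    };
    std::vector<double> w = {100.0, 50.0, 20.0};  // initial guesses, centipawns
    const double k = 1.0 / 400.0, lr = 1000.0;

    // Gradient descent on the squared error between predicted and real results.
    for (int iter = 0; iter < 2000; ++iter) {
        for (std::size_t j = 0; j < w.size(); ++j) {
            double grad = 0.0;
            for (const Sample& s : data) {
                double p = expected(evaluate(w, s), k);
                grad += 2.0 * (p - s.result) * p * (1.0 - p) * k * s.features[j];
            }
            w[j] -= lr * grad / data.size();
        }
    }
    std::printf("tuned weights: %.1f %.1f %.1f\n", w[0], w[1], w[2]);
    return 0;
}

Note that such a tuner can only reweight the terms it is given; it cannot invent new ones, which is exactly the limitation pointed out above.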

I guess the only reason some consider Go more complex than chess is the significantly larger board, as well as the much greater number of stones as the game progresses. Conceptually, however, with similar board sizes chess should be much more complex. So techniques that could be useful for Go with major hardware will definitely fail for chess.

So far, each and every advance in AI has been exclusively due to humans, and that will remain the case in the future. So, in order to achieve better results, I guess we need more and brighter humans involved in the first place. :)
Hai
Posts: 598
Joined: Sun Aug 04, 2013 1:19 pm

Re: How far away are we from deep learning Stockfish, Komodo

Post by Hai »

2r2rk1/1bpR1p2/1pq1pQp1/p3P2p/P1PR3P/5N2/2P2PPK/8 w - - 0 32
In the position above, Stockfish with 6 cores and depth 40 can't find Kg3 :roll:.
Giraffe with only one core found Kg3 at depth 22 :lol:.
A much better result, and much faster.

It looks like no matter how strong Stockfish becomes, it can out-calculate its opponents but will never find genius moves that are based on correct evaluation.

How can I change the number of cores in Giraffe?
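(For reference: most UCI engines take the core count through the standard Threads option, set either from the GUI or directly over the UCI protocol; whether Giraffe actually exposes such an option I am not sure. A bare console test of the position above would look roughly like this:)

Code: Select all

uci
setoption name Threads value 1
ucinewgame
position fen 2r2rk1/1bpR1p2/1pq1pQp1/p3P2p/P1PR3P/5N2/2P2PPK/8 w - - 0 32
go depth 22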
User avatar
hgm
Posts: 27795
Joined: Fri Mar 10, 2006 10:06 am
Location: Amsterdam
Full name: H G Muller

Re: How far away are we from deep learning Stockfish, Komodo

Post by hgm »

You don't seem to realize how it works. Because resources are finite and well below the size of the game tree of Chess, there will always be positions where you miss the relevant line, no matter how strong the engine. What makes the engine strong is that it spends its resources more wisely, by focusing on the most likely best lines in the most common situations. For every position that Stockfish cannot solve but Giraffe can, there will be thousands where the reverse is true.

Giraffe is able to solve your position not because it has 'correct evaluation', but because it is a weaker engine, spending effort where it would usually not pay off.
Last edited by hgm on Sat May 20, 2017 10:54 am, edited 1 time in total.
User avatar
cdani
Posts: 2204
Joined: Sat Jan 18, 2014 10:24 am
Location: Andorra

Re: How far away are we from deep learning Stockfish, Komodo

Post by cdani »

Hai wrote:2r2rk1/1bpR1p2/1pq1pQp1/p3P2p/P1PR3P/5N2/2P2PPK/8 w - - 0 32
In the position above, Stockfish with 6 cores and depth 40 can't find Kg3 :roll:.
Giraffe with only one core found Kg3 at depth 22 :lol:.
A much better result, and much faster.

It looks like no matter how strong Stockfish becomes, it can out-calculate its opponents but will never find genius moves that are based on correct evaluation.

How can I change the number of cores in Giraffe?

Current development version of Andscacs needs 42 seconds:

Code: Select all

info depth 29 seldepth 54 score cp 103 nodes 37798584 nps 1060506 tbhits 0 time 35642 pv f6f4 g8g7 c2c3 c8a8 d4d2 a8e8 h2g1 b7c8 d7d4 f7f5 e5f6 f8f6 f4e5 e8f8 d4d8 c6c5 e5c5 b6c5 f3e5 f8d8 d2d8 f6f8 d8f8 g7f8 e5g6 f8f7 g6e5 f7f6 e5c6 c8d7 c6a5 d7a4 a5b7 a4b3 b7c5 b3c4 f2f3 f6e5 g1f2
info depth 29 currmove h2g1 currmovenumber 3
info depth 29 currmove f3e1 currmovenumber 4
info depth 29 currmove c4c5 currmovenumber 5
info depth 29 currmove h2h1 currmovenumber 6
info depth 29 currmove d7e7 currmovenumber 7
info depth 29 currmove f6g5 currmovenumber 8
info depth 29 currmove f3d2 currmovenumber 9
info depth 29 currmove d4d3 currmovenumber 10
info depth 29 currmove h2g3 currmovenumber 11
info depth 29 seldepth 48 score cp 154 lowerbound nodes 44605417 nps 1054850 tbhits 0 time 42286 pv h2g3
info depth 29 currmove h2g3 currmovenumber 1
info depth 29 seldepth 45 score cp 190 lowerbound nodes 46387607 nps 1056520 tbhits 0 time 43906 pv h2g3
info depth 29 currmove h2g3 currmovenumber 1
info depth 29 seldepth 46 score cp 244 lowerbound nodes 48382210 nps 1058483 tbhits 0 time 45709 pv h2g3
info depth 29 currmove h2g3 currmovenumber 1
info depth 29 seldepth 44 score cp 325 lowerbound nodes 50770524 nps 1058733 tbhits 0 time 47954 pv h2g3
info depth 29 currmove h2g3 currmovenumber 1
info depth 29 seldepth 43 score cp 446 lowerbound nodes 53427971 nps 1057477 tbhits 0 time 50524 pv h2g3
info depth 29 currmove h2g3 currmovenumber 1
info depth 29 seldepth 43 score cp 627 lowerbound nodes 55976147 nps 1059231 tbhits 0 time 52846 pv h2g3
info depth 29 currmove h2g3 currmovenumber 1
stop
info nodes 57763932 nps 1055781 time 54712
mar
Posts: 2554
Joined: Fri Nov 26, 2010 2:00 pm
Location: Czech Republic
Full name: Martin Sedlak

Re: How far away are we from deep learning Stockfish, Komodo

Post by mar »

Hai wrote:2r2rk1/1bpR1p2/1pq1pQp1/p3P2p/P1PR3P/5N2/2P2PPK/8 w - - 0 32
In the position above, Stockfish with 6 cores and depth 40 can't find Kg3 :roll:.
Giraffe with only one core found Kg3 at depth 22 :lol:.
A much better result, and much faster.

It looks like no matter how strong Stockfish becomes, it can out-calculate its opponents but will never find genius moves that are based on correct evaluation.

How can I change the number of cores in Giraffe?
So what? Positions happen; Cheng finds Kg3 at depth 12 in 0.3 seconds, single core, 4M hash.
ZirconiumX
Posts: 1334
Joined: Sun Jul 17, 2011 11:14 am

Re: How far away are we from deep learning Stockfish, Komodo

Post by ZirconiumX »

Milos wrote:
ZirconiumX wrote:
noobpwnftw wrote:After some garage experiments, I think NN-based eval for chess is pretty much a joke even compared to current PST+MAT eval alone.

At most, it might be good for move pruning and sorting, prove me wrong.
Gladly.

I don't think the answer to "Are neural networks feasible for computer chess?" is "No", I think it's "Not yet".

AlphaGo required specialised hardware to win at Go, remember.
Answer is No and Never. Bojun is 100% correct.
If you took Giraffe and replaced its eval with SF's eval, it would easily gain a few hundred Elo.
I would disagree. The Stockfish evaluation has been carefully tuned in concert with the search, but the individual evaluation scores are very bad. I think Larry Kaufman said that Stockfish has the evaluation knowledge of an 1800 player.

The Achilles heel of Giraffe is that all of the individual multiplications that go through a neural net to get a result have to be done on the CPU. It'd be nice if we had developed a processor that could perform lots of multiplication in parallel. You know, like, a GPU.

But modern GPUs have the problem of all the code needing to be transferred via memory copying. Fortunately, we're working on that.

So the answer is still "not yet".
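To put a rough number on that bottleneck, here is a bare-bones sketch of the multiply-accumulate work a single dense layer costs on the CPU for every evaluated position. The layer sizes are invented for illustration and have nothing to do with Giraffe's real network.

Code: Select all

// Rough sketch of the per-position work a CPU-only neural-net eval has to do:
// one dense layer is already N_in * N_out multiply-adds.
#include <cstddef>
#include <cstdio>
#include <vector>

// y = W * x + b followed by a ReLU; this is the inner loop a GPU would batch.
static std::vector<float> dense_relu(const std::vector<float>& x,
                                     const std::vector<std::vector<float>>& W,
                                     const std::vector<float>& b) {
    std::vector<float> y(b);
    for (std::size_t o = 0; o < W.size(); ++o) {
        for (std::size_t i = 0; i < x.size(); ++i) y[o] += W[o][i] * x[i];
        if (y[o] < 0.0f) y[o] = 0.0f;  // ReLU
    }
    return y;
}

int main() {
    const std::size_t inputs = 363, hidden = 128;   // made-up sizes
    std::vector<float> features(inputs, 0.5f);      // board features, one position
    std::vector<std::vector<float>> W1(hidden, std::vector<float>(inputs, 0.01f));
    std::vector<float> b1(hidden, 0.0f);

    std::vector<float> h = dense_relu(features, W1, b1);
    // Every node searched pays roughly inputs * hidden multiplications just for
    // this first layer, which is the per-eval cost described above as the Achilles heel.
    std::printf("first hidden activation: %f (%zu multiplications per eval)\n",
                h[0], inputs * hidden);
    return 0;
}

Batching millions of such products is exactly what a GPU is built for; the catch, as noted above, is getting positions to and from the card fast enough inside an alpha-beta search.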
The similarity between Go and chess is that they are both played on a board. That's exactly where any similarity ends.
Oh, so Go has no tactics? No strategy? No opening theory? They are more similar than you think. I'll accept that Go will never have endgame tablebases, but the two games have a reasonable number of things in common.
Some believe in the almighty dollar.

I believe in the almighty printf statement.
User avatar
hgm
Posts: 27795
Joined: Fri Mar 10, 2006 10:06 am
Location: Amsterdam
Full name: H G Muller

Re: How far away are we from deep learning Stockfish, Komodo

Post by hgm »

Lyudmil Tsvetkov wrote:I guess the only reason some consider Go more complex than chess is the significantly larger board, as well as the much greater number of stones as the game progresses. Conceptually, however, with similar board sizes chess should be much more complex. So techniques that could be useful for Go with major hardware will definitely fail for chess.
No.

Go is more difficult than chess-like games because there doesn't exist a simple heuristic evaluation, like material + PST in Chess. Even mobility (number of legal moves), which is the AI researcher's poor man's solution to evaluation in cases where they don't have a clue (as in Reversi), is simply a function of the turn number in Go (namely the number of empty grid points) and doesn't depend on how you play.
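For readers who have never written an engine, the "material + PST" baseline referred to above fits in a few dozen lines. This is a generic sketch with made-up values, not taken from any particular engine.

Code: Select all

// Minimal sketch of a "material + PST" heuristic evaluation. Piece values and
// the single illustrative knight table are made up for the example.
#include <cstdio>
#include <cstdlib>

enum Piece { EMPTY = 0, PAWN, KNIGHT, BISHOP, ROOK, QUEEN, KING };

static const int material[7] = { 0, 100, 320, 330, 500, 900, 0 };

// Piece-square table for knights, indexed with a1 = 0: central squares good,
// rim squares bad. Black uses the vertically mirrored index (63 - sq).
static const int knight_pst[64] = {
    -50,-40,-30,-30,-30,-30,-40,-50,
    -40,-20,  0,  0,  0,  0,-20,-40,
    -30,  0, 10, 15, 15, 10,  0,-30,
    -30,  5, 15, 20, 20, 15,  5,-30,
    -30,  0, 15, 20, 20, 15,  0,-30,
    -30,  5, 10, 15, 15, 10,  5,-30,
    -40,-20,  0,  5,  5,  0,-20,-40,
    -50,-40,-30,-30,-30,-30,-40,-50
};

// board[sq]: positive = white piece, negative = black piece (Piece values).
static int evaluate(const int board[64]) {
    int score = 0;
    for (int sq = 0; sq < 64; ++sq) {
        int p = board[sq];
        if (p == EMPTY) continue;
        int side = p > 0 ? 1 : -1;
        int type = std::abs(p);
        score += side * material[type];
        if (type == KNIGHT)
            score += side * knight_pst[side > 0 ? sq : 63 - sq];
    }
    return score;  // centipawns, from White's point of view
}

int main() {
    int board[64] = { 0 };
    board[1]  =  KNIGHT;   // white knight still on b1
    board[36] = -KNIGHT;   // black knight centralized on e5
    std::printf("eval: %d cp\n", evaluate(board));  // central knight outweighs the rim one
    return 0;
}

The point of the comparison is that nothing this cheap and this informative exists for Go: counting stones or legal moves says very little about who is winning.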

I just made an engine for Tenjiku Shogi, which is a Chess variant played on a 16x16 board with 78 pieces per side, where the number of moves in a typical middle-game position is similar to that in Go. It was no problem at all.
Lyudmil Tsvetkov
Posts: 6052
Joined: Tue Jun 12, 2012 12:41 pm

Re: How far away are we from deep learning Stockfish, Komodo

Post by Lyudmil Tsvetkov »

hgm wrote:
Lyudmil Tsvetkov wrote:I guess the only reason some consider Go more complex than chess is the significantly larger board, as well as the much greater number of stones as the game progresses. Conceptually, however, with similar board sizes chess should be much more complex. So techniques that could be useful for Go with major hardware will definitely fail for chess.
No.

Go is more difficult than chess-like games because there doesn't exist a simple heuristic evaluation, like material + PST in Chess. Even mobility (number of legal moves), which is the AI researcher's poor man's solution to evaluation in cases where they don't have a clue (as in Reversi), is simply a function of the turn number in Go (namely the number of empty grid points) and doesn't depend on how you play.

I just made an engine for Tenjiku Shogi, which is a Chess variant played on a 16x16 board with 78 pieces per side, where the number of moves in a typical middle-game position is similar to that in Go. It was no problem at all.
Go on a chess-like 8x8 board will be solved at least 100 times faster than chess.

So yes, complexity is a matter of size.

I am not very familiar with Go, but there are always useful heuristic functions to be found.
Lyudmil Tsvetkov
Posts: 6052
Joined: Tue Jun 12, 2012 12:41 pm

Re: How far away are we from deep learning Stockfish, Komodo

Post by Lyudmil Tsvetkov »

ZirconiumX wrote:
Milos wrote:
ZirconiumX wrote:
noobpwnftw wrote:After some garage experiments, I think NN-based eval for chess is pretty much a joke even compared to current PST+MAT eval alone.

At most, it might be good for move pruning and sorting, prove me wrong.
Gladly.

I don't think the answer to "Are neural networks feasible for computer chess?" is "No", I think it's "Not yet".

AlphaGo required specialised hardware to win at Go, remember.
Answer is No and Never. Bojun is 100% correct.
If you took Giraffe and replaced its eval with SF's eval, it would easily gain a few hundred Elo.
I would disagree. The Stockfish evaluation has been carefully tuned in concert with the search, but the individual evaluation scores are very bad. I think Larry Kaufman said that Stockfish has the evaluation knowledge of an 1800 player.

The Achilles heel of Giraffe is that all of the individual multiplications that go through a neural net to get a result have to be done on the CPU. It'd be nice if we had developed a processor that could perform lots of multiplication in parallel. You know, like, a GPU.

But modern GPUs have the problem of all the code needing to be transferred via memory copying. Fortunately, we're working on that.

So the answer is still "not yet".
The similarity between Go and chess is that they are both played on a board. That's exactly where any similarity ends.
Oh, so Go has no tactics? No strategy? No opening theory? They are more similar than you think. I'll accept that Go will never have endgame tablebases, but the two games have a reasonable number of things in common.
I do not know if and when Larry said that, but SF has the evaluation knowledge of at least a 2300-Elo player.
1800-Elo players do not quite understand what outposts are or what piece-square tables are (apart from pushing pieces towards the enemy king and into the enemy camp), and they have only an intuitive notion of candidate passers, and so on. SF, on the other hand, has all this knowledge in its code.

So yes, current top engines have positional knowledge of around 2300 Elo: nothing to boast about, but not rudimentary either.
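As a concrete example of the kind of coded knowledge meant here, below is a generic bitboard sketch of a passed-pawn test (a close relative of the candidate-passer idea mentioned above). It is illustrative only and is not taken from SF or any other engine.

Code: Select all

// Sketch of one piece of coded eval knowledge: detecting a passed white pawn
// with bitboards. Squares are a1 = 0 .. h8 = 63; the mask covers the pawn's
// file and the adjacent files, on all ranks in front of it.
#include <cstdint>
#include <cstdio>

static uint64_t file_bb(int file) { return 0x0101010101010101ULL << file; }

static uint64_t passed_mask_white(int sq) {
    int file = sq % 8, rank = sq / 8;
    uint64_t span = file_bb(file);
    if (file > 0) span |= file_bb(file - 1);
    if (file < 7) span |= file_bb(file + 1);
    // keep only the ranks strictly in front of the pawn
    return span & (~0ULL << (8 * (rank + 1)));
}

static bool is_passed_white(int sq, uint64_t black_pawns) {
    return (passed_mask_white(sq) & black_pawns) == 0;
}

int main() {
    // White pawn on d5 (sq 35); black pawns on e6 (44) and h7 (55).
    uint64_t black_pawns = (1ULL << 44) | (1ULL << 55);
    std::printf("d5 passed: %d\n", is_passed_white(35, black_pawns));   // 0: e6 blocks it
    std::printf("after removing e6: %d\n",
                is_passed_white(35, black_pawns & ~(1ULL << 44)));      // 1: now passed
    return 0;
}

Outpost or candidate-passer tests can be built from the same kind of precomputed masks.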