Lyudmil Tsvetkov wrote:
> go on a chess-like 8x8 board will be solved at least 100 times faster than chess.

As Chess will never be solved in this Universe, that doesn't mean very much. It also doesn't seem to be based on anything.
> As Chess will never be solved in this Universe, that doesn't mean very much.

Applying Moore's Law to quantum computers, I have hope to see an engine with perfect play. Imagine you have a billion qubits or so.
Hai wrote:
> Does it make sense to have a
> Stockfish
> Asmfish
> Pedantfish
> ...
> Deep learning Stockfish
> Deep learning Komodo
> Deep learning Houdini
> ...
> 3, 6, 9, 12 months?

No, that simply makes no sense at all. It's like asking whether it makes sense to have a combustion engine run on electricity.
Hai wrote:
> 2r2rk1/1bpR1p2/1pq1pQp1/p3P2p/P1PR3P/5N2/2P2PPK/8 w - - 0 32
> In the position above, Stockfish with 6 cores at depth 40 can't find Kg3.
> Giraffe with only one core found Kg3 at depth 22.
> = Much better result and much faster.

My patzer engine with 1 core finds Kg3 at depth 14 after 2.486 seconds. (If it did not prune at all, it would probably have found it at a lower depth.)

> How can I change the cores in Giraffe?

I suppose with vi and gcc.
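For readers who want to look at the test position themselves, here is a minimal stand-alone sketch (plain Python, no chess library) that expands the piece-placement field of the quoted FEN into a printable board. It is only a viewer, not an engine:

```python
def fen_to_board(fen: str) -> list:
    """Expand the piece-placement field of a FEN string into 8 rank strings."""
    placement = fen.split()[0]
    ranks = []
    for row in placement.split("/"):
        expanded = ""
        for ch in row:
            # A digit means that many consecutive empty squares.
            expanded += "." * int(ch) if ch.isdigit() else ch
        ranks.append(expanded)
    return ranks

fen = "2r2rk1/1bpR1p2/1pq1pQp1/p3P2p/P1PR3P/5N2/2P2PPK/8 w - - 0 32"
board = fen_to_board(fen)
for rank in board:          # printed from rank 8 down to rank 1
    print(rank)
# The white king ("K") sits on h2, so Kg3 is the king stepping up to g3.
```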
smatovic wrote:
> Applying Moore's Law to quantum computers, I have hope to see an engine with perfect play. Imagine you have a billion qubits or so.

Quantum computers help as much as perfect move ordering. Not enough.
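The comparison is apt because both give roughly a quadratic speedup: perfect move ordering lets alpha-beta search about b^(d/2) nodes instead of the minimax b^d, and Grover-style quantum search is likewise quadratic. A back-of-the-envelope calculation (illustrative figures: branching factor 35, game length 80 plies) shows why even the square root is out of reach:

```python
# Assumed, illustrative figures for chess: branching factor ~35, ~80 plies.
b, d = 35, 80

brute_force = b ** d         # full minimax tree
quadratic   = b ** (d // 2)  # square root of it: perfect ordering / Grover

print(f"brute force: ~10^{len(str(brute_force)) - 1}")   # ~10^123 nodes
print(f"quadratic  : ~10^{len(str(quadratic)) - 1}")     # ~10^61 nodes
# Even after the quadratic speedup, ~10^61 nodes is far beyond any
# conceivable hardware, quantum or otherwise.
```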
Hai wrote:
> 2r2rk1/1bpR1p2/1pq1pQp1/p3P2p/P1PR3P/5N2/2P2PPK/8 w - - 0 32
> In the position above, Stockfish with 6 cores at depth 40 can't find Kg3.
> Giraffe with only one core found Kg3 at depth 22.
> = Much better result and much faster.
> It looks like no matter how strong Stockfish gets, it can outcalculate its opponents but never find genius moves based on correct evaluation.
> How can I change the cores in Giraffe?

In my experiments I found Giraffe's eval significantly weaker than that of Fruit 2.1, which is fairly basic. The main strength of Giraffe, paradoxically, is the regular search adopted from other chess engines. I also tried an experiment with the Sungorus engine, which has only broken PSTs (piece-square tables with fourfold symmetry that made the engine's play really painful to watch) and basic material, combined with the Andscacs search, and it outperformed Giraffe in game play. Sure, the Andscacs search is much stronger than Giraffe's, but still, Giraffe's eval is probably not much above PST+MAT.
noobpwnftw wrote:
> After some garage experiments, I think NN-based eval for chess is pretty much a joke even compared to a current PST+MAT eval alone. At most, it might be good for move pruning and sorting; prove me wrong.

ZirconiumX wrote:
> Gladly. I don't think the answer to "Are neural networks feasible for computer chess?" is "No"; I think it's "Not yet". AlphaGo required specialised hardware to win at Go, remember.

Milos wrote:
> Answer is No and Never. Bojun is 100% correct. If you took Giraffe and replaced its eval with SF's eval, it would gain a few hundred Elo easily.

ZirconiumX wrote:
> I would disagree. The Stockfish evaluation has been carefully tuned in concert with the search, but the individual evaluation scores are very bad. I think Larry Kaufman said that Stockfish has the evaluation knowledge of an 1800 player.

That may have been true a few years ago, but these days it's actually quite complicated. There is even a wonderful piece of work exploring the different terms that contribute to the eval here: https://hxim.github.io/Stockfish-Evaluation-Guide/. Definitely higher than an 1800 player (speaking as a player in that range).
Lyudmil Tsvetkov wrote:
> I do not know if and when Larry said that, but SF has the evaluation knowledge of at least a 2300-Elo player.

But this means that the evaluation/positional knowledge is still very bad, even weaker than an international master's. And Giraffe is stronger, at least the 2016 version.
The Achilles heel of Giraffe is that all of the individual multiplications that go through a neural net to get a result have to be done on the CPU. It'd be nice if we had developed a processor that could perform lots of multiplications in parallel. You know, like a GPU.

But modern GPUs have the problem that all the data needs to be transferred via memory copying. Fortunately, we're working on that.
So the answer is still "not yet".
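To make the multiplication count concrete, here is a small sketch of the dense-layer arithmetic a net-based eval must perform at every node searched. The layer sizes are made up for illustration and are not Giraffe's actual architecture:

```python
# A single dense layer is one matrix-vector product: out = W @ x + bias.
# On a CPU this is done one multiply at a time; a GPU's parallel
# multipliers are built for exactly this workload.

def dense(W, x, b):
    """Naive CPU matrix-vector multiply: each row of W dotted with x, plus bias."""
    return [sum(wij * xj for wij, xj in zip(row, x)) + bi
            for row, bi in zip(W, b)]

inputs, hidden = 363, 64             # illustrative: feature vector -> hidden layer
W = [[0.01] * inputs for _ in range(hidden)]
b = [0.0] * hidden
x = [1.0] * inputs

y = dense(W, x, b)
print(len(y))                        # 64 outputs
print(inputs * hidden)               # 23232 multiplications for this layer alone
```

And that cost is paid per node: multiply it by the millions of nodes per second a search wants to visit, and the CPU bottleneck, and the appeal of GPU offload, is obvious.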
> Similarity between Go and chess is that they are both played at the board. That's exactly where any similarity ends.

Oh, so Go has no tactics? No strategy? No opening theory? They are more similar than you think. I'll accept that Go will never have endgame tablebases, but the two games have a reasonable amount in common.
1800-Elo players do not quite understand what outposts are or what piece-square tables represent (apart from pushing pieces towards the enemy king and the enemy camp), have only an intuitive notion of candidate passers, and so on. SF, on the other hand, has all this knowledge in its code.

So yes, current top engines have positional knowledge of around 2300: nothing to boast about, but not rudimentary either.