The next revolution in computer chess?


dkappe
Posts: 1632
Joined: Tue Aug 21, 2018 7:52 pm
Full name: Dietrich Kappe

Re: The next revolution in computer chess?

Post by dkappe »

While waiting on Toga II to finish producing data, I trained up a net on 200M positions from one of my personal projects. It looks to be about even with the more extensively trained gk0712 net, but not quite up to the standard of the Sv nets.

[pgn]
[Event "?"]
[Site "?"]
[Date "2020.07.25"]
[Round "1"]
[White "nightnurse0.1-1000"]
[Black "gk0712"]
[Result "1-0"]
[ECO "A60"]
[GameDuration "00:04:56"]
[GameEndTime "2020-07-25T02:24:03.038 CDT"]
[GameStartTime "2020-07-25T02:19:06.519 CDT"]
[Opening "Benoni defense"]
[PlyCount "201"]
[TimeControl "60+1"]

1. d4 {book} Nf6 {book} 2. c4 {book} c5 {book} 3. d5 {book} e6 {book}
4. Nc3 {+0.33/19 1.9s} d6 {-1.01/18 2.2s} 5. h3 {+0.30/17 1.3s}
Nbd7 {-0.83/20 2.0s} 6. e4 {+0.45/19 1.4s} Be7 {-0.83/16 0.76s}
7. Nf3 {+0.45/20 5.5s} Nf8 {-0.76/19 1.2s} 8. Bd3 {+0.37/20 1.3s}
Ng6 {-0.77/19 3.1s} 9. Nh2 {+0.36/20 1.4s} a6 {-0.76/19 4.2s}
10. a4 {+0.34/21 3.2s} e5 {-0.60/19 2.4s} 11. a5 {+0.32/18 4.3s}
h6 {-0.61/20 3.9s} 12. Bd2 {+0.36/18 3.2s} O-O {-0.66/17 1.2s}
13. Bc2 {+0.41/17 1.1s} Nh7 {-0.56/18 1.6s} 14. Na4 {+0.44/18 1.4s}
Bg5 {-0.45/20 1.3s} 15. Bc3 {+0.50/18 1.4s} Rb8 {-1.38/23 14s}
16. Nb6 {+0.53/18 1.3s} Ne7 {-1.16/21 4.0s} 17. Nf1 {+0.44/20 4.1s}
f5 {-1.18/19 0.74s} 18. h4 {+0.57/20 1.5s} Bf6 {-1.27/20 2.2s}
19. g3 {+0.48/19 1.9s} f4 {-1.26/19 2.3s} 20. Ra3 {+0.51/20 4.9s}
Ng6 {-0.97/19 3.4s} 21. Qh5 {+0.55/18 1.2s} Ne7 {-1.41/22 8.4s}
22. Qe2 {+0.51/22 4.5s} Rf7 {-1.25/19 1.9s} 23. Kd1 {+0.56/19 1.7s}
f3 {-1.11/19 2.7s} 24. Qxf3 {+0.52/21 3.2s} Bxh4 {-0.87/16 0.71s}
25. Qe2 {+0.58/23 0.94s} Bg5 {-0.89/19 1.4s} 26. Ba4 {+0.53/22 5.1s}
Nf6 {-0.77/20 2.2s} 27. Be1 {+0.51/20 1.3s} Bd7 {-0.79/21 1.1s}
28. Nxd7 {+0.50/21 1.2s} Nxd7 {-0.99/21 1.7s} 29. Nh2 {+0.49/20 1.3s}
Ng6 {-0.97/19 0.99s} 30. Nf3 {+0.53/21 2.9s} Be7 {-1.27/19 2.5s}
31. Bd2 {+0.51/20 3.4s} Ndf8 {-1.14/20 1.4s} 32. Rb3 {+0.50/20 2.0s}
Nh7 {-1.07/24 3.5s} 33. Kc2 {+0.50/20 2.2s} Ng5 {-0.94/20 0.94s}
34. Nxg5 {+0.46/20 2.5s} Bxg5 {-0.91/22 1.3s} 35. Be1 {+0.49/24 3.4s}
Nf8 {-0.89/20 0.95s} 36. Qg4 {+0.50/21 1.4s} Qc7 {-0.69/19 1.9s}
37. Kb1 {+0.47/24 1.1s} Rd8 {-0.75/20 3.5s} 38. Ka2 {+0.45/23 2.3s}
Nd7 {-0.81/20 1.7s} 39. Qe6 {+0.44/23 0.69s} Nf8 {-0.93/19 0.45s}
40. Qg4 {+0.35/23 3.3s} Nd7 {-0.82/21 1.6s} 41. Qe6 {+0.34/22 0.87s}
Nf8 {-0.78/20 0.53s} 42. Qh3 {+0.34/24 2.4s} Nh7 {-0.93/22 2.2s}
43. Ra3 {+0.34/21 2.3s} Nf6 {-0.87/18 1.2s} 44. f3 {+0.34/23 0.95s}
Rb8 {-0.62/17 0.28s} 45. Qe6 {+0.34/23 0.99s} Nd7 {-0.55/17 0.51s}
46. Rb3 {+0.34/24 0.67s} Nf8 {-0.64/22 2.2s} 47. Qh3 {+0.34/24 1.3s}
Nd7 {-0.67/21 0.91s} 48. Kb1 {+0.34/23 0.52s} Rbf8 {-0.69/21 0.32s}
49. Qe6 {+0.34/25 0.82s} Rd8 {-1.00/23 3.7s} 50. Rf1 {+0.34/24 0.73s}
Nf8 {-0.89/17 0.37s} 51. Qg4 {+0.34/24 0.81s} Qe7 {-0.86/19 0.56s}
52. Bc3 {+0.46/20 1.9s} Ra8 {-0.87/19 0.92s} 53. Rb6 {+0.42/18 1.5s}
Rb8 {-0.85/17 0.29s} 54. Kc2 {+0.41/18 0.96s} Rf6 {-0.86/20 1.1s}
55. Re1 {+0.41/18 1.6s} Rg6 {-1.42/21 2.2s} 56. Qh3 {+0.42/19 0.60s}
Qf7 {-1.30/18 0.28s} 57. Rf1 {+0.44/18 0.62s} Qe7 {-1.25/18 0.40s}
58. Kb1 {+0.43/18 0.98s} Be3 {-0.85/19 0.51s} 59. f4 {+0.58/20 2.5s}
exf4 {-0.74/21 1.4s} 60. gxf4 {+0.74/20 0.41s} Qxe4+ {-0.84/18 0.34s}
61. Bc2 {+0.65/20 0.92s} Qxc4 {-0.43/19 0.46s} 62. Bxg6 {+0.91/21 0.57s}
Nxg6 {-1.22/20 0.53s} 63. Qe6+ {+0.69/22 5.8s} Kh7 {-1.34/18 0.80s}
64. Rf3 {+0.60/17 0.34s} Qe2 {-0.90/19 0.65s} 65. Qe4 {+0.68/16 0.31s}
Qd1+ {-0.88/21 2.1s} 66. Ka2 {+0.68/7 0s} Bd4 {-0.80/21 0.51s}
67. Bxd4 {+0.80/17 0.39s} Qa4+ {-0.93/20 0.62s} 68. Ra3 {+0.91/18 0.46s}
Qc4+ {-2.02/23 2.5s} 69. Kb1 {+1.01/16 0.54s} Qf1+ {-1.70/20 0.72s}
70. Ka2 {+0.89/22 3.8s} Qc4+ {-2.00/21 0.76s} 71. Kb1 {+0.89/16 0.21s}
Qf1+ {-1.87/20 1.7s} 72. Kc2 {+0.85/20 0.90s} cxd4 {-1.44/18 0.35s}
73. Qxd4 {+0.89/19 0.89s} Nxf4 {-0.95/18 1.7s} 74. Rg3 {+1.01/17 0.77s}
Qe2+ {-1.34/18 0.92s} 75. Kb3 {+0.71/20 1.2s} Qe5 {-1.50/21 1.1s}
76. Qxe5 {+1.03/18 0.56s} dxe5 {-1.60/18 1.3s} 77. d6 {+0.92/18 0.65s}
Nd5 {-2.25/19 2.3s} 78. Kc4 {+0.97/17 0.46s} Nf6 {-2.06/16 0.92s}
79. Rd3 {+1.25/17 0.80s} Nd7 {-1.94/18 1.4s} 80. Rdb3 {+1.31/17 2.2s}
Nxb6+ {-0.79/13 0.31s} 81. axb6 {+1.52/17 0.65s} g5 {-3.95/17 3.1s}
82. Kd5 {+1.77/17 0.51s} Rd8 {-4.66/18 1.4s} 83. Kxe5 {+1.79/17 0.52s}
h5 {-3.48/16 0.85s} 84. Ke6 {+2.33/18 0.54s} h4 {-6.12/18 1.2s}
85. d7 {+2.72/17 0.58s} a5 {-7.33/16 0.97s} 86. Rc3 {+3.36/18 0.61s}
Kg6 {-8.96/18 0.97s} 87. Rc8 {+5.13/19 0.66s} Rxd7 {-6.74/15 0.46s}
88. Kxd7 {+5.89/17 0.73s} g4 {-12.40/15 1.5s} 89. Rh8 {+9.93/18 0.70s}
Kg5 {-13.91/17 0.77s} 90. Kc7 {+154.05/23 1.4s} g3 {-13.24/16 0.65s}
91. Kxb7 {+154.07/22 1.2s} Kf5 {-154.07/20 0.76s} 92. Rg8 {+154.08/26 0.69s}
Ke6 {-154.08/20 0.64s} 93. Ka6 {+154.09/21 0.69s} a4 {-154.10/17 0.44s}
94. b7 {+154.11/20 1.7s} g2 {-154.11/16 0.19s} 95. Rxg2 {+M17/19 0.70s}
Kf5 {-M14/18 1.2s} 96. b8=Q {+M13/22 0.70s} a3 {-M12/22 0.31s}
97. Rf2+ {+M11/23 0.70s} Ke4 {-M10/26 0.42s} 98. Kb5 {+M7/47 0.69s}
Kd4 {-M6/245 0.34s} 99. Qd6+ {+M5/245 0.62s} Ke4 {-M4/245 0.015s}
100. Kc4 {+M3/245 0.027s} a2 {-M2/245 0.009s}
101. Qf4# {+M1/245 0.013s, White mates} 1-0

[/pgn]
towforce
Posts: 11858
Joined: Thu Mar 09, 2006 12:57 am
Location: Birmingham UK

Re: The next revolution in computer chess?

Post by towforce »

Ovyron wrote: Sat Jul 25, 2020 1:48 am
The sad truth is that when the day comes that you have in your hands a binary that plays perfect moves for any chess position, along with its source code, you'll have no idea why it does what it does or how it works; all chess knowledge and its concepts are just human constructs.

Unfortunately, this could well be true - and it's a very important point.

For it not to be true:

1. There would have to exist simple rules for evaluating chess positions that we haven't thought of yet.

2. We would need a new way of representing chess positions, also not yet thought of, that could enable these rules to be captured by a process of emergence (see the thread on solving chess, where I'm going to discuss this some more).
corres
Posts: 3657
Joined: Wed Nov 18, 2015 11:41 am
Location: hungary

Re: The next revolution in computer chess?

Post by corres »

Ovyron wrote: Fri Jul 24, 2020 8:38 pm
corres wrote: Fri Jul 24, 2020 8:08 pm
the chess power of default Stockfish (dev) and Stockfish NNUE is nearly the same
NNUE showcases the potential of improving Stockfish's eval: it has a cost and makes the search slower, but nothing stops Stockfish's eval from being improved to reach that same strength without the slowdown. But then the new, improved eval can be used for a new NNUE that is again slower but stronger, creating a virtuous cycle.
So it seems like a snake eating its own tail.
It is a pity, but the NN of NNUE grows faster than the speed of (default) Stockfish.
What is slower is, generally, weaker.
Ovyron
Posts: 4557
Joined: Tue Jul 03, 2007 4:30 am

Re: The next revolution in computer chess?

Post by Ovyron »

corres wrote: Sat Jul 25, 2020 4:59 pm
It is a pity, but the NN of NNUE grows faster than the speed of (default) Stockfish.
This can't go on forever, not with the exponential nature of chess. NNUE plateaus, and it needs a better eval to improve. That improvement is going to come from someone who translates the concepts of NNUE back into Stockfish dev's eval (if the former scores a position at 1.50 while the latter says it's 0.00, and NNUE wins, you've got to figure out the cause of the discrepancy); then a new NNUE based on this improved eval can arise.
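
A quick-and-dirty sketch of that discrepancy hunting, assuming python-chess, two UCI builds at hypothetical paths, and an arbitrary 1.00-pawn threshold:

[code]
# Hedged sketch: compare classical vs NNUE evals over a set of positions and
# flag large disagreements worth investigating. Paths, the input file and the
# threshold are illustrative assumptions.
import chess
import chess.engine

CLASSICAL = "./stockfish_classical"   # assumed binary path
NNUE_BUILD = "./stockfish_nnue"       # assumed binary path

def score_cp(engine, board, depth=12):
    """Centipawn score from the side to move's point of view."""
    info = engine.analyse(board, chess.engine.Limit(depth=depth))
    return info["score"].pov(board.turn).score(mate_score=100000)

classical = chess.engine.SimpleEngine.popen_uci(CLASSICAL)
nnue = chess.engine.SimpleEngine.popen_uci(NNUE_BUILD)

for fen in open("positions.fen"):            # assumed input: one FEN per line
    board = chess.Board(fen.strip())
    c, n = score_cp(classical, board), score_cp(nnue, board)
    if abs(c - n) > 100:                     # more than a 1.00-pawn disagreement
        print(f"{fen.strip()}  classical={c}cp  nnue={n}cp")

classical.quit()
nnue.quit()
[/code]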

Luckily NNUE has a lot of fuel, as any position can get better eval by using more depth. Perhaps what we need is a method to differentiate positions where the eval is already good from those that are still bad, and increase the depth of the bad ones. Otherwise, more depth will be wasted on positions where it doesn't help to get better eval (if eval is 0.15 at depth 8 and 0.15 at depth 9, you just wasted time getting there).
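
And a minimal sketch of the "spend depth only where the eval is still moving" idea, again with python-chess, a hypothetical engine path and invented thresholds:

[code]
# Hedged sketch: keep deepening a position only while the score keeps changing,
# and stop early once it has settled. All limits are illustrative assumptions.
import chess
import chess.engine

ENGINE_PATH = "./stockfish"        # assumed binary path
SETTLE_CP = 10                     # "settled" = eval moved less than 0.10 between depths
MIN_DEPTH, MAX_DEPTH = 8, 20       # illustrative depth range

def adaptive_eval(engine, board):
    """Return (score_cp, depth_used), deepening only while the eval is still moving."""
    prev = None
    for depth in range(MIN_DEPTH, MAX_DEPTH + 1):
        info = engine.analyse(board, chess.engine.Limit(depth=depth))
        score = info["score"].pov(board.turn).score(mate_score=100000)
        if prev is not None and abs(score - prev) < SETTLE_CP:
            return score, depth    # eval has settled: extra depth is likely wasted here
        prev = score
    return prev, MAX_DEPTH         # still moving at the cap: this position needed the depth

engine = chess.engine.SimpleEngine.popen_uci(ENGINE_PATH)
score, depth = adaptive_eval(engine, chess.Board())
print(f"start position: {score}cp, stopped at depth {depth}")
engine.quit()
[/code]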
dkappe
Posts: 1632
Joined: Tue Aug 21, 2018 7:52 pm
Full name: Dietrich Kappe

Re: The next revolution in computer chess?

Post by dkappe »

Ovyron wrote: Mon Jul 27, 2020 5:04 am
corres wrote: Sat Jul 25, 2020 4:59 pm
It is a pity, but the NN of NNUE grows faster than the speed of (default) Stockfish.
This can't go on forever, not with the exponential nature of chess. NNUE plateaus, and it needs a better eval to improve. That improvement is going to come from someone who translates the concepts of NNUE back into Stockfish dev's eval (if the former scores a position at 1.50 while the latter says it's 0.00, and NNUE wins, you've got to figure out the cause of the discrepancy); then a new NNUE based on this improved eval can arise.

Luckily NNUE has a lot of fuel, as any position can get better eval by using more depth. Perhaps what we need is a method to differentiate positions where the eval is already good from those that are still bad, and increase the depth of the bad ones. Otherwise, more depth will be wasted on positions where it doesn't help to get better eval (if eval is 0.15 at depth 8 and 0.15 at depth 9, you just wasted time getting there).
NNUE is a relatively simple beast whose main innovation is that it’s efficiently computable when there’s a minor change in the inputs. So efficient, in fact, that you don’t need a GPU to run it quickly.
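
To make that concrete, here is a toy numpy sketch of the "efficiently updatable" idea; the feature and accumulator sizes are made up and this is not the actual Stockfish NNUE code. The first-layer output is just a sum of weight columns for the active features, so a move that changes a handful of features costs a few column additions and subtractions instead of a full recomputation.

[code]
import numpy as np

N_FEATURES, ACC_SIZE = 41024, 256          # illustrative sizes, not the real spec
rng = np.random.default_rng(0)
W = rng.standard_normal((N_FEATURES, ACC_SIZE)).astype(np.float32)
b = np.zeros(ACC_SIZE, dtype=np.float32)

def full_refresh(active):
    """O(number of active features): rebuild the accumulator from scratch."""
    return b + W[list(active)].sum(axis=0)

def incremental_update(acc, removed, added):
    """O(number of changed features): adjust the existing accumulator after a move."""
    return acc - W[list(removed)].sum(axis=0) + W[list(added)].sum(axis=0)

active = {10, 500, 20000}                  # hypothetical active feature indices
acc = full_refresh(active)

# a "move" that removes one feature and adds another
acc = incremental_update(acc, removed={500}, added={31337})
assert np.allclose(acc, full_refresh({10, 20000, 31337}), atol=1e-4)
[/code]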

Now anyone who's worked with shallow, fully connected networks knows that they're relatively limited. Approximating a real-valued function over a domain of bit strings is a great use. In fact, it's been a lively topic of research for over a decade. But how good the approximation is depends very much on the function to be approximated. Eval at depth 8 may be a good candidate, but a higher-depth search may become increasingly difficult to approximate, especially if it has bigger and more frequent discontinuities. An educated guess is that for the 256 and 384 networks there is a depth N beyond which the approximation no longer improves.
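
For anyone who hasn't played with such networks, a throwaway PyTorch sketch of "approximate a real-valued function over bit strings with a shallow fully connected net" might look like the following; the sizes and the target function are invented purely for illustration:

[code]
import torch
import torch.nn as nn

torch.manual_seed(0)
N_BITS = 64

# Made-up "eval-like" target: a fixed linear score of the bits plus a small interaction term.
w_true = torch.randn(N_BITS)
def target(x):
    return x @ w_true + 0.5 * x[:, 0] * x[:, 1]

X = (torch.rand(20000, N_BITS) < 0.2).float()     # sparse bit-string inputs
y = target(X)

# Shallow, fully connected approximator: a single small hidden layer.
net = nn.Sequential(nn.Linear(N_BITS, 32), nn.ReLU(), nn.Linear(32, 1))
opt = torch.optim.Adam(net.parameters(), lr=1e-3)

for step in range(1000):                          # plain full-batch gradient descent
    opt.zero_grad()
    loss = nn.functional.mse_loss(net(X).squeeze(-1), y)
    loss.backward()
    opt.step()

print("final MSE:", loss.item())
[/code]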
Milos
Posts: 4190
Joined: Wed Nov 25, 2009 1:47 am

Re: The next revolution in computer chess?

Post by Milos »

dkappe wrote: Mon Jul 27, 2020 5:50 am
Now anyone who's worked with shallow, fully connected networks knows that they're relatively limited. Approximating a real-valued function over a domain of bit strings is a great use. In fact, it's been a lively topic of research for over a decade. But how good the approximation is depends very much on the function to be approximated. Eval at depth 8 may be a good candidate, but a higher-depth search may become increasingly difficult to approximate, especially if it has bigger and more frequent discontinuities. An educated guess is that for the 256 and 384 networks there is a depth N beyond which the approximation no longer improves.
Depth has nothing to do with the function that needs to be evaluated. You are evaluating the position; the best case would be a TB-accurate evaluation, i.e. a certain result. So the higher the depth used for training, the better. There are no discontinuities there, otherwise SF's search in the endgame when using TB info would break, but of course it doesn't.
However, there are a couple of things that need to be taken into consideration. One is network size. With a fixed network size you get roughly the same nps in every phase of the game. That is not the case with the standard eval. Hence, SF-NNUE might actually be weaker in the endgame than regular SF despite a more accurate eval. This becomes particularly bad once there are a lot of TB hits, because the slower eval of non-TB nodes results in significantly lower search depth and fewer TB hits.
The other thing is coupling with the search. Pruning depends on actual eval values, and these are tuned for the handcrafted eval. The numbers coming from the NN eval might be totally off, completely changing the shape of the search tree, and not in a good way.
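
A toy sketch of that scale problem (the margin, the rescaling factor and the numbers are hypothetical, not taken from Stockfish): a futility-style cutoff tuned against one eval's value range behaves differently if the new eval returns values on another scale, which is one reason NN eval outputs are typically rescaled to something like the old centipawn range before they feed the search heuristics.

[code]
FUTILITY_MARGIN_CP = 150      # hypothetical margin tuned against the handcrafted eval

def futility_prune(static_eval_cp, alpha_cp):
    """Prune a quiet move if even eval + margin cannot reach alpha."""
    return static_eval_cp + FUTILITY_MARGIN_CP <= alpha_cp

# Suppose the raw network output happens to live on roughly double the centipawn scale.
NN_TO_CP = 0.5                # hypothetical rescaling factor fitted against the old eval

def nn_eval_cp(raw_nn_output):
    return raw_nn_output * NN_TO_CP

raw = 320
print(futility_prune(raw, alpha_cp=400))              # False: 320 + 150 > 400, move gets searched
print(futility_prune(nn_eval_cp(raw), alpha_cp=400))  # True: 160 + 150 <= 400, move gets pruned
[/code]

Same position, same margin: whether the node is pruned depends only on the scale the eval reports in, which is exactly how a differently scaled eval reshapes the tree.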
towforce
Posts: 11858
Joined: Thu Mar 09, 2006 12:57 am
Location: Birmingham UK

Re: The next revolution in computer chess?

Post by towforce »

How is a position represented for training, please?
dkappe
Posts: 1632
Joined: Tue Aug 21, 2018 7:52 pm
Full name: Dietrich Kappe

Re: The next revolution in computer chess?

Post by dkappe »

Milos wrote: Mon Jul 27, 2020 6:54 am
Depth has nothing to do with the function that needs to be evaluated. You are evaluating the position; the best case would be a TB-accurate evaluation, i.e. a certain result. So the higher the depth used for training, the better. There are no discontinuities there, otherwise SF's search in the endgame when using TB info would break, but of course it doesn't.
A perfect WDL oracle would be a step function. It is left as an exercise to the reader that a step function is not continuous.
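
Schematically (just notation, nothing from the training code): picture positions along some one-parameter family x, with a perfect oracle assigning

[code]
f(x) =
\begin{cases}
-1 & x < a \quad (\text{lost}) \\
\phantom{-}0 & a \le x \le b \quad (\text{drawn}) \\
\phantom{-}1 & x > b \quad (\text{won})
\end{cases}
\qquad \lim_{x \to a^{-}} f(x) = -1 \neq 0 = f(a).
[/code]

The jump at x = a is the discontinuity in question; a network fitted by gradient descent on a smooth loss can only approximate it with a steep but continuous ramp.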
Milos
Posts: 4190
Joined: Wed Nov 25, 2009 1:47 am

Re: The next revolution in computer chess?

Post by Milos »

dkappe wrote: Mon Jul 27, 2020 12:06 pm
Milos wrote: Mon Jul 27, 2020 6:54 am
Depth has nothing to do with the function that needs to be evaluated. You are evaluating the position; the best case would be a TB-accurate evaluation, i.e. a certain result. So the higher the depth used for training, the better. There are no discontinuities there, otherwise SF's search in the endgame when using TB info would break, but of course it doesn't.
A perfect WDL oracle would be a step function. It is left as an exercise to the reader that a step function is not continuous.
First, there are two aspects when talking about evaluation values: one is training (i.e. the output of SF search), the other is SF search using the NN eval output.
SF search does not have problems with discontinuities in the score values coming from eval.
Training of course does, but a real search never gives WDL values unless you find a TB win/loss. As a matter of fact, the higher the search depth you use, the smoother the score gets.
The biggest source of a "jumpy" score is the fixed depth used to generate it. Once an actual TC is used with SF's time management, this will also disappear. And I still fail to grasp how a score returned by a higher fixed-depth search would be worse for training the net.
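
If someone wanted to put numbers on that, a rough python-chess sketch (engine path, position file and depth range are all assumptions) could record how much the score moves between successive depths for a batch of positions:

[code]
# Hedged sketch: measure how "jumpy" the eval is as depth increases.
import statistics
import chess
import chess.engine

ENGINE_PATH = "./stockfish"            # assumed binary path
DEPTHS = range(6, 21)                  # illustrative depth range

engine = chess.engine.SimpleEngine.popen_uci(ENGINE_PATH)
jumps = {d: [] for d in DEPTHS if d > min(DEPTHS)}

for fen in open("positions.fen"):      # assumed input: one FEN per line
    board = chess.Board(fen.strip())
    prev = None
    for d in DEPTHS:
        info = engine.analyse(board, chess.engine.Limit(depth=d))
        score = info["score"].pov(board.turn).score(mate_score=100000)
        if prev is not None:
            jumps[d].append(abs(score - prev))   # how far the eval moved going to depth d
        prev = score

engine.quit()
for d, deltas in jumps.items():
    print(f"depth {d}: mean |delta| = {statistics.mean(deltas):.1f} cp")
[/code]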
dkappe
Posts: 1632
Joined: Tue Aug 21, 2018 7:52 pm
Full name: Dietrich Kappe

Re: The next revolution in computer chess?

Post by dkappe »

Milos wrote: Mon Jul 27, 2020 12:27 pm
The biggest source of a "jumpy" score is the fixed depth used to generate it. Once an actual TC is used with SF's time management, this will also disappear. And I still fail to grasp how a score returned by a higher fixed-depth search would be worse for training the net.
I hope you’ll provide some evidence for this smoothness at higher depth.

It is possible to approximate certain discontinuous functions with simple NNs, but not using gradient descent. The room where they held that graduate seminar didn't have air conditioning, so I might have been delirious when I heard it, but I'm pretty sure that's right.