LCZero update

Discussion of anything and everything relating to chess playing software and machines.

Moderators: hgm, Rebel, chrisw

CMCanavessi
Posts: 1142
Joined: Thu Dec 28, 2017 4:06 pm
Location: Argentina

Re: LCZero update

Post by CMCanavessi »

[pgn][Event "024 - LCZero Gen 14 Gauntlet"]
[Site "RYZEN"]
[Date "2018.03.21"]
[Round "1"]
[White "Leela Chess Zero Gen 14 x64"]
[Black "Pwned v1.3 x64"]
[Result "1-0"]
[ECO "C00"]
[Opening "French"]
[Time "18:38:41"]
[Variation "Chigorin Variation, 1.e4 e6 2.Qe2"]
[TimeControl "60+1"]
[Termination "normal"]
[PlyCount "199"]
[WhiteType "program"]
[BlackType "program"]

1. e4 e6 2. Qe2 d5 3. Nf3 dxe4 4. Qxe4 Nf6 5. Qh4 Bd7 6. Be2 Nc6 7. O-O Be7
8. d4 Nd5 9. Qh3 {(9.Qh3 e5 10.Qg3 Nxd4 11.Nxe5 Nxe2+) -0.79/19 2} e5 {(9.
... e5 10.Qg3 exd4 11.Qxg7 Bf6 12.Qh6 Be6) +0.33/7 2} 10. Qg3 {(10.Qg3 Nxd4
11.Nxd4 exd4 12.Qxg7 Rf8 13.Qxh7 Nf6 14.Qh6 Ng8) -1.27/19 2} Nxd4 {(10. ...
Nxd4 11.Nxd4 exd4 12.c4 Nb6 13.Bf4 0-0 14.Bxc7) +0.22/7 2} 11. Nxd4
{(11.Nxd4 exd4 12.Qxg7 Rf8 13.Qxd4 Nf6 14.Nc3 c5) -0.81/19 2} exd4 {(11.
... exd4 12.Bc4 Be6 13.Qxg7 Bf6 14.Qg3 c6 15.Kh1) +0.17/8 2} 12. Qxg7
{(12.Qxg7 Bf6 13.Qh6 Bf5 14.c4 d3 15.Bxd3 Bxd3) -0.57/19 2} Bf6 {(12. ...
Bf6 13.Qh6 Nb4 14.Na3 Be6 15.Nb5 c6 16.Nc7+ Qxc7 17.Qxf6) +0.12/8 2} 13.
Qh6 {(13.Qh6 Bf5 14.Na3 d3 15.cxd3 Rg8 16.g4 Bxg4) -0.23/19 2} Qe7 {(13.
... Qe7 14.Bd2 Qe5 15.f4 Qe6 16.Bc4 Ke7) +0.07/7 2} 14. Bf3 {(14.Bf3 Nb4
15.Na3 Bf5 16.Bf4 Nxc2 17.Nxc2) +0.48/19 2} c6 {(14. ... c6 15.Bd2 0-0-0
16.Re1 Be6 17.Bxd5 Rxd5 18.Bf4) +0.03/8 2} 15. Rd1 {(15.Rd1 Qf8 16.Qxf8+
Rxf8 17.Nd2 Be6) +0.52/19 2} Nb4 {(15. ... Nb4 16.Qd2 0-0-0 17.a3 Na6
18.Qe1 Qf8) +0.37/7 2} 16. Na3 {(16.Na3 Nd5 17.g4 Bg7 18.Qxg7 Rf8 19.Rxd4
Qf6 20.Qxf6) +1.59/19 2} Be6 {(16. ... Be6 17.Bf4 Nxa2 18.Be4 Nb4 19.Bxh7
Na2) +0.24/7 2} 17. g4 {(17.g4 Bd5 18.Bxd5 Nxd5 19.g5 Bxg5 20.Bxg5 f6
21.Rxd4 fxg5 22.c4 Nf6) +2.16/19 2} Nxa2 {(17. ... Nxa2 18.Bf4 0-0-0 19.Re1
Rdg8 20.Nc4 Bg7) +0.64/7 2} 18. g5 {(18.g5 Be5 19.Re1 Nxc1 20.Rxe5 Nb3
21.cxb3 Rd8 22.Nc4) +3.18/19 2} Be5 {(18. ... Be5 19.Bd2 a5 20.Qh5 d3
21.cxd3 Bxb2) +1.00/7 2} 19. Re1 {(19.Re1 f6 20.gxf6 Qxf6 21.Qxf6 Bxf6
22.Rxe6+ Kf7 23.Rxf6+ Kxf6 24.Rxa2 Rhg8+ 25.Kf1) +2.76/19 2} Qc5 {(19. ...
Qc5 20.Bg4 Qa5 21.Bd2 Qd5 22.Nc4 Qxc4 23.Rxe5) +0.74/7 2} 20. Rxa2
{(20.Rxa2 a5 21.Ra1 Rd8 22.h4 b6 23.Rxe5) +3.93/19 2} Bxa2 {(20. ... Bxa2
21.Qf6 0-0 22.Rxe5 Qb4 23.b3 Rab8) -0.76/7 2} 21. Bf4 {(21.Bf4 f6 22.gxf6
Qf8 23.Rxe5+ Kd8 24.Qxf8+ Rxf8 25.Be4 Rxf6 26.f3) +5.45/19 2} Be6 {(21. ...
Be6 22.Bxe5 Rg8 23.h4 Qb4 24.Rb1 Bf5) -0.95/7 2} 22. Bxe5 {(22.Bxe5 Rg8
23.h4 Rg6 24.Qxh7 Bf5 25.Bxd4+ Kf8 26.Bxc5+) +5.51/19 2} Rg8 {(22. ... Rg8
23.h4 Qb4 24.Rb1 a5 25.Qxh7 0-0-0) -1.47/7 2} 23. Bf6 {(23.Bf6 Kd7 24.Qxh7
Kd6 25.h4 Rgf8 26.h5 Rae8 27.c3) +5.24/19 2} Qf5 {(23. ... Qf5 24.Be4 Qg4+
25.Bg2 Qf5 26.Rd1 Kd7 27.Rxd4+ Kc8) -1.59/7 2} 24. Be4 {(24.Be4 Qg4+ 25.Kh1
Rxg5 26.Bxg5 a5 27.f3 Qxg5 28.Qxg5 h6) +5.75/19 2} Qg4+ {(24. ... Qg4+
25.Bg2 Kd7 26.Qxh7 Rae8 27.h3 Qf4 28.Kf1) -1.72/8 2} 25. Kh1 {(25.Kh1 Rxg5
26.Bxg5 f5 27.Bf6 fxe4 28.Qxh7 Qf3+ 29.Kg1) +5.85/19 2} Qf4 {(25. ... Qf4
26.Qxh7 Kd7 27.Rd1 Qxf2 28.Rxd4+ Kc7 29.Be5+ Kb6 30.Rb4+ Ka5) -1.45/8 2}
26. Qxh7 {(26.Qxh7 Rxg5 27.Bxg5 Qxg5 28.h4 Qg4 29.f3 Qg3 30.h5 Qxe1+ 31.Kh2
Qd2+) +4.98/19 2} a6 {(26. ... a6) +7.59/7 2} 27. Qxg8+ {(27.Qxg8+ Kd7
28.Qxa8 Qc7 29.Bg2 Qb6 30.Nc4 Qc7 31.Ne5+ Kd6 32.Qa7) +11.17/18 2} Kd7
{(27. ... Kd7 28.Qxa8 Qc7 29.Nc4 Bxc4 30.Qe8+ Kd6 31.Be5+ Kc5 32.Bxc7 Kb5)
-21.32/9 2} 28. Qxa8 {(28.Qxa8 Qc7 29.Bg2 c5 30.Bxb7 Kd6 31.Bxa6 Bd7
32.Nb5+ Bxb5) +11.88/20 2} Qc7 {(28. ... Qc7) -21.52/7 2} 29. Bg2 {(29.Bg2
b5 30.Qxa6 b4 31.Nc4 Bxc4 32.Qxc4 d3 33.cxd3) +12.05/20 2} Qc8 {(29. ...
Qc8 30.Qa7 d3 31.Qd4+ Ke8 32.cxd3 Kf8 33.Re2) -14.18/7 2} 30. Qxc8+
{(30.Qxc8+ Kxc8 31.Bxd4 a5 32.h4 Bg4 33.f3) +14.24/18 2} Kxc8 {(30. ...
Kxc8 31.g6 fxg6 32.Rxe6 Kd7 33.Bh3 Kc7 34.Bxd4 g5 35.Rg6 b6) -17.21/11 1}
31. Bxd4 {(31.Bxd4 c5 32.Bxc5 Bf5 33.Nc4 Bxc2 34.Nb6+) +14.78/19 2} Kd7
{(31. ... Kd7 32.Bb6 Ke8 33.Bh3 Ke7 34.Nc4 Kd7 35.Rd1+ Ke7 36.Bxe6 fxe6)
-14.65/9 2} 32. h4 {(32.h4 Bg4 33.f3 Be6 34.Rxe6 fxe6) +15.35/18 2} Bg4
{(32. ... Bg4 33.Bb6 Be6 34.Rd1+ Ke7 35.f4 Bg4) -14.25/8 1} 33. f3 {(33.f3
Bh5 34.Nc4 c5 35.Bxc5 Bg6 36.b4 Kc6) +15.61/18 2} Be6 {(33. ... Be6 34.Bb6
Ke8 35.Bh3 Kd7 36.g6 f5 37.Rd1+ Ke7 38.Bc5+ Ke8 39.b3) -14.70/10 1} 34. h5
{(34.h5 Bf5 35.h6 Bg6 36.Nc4 c5) +17.30/18 2} Bf5 {(34. ... Bf5 35.Re5 c5
36.Rd5+ Ke6 37.Rxc5 Bh7 38.f4 Ke7 39.Bxb7) -15.92/9 1} 35. h6 {(35.h6 Bg6
36.Nc4 c5 37.Bxc5 Kc6) +17.76/18 2} Bg6 {(35. ... Bg6 36.Bb6 c5 37.f4 Kc8
38.Re8+ Kd7 39.Rd8+ Ke7 40.Bd5) -15.70/8 1} 36. Nc4 {(36.Nc4 c5 37.Bxc5 Kc6
38.b4 b5 39.Ne5+) +18.07/18 2} c5 {(36. ... c5 37.Ne5+ Kc7 38.Nxg6 cxd4
39.h7 fxg6 40.h8Q) -20.10/7 1} 37. Bxc5 {(37.Bxc5 Kc6 38.b4 b5 39.Ne5+ Kd5
40.Nxg6) +17.79/18 2} Bxc2 {(37. ... Bxc2 38.Re7+ Kc6 39.Rxf7 Bh7 40.b4 Bg8
41.Na5+ Kb5 42.Rf6) -16.82/8 1} 38. Kh2 {(38.Kh2 Bd3 39.b4 Bxc4 40.h7 Bd5
41.h8Q) +19.92/18 2} Kc6 {(38. ... Kc6 39.Bd4 b5 40.Na5+ Kd7 41.Bh3+ Kd6
42.Bb6 f5) -15.39/7 1} 39. b4 {(39.b4 Bb3 40.h7 Bxc4 41.h8Q Bb5) +19.46/18
2} b6 {(39. ... b6 40.f4+ Kd7 41.Nxb6+ Kc7 42.Nd5+ Kc8 43.Re8+ Kd7 44.Nf6+
Kc7) -16.99/7 1} 40. Nxb6 {(40.Nxb6 Kb5 41.Rc1 Bb3 42.Rb1 Ba2) +20.15/18 2}
a5 {(40. ... a5 41.Bd4 Kb5 42.bxa5 Kxa5 43.Ra1+ Kb4 44.Bh3 Kb3) -16.75/8 1}
41. bxa5 {(41.bxa5 Kxc5 42.Rc1 Kb5 43.Rxc2 Kxa5 44.h7 Kxb6 45.h8Q)
+21.23/18 2} Kb5 {(41. ... Kb5 42.Rc1 Bh7 43.Bf1+ Kc6 44.Bf2+ Kd6 45.Nc8+
Kd7 46.Bb5+ Ke6 47.Rc5) -18.09/8 1} 42. Rc1 {(42.Rc1 Bb3 43.Rb1 Kxc5
44.Rxb3 Kc6) +22.23/17 2} Bd3 {(42. ... Bd3 43.Bf1 Bxf1 44.h7 Be2 45.h8Q
Bxf3 Be4 46.Bd6) -24.48/10 1} 43. Rd1 {(43.Rd1 Kxc5 44.Rxd3 Kb5 45.Rd1 Kxa5
46.Rd8) +21.87/17 2} Bh7 {(43. ... Bh7 44.Rc1 Ka6 45.Bf1+ Kb7 46.Bd6 Bc2)
-20.35/9 1} 44. f4 {(44.f4 Kxc5 45.a6 Kxb6 46.Kg3 Kxa6) +20.42/17 1} Kxc5
{(44. ... Kxc5 45.Rd5+ Kc6 46.Rf5+ Kd6 47.Rxf7 Kc5 48.Rxh7 Kb4 49.Nd5+ Ka4
50.Ra7) -17.15/9 1} 45. Nd7+ {(45.Nd7+ Kb5 46.a6 Kxa6 47.Kg3 Kb5) +19.42/17
1} Kb4 {(45. ... Kb4 46.a6 Bc2 47.Rd5 Bb3 48.a7 Bxd5 49.Bxd5 f5 50.a8Q Kc3)
-20.98/11 1} 46. a6 {(46.a6 Ka3 47.a7 Ka2 48.a8Q+ Kb2 49.Qa7) +21.08/18 1}
Bf5 {(46. ... Bf5 47.a7 Bxd7 48.Rxd7 f5 49.gxf6 Kb3 50.Rb7+ Kc2 51.a8Q)
-23.52/9 1} 47. a7 {(47.a7 Bg6 48.a8Q Kc3 49.Qa7 Kc2) +22.22/17 1} f6 {(47.
... f6 48.gxf6 Bxd7 49.Rxd7 Kc3 50.a8Q Kb2 51.Rc7) -25.83/8 1} 48. a8=Q
{(48.a8Q fxg5 49.fxg5 Bg6 50.Qc8 Bh7) +22.40/17 1} fxg5 {(48. ... fxg5
49.Qb8+ Kc3 50.Qe5+ Kc2 51.Qxf5+ Kxd1 52.h7 gxf4 53.h8Q) -28.57/6 1} 49.
fxg5 {(49.fxg5 Bg6 50.Kg3 Bh7 51.Qa7) +21.91/17 1} Bg6 {(49. ... Bg6 50.Bd5
Bb1 51.Rxb1+ Kc3 52.h7 Kc2 53.h8Q Kxb1) -32.14/6 1} 50. Qc8 {(50.Qc8 Bh7
51.g6 Bxg6 52.Ne5) +22.13/17 1} Bf5 {(50. ... Bf5 51.Qc5+ Ka4 52.Ra1+ Kb3
53.Bd5+ Kb2 54.Qc1+) -M5/6 0} 51. Kg3 {(51.Kg3 Bg6 52.Kf4 Kb3 53.Ne5)
+22.78/17 1} Be4 {(51. ... Be4 52.Bxe4 Ka4 53.Ra1+ Kb4 54.Rb1+ Ka5 55.Qa8+)
-M5/6 0} 52. Bxe4 {(52.Bxe4 Kb5 53.h7 Ka4 54.h8Q Kb5 55.Kf4) +24.59/18 1}
Ka4 {(52. ... Ka4 53.Qa6+ Kb3 54.Rb1+ Kc3 55.Qd3+) -M4/4 0} 53. h7 {(53.h7
Kb5 54.h8Q Ka4 55.Kf4 Kb5 56.Qf6) +25.52/17 1} Ka5 {(53. ... Ka5 54.Qc5+
Ka6 55.Nb8+) -M3/3 0} 54. h8=Q {(54.h8Q Kb5 55.Qf6 Ka4 56.Qe7) +25.54/17 1}
Ka4 {(54. ... Ka4 55.Ra1+ Kb3 56.Qhc3+) -M3/3 0} 55. Kf4 {(55.Kf4 Kb5
56.Qf6 Ka4 57.Qe7) +25.74/17 1} Ka5 {(55. ... Ka5 56.Ra1+ Kb4 57.Qb2+)
-M3/3 0} 56. Qf6 {(56.Qf6 Ka4 57.Qf7 Kb5 58.g6) +26.32/17 1} Ka4 {(56. ...
Ka4 57.Ra1+ Kb4 58.Qb2+) -M3/3 0} 57. Qf7 {(57.Qf7 Kb5 58.g6 Ka4 59.g7)
+26.75/17 1} Ka5 {(57. ... Ka5 58.Ra1+ Kb5 59.Qb8+) -M3/3 0} 58. g6 {(58.g6
Ka4 59.g7 Kb5 60.g8Q) +27.83/17 1} Ka4 {(58. ... Ka4 59.Ra1+ Kb5 60.Qb8+)
-M3/3 0} 59. g7 {(59.g7 Kb5 60.g8Q Ka4 61.Qfh7) +26.87/17 1} Ka5 {(59. ...
Ka5 60.Ra1+ Kb5 61.Qb8+) -M3/3 0} 60. g8=Q {(60.g8Q Ka4 61.Qfh7 Kb5 62.Qgg6
Ka4) +28.73/17 1} Ka4 {(60. ... Ka4 61.Ra1+ Kb5 62.Qb8+) -M3/3 0} 61. Qfh7
{(61.Qfh7 Ka5 62.Qb8 Ka4) +27.88/17 1} Ka5 {(61. ... Ka5 62.Ra1+ Kb5
63.Qb8+) -M3/3 0} 62. Qgg6 {(62.Qgg6 Ka4 63.Qgg8 Kb5 64.Qgg6) +28.64/17 1}
Ka4 {(62. ... Ka4 63.Ra1+ Kb3 64.Qb6+) -M3/3 0} 63. Qgg8 {(63.Qgg8 Kb5
64.Bg6 Ka4) +29.03/17 1} Ka5 {(63. ... Ka5 64.Ra1+ Kb5 65.Qb8+) -M3/3 0}
64. Kf3 {(64.Kf3 Ka4 65.Bc6+ Ka3) +24.48/17 1} Ka4 {(64. ... Ka4 65.Ra1+
Kb5 66.Qb8+) -M3/3 0} 65. Qgg6 {(65.Qgg6 Kb5 66.Qgg8 Ka4 67.Qb7) +27.56/17
1} Ka3 {(65. ... Ka3 66.Qga6+ Kb3 67.Rb1+) -M3/3 0} 66. Qgg8 {(66.Qgg8 Kb4
67.Bg6 Kb5) +28.06/17 1} Ka4 {(66. ... Ka4) 0.00/63 0} 67. Qge8 {(67.Qge8
Kb5 68.Qeg8 Ka4) +26.03/17 1} Ka3 {(67. ... Ka3 68.Qc3+ Ka4 69.Nc5+) -M3/3
0} 68. Qeg6 {(68.Qeg6 Ka4 69.Qe7 Kb5) +27.93/16 0} Ka4 {(68. ... Ka4
69.Ra1+ Kb3 70.Qb6+) -M3/3 0} 69. Qgg8 {(69.Qgg8 Kb5 70.Bg6 Ka4) +29.18/17
1} Ka5 {(69. ... Ka5 70.Ra1+ Kb5 71.Qb8+) -M3/3 0} 70. Qb8 {(70.Qb8 Ka4
71.Qc7 Kb5 72.Qgg6) +27.29/16 1} Ka4 {(70. ... Ka4 71.Ra1+) -M2/2 0} 71.
Qbc8 {(71.Qbc8 Kb5 72.Bg6 Ka4) +26.59/16 1} Ka5 {(71. ... Ka5) 0.00/63 0}
72. Bc6 {(72.Bc6 Kb4 73.Qb8+ Kc3 74.Qbc8) +25.09/16 0} Kb4 {(72. ... Kb4
73.Qb8+ Ka5 74.Ra1+) -M3/3 0} 73. Ne5 {(73.Ne5 Ka3 74.Rd7 Kb4 75.Qgf7 Ka3)
+20.74/16 0} Kc3 {(73. ... Kc3 74.Qc4+ Kb2 75.Nd3+ Ka3 76.Ra1+) -M4/4 0}
74. Rd7 {(74.Rd7 Kb2 75.Qgf7 Ka1 76.Qff8) +30.21/17 1} Kb2 {(74. ... Kb2
75.Qb8+ Ka3 76.Ra7+) -M3/3 0} 75. Qgf8 {(75.Qgf8 Ka2 76.Qfg8+ Kb2)
+25.56/17 0} Ka2 {(75. ... Ka2 76.Ra7+ Kb3 77.Qa3+) -M3/3 0} 76. Qfg7
{(76.Qfg7 Kb3 77.Qhh6 Ka2 78.Qhh8) +30.44/16 0} Ka1 {(76. ... Ka1 77.Nc4+
Ka2 78.Qb2+) -M3/4 0} 77. Qhh6 {(77.Qhh6 Ka2 78.Qhh8 Kb1 79.Qhh6) +29.26/16
0} Kb1 {(77. ... Kb1 78.Rb7+ Ka2 79.Qa8+) -M3/3 0} 78. Qhh8 {(78.Qhh8 Ka2
79.Qhh6 Kb1 80.Qhh8) +30.01/17 0} Kc1 {(78. ... Kc1 79.Be4+ Kb2 80.Qb8+ Ka3
81.Ra7+) -M4/4 0} 79. Qhh6+ {(79.Qhh6+ Kb2 80.Qhh8 Kb3 81.Qhh6) +30.49/16
0} Kb1 {(79. ... Kb1) 0.00/63 0} 80. Qhh8 {(80.Qhh8 Ka2 81.Qhh6 Kb1)
+27.39/16 1} Kc1 {(80. ... Kc1 81.Qh2) 0.00/6 2} 81. Qb8 {(81.Qb8 Kc2
82.Qbc8 Kb1 83.Qhh6 Ka2) +25.18/16 0} Kc2 {(81. ... Kc2 82.Ba4+ Kc1
83.Qgh6+) -M3/3 0} 82. Qbe8 {(82.Qbe8 Kb1 83.Qb8+ Kc2) +23.11/16 1} Kb1
{(82. ... Kb1 83.Rb7+ Kc1 84.Nd3+ Kd2 85.Qe2+) -M4/4 0} 83. Qhh6 {(83.Qhh6
Kc2 84.Qd8 Kb1 85.Qhh8) +28.80/16 0} Ka1 {(83. ... Ka1 84.Qc1+ Ka2 85.Bd5+)
-M3/3 0} 84. Qef7 {(84.Qef7 Kb1 85.Qhh8 Kc2 86.Qff8 Kb1) +29.25/16 1} Kb1
{(84. ... Kb1 85.Rd1+ Kb2 86.Qc1+) -M3/3 0} 85. Qhh8 {(85.Qhh8 Kc2 86.Qc8
Kb1 87.Qff8 Ka2) +28.75/16 0} Ka1 {(85. ... Ka1 86.Ng6+ Kb1 87.Qb2+) -M3/4
0} 86. Qff8 {(86.Qff8 Kb1 87.Qhh6 Kc2 88.Qff6) +27.89/16 0} Kb1 {(86. ...
Kb1 87.Qb4+ Kc2 88.Qh2+ Kc1 89.Qgh6+) -M4/4 0} 87. Qhh6 {(87.Qhh6 Kc2
88.Qhf6 Kb1 89.Qge7) +27.86/17 0} Ka1 {(87. ... Ka1 88.Nc4+ Ka2 89.Qb2+)
-M3/3 0} 88. Qff6 {(88.Qff6 Kb1 89.Qhh8 Kc2 90.Qge7) +28.53/16 1} Kb2 {(88.
... Kb2 89.Qd2+ Ka3 90.Rd3+) -M3/4 0} 89. Qhh8 {(89.Qhh8 Kc2 90.Qc8 Kb1
91.Qff7) +29.91/17 0} Kb1 {(89. ... Kb1 90.Qb8+ Kc1 91.Qgg5+ Kc2 92.Qd2+)
-M4/4 0} 90. Qge7 {(90.Qge7 Ka2 91.Qhh7 Kb3) +28.25/16 1} Kc2 {(90. ... Kc2
91.Qh2+ Kc1 92.Qa3+ Kb1 93.Be4+) -M4/4 0} 91. Qg8 {(91.Qg8 Kb1 92.Qed8 Kc2
93.Qgg7) +28.75/16 0} Kc1 {(91. ... Kc1 92.Qa3+ Kb1 93.Qgb3+) -M3/3 0} 92.
Qed6 {(92.Qed6 Kc2 93.Qde7 Kb1 94.Qed8) +28.55/16 1} Kb1 {(92. ... Kb1
93.Qd1+ Kb2 94.Qgb3+) -M3/3 0} 93. Qgg7 {(93.Qgg7 Kc1 94.Qff8 Kb1 95.Qfe7)
+27.86/16 0} Ka1 {(93. ... Ka1 94.Qa3+ Kb1 95.Be4+) -M3/3 0} 94. Qge7
{(94.Qge7 Kb1 95.Qff8 Kc2 96.Qff6) +28.15/16 0} Kb1 {(94. ... Kb1 95.Be4+
Kb2 96.Qa3+) -M3/3 0} 95. Qff8 {(95.Qff8 Kc1 96.Qff6 Kb1 97.Qff8) +27.65/16
0} Ka1 {(95. ... Ka1 96.Qd1+ Ka2 97.Rd2+) -M3/3 0} 96. Qff6 {(96.Qff6 Kb1
97.Qff8 Kc2 98.Qff6) +27.58/16 0} Kb1 {(96. ... Kb1) 0.00/63 0} 97. Qff8
{(97.Qff8 Ka1 98.Bb5 Kb1) +25.52/16 0} Ka1 {(97. ... Ka1) 0.00/63 0} 98.
Qfg7 {(98.Qfg7 Kb1 99.Qef6 Kc2 100.Qge7) +23.92/16 0} Kb2 {(98. ... Kb2
99.Qb4+ Ka2 100.Ra7+) -M3/3 0} 99. Qg2+ {(99.Qg2+ Kc1 100.Qd8 Kb1)
+25.19/16 0} Kb1 {(99. ... Kb1 100.Qd1+) -M2/2 0} 100. Qd1# {(100.Qd1+)
+24.66/15 1} 1-0[/pgn]
Follow my tournament and some Leela gauntlets live at http://twitch.tv/ccls
CMCanavessi
Posts: 1142
Joined: Thu Dec 28, 2017 4:06 pm
Location: Argentina

Re: LCZero update

Post by CMCanavessi »

noobpwnftw wrote:
CMCanavessi wrote: Decreased rate?

Code:

186 Leela Chess Zero Gen 12 x64            :  1097.6     246   61   25  160    30    10  1298.7    48    39.8
198 Leela Chess Zero Gen 10 x64            :   862.1      92   53   11   28    64    12   656.1    23    23.0
201 Leela Chess Zero Gen 8 x64             :   793.3      92   45   17   30    58    18   656.1    23    23.0
206 Leela Chess Zero Gen 6 x64             :   598.5      92   31   18   43    43    20   656.1    23    23.0
210 Leela Chess Zero Gen 4 x64             :   369.6     150   43   18   89    35    12   623.6    15    15.0
Care to provide a title to your figures?

According to the bible, every learning curve showed a decreasing rate after the initial breakthrough, so either they faked it or you are doing it wrong.

Although it is only a brief match against the previous generation here http://lczero.org/matches, can you calculate the W/L/D ratio and tell me how it is not improving at a slower rate, judging from the number of games between generations versus their corresponding stats?
Is it not obvious to you that draw rates are higher in recent generations than they were before? And since it is a small network anyway, is there any good reason not to try a bigger one, instead of struggling with the little headroom this one has left?
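As a sketch of the calculation being asked for here, a W/L/D record converts to a score percentage and then to an approximate Elo difference via the standard logistic rating model (the records used below are illustrative, not actual LCZero match results):

```python
import math

def elo_diff(wins, losses, draws):
    """Approximate Elo difference implied by a W/L/D record,
    using the standard logistic rating model."""
    games = wins + losses + draws
    score = (wins + 0.5 * draws) / games
    # Guard against perfect scores, where the formula diverges.
    score = min(max(score, 1e-9), 1 - 1e-9)
    return -400 * math.log10(1 / score - 1)

# Illustrative records: same win/loss margin, very different draw rates.
print(round(elo_diff(61, 25, 160), 1))  # high draw rate dilutes the Elo gap
print(round(elo_diff(61, 25, 6), 1))    # few draws: same margin, bigger gap
```

Note how a high draw rate shrinks the implied Elo difference even when the win/loss margin is unchanged, which is why the draw-rate trend matters when comparing generations.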

EDIT:
What matters more here anyway is either to learn how to do reinforcement learning in terms of generally acceptable resources versus relative outcome, as a verifiable reference, or simply to build a strong chess engine, per the project brief. Either way it is not likely to work with 6-block networks; you'd probably get an 1800 rating, if not lower.

Of course you can get higher ratings if other parts like search are improved, but that is unrelated to how you define the network. The bible didn't say much about failed experiments, but those are exactly what we should run, I think.
You're missing part of the whole picture here: those ratings (which I computed myself, since they are taken from my tournament that I broadcast live) reflect not only the increase in network strength, but also improvements made to the engine itself (like the NN cache, which gave a big speed boost, or time control support), bug fixes, and general progress.

Besides, we're not past the "breakthrough", as you call it; we're merely starting it, and the progress is roughly linear. Don't follow the graph on the homepage, as that's for self-play games and is completely misleading/useless. My ratings are calculated by playing several matches against a fixed group of opponents (changed for Gen 12, as the group was already too weak for it) at a consistent time control.

Edit: the team is now testing another approach called FPU that, tested with the same network, gives around +170 Elo on its own. That's the impact an engine improvement can make at this early stage.
Follow my tournament and some Leela gauntlets live at http://twitch.tv/ccls
Ovyron
Posts: 4556
Joined: Tue Jul 03, 2007 4:30 am

Re: LCZero update

Post by Ovyron »

Uri Blass wrote:I do not know what you mean by "plays like a 2900 elo entity 30% of its moves", and I think that even weak chess programs rated 1200 have no problem doing it (assuming we talk about something like Stockfish at depth 2 or 3), because there are a lot of cases where searching to a depth of 1 or 2 plies leads to the same move as searching to a depth of 30 plies.
Yeah, but above some strength, all engines are going to avoid the same blunders or make the same best recaptures. A human might not. Say there's a pawn recapture that leads to a discovered attack by the bishop on the opponent's queen, and the rest of the moves are -9.00s: all engines rated above 1200 would see it and avoid the blunder, but humans below 2100 might miss it. And I know, because I've been able to trap the queens of 2000+ rated players; those blunders are just rarer.

The reason I said "30% of the time 2900 rating" and not "30% of the time perfect chess" is that I meant that in those cases the human would use the same principles as some 2900 player, making move selections that would produce that strength if they were played 100% of the time; still, a stronger entity of 3000 Elo may not go for such choices. 2900 Elo players also make blunders and lose games; just because you play a 2900 Elo move doesn't mean it's good, as it may be the very blunder that keeps a 2900 player from a higher rating.

Uri Blass wrote:The way to identify weak humans is not by the fact that they play some strong moves, but by the special blunders they make, which are not random blunders.
Not even the humans themselves might know why they played the blunders they played. It's possible they aren't even blunders, but winning moves that would require some 4000 Elo entity to carry on and convert, moves that with our current knowledge and technology we can't help but call blunders. (In the AlphaZero games, A0 as White was making moves that Stockfish evaluated at +1.20 in its own favor, i.e. blunders, but they turned out to be winning moves.)

Imagine that chess continues to be very complex, and that in the years to come there will be breakthroughs, and there will be some Stockfish 15 version with a 3700 ELO rating.

This S15 would be making moves within 1 minute that no current engine can find within 1 minute, what I'd call "3700 Elo moves". Well, as a matter of fact, humans all over the world, playing at under one minute, do sometimes produce moves that would match those S15 selections; for that one move there's an Elo spike, while the rest may be of such low quality that the brilliant move goes irrelevant and unnoticed.

So, in order to emulate this, a NN needs to allow itself the chance to play those "out of this world" moves now and then, like a blind chicken. It's not only about emulating how humans make mistakes and how to replicate them; that's like chatbots tricking humans into thinking they're chatting with other humans. An NN also needs to emulate those brilliant moves that go beyond its own rating (say, the NN is rated 3000, yet is able to play some Stockfish 15 moves). This can't be achieved by just switching to one of its weaker personalities, but I think an adversarial network could do it, because it wouldn't focus on winning games, but on emulating the human style.
Your beliefs create your reality, so be careful what you wish for.
noobpwnftw
Posts: 560
Joined: Sun Nov 08, 2015 11:10 pm

Re: LCZero update

Post by noobpwnftw »

CMCanavessi wrote:
You're missing part of the whole picture here: those ratings (which I computed myself, since they are taken from my tournament that I broadcast live) reflect not only the increase in network strength, but also improvements made to the engine itself (like the NN cache, which gave a big speed boost, or time control support), bug fixes, and general progress.

Besides, we're not past the "breakthrough", as you call it; we're merely starting it, and the progress is roughly linear. Don't follow the graph on the homepage, as that's for self-play games and is completely misleading/useless. My ratings are calculated by playing several matches against a fixed group of opponents (changed for Gen 12, as the group was already too weak for it) at a consistent time control.

Edit: the team is now testing another approach called FPU that, tested with the same network, gives around +170 Elo on its own. That's the impact an engine improvement can make at this early stage.
Things you mentioned here have almost nothing to do with what type of network it should train, so I will just ask in a way you might understand: why is the current network 6 residual blocks with 64 input filters? Why is it not 5, 7, 8 or any other number of blocks, and what are the differences in particular? Setting aside the theoretical number of games required to train them and how fast they run, do you know how they would perform compared to the current one given the same amount of processing power? All the other improvements would still apply, or are you suggesting that the FPU approach you mentioned, or NN caching, is for 6-block networks only?

It is exactly because you have just started that there are things worth considering before devoting too much work to something that will ultimately be revised; there may even be a more efficient way to do it.

If you reject any objection to your methods, then why don't you stick to the A0 approach and train a 40-block network?
Last edited by noobpwnftw on Thu Mar 22, 2018 12:31 am, edited 2 times in total.
jkiliani
Posts: 143
Joined: Wed Jan 17, 2018 1:26 pm

Re: LCZero update

Post by jkiliani »

noobpwnftw wrote:Things you mentioned here have almost nothing to do with what type of network it should train, so I will just ask in a way you might understand: why is the current network 6 residual blocks with 64 input filters? Why is it not 5, 7, 8 or any other number of blocks, and what are the differences in particular? Setting aside the theoretical number of games required to train them, do you know how they would perform compared to the current one given the same amount of processing power? All the other improvements would still apply, or are you suggesting that the FPU approach you mentioned, or NN caching, is for 6-block networks only?
The network dimensions of 6 blocks, 64 filters were chosen arbitrarily, at a value very similar to the net used by Leela Zero before its first bootstrap (which was 5 blocks, 64 filters). The choice was simply to use a relatively small network, for fast improvement and to find and troubleshoot potential bugs. A range of values would have worked here, but a network of DeepMind's proportions (20 blocks, 256 filters) would have been a poor choice for starting such a project. I should stress here that people talk a lot about residual blocks, but much less about filters, although the number of filters is just as important for the representational power of a neural net. Both parameters have to go together.
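To make the blocks-versus-filters point concrete, here is a toy weight count for an AlphaZero-style residual tower. It assumes the standard two 3x3 convolutions per residual block; the 12 input planes are a placeholder (the real input encoding uses more planes), and batch norm and head weights are ignored:

```python
def tower_params(blocks, filters, input_planes=12):
    """Rough weight count for a residual tower: one 3x3 input
    convolution plus `blocks` residual blocks of two 3x3
    convolutions each (batch norm and head weights ignored)."""
    input_conv = 3 * 3 * input_planes * filters
    per_block = 2 * (3 * 3 * filters * filters)  # two convs per block
    return input_conv + blocks * per_block

# Filters enter quadratically, blocks only linearly:
print(tower_params(6, 64))    # a small net of the current proportions
print(tower_params(12, 64))   # doubling the blocks
print(tower_params(6, 128))   # doubling the filters
```

Doubling the blocks roughly doubles the weights, while doubling the filters roughly quadruples them, which is one reason the two have to be scaled together.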

Techniques such as NN cache and FPU reduction do apply to every neural net architecture.
noobpwnftw
Posts: 560
Joined: Sun Nov 08, 2015 11:10 pm

Re: LCZero update

Post by noobpwnftw »

jkiliani wrote: The network dimensions of 6 blocks, 64 filters were chosen arbitrarily, at a value very similar to the net used by Leela Zero before its first bootstrap (which was 5 blocks, 64 filters). The choice was simply to use a relatively small network, for fast improvement and to find and troubleshoot potential bugs. A range of values would have worked here, but a network of DeepMind's proportions (20 blocks, 256 filters) would have been a poor choice for starting such a project. I should stress here that people talk a lot about residual blocks, but much less about filters, although the number of filters is just as important for the representational power of a neural net. Both parameters have to go together.

Techniques such as NN cache and FPU reduction do apply to every neural net architecture.
I think there is also ongoing discussion on LZ about the filters, but changing them too much is effectively equivalent to another bootstrap from scratch, and their user base does not like that so much.

In chess this whole thing is relatively new, and I do not understand why people refuse to weigh these finer considerations before starting something.

If I may make a crude analogy: the input filters are like the bitboards, and the number of residual blocks is more or less like how many plies of alpha-beta you run on average to evaluate a position. You don't get to see a long-term advantage if you don't search that deep, or don't have enough residual blocks to hold that information; yet doing too much search, or having too many blocks, wastes time. There is no dark magic out of nowhere that will just work.
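The depth half of that analogy can be quantified: each 3x3 convolution lets information travel one more square in every direction, so the receptive field, a rough proxy for how far across the board the tower can "see" in one evaluation, grows linearly with the number of blocks. A minimal sketch, assuming 3x3 convolutions throughout as in the AlphaZero-style architecture:

```python
def receptive_field(blocks):
    """Side length of the receptive field after one 3x3 input
    convolution plus `blocks` residual blocks of two 3x3
    convolutions each: every 3x3 convolution adds 2 to the side."""
    convs = 1 + 2 * blocks
    return 1 + 2 * convs

print(receptive_field(6))  # 27: already spans the 8x8 board many times over
```

So even a 6-block tower "sees" the whole board geometrically; the open question is how much positional information the blocks and filters can actually hold, not how far they reach.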
jkiliani
Posts: 143
Joined: Wed Jan 17, 2018 1:26 pm

Re: LCZero update

Post by jkiliani »

noobpwnftw wrote:I think there is also ongoing discussion on LZ about the filters, but changing them too much is effectively equivalent to another bootstrap from scratch, and their user base does not like that so much.

In chess this whole thing is relatively new, and I do not understand why people refuse to weigh these finer considerations before starting something.
OK then let me ask you, how would you have set up such a project?

Leela Zero followed AGZ as far as possible, but had to pioneer in many regards where the AlphaGo Zero approach was either not feasible (a huge neural net from the start) or where there were unknown aspects of how DeepMind did it.

LCZero is in the relatively comfortable position that Leela Zero already did most of the work in setting up a reinforcement learning pipeline for a MCTS+NN project, and it only had to be adapted for chess. Why should LCZero not have simply adopted what LZ had already demonstrated?
Dann Corbit
Posts: 12538
Joined: Wed Mar 08, 2006 8:57 pm
Location: Redmond, WA USA

Re: LCZero update

Post by Dann Corbit »

Here's my take:

The approach taken is clearly feasible and it is getting results.

There may be a better approach. If the approach could be derived from first principles, such as "how large a network will fit completely into the RAM of the sort of video cards that are going to participate, and still cycle through an iteration in a reasonable time?", that might be a nice way to make a "back of the envelope" estimate of a good starting point.
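For what it's worth, the weight-storage half of that estimate is tiny; it is activation memory and throughput, not the weights themselves, that stress a GPU. A back-of-the-envelope sketch (hypothetical 12-input-plane tower, fp32 weights, heads, batch norm and activations ignored):

```python
def tower_megabytes(blocks, filters, input_planes=12, bytes_per_weight=4):
    """Back-of-the-envelope fp32 size of a residual tower's weights
    in MB: one 3x3 input conv plus two 3x3 convs per residual block
    (heads, batch norm and activation memory ignored)."""
    weights = (3 * 3 * input_planes * filters
               + blocks * 2 * 3 * 3 * filters * filters)
    return weights * bytes_per_weight / 1e6

for blocks, filters in [(6, 64), (20, 256)]:
    print(f"{blocks} blocks x {filters} filters: "
          f"{tower_megabytes(blocks, filters):.1f} MB")
```

Even a DeepMind-proportioned 20x256 tower is well under 100 MB of weights, so the binding first-principles constraint is indeed iteration time rather than fitting the net into video RAM.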

Another approach would be a binary search over the starting parameters to find the set that resolves fastest/best (and fastest is not necessarily best).

It is, of course, possible that the time spent finding the optimal settings exceeds the time saved over converging from an imperfect set of starting conditions, so that by the time you have found the best set of parameters, you would already have solved the original problem.

We're probably not going to get SETI or Folding@home type of participation here. So it is good to use the resources wisely. But how to approach that is not necessarily cut and dried.
Taking ideas is not a vice, it is a virtue. We have another word for this. It is called learning.
But sharing ideas is an even greater virtue. We have another word for this. It is called teaching.
noobpwnftw
Posts: 560
Joined: Sun Nov 08, 2015 11:10 pm

Re: LCZero update

Post by noobpwnftw »

jkiliani wrote:
noobpwnftw wrote:I think there is also ongoing discussion on LZ about the filters, but changing them too much is effectively equivalent to another bootstrap from scratch, and their user base does not like that so much.

In chess this whole thing is relatively new, and I do not understand why people refuse to weigh these finer considerations before starting something.
OK then let me ask you, how would you have set up such a project?

Leela Zero followed AGZ as far as possible, but had to pioneer in many regards where the AlphaGo Zero approach was either not feasible (a huge neural net from the start) or where there were unknown aspects of how DeepMind did it.

LCZero is in the relatively comfortable position that Leela Zero already did most of the work in setting up a reinforcement learning pipeline for a MCTS+NN project, and it only had to be adapted for chess. Why should LCZero not have simply adopted what LZ had already demonstrated?
If I were doing it, I would start from even smaller networks, and track the relationship between training effort, learning rate, optimal batch size and strength, since we now have far more comprehensive ways to measure them than in the early stages of LZ.
Since we have limited resources, a wise choice matters more than trying to prove "it works", essentially reinventing the wheels of what LZ did.
I would also probably ditch the idea of being "zero", which is likewise part of proving it works, given the above circumstances.
jkiliani
Posts: 143
Joined: Wed Jan 17, 2018 1:26 pm

Re: LCZero update

Post by jkiliani »

Does anyone here know how engines can be entered into the CCRL 40/40 or similar contests? And has the team managing this server given any thought yet to how fair conditions for NN-based engines can be provided? I.e., can they also provide some modern GPUs for engines that need them? If so, LCZero could soon (or already) enter some tournaments, with a fixed binary and network, for example as an entry "LCZero 0.3 GTX-1080 6a5ccda2".