A random walk down NNUE street ….

Discussion of anything and everything relating to chess playing software and machines.

Moderators: hgm, Rebel, chrisw

User avatar
MikeB
Posts: 4889
Joined: Thu Mar 09, 2006 6:34 am
Location: Pen Argyl, Pennsylvania

Re: A random walk down NNUE street ….

Post by MikeB »

While waiting for the next generation of fens to complete, I'm running a small match vs Zappa Mexico. A few days ago Zappa Mexico was scoring 80%, a plus-200 Elo differential; they are clearly closer in strength now. Some of these games almost look like two drunken GMs playing late at night. Check out the last game, where the black queen gets entombed in prison, to its detriment, for pawn grabbing. The queen was held in prison for 20 moves before its execution (moves 16 to 36).


[pgn]
[Event "Computer Chess Game"]
[Site "VM-894787"]
[Date "2021.03.08"]
[Round "1"]
[White "Zappa Mexico II"]
[Black "Stockfish 13"]
[Result "1-0"]
[TimeControl "60+1"]
[Annotator "1. +0.20 1... +0.05"]

1. d4 {+0.20/12} d5 {+0.05/17 7} 2. Bf4 {+0.19/11 4} e6 {+0.14/15 1.6} 3.
e3 {+0.28/11 4} a6 {+0.14/14 1.7} 4. Nf3 {+0.41/11 2.7} Nf6 {+0.14/15 2.5}
5. Bd3 {+0.42/11 4} b6 {+0.14/17 10} 6. O-O {+0.48/11 2.9} Bb7 {+0.14/16 3}
7. Nbd2 {+0.47/10 3} Nbd7 {+0.14/15 1.9} 8. c4 {+0.47/11 3} dxc4
{+0.14/14 1.1} 9. Nxc4 {+0.64/11 4} Be7 {+0.00/19 9} 10. Nfe5 {+0.53/11 6}
Nxe5 {+0.09/18 4} 11. Bxe5 {+0.47/11 1.4} O-O {+0.08/16 2.5} 12. Qc2
{+0.59/9 1.2} Rc8 {+0.06/17 1.4} 13. Bxf6 {+0.63/10 6} Bxf6 {-0.07/22 2.3}
14. Bxh7+ {+0.73/12 1.1} Kh8 {-0.05/23 0.5} 15. Be4 {+0.78/13 1.1} Bxe4
{+0.05/22 1.4} 16. Qxe4 {+0.78/12 0.2} c5 {+0.04/22 1.7} 17. dxc5
{+0.72/11 1.4} Rxc5 {-0.04/21 1.0} 18. b4 {+0.70/12 0.9} Rc7 {+0.04/23 6}
19. Rad1 {+0.97/11 1.3} Qa8 {+0.04/21 1.1} 20. Qf4 {+1.04/11 1.8} Qb8
{+0.04/20 3} 21. Ne5 {+1.00/11 1.3} Qc8 {+0.06/18 0.7} 22. Rd6
{+1.22/10 1.2} Rd8 {+0.04/20 1.5} 23. Rxb6 {+1.36/11 1.5} Kg8 {-0.05/22 4}
24. Ng4 {+1.42/11 1.8} Bb2 {-0.04/19 0.9} 25. h3 {+1.42/11 1.6} Rc6
{-0.04/21 2.0} 26. Rxc6 {+1.58/6 0.1} Qxc6 {-0.04/21 0.8} 27. Qg5
{+1.03/10 3} Qc7 {-0.04/21 5} 28. e4 {+1.39/10 1.1} Kf8 {-0.05/17 2.0} 29.
b5 {+1.62/10 2.0} axb5 {-0.03/18 1.1} 30. Qxb5 {+1.61/11 1.6} Ba3
{-0.03/18 1.4} 31. Qb3 {+1.57/11 1.6} Bd6 {-0.04/19 2.2} 32. Rd1
{+1.60/10 1.1} Kg8 {-0.05/19 1.4} 33. e5 {+1.82/11 1.0} Be7 {-0.04/17 0.6}
34. Rb1 {+1.73/11 1.0} Rc8 {-0.05/19 3} 35. Kh2 {+1.95/10 2.2} Qc2
{-0.05/18 0.8} 36. Qxc2 {+2.37/12 1.1} Rxc2 {+0.00/18 0.3} 37. Rb8+
{+2.26/13 1.5} Kh7 {+0.00/20 1.1} 38. Rb7 {+2.26/13 0.1} Bc5 {-0.03/19 0.5}
39. Rxf7 {+2.26/12 0.2} Rxa2 {+0.00/20 0.6} 40. Kg3 {+2.21/13 1.1} Kg6
{-0.02/20 0.5} 41. Rd7 {+2.19/14 1.6} Ra4 {-0.02/20 0.6} 42. h4
{+2.26/13 1.2} Ra7 {-0.02/24 2.7} 43. Rxa7 {+3.19/15 1.1} Bxa7
{-0.02/24 0.5} 44. f4 {+3.37/16 1.6} Bg1 {-0.03/22 0.8} 45. Nf2
{+3.39/12 0.2} Kf7 {-0.03/24 0.5} 46. Kf3 {+7.25/16 0.9} Kg6 {-8.66/26 5}
47. g4 {+7.25/13 0.1} Kf7 {-13.36/26 1.8} 48. Kg2 {+10.05/15 0.8} Bh2
{-13.42/20 1.1} 49. Kxh2 {+10.02/10 0.1} Ke8 {-10.46/21 0.1} 50. Ne4
{+14.15/13 0.9} Ke7 {-73.01/31 1.8} 51. f5 {+14.15/12 0.1} Kf8
{-27.46/23 0.5} 52. Ng5 {+14.55/10 0.1} exf5 {-72.34/30 0.8} 53. Ne6+
{+16.36/14 1.6} Kf7 {-72.34/28 0.2} 54. gxf5 {+16.36/14 0.1} Kg8
{-138.39/30 1.0} 55. Nxg7 {+19.31/14 1.7} Kxg7 {-1000.12/44 0.7} 56. f6+
{+19.78/13 0.9} Kg6 {-1000.11/33 0.3} 57. h5+ {+1000.11/14 2.5} Kf7
{-1000.10/40 0.3} 58. Kg3 {+1000.10/13 1.4} Ke8 {-1000.09/42 0.4} 59. h6
{+1000.09/11 0.1} Kd7 {-1000.08/40 0.5} 60. h7 {+1000.08/12 1.0} Kc6
{-1000.07/40 0.5} 61. f7 {+1000.07/11 0.1} Kb5 {-1000.06/43 0.6} 62. h8=Q
{+1000.06/11 0.3} Kc4 {-1000.05/47 0.6} 63. Qc8+ {+1000.05/10 0.1} Kb4
{-1000.04/126 0.6} 64. f8=Q+ {+1000.04/11 0.1} Kb3 {-1000.03/245 0.6} 65.
Qf3+ {+1000.03/25 0.1} Kb2 {-1000.02/245 0.6} 66. Qfb7+ {+1000.02/63 0.1}
Ka1 {-1000.01/245 0.5} 67. Qca8# {+1000.01/63 0.1}
{Xboard adjudication: Checkmate} 1-0

[Event "Computer Chess Game"]
[Site "VM-894787"]
[Date "2021.03.08"]
[Round "2"]
[White "Stockfish 13"]
[Black "Zappa Mexico II"]
[Result "0-1"]
[TimeControl "60+1"]
[Annotator "1. -0.12 1... -0.19"]

1. Nf3 {-0.12/16} d5 {-0.19/12 5} 2. e3 {+0.15/16 3} Nf6 {-0.15/12 5} 3. d4
{+0.14/15 1.5} e6 {-0.12/11 5} 4. a3 {+0.15/15 3} Bd6 {+0.03/11 9} 5. c4
{+0.15/17 2.5} b6 {+0.11/11 6} 6. Nbd2 {+0.15/15 1.6} c5 {-0.04/10 5} 7.
cxd5 {+0.13/20 7} exd5 {+0.03/12 2.5} 8. Bb5+ {+0.09/17 6} Bd7
{+0.48/9 0.2} 9. Bxd7+ {+0.09/15 1.4} Nbxd7 {+0.37/12 2.0} 10. Nb1
{+0.12/15 0.7} O-O {+0.50/11 1.1} 11. Nc3 {+0.00/17 4} Re8 {+0.48/11 2.4}
12. dxc5 {+0.07/19 5} Nxc5 {+0.72/11 1.2} 13. O-O {+0.11/18 3} Rc8
{+0.63/10 1.9} 14. Ne2 {+0.10/18 4} Nce4 {+0.66/10 4} 15. Nfd4 {+0.09/19 4}
Ng4 {+1.08/9 1.1} 16. g3 {+0.09/22 0.8} Qg5 {+1.22/9 1.5} 17. h3
{+0.08/20 3} Ngf6 {+1.04/10 1.8} 18. Kg2 {+0.09/18 2.3} Qg6 {+1.15/8 1.7}
19. a4 {+0.09/19 5} h5 {+1.26/8 1.4} 20. Nf4 {+0.08/17 1.9} Bxf4
{+1.19/11 1.5} 21. exf4 {+0.08/19 0.8} Nd6 {+1.07/12 1.9} 22. a5
{+0.07/21 4} Qe4+ {+1.03/10 1.1} 23. Kh2 {+0.06/19 1.6} b5 {+0.95/10 1.4}
24. f3 {+0.06/19 1.3} Qh7 {+1.16/11 3} 25. Rf2 {+0.05/19 1.4} Qg6
{+1.05/10 1.4} 26. Nc2 {+0.00/21 5} Nf5 {+1.78/9 1.1} 27. g4 {-0.08/20 1.1}
hxg4 {+1.23/10 1.1} 28. fxg4 {+0.00/21 1.3} Ne4 {+1.05/9 1.3} 29. Re2
{+0.00/18 1.4} a6 {+0.97/9 1.7} 30. Ra3 {-0.09/17 1.3} Nfd6 {+0.88/9 1.4}
31. Rd3 {-0.06/16 0.7} f5 {+0.72/9 1.4} 32. Ne3 {-0.06/19 1.5} fxg4
{+0.63/10 1.5} 33. Nxg4 {+0.00/18 0.4} Qe6 {+0.61/10 2.0} 34. Ne5
{+0.00/19 2.8} Nf5 {+0.78/9 1.0} 35. Nf3 {+0.00/18 1.2} Rc5 {+0.93/9 1.1}
36. Bd2 {-0.04/17 1.3} Rc4 {+0.77/9 1.8} 37. b3 {+0.00/16 0.6} Rcc8
{+0.73/9 0.8} 38. Rg2 {-0.05/16 0.4} Qf6 {+0.90/9 1.2} 39. Rxd5
{-0.06/16 0.6} Nxd2 {+1.80/11 1.4} 40. Qxd2 {-0.06/18 0.4} Ne3
{+1.80/10 0.1} 41. Qd4 {-0.05/19 0.8} Qxd4 {+2.28/11 1.0} 42. Rxd4
{-0.05/20 0.4} Nxg2 {+2.28/11 0.1} 43. Kxg2 {-0.06/23 0.9} Re2+
{+2.57/12 2.7} 44. Kf1 {-0.05/20 0.7} Ra2 {+2.57/11 0.1} 45. Rd2
{-0.05/20 0.6} Rxa5 {+2.77/11 0.1} 46. Kf2 {-0.05/20 0.6} Ra1
{+2.99/10 1.1} 47. Kg3 {-0.05/19 0.5} Rc3 {+3.94/11 1.0} 48. Kg4
{-0.05/22 1.2} Rxb3 {+3.79/12 1.6} 49. Ng5 {-0.04/22 2.2} Kf8
{+3.69/12 1.6} 50. Rd7 {-0.04/18 0.3} Re3 {+3.56/11 2.7} 51. Rb7
{-0.05/23 2.1} g6 {+3.89/12 1.8} 52. Rb6 {-0.04/19 0.4} Ke8 {+3.77/11 2.0}
53. Rxg6 {-0.05/20 0.6} Rg1+ {+3.23/11 2.9} 54. Kh5 {-0.04/21 1.9} Rf1
{+3.23/11 2.0} 55. f5 {-0.04/19 0.4} Rxf5 {+3.59/11 0.9} 56. Kg4
{-0.05/20 1.2} Rf2 {+3.81/11 1.0} 57. Rxa6 {-0.03/18 0.6} b4 {+3.81/10 0.2}
58. Ra8+ {-2.79/22 5} Kd7 {+3.93/12 1.2} 59. Ra7+ {-3.09/19 0.7} Kc6
{+4.14/11 1.1} 60. Ra6+ {-3.41/20 0.9} Kb7 {+4.12/12 1.2} 61. Ra4
{-2.17/18 0.4} Rb2 {+4.09/12 1.3} 62. Ra1 {-2.73/20 2.6} Rd3 {+4.32/10 1.0}
63. Rf1 {-0.05/16 0.7} b3 {+4.44/9 1.1} 64. Ne6 {-1.12/17 0.5} Kc6
{+4.46/10 1.0} 65. Nf4 {-0.30/17 0.9} Rd4 {+4.18/10 1.0} 66. Kf3
{-1.69/19 0.7} Rdd2 {+4.34/9 0.7} 67. Ke3 {-0.75/16 0.5} Rh2 {+4.32/10 0.8}
68. Ke4 {-2.06/19 1.9} Ra2 {+4.38/10 0.8} 69. Rb1 {-0.93/18 0.5} Rhb2
{+4.35/10 1.0} 70. Rd1 {-0.93/17 0.4} Rd2 {+4.39/11 1.5} 71. Rb1
{-2.92/19 1.1} Rdb2 {+4.35/11 1.8} 72. Rg1 {-2.82/20 2.8} Ra4+
{+4.27/10 1.8} 73. Ke3 {-2.64/18 0.2} Rh2 {+4.32/10 0.9} 74. Rc1+
{-2.60/19 0.4} Kd6 {+4.20/11 1.2} 75. Rd1+ {-2.56/20 0.6} Kc6
{+4.25/11 1.5} 76. Rc1+ {-2.49/19 0.4} Kb5 {+4.29/11 0.8} 77. Rb1
{-3.71/21 1.6} Kc4 {+4.34/12 1.0} 78. h4 {-4.97/20 2.9} Rxh4 {+5.01/11 0.8}
79. Ne2 {-6.71/19 1.4} Ra3 {+5.41/10 1.1} 80. Rc1+ {-8.61/15 0.5} Kb4
{+7.61/10 1.0} 81. Rb1 {-3.86/19 0.6} b2+ {+7.61/10 0.1} 82. Kd2
{-0.31/17 0.2} Rb3 {+7.61/10 0.1} 83. Kd1 {-1000.12/33 1.2} Kc4
{+1000.14/12 1.4} 84. Ng1 {-1000.10/36 0.3} Rh2 {+1000.10/13 0.8} 85. Ne2
{-1000.09/46 0.3} Rd3+ {+1000.09/12 0.2} 86. Ke1 {-1000.08/51 0.4} Rh1+
{+1000.08/12 0.1} 87. Kf2 {-1000.07/53 0.4} Rxb1 {+1000.07/12 0.1} 88. Ng3
{-1000.06/57 0.4} Ra1 {+1000.06/11 0.2} 89. Kg2 {-1000.05/63 0.5} b1=Q
{+1000.05/10 0.1} 90. Kh3 {-1000.04/86 0.5} Qb8 {+1000.04/10 0.1} 91. Kg4
{-1000.03/245 0.6} Qxg3+ {+1000.03/36 0.1} 92. Kf5 {-1000.02/245 0.6} Rf1+
{+1000.02/63 0.1} 93. Ke4 {-1000.01/245 0.6} Qe3# {+1000.01/63 0.1}
{Xboard adjudication: Checkmate} 0-1

[Event "Computer Chess Game"]
[Site "VM-894787"]
[Date "2021.03.08"]
[Round "3"]
[White "Zappa Mexico II"]
[Black "Stockfish 13"]
[Result "0-1"]
[TimeControl "60+1"]
[Annotator "1. +0.20 1... +0.05"]

1. d4 {+0.20/12} d5 {+0.05/17 7} 2. Bf4 {+0.19/12 4} e6 {-0.01/18 8} 3. e3
{+0.28/11 4} Nf6 {+0.13/16 2.8} 4. Nf3 {+0.34/11 7} Bd6 {+0.10/16 1.0} 5.
Ne5 {+0.34/11 3} Ne4 {+0.11/18 3} 6. f3 {+0.26/10 2.9} Nf6 {+0.13/17 6} 7.
g4 {+0.17/10 4} Nfd7 {+0.14/16 0.8} 8. Nc3 {+0.19/11 6} Bxe5 {+0.13/18 3}
9. dxe5 {-0.13/13 5} c5 {+0.11/19 4} 10. h4 {+0.21/9 1.2} Nc6
{+0.10/17 1.5} 11. Bb5 {-0.27/10 2.1} O-O {+0.13/18 2.0} 12. Bxc6
{-0.18/10 1.0} bxc6 {+0.11/18 1.1} 13. Qe2 {-0.46/11 1.6} a5 {+0.10/17 2.1}
14. O-O-O {-0.31/10 0.8} a4 {+0.10/15 0.7} 15. a3 {-0.15/10 0.8} Qb6
{+0.11/20 2.1} 16. Qh2 {-0.06/10 1.0} Rb8 {+0.13/21 1.6} 17. Nxa4
{-0.53/11 1.7} Qa5 {+0.12/20 1.0} 18. b3 {-0.51/12 1.4} c4 {+0.13/22 2.1}
19. Qd2 {-0.63/12 1.6} Qa8 {+0.13/21 1.0} 20. Qc3 {-0.99/11 1.7} cxb3
{+0.13/20 1.3} 21. cxb3 {-0.61/11 1.0} Ba6 {+0.12/20 2.1} 22. Kd2
{-0.81/11 3} c5 {+0.13/19 1.4} 23. Ke1 {-0.84/10 1.0} Rfc8 {+0.12/22 2.5}
24. Qc2 {-1.05/10 1.3} Rb5 {+0.11/22 12} 25. Nc3 {-0.22/9 1.1} Rb6
{+0.12/22 6} 26. Kf2 {-0.27/10 1.2} Bb7 {+0.11/18 0.5} 27. a4 {-0.18/9 1.5}
Bc6 {+0.11/17 1.5} 28. h5 {-0.22/9 1.4} Rb4 {+0.11/19 1.7} 29. h6
{+0.46/9 0.8} g6 {+0.12/18 0.7} 30. Na2 {+0.06/8 0.9} Rb6 {+0.10/17 0.6}
31. Nc3 {+0.00/9 0.8} d4 {+0.10/20 2.1} 32. Ne4 {-0.26/10 2.0} Bd5
{+0.07/19 1.4} 33. Nd2 {-0.41/9 1.9} dxe3+ {+0.10/18 1.1} 34. Bxe3
{-0.48/10 0.9} Nxe5 {+0.07/19 0.6} 35. Rh3 {-0.41/10 1.1} Rb4
{+0.04/17 1.0} 36. Qc3 {-0.48/9 1.0} Qb8 {+0.08/17 0.5} 37. f4
{+0.00/9 0.9} Nxg4+ {+0.11/15 0.6} 38. Kg3 {-0.23/11 4} e5 {+0.00/19 0.8}
39. Kxg4 {-0.34/10 1.2} f5+ {+1.78/21 6} 40. Kh4 {+0.53/10 1.4} g5+
{+1.75/18 0.3} 41. Kg3 {-2.11/9 1.9} exf4+ {+1.63/15 0.3} 42. Kf2
{-2.11/10 2.3} fxe3+ {+2.40/16 0.3} 43. Qxe3 {-2.02/10 0.9} Rf4+
{+4.13/18 0.4} 44. Kg1 {-2.76/11 1.6} Re8 {+4.27/22 0.9} 45. Qc3
{-3.17/10 1.4} Rg4+ {+4.25/22 0.6} 46. Kf2 {-3.17/11 1.0} Re7
{+4.78/23 0.5} 47. a5 {-4.41/9 1.3} Rg2+ {+6.67/22 0.7} 48. Kf1
{-3.41/5 0.1} Rh2 {+7.59/22 0.7} 49. Qg3 {-5.68/9 1.7} Rh1+ {+8.13/20 0.6}
50. Kf2 {-7.70/11 1.1} Qxg3+ {+8.48/19 0.7} 51. Rxg3 {-7.70/11 0.2} Rxd1
{+8.73/22 1.0} 52. Rxg5+ {-8.72/12 2.1} Kf7 {+8.86/21 0.7} 53. Nf1
{-9.36/12 4} Kf6 {+9.19/22 0.9} 54. Rg3 {-11.00/12 2.8} c4 {+9.86/22 0.8}
55. bxc4 {-9.78/11 2.2} Bxc4 {+22.69/23 0.9} 56. Rc3 {-12.40/11 1.8} Rxf1+
{+1000.10/31 0.7} 57. Kg2 {-12.40/11 0.1} Rf4 {+1000.08/43 0.7} 58. Kg3
{-1000.09/11 1.6} Kg5 {+1000.07/53 0.8} 59. Rf3 {-1000.06/12 0.8} Rg4+
{+1000.06/52 0.8} 60. Kh3 {-1000.05/12 0.1} Bd5 {+1000.05/68 0.8} 61. Rg3
{-1000.04/12 0.1} Rxg3+ {+1000.04/235 0.8} 62. Kxg3 {-1000.03/10 0.1} Re2
{+1000.03/245 0.5} 63. Kh3 {-1000.02/13 0.1} f4 {+1000.02/245 0.5} 64. a6
{-1000.01/14 0.1} Be6# {+1000.01/245 0.6}
{Xboard adjudication: Checkmate} 0-1

[Event "Computer Chess Game"]
[Site "VM-894787"]
[Date "2021.03.08"]
[Round "4"]
[White "Stockfish 13"]
[Black "Zappa Mexico II"]
[Result "1-0"]
[TimeControl "60+1"]
[Annotator "1. +0.08 1... -0.03"]

1. e3 {+0.08/16} e5 {-0.03/13 10} 2. Ne2 {+0.12/16 5} d5 {+0.24/12 4} 3. d4
{+0.14/15 0.8} Nc6 {+0.36/11 2.8} 4. c3 {+0.13/18 4} Nf6 {+0.52/11 5} 5. a3
{+0.15/17 5} Bd6 {+0.74/11 5} 6. g3 {+0.14/16 2.4} O-O {+0.91/11 4} 7. Bg2
{+0.14/18 2.2} Bg4 {+0.90/11 2.8} 8. f3 {+0.13/19 3} Bf5 {+0.87/12 6} 9.
O-O {+0.12/18 1.5} Re8 {+0.89/11 2.3} 10. g4 {+0.13/17 4} Bg6
{+0.96/10 1.6} 11. h3 {+0.08/18 3} Qe7 {+1.08/9 1.3} 12. Rf2 {+0.13/19 4}
a5 {+1.25/9 1.8} 13. a4 {+0.11/18 1.9} h5 {+1.11/9 1.2} 14. Na3
{+0.08/20 2.0} hxg4 {+1.03/10 1.6} 15. hxg4 {+0.08/19 0.9} e4
{+1.00/10 1.6} 16. Nf4 {+0.07/20 1.1} exf3 {+0.95/11 1.3} 17. Bxf3
{-0.10/21 2.8} Be4 {+1.04/10 1.2} 18. g5 {-0.05/21 3} Nh7 {+0.75/10 2.2}
19. Bxe4 {+0.07/19 0.9} Qxe4 {+0.59/11 2.6} 20. Nb5 {+0.05/18 1.0} Nxg5
{+1.15/11 1.1} 21. Nxd6 {-0.07/21 1.5} cxd6 {+1.15/10 0.2} 22. Qh5
{-0.08/22 1.1} f6 {+1.00/11 2.1} 23. Nh3 {-0.08/17 1.3} Nxh3+
{+1.10/11 2.0} 24. Qxh3 {-0.08/19 1.1} Rad8 {+1.05/11 2.3} 25. Ra3
{-0.07/23 5} Qb1 {+1.11/10 1.1} 26. Rf1 {-0.10/18 1.0} Qg6+ {+1.02/11 1.2}
27. Qg2 {-0.04/21 1.3} Qf7 {+0.97/11 1.0} 28. Rb3 {-0.08/20 2.9} Re4
{+0.84/11 1.8} 29. Kf2 {+0.06/18 0.9} Rd7 {+0.74/10 3} 30. Rb5 {+0.04/20 3}
Rde7 {+0.43/10 1.2} 31. Ke1 {+0.06/20 2.4} Rc7 {+0.21/10 2.2} 32. Bd2
{+0.00/23 9} Qe6 {+0.03/10 1.4} 33. Kd1 {+0.06/21 1.1} Kf7 {-0.01/10 1.5}
34. Qh1 {+0.06/17 0.9} Qg4+ {+0.18/10 0.9} 35. Kc1 {+0.04/16 0.5} Nb4
{+0.18/10 0.7} 36. Kb1 {+0.04/19 0.5} Qe2 {+0.66/10 1.6} 37. cxb4
{+0.04/20 0.5} Qxd2 {+0.66/10 0.1} 38. Rc1 {+0.03/20 0.9} Rxc1+
{+0.17/10 2.1} 39. Qxc1 {+0.02/22 0.8} Qd3+ {+0.17/10 0.2} 40. Qc2
{+0.02/23 0.7} Qxe3 {+0.48/10 1.2} 41. Rxb7+ {+0.02/21 0.7} Kg6
{+0.17/12 1.9} 42. bxa5 {+0.04/19 0.8} Qg1+ {+0.00/11 2.1} 43. Ka2
{+0.04/21 0.9} Qxd4 {+0.00/12 0.8} 44. b3 {+0.02/21 0.7} f5 {-0.19/12 2.1}
45. a6 {+0.07/18 0.8} Qe5 {-0.41/12 1.4} 46. Qg2+ {+0.05/20 1.8} Kh7
{-0.64/11 1.9} 47. Qh1+ {+0.08/22 1.8} Kg6 {-0.64/13 2.2} 48. Qg1+
{+0.08/23 0.4} Rg4 {-0.21/11 1.1} 49. Qc1 {+0.08/24 0.8} Re4 {-0.21/11 0.7}
50. Ka3 {+0.08/21 1.4} Re1 {+0.00/7 0.1} 51. Qc2 {+0.09/18 0.7} Re2
{+0.00/10 0.6} 52. Qd3 {+0.08/17 0.7} Re3 {-0.62/10 2.3} 53. Qf1
{+0.08/20 3} Re1 {-1.05/10 2.5} 54. Qg2+ {+0.09/16 0.5} Kh7 {-1.13/10 1.1}
55. Rc7 {+1.13/22 0.8} Qa1+ {-1.16/11 1.4} 56. Kb4 {+1.32/18 0.5} Re8
{-1.70/10 1.9} 57. a7 {+1.55/20 1.4} Qd4+ {-1.66/9 1.3} 58. Kb5
{+2.27/17 0.7} Rd8 {-1.69/9 0.8} 59. Qg5 {+3.44/20 0.7} Rf8 {-2.35/11 1.5}
60. Qe7 {+4.21/19 0.6} Ra8 {-2.70/10 1.3} 61. Ka6 {+5.78/19 0.9} Kh6
{-3.31/10 1.3} 62. Qxd6+ {+6.32/19 0.8} g6 {-3.08/10 1.2} 63. Qc6
{+6.83/19 0.8} Rxa7+ {-3.36/9 1.1} 64. Rxa7 {+7.70/18 0.7} f4
{-3.95/11 1.1} 65. Rd7 {+8.05/19 0.8} Qd3+ {-4.95/10 1.1} 66. Ka5
{+8.50/18 0.7} Qe3 {-4.93/9 1.0} 67. b4 {+10.22/18 1.0} Qe5 {-5.43/10 1.0}
68. b5 {+10.68/20 0.9} Qd4 {-5.01/9 1.0} 69. b6 {+12.17/17 0.8} Qd2+
{-7.63/9 1.0} 70. Ka6 {+13.19/19 1.0} Qd3+ {-13.40/9 1.0} 71. Ka7
{+17.16/20 0.9} Qd4 {-19.73/10 1.0} 72. Rxd5 {+79.09/31 1.6} Qe3
{-1000.08/11 1.0} 73. Qf6 {+1000.10/35 0.7} Qe8 {-26.25/8 1.0} 74. Qxf4+
{+1000.09/39 0.9} Kh7 {-1000.12/10 1.0} 75. b7 {+1000.08/41 0.8} Qe7
{-1000.07/12 1.0} 76. Qd2 {+1000.07/45 0.8} Kg8 {-1000.06/12 0.5} 77. Rd7
{+1000.06/52 0.8} Qc5+ {-1000.05/13 0.1} 78. Ka8 {+1000.05/113 0.8} Qf8+
{-1000.04/23 0.1} 79. b8=R {+1000.04/245 0.7} Qxb8+ {-1000.03/63 0.1} 80.
Kxb8 {+1000.03/245 0.7} Kf8 {-1000.02/63 0.1} 81. Qd5 {+1000.02/245 0.7}
Ke8 {-1000.01/13 0.2} 82. Qf7# {+1000.01/245 0.6}
{Xboard adjudication: Checkmate} 1-0

[Event "Computer Chess Game"]
[Site "VM-894787"]
[Date "2021.03.08"]
[Round "5"]
[White "Zappa Mexico II"]
[Black "Stockfish 13"]
[Result "1/2-1/2"]
[TimeControl "60+1"]
[Annotator "1. +0.20 1... +0.05"]

1. d4 {+0.20/12} d5 {+0.05/17 7} 2. Bf4 {+0.19/11 4} e6 {+0.14/16 1.6} 3.
e3 {+0.28/11 4} Nf6 {+0.14/16 3} 4. Nf3 {+0.20/11 8} Nbd7 {+0.13/16 6} 5.
Bd3 {+0.38/11 3} c5 {+0.14/15 2.6} 6. dxc5 {+0.32/11 2.9} Bxc5 {+0.14/17 4}
7. Nc3 {+0.26/11 2.7} O-O {+0.13/16 0.6} 8. O-O {+0.25/11 2.3} a6
{+0.13/19 6} 9. a3 {+0.21/10 2.5} b5 {+0.13/20 11} 10. b4 {+0.14/10 2.9}
Be7 {+0.12/20 1.2} 11. Ne2 {-0.02/11 6} Bb7 {+0.13/20 2.0} 12. c3
{-0.11/11 5} Nb6 {+0.14/20 4} 13. Qc2 {-0.11/10 0.8} Na4 {+0.14/19 1.8} 14.
Be5 {-0.18/9 1.1} h6 {+0.14/20 1.7} 15. Bd4 {-0.28/10 1.6} Bd6
{+0.13/19 1.3} 16. Ne5 {-0.32/10 0.7} Bxe5 {+0.12/21 1.0} 17. Bxe5
{-0.08/11 1.4} Nd7 {+0.12/22 1.2} 18. Bg3 {-0.24/10 1.5} Re8 {+0.12/19 4}
19. Rfd1 {-0.10/9 0.8} Ndb6 {+0.13/20 3} 20. e4 {-0.10/9 1.1} Rc8
{+0.11/17 1.9} 21. exd5 {-0.35/10 2.9} Bxd5 {+0.12/18 0.6} 22. f3
{-0.43/10 1.3} Qg5 {+0.12/17 2.3} 23. Bf4 {-0.38/9 0.8} Qe7 {+0.11/20 1.7}
24. Be5 {-0.39/9 0.9} Qg5 {+0.11/20 2.8} 25. Bf4 {-0.49/11 1.8} Qd8
{+0.11/19 1.5} 26. Be3 {-0.22/10 2.8} Re7 {+0.10/20 4} 27. Bd4
{+0.00/8 0.7} Rd7 {+0.11/18 1.5} 28. Kh1 {-0.21/9 1.6} h5 {+0.11/20 4} 29.
Ng3 {-0.02/9 2.1} h4 {+0.10/17 1.0} 30. Nh5 {-0.07/9 1.0} f6 {+0.08/16 0.3}
31. h3 {-0.15/8 2.3} Rc6 {+0.10/16 1.1} 32. Qf2 {-0.11/9 3} e5
{+0.08/16 0.8} 33. Bxb6 {-0.15/10 0.7} Nxb6 {+0.05/16 0.4} 34. Bf5
{-0.10/10 0.8} Be6 {+0.05/20 1.4} 35. Bxe6+ {-0.21/10 0.6} Rxe6
{+0.06/20 0.4} 36. Rxd7 {-0.05/11 1.7} Qxd7 {+0.05/19 0.5} 37. Qxh4
{-0.05/11 0.1} Rc6 {+0.05/18 0.9} 38. Qe1 {-0.15/11 1.1} Qe6 {+0.05/19 1.5}
39. Rd1 {+0.54/10 1.0} Kf7 {+0.00/19 1.9} 40. Ng3 {+0.61/10 0.9} f5
{+0.04/18 0.5} 41. Rd8 {+0.43/10 1.1} Rc8 {+0.03/20 1.7} 42. Rxc8
{+0.63/11 0.9} Nxc8 {+0.03/16 0.5} 43. Qf2 {+0.51/12 4} g6 {+0.03/19 1.5}
44. Nf1 {+0.59/11 0.8} Ke8 {+0.03/17 0.8} 45. Qc5 {+0.74/11 1.1} Kd7
{+0.02/17 0.3} 46. Qf8 {+0.62/11 1.8} Qd6 {+0.02/18 1.0} 47. Qh6
{+0.24/11 1.0} f4 {+0.00/18 1.7} 48. Qh4 {+0.13/11 1.0} Qd3 {+0.03/19 0.3}
49. Qe1 {+0.22/12 1.8} Nd6 {+0.03/18 0.4} 50. Nh2 {+0.22/11 0.2} Ke6
{+0.00/19 0.8} 51. Ng4 {+0.07/11 1.2} e4 {+0.00/20 0.5} 52. fxe4
{+0.58/11 0.8} f3 {+0.02/22 1.4} 53. e5 {+0.58/11 1.3} Ne4 {+0.00/17 0.6}
54. gxf3 {+0.41/11 0.7} Qxf3+ {+0.02/17 0.5} 55. Kh2 {+0.41/12 1.8} Kf5
{+0.02/18 0.8} 56. c4 {+0.55/11 1.4} bxc4 {+0.00/24 1.7} 57. Ne3+
{+0.17/11 3} Kxe5 {+0.00/20 0.8} 58. Nxc4+ {+0.17/11 0.1} Kd4
{+0.00/24 0.6} 59. Na5 {+0.10/11 0.7} g5 {+0.00/23 2.6} 60. a4
{+0.10/10 0.6} g4 {+0.00/20 0.8} 61. hxg4 {+0.00/10 0.9} Qf4+
{+0.00/21 0.8} 62. Kg1 {+0.00/11 1.4} Qxg4+ {+0.00/22 0.6} 63. Kh2
{+0.00/12 1.6} Qh5+ {+0.00/23 0.6} 64. Kg1 {+0.00/12 1.2} Qg4+
{+0.00/25 1.2} 65. Kh2 {+0.00/13 1.7} Kd3 {+0.00/24 4} 66. Qb1+
{+0.00/11 0.8} Ke2 {+0.05/20 0.4} 67. Qb2+ {+0.00/12 1.1} Nd2
{+0.05/18 0.3} 68. Kh1 {+0.00/11 0.6} Qg3 {+0.00/23 1.7} 69. b5
{+0.00/11 1.0} Qh3+ {+0.00/21 0.2} 70. Kg1 {+0.00/10 0.1} Qe3+
{+0.00/23 0.7} 71. Kg2 {+0.00/11 0.7} Qf3+ {+0.00/26 0.4} 72. Kg1
{+0.00/12 0.8} Qg3+ {+0.00/26 0.6} 73. Kh1 {+0.00/11 0.1} Qh3+
{+0.00/26 1.1} 74. Kg1 {+0.00/10 0.2} Qf1+ {+0.00/27 1.0} 75. Kh2
{+0.00/10 0.2} Qf4+ {+0.00/27 0.7} 76. Kg2 {+0.00/12 1.8} Qg4+
{+0.00/26 0.5} 77. Kh2 {+0.00/12 1.3} axb5 {+0.00/26 0.6} 78. Qxb5+
{+0.00/11 0.9} Ke1 {+0.00/24 0.7} 79. Qe5+ {+0.00/12 0.3} Kd1
{+0.00/27 1.0} 80. Qa1+ {+0.00/12 0.8} Ke2 {+0.00/30 0.6} 81. Qe5+
{+0.00/13 1.8} Kd1 {+0.00/27 0.6} 82. Qa1+ {+0.00/13 0.9} Kc2
{+0.00/29 0.7} 83. Qa2+ {+0.00/13 1.2} Kd3 {+0.00/24 1.8} 84. Qd5+
{+0.00/12 0.8} Kc3 {+0.00/25 0.5} 85. Qc6+ {+0.00/12 0.8} Kd3 {+0.00/30 4}
86. Qd6+ {+0.00/12 0.7} Ke2 {+0.00/29 0.3} 87. Qe5+ {+0.00/12 0.1} Kf2
{+0.00/26 1.3} 88. Qc5+ {+0.00/12 0.2} Kf1 {+0.00/29 1.1} 89. Qb5+
{+0.00/14 1.1} Ke1 {+0.00/33 2.4} 90. Qe5+ {+0.00/14 0.3} Kd1
{+0.00/31 1.5}
{XBoard adjudication: repetition draw} 1/2-1/2

[Event "Computer Chess Game"]
[Site "VM-894787"]
[Date "2021.03.08"]
[Round "6"]
[White "Stockfish 13"]
[Black "Zappa Mexico II"]
[Result "1/2-1/2"]
[TimeControl "60+1"]
[Annotator "1. -0.12 1... -0.19"]

1. Nf3 {-0.12/16} d5 {-0.19/12 5} 2. e3 {+0.15/17 4} Nf6 {-0.15/12 5} 3.
Be2 {+0.15/17 3} e6 {+0.00/11 3} 4. c4 {+0.15/17 6} c5 {+0.05/11 6} 5. cxd5
{+0.11/17 4} exd5 {+0.05/11 2.7} 6. d4 {+0.11/17 3} Nc6 {+0.08/11 7} 7. Nc3
{+0.12/17 3} c4 {+0.02/11 4} 8. Ne5 {+0.10/15 2.2} Bb4 {+0.06/11 2.3} 9.
Nxc6 {+0.12/20 2.9} bxc6 {+0.23/11 4} 10. f3 {+0.12/18 0.9} Bxc3+
{+0.14/10 2.8} 11. bxc3 {+0.12/19 2.2} Qa5 {-0.05/11 1.7} 12. Qc2
{+0.12/19 4} c5 {-0.01/11 1.8} 13. O-O {+0.10/17 4} Bd7 {-0.31/11 2.2} 14.
Bd2 {+0.09/16 1.7} Rc8 {+0.03/10 1.0} 15. Bd1 {+0.12/19 8} O-O
{+0.36/10 0.8} 16. a4 {+0.12/19 5} Rfe8 {+0.45/10 1.4} 17. Kh1
{+0.11/16 0.9} Rb8 {+0.60/10 1.0} 18. Rg1 {+0.00/20 6} Kh8 {+0.70/10 3} 19.
g4 {+0.10/18 2.3} h6 {+0.70/9 1.0} 20. h4 {+0.11/18 1.5} Nh7 {+0.63/10 4}
21. Qc1 {+0.10/18 1.6} Qd8 {+0.64/10 1.1} 22. Be1 {+0.06/17 1.7} cxd4
{+0.63/11 1.4} 23. cxd4 {+0.06/17 1.1} Qe7 {+0.69/11 0.9} 24. Ra3
{-0.07/20 3} Nf8 {+0.58/10 1.0} 25. Rc3 {+0.09/17 3} Qf6 {+0.70/10 1.7} 26.
Rg2 {+0.11/14 0.4} Ng6 {+0.81/10 1.3} 27. Bg3 {+0.06/16 0.7} Nxh4
{+0.92/11 1.5} 28. Rf2 {+0.07/15 0.2} Rb4 {+1.26/10 1.2} 29. Qa3
{-0.07/17 2.9} a5 {+1.47/10 2.7} 30. Qa1 {-0.08/15 0.6} Re6 {+1.32/10 1.5}
31. Bc2 {-0.07/15 0.7} Bc6 {+1.29/9 1.5} 32. f4 {+0.04/15 0.2} Qe7
{+1.00/10 4} 33. Kg1 {-0.04/19 0.9} g5 {+0.95/9 1.5} 34. Bxh4
{-0.07/17 0.8} gxh4 {+1.33/11 1.3} 35. Qe1 {+0.00/15 0.2} Bxa4
{+1.67/12 1.2} 36. Bf5 {-0.07/16 0.5} Reb6 {+1.77/12 1.7} 37. Rh2
{-0.05/18 0.4} Rb3 {+2.12/12 2.6} 38. Rxb3 {-0.08/20 1.2} cxb3
{+2.18/11 1.1} 39. Rxh4 {-0.08/17 0.3} Bd7 {+2.18/10 0.1} 40. Bd3
{-0.09/16 0.5} a4 {+2.17/10 2.3} 41. g5 {-0.10/19 1.7} a3 {+2.08/10 1.6}
42. Kf2 {-0.10/17 0.4} Qb4 {+2.33/10 1.0} 43. Rxh6+ {-0.09/17 1.2} Kg8
{+2.74/10 1.1} 44. Qd1 {-0.06/16 0.4} a2 {+3.04/9 0.9} 45. g6
{+0.00/19 0.6} fxg6 {+1.84/9 1.3} 46. Qh1 {+0.00/21 0.6} Kf8 {+1.95/8 1.4}
47. Rh7 {+0.00/22 1.0} Re6 {+0.00/9 0.8} 48. Be2 {+0.00/23 0.7} a1=Q
{+0.00/9 0.6} 49. Qxa1 {+0.00/25 2.6} Qd2 {+0.00/10 0.7} 50. Qa3+
{+0.00/25 0.4} Kg8 {+0.00/10 0.1} 51. Rh8+ {+0.00/32 0.6} Kxh8
{+0.00/11 0.1} 52. Qf8+ {+0.00/48 0.6} Kh7 {+0.00/12 0.2} 53. Qf7+
{+0.00/51 1.6} Kh6 {+0.00/16 1.0} 54. Qf8+ {+0.00/52 4} Kh7 {+0.00/16 0.1}
55. Qf7+ {+0.00/50 0.4} Kh6 {+0.00/17 0.8} 56. Qf8+ {+0.00/55 0.9} Kh7
{+0.00/63 0.1}
{XBoard adjudication: repetition draw} 1/2-1/2

[Event "Computer Chess Game"]
[Site "VM-894787"]
[Date "2021.03.08"]
[Round "7"]
[White "Zappa Mexico II"]
[Black "Stockfish 13"]
[Result "1-0"]
[TimeControl "60+1"]
[Annotator "1. +0.20 1... +0.05"]

1. d4 {+0.20/12} d5 {+0.05/17 7} 2. Bf4 {+0.19/11 4} e6 {+0.14/16 2.7} 3.
e3 {+0.28/11 4} Nf6 {+0.14/16 4} 4. Nf3 {+0.27/11 6} Bd6 {+0.14/15 0.8} 5.
Ne5 {+0.32/11 2.9} O-O {+0.13/16 2.1} 6. Bd3 {+0.35/11 5} Nc6 {+0.10/18 6}
7. O-O {+0.35/11 3} Bxe5 {+0.10/18 1.5} 8. dxe5 {+0.27/12 2.2} Nd7
{+0.10/19 0.7} 9. Nd2 {+0.05/12 2.5} Ncxe5 {+0.11/18 1.9} 10. Bxe5
{+0.00/12 2.9} Nxe5 {+0.12/19 1.4} 11. Bxh7+ {+0.00/12 0.2} Kxh7
{+0.07/18 1.4} 12. Qh5+ {+0.00/12 0.1} Kg8 {+0.12/17 0.5} 13. Qxe5
{+0.00/12 0.1} f6 {+0.11/17 1.5} 14. Qc3 {-0.03/12 3} Bd7 {+0.09/18 6} 15.
Rfd1 {+0.27/11 4} c6 {+0.07/17 2.3} 16. Qb3 {+0.21/10 0.8} b5
{+0.08/19 2.8} 17. a4 {+0.33/10 0.9} a5 {+0.05/20 3} 18. e4 {+0.34/10 0.9}
bxa4 {-0.07/18 2.8} 19. Qxa4 {+0.32/11 1.7} Qe7 {+0.05/21 2.9} 20. Nb3
{+0.19/10 1.6} Kf7 {+0.05/21 1.0} 21. Nxa5 {+0.56/9 0.9} Rfb8
{+0.05/21 1.1} 22. Qd4 {+0.60/11 1.3} Qb4 {+0.03/20 1.8} 23. Qxb4
{+0.17/12 1.6} Rxb4 {+0.02/22 1.2} 24. Nb3 {+0.09/12 0.8} Rxa1
{+0.03/22 1.2} 25. Rxa1 {+0.09/12 0.2} Rb5 {+0.00/21 4} 26. f3
{+0.68/11 1.3} Ke7 {+0.00/19 1.2} 27. Ra5 {+0.48/11 1.0} Rxa5
{+0.00/20 1.5} 28. Nxa5 {+0.72/8 0.1} Kd6 {+0.03/19 1.2} 29. Nb7+
{+0.76/12 1.4} Ke5 {+0.02/21 1.2} 30. Nc5 {+0.99/14 1.3} Be8 {+0.02/25 3}
31. Kf2 {+1.09/13 1.6} dxe4 {+0.00/23 1.0} 32. fxe4 {+1.19/13 1.3} Kd4
{-0.02/22 1.2} 33. Nxe6+ {+1.72/8 0.1} Kxe4 {-0.02/21 1.8} 34. Nxg7
{+2.46/14 1.1} Bg6 {-0.02/24 1.4} 35. h4 {+2.58/14 1.9} Ke5 {-0.03/25 1.4}
36. c4 {+2.92/14 2.3} Bf7 {-0.03/24 4} 37. c5 {+3.31/14 2.7} Kf4
{-0.12/25 4} 38. g3+ {+3.64/13 0.1} Kg4 {-3.67/29 10} 39. b4 {+3.64/13 0.1}
Bc4 {-1.24/24 0.5} 40. Ne8 {+3.64/9 0.2} Bb3 {-0.64/21 1.0} 41. Nxf6+
{+4.51/14 1.2} Kf5 {-0.66/23 0.4} 42. Ne8 {+4.82/15 2.6} Bd1 {-1.09/25 0.5}
43. Nd6+ {+5.12/14 1.0} Ke5 {-3.70/24 1.6} 44. Nf7+ {+5.54/14 1.8} Ke6
{-4.73/28 4} 45. Nd8+ {+5.54/10 0.1} Kd7 {-4.73/24 0.3} 46. Nb7
{+5.54/10 0.1} Ke6 {-4.71/24 0.3} 47. Ke3 {+5.31/10 0.1} Kf5 {-5.21/26 1.4}
48. Nd6+ {+5.70/13 1.3} Ke5 {-5.25/23 0.3} 49. Nf7+ {+5.70/13 0.1} Ke6
{-4.60/23 0.3} 50. Nd8+ {+6.54/15 2.1} Kd7 {-7.90/24 3} 51. Nb7
{+6.82/15 1.0} Bc2 {-7.94/17 0.3} 52. g4 {+6.87/15 1.9} Ke6 {-6.67/22 0.4}
53. Kf4 {+6.87/11 0.1} Bh7 {-9.05/23 2.5} 54. Nd8+ {+10.33/13 2.1} Kd7
{-8.97/21 0.2} 55. Nf7 {+10.33/13 0.1} Kc7 {-14.85/23 1.9} 56. Ne5
{+11.81/16 1.6} Kb7 {-12.03/22 0.6} 57. h5 {+13.66/15 1.1} Ka6
{-17.06/23 1.5} 58. g5 {+13.66/14 0.1} Bc2 {-16.55/20 0.2} 59. Nxc6
{+15.34/14 3} Kb7 {-34.00/23 1.7} 60. Ne7 {+15.34/9 0.1} Bd3
{-29.72/21 1.1} 61. h6 {+20.27/13 1.6} Ka6 {-61.57/21 1.0} 62. g6
{+16.35/7 0.1} Kb5 {-76.22/21 1.0} 63. h7 {+23.25/12 2.4} Bc4
{-86.43/20 1.0} 64. c6 {+22.43/7 0.1} Be6 {-1000.10/23 0.8} 65. h8=Q
{+22.59/6 0.2} Ka4 {-1000.08/25 0.2} 66. Qc3 {+1000.07/13 0.9} Bb3
{-1000.06/30 0.2} 67. c7 {+1000.06/11 0.2} Be6 {-1000.05/48 0.2} 68. g7
{+1000.05/10 0.2} Kb5 {-1000.04/72 0.3} 69. g8=Q {+1000.04/12 0.1} Bxg8
{-1000.03/245 0.6} 70. Nxg8 {+1000.03/48 0.2} Ka4 {-1000.02/245 0.6} 71.
c8=Q {+1000.02/63 0.1} Kb5 {-1000.01/245 0.6} 72. Q8c6# {+1000.01/63 0.1}
{Xboard adjudication: Checkmate} 1-0

[Event "Computer Chess Game"]
[Site "VM-894787"]
[Date "2021.03.08"]
[Round "8"]
[White "Stockfish 13"]
[Black "Zappa Mexico II"]
[Result "1-0"]
[TimeControl "60+1"]
[Annotator "1. -0.12 1... -0.19"]

1. Nf3 {-0.12/16} d5 {-0.19/12 5} 2. c3 {+0.14/17 6} Nf6 {+0.10/10 3} 3. d4
{+0.15/16 1.6} e6 {+0.09/10 4} 4. e3 {+0.15/14 0.9} c5 {+0.11/11 7} 5. Bd3
{+0.14/16 2.8} Bd6 {+0.17/10 2.6} 6. dxc5 {+0.11/15 2.9} Bxc5 {+0.17/12 6}
7. c4 {+0.11/15 0.7} O-O {+0.14/11 2.3} 8. cxd5 {+0.10/17 2.5} Qxd5
{+0.09/11 2.4} 9. Nc3 {+0.09/16 0.8} Qh5 {-0.01/12 5} 10. a3 {+0.09/18 5}
e5 {+0.00/10 2.5} 11. Qc2 {+0.08/19 7} Bh3 {+0.57/11 3} 12. Rg1
{+0.10/19 2.1} Bxg2 {+0.72/10 1.2} 13. Rxg2 {+0.09/17 0.9} Qxf3
{+0.97/11 1.5} 14. Rg3 {-0.06/19 3} Qh1+ {+1.22/10 1.6} 15. Bf1
{+0.07/20 1.8} Rd8 {+1.06/9 1.6} 16. Rg2 {+0.12/19 2.4} Nbd7 {+1.15/11 1.2}
17. Ne2 {+0.08/20 4} Nh5 {+1.14/9 1.9} 18. Qe4 {+0.07/18 2.5} Kh8
{+0.97/9 1.6} 19. Bd2 {+0.09/18 3} Bb6 {+1.10/10 1.7} 20. Bb4
{+0.10/20 2.2} Ndf6 {+0.74/10 1.6} 21. Qc2 {+0.10/21 5} Rac8 {+1.71/10 1.2}
22. Bc3 {+0.11/18 0.8} Ba5 {+1.67/9 1.1} 23. b4 {+0.11/21 4} Bc7
{+1.65/11 1.3} 24. Rd1 {+0.11/18 2.4} Rxd1+ {+1.57/11 1.5} 25. Qxd1
{+0.11/18 0.7} Rd8 {+1.42/12 2.6} 26. Qc2 {+0.10/20 5} Bb8 {+1.34/11 5} 27.
Bb2 {+0.11/18 1.1} Nd5 {+1.28/10 2.6} 28. Qe4 {+0.08/20 6} Ndf6
{+1.22/10 1.0} 29. Qc2 {+0.00/19 0.2} b6 {+1.29/11 4} 30. Nc3
{+0.11/19 0.8} g6 {+1.13/10 2.4} 31. f3 {+0.11/16 0.4} Rc8 {-0.26/10 2.8}
32. Qe2 {+1.50/17 0.4} Rxc3 {-1.32/9 0.9} 33. Bxc3 {+4.43/20 0.4} Nd5
{-1.53/10 0.8} 34. Bb2 {+5.61/20 0.6} f6 {-2.67/9 0.9} 35. Qf2
{+6.22/22 0.5} Nxe3 {-4.03/11 2.6} 36. Rg1 {+6.36/22 0.6} Qxg1
{-4.93/11 1.6} 37. Qxg1 {+6.33/22 0.7} Nf5 {-5.66/11 1.2} 38. Qg2
{+7.02/23 2.0} Nf4 {-6.25/11 2.1} 39. Qc2 {+7.21/21 0.4} Ne7 {-6.88/12 1.8}
40. Qa4 {+9.71/23 1.9} Kg7 {-8.14/13 0.9} 41. Qd7 {+10.01/20 0.4} Nd5
{-10.19/11 1.4} 42. Bc4 {+10.40/20 0.6} Kh6 {-11.12/11 1.4} 43. Bxd5
{+11.25/19 0.8} Nxd5 {-15.02/11 1.3} 44. Qxd5 {+11.86/20 0.6} f5
{-15.02/11 1.2} 45. Qd8 {+1000.10/35 0.6} Kh5 {-16.03/10 1.1} 46. Qxb8
{+1000.09/37 0.7} a5 {-1000.05/10 1.0} 47. Qxe5 {+1000.06/36 0.8} axb4
{-1000.05/10 0.3} 48. Bc1 {+1000.05/50 0.7} h6 {-1000.04/9 0.1} 49. axb4
{+1000.04/166 0.7} g5 {-1000.03/12 0.1} 50. Qxf5 {+1000.03/245 0.6} b5
{-1000.02/12 0.1} 51. Kf2 {+1000.02/245 0.7} Kh4 {-1000.01/63 0.1} 52. Qg4#
{+1000.01/245 0.6}
{Xboard adjudication: Checkmate} 1-0

[/pgn]
User avatar
pedrox
Posts: 1056
Joined: Fri Mar 10, 2006 6:07 am
Location: Basque Country (Spain)

Re: A random walk down NNUE street ….

Post by pedrox »

After reading one of your posts, I decided to try an NNUE network. For this I generated 300M positions at depth 4, and 1M at depth 8 for validation.

I used the scripts and software shown in these YouTube videos.
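For the data-generation step, the command script fed to the trainer build looks roughly like the sketch below. The counts mirror what I described above, but treat the setoption lines and gensfen options as an illustration of the nodchip trainer's interface rather than my literal script:

```
uci
setoption name EnableTranspositionTable value false
setoption name PruneAtShallowDepth value false
setoption name Threads value 10
isready
gensfen depth 4 loop 300000000 output_file_name training_data/data.binpack
quit
```

The depth-8 validation set is produced the same way, with `depth 8 loop 1000000` and a different output file.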

To create a fresh network, use:

Code:

uci
setoption name EnableTranspositionTable value false
setoption name PruneAtShallowDepth value false
setoption name SkipLoadingEval value true
setoption name Use NNUE value pure
setoption name EvalSaveDir value fresh-network
setoption name Threads value 10
isready
learn targetdir training_data validation_set_file_name validation_data\data.binpack set_recommended_uci_options use_draw_in_training 1 use_draw_in_validation 1 eval_limit 32000 epochs 1000 lr 1.0 lambda 1.0 nn_batch_size 1000 batchsize 200000 eval_save_interval 200000  loss_output_interval 200000 newbob_decay 0.5 newbob_num_trials 4
quit
In a few minutes I managed to create a network that I have tested, and it seems to play amazingly well, over 2500 with my engine.
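As a sanity check on the schedule in the learn command above: with `batchsize 200000` and `epochs 1000`, the trainer consumes 200M sfens in total, and with `eval_save_interval 200000` it saves a candidate network once per epoch. This matches the `epochs * minibatch size : 200000000` line in the log further down:

```python
# Sanity-check the learn-command schedule from the script above.
batchsize = 200_000           # sfens per epoch (minibatch size)
epochs = 1_000
eval_save_interval = 200_000  # sfens between evaluation-file saves

total_sfens = batchsize * epochs
saves = total_sfens // eval_save_interval

print(total_sfens)  # 200000000, as reported by the trainer
print(saves)        # 1000 candidate nets, one per epoch
```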

You can see in the log how the network starts from random weights with a move accuracy of 0.45%, and how with each epoch that move accuracy grows to just over 16%.
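To track that progress without scrolling through the whole log, a few lines of Python can pull out the `move accuracy` values from the PROGRESS blocks (the helper name and the 16.02% sample value are mine; the regex just matches the log format shown below):

```python
import re

def parse_move_accuracy(log_text):
    """Extract the '- move accuracy = X%' values the trainer prints
    after each evaluation interval, in order of appearance."""
    return [float(m) for m in re.findall(r"move accuracy = ([\d.]+)%", log_text)]

sample = """\
  - move accuracy = 0.45%
  - move accuracy = 16.02%
"""
print(parse_move_accuracy(sample))  # [0.45, 16.02]
```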

Code:

C:\Users\pecas\Ajedrez\nnue\NNUE>PUSHD "C:\Users\pecas\Ajedrez\nnue\NNUE\"

C:\Users\pecas\Ajedrez\nnue\NNUE>TITLE Learning - Fresh Network

C:\Users\pecas\Ajedrez\nnue\NNUE>rd /q /s fresh-network

C:\Users\pecas\Ajedrez\nnue\NNUE>md fresh-network

C:\Users\pecas\Ajedrez\nnue\NNUE>SET OMP_NUM_THREADS=1

C:\Users\pecas\Ajedrez\nnue\NNUE>type "step 3a - learn_fresh_network.txt"   | "Stockfish x86-64-avx2.exe"
Stockfish 260221 by the Stockfish developers (see AUTHORS file)
info string Loaded eval file nn-c3ca321c51c9.nnue
id name Stockfish 260221
id author the Stockfish developers (see AUTHORS file)

option name Debug Log File type string default
option name Contempt type spin default 24 min -100 max 100
option name Analysis Contempt type combo default Both var Off var White var Black var Both
option name Threads type spin default 1 min 1 max 512
option name Hash type spin default 16 min 1 max 33554432
option name Clear Hash type button
option name Ponder type check default false
option name MultiPV type spin default 1 min 1 max 500
option name Skill Level type spin default 20 min 0 max 20
option name Move Overhead type spin default 10 min 0 max 5000
option name Slow Mover type spin default 100 min 10 max 1000
option name nodestime type spin default 0 min 0 max 10000
option name UCI_Chess960 type check default false
option name UCI_AnalyseMode type check default false
option name UCI_LimitStrength type check default false
option name UCI_Elo type spin default 1350 min 1350 max 2850
option name UCI_ShowWDL type check default false
option name SyzygyPath type string default <empty>
option name SyzygyProbeDepth type spin default 1 min 1 max 100
option name Syzygy50MoveRule type check default true
option name SyzygyProbeLimit type spin default 7 min 0 max 7
option name Use NNUE type combo default true var true var false var pure
option name EvalFile type string default nn-c3ca321c51c9.nnue
option name SkipLoadingEval type check default false
option name EvalSaveDir type string default evalsave
option name PruneAtShallowDepth type check default true
option name EnableTranspositionTable type check default true
uciok
readyok
INFO: Executing learn command
INFO: Input files:
  - training_data/data.binpack
INFO: Parameters:
  - validation set           : validation_data\data.binpack
  - validation count         : 2000
  - epochs                   : 1000
  - epochs * minibatch size  : 200000000
  - eval_limit               : 32000
  - save_only_once           : false
  - shuffle on read          : true
  - Loss Function            : ELMO_METHOD(WCSC27)
  - minibatch size           : 200000
  - nn_batch_size            : 1000
  - nn_options               :
  - learning rate            : 1
  - max_grad                 : 1
  - use draws in training    : 1
  - use draws in validation  : 1
  - skip repeated positions  : 1
  - winning prob coeff       : 0.00276753
  - use_wdl                  : 0
  - src_score_min_value      : 0
  - src_score_max_value      : 1
  - dest_score_min_value     : 0
  - dest_score_max_value     : 1
  - reduction_gameply        : 1
  - elmo_lambda_low          : 1
  - elmo_lambda_high         : 1
  - elmo_lambda_limit        : 32000
  - eval_save_interval       : 200000 sfens
  - loss_output_interval     : 200000 sfens
  - sfen_read_size           : 10000000
  - thread_buffer_size       : 10000
  - smart_fen_skipping       : 0
  - smart_fen_skipping_val   : 0
  - seed                     :
  - verbose                  : false
  - learning rate scheduling : newbob with decay
  - newbob_decay             : 0.5
  - newbob_num_trials        : 4

INFO: Started initialization.
INFO (initialize_training): Initializing NN training for Features=HalfKP(Friend)[41024->256x2],Network=AffineTransform[1<-32](ClippedReLU[32](AffineTransform[32<-32](ClippedReLU[32](AffineTransform[32<-512](InputSlice[512(0:512)])))))

Layers:
  - 0 - HalfKP(Friend)[41024->256x2]
  - 1 - InputSlice[512(0:512)]
  - 2 - AffineTransform[32<-512]
  - 3 - ClippedReLU[32]
  - 4 - AffineTransform[32<-32]
  - 5 - ClippedReLU[32]
  - 6 - AffineTransform[1<-32]

Factorizers:
  - Factorizer<HalfKP(Friend)> -> HalfK, P, HalfRelativeKP

INFO (initialize_training): Performing random net initialization.
Finished initialization.
info string NNUE evaluation using  enabled
INFO (sfen_reader): Opened file for reading: training_data/data.binpack
INFO (sfen_reader): Opened file for reading: validation_data\data.binpack

PROGRESS (calc_loss): Tue Mar 09 18:09:22 2021, 0 sfens, 0 sfens/second, epoch 0
  - learning rate = 1
  - startpos eval = 0
  - val_loss       = 0.416409
  - norm = 0
  - move accuracy = 0.45%
INFO (learn): initial loss = 0.416409
.INFO (save_eval): Saving current evaluation file in fresh-network/0
INFO (save_eval): Finished saving evaluation file in fresh-network/0

PROGRESS (calc_loss): Tue Mar 09 18:09:31 2021, 200000 sfens, 21563 sfens/second, epoch 1
  - learning rate = 1
  - startpos eval = 20
  - val_loss       = 0.416501
  - train_loss       = 0.398687
  - train_grad_norm  = 0.598973
  - norm = 40000
  - move accuracy = 0.45%
INFO (check_health): layer 0 - HalfKP(Friend)[41024->256x2]
  - observed 31753 (out of 43979) features
  - (min, max) of pre-activations = -0.101404, 1.0322 (limit = 258.008)
  - largest min activation = 0.252439 , smallest max activation = 0.768682
  - avg_abs_bias   = 0.500579
  - avg_abs_weight = 0.0137935
  - clipped 3.41797e-05% of outputs
INFO (check_health): layer 2 - AffineTransform[32<-512]
  - avg_abs_bias        = 0.585499
  - avg_abs_bias_diff   = 7.66894e-05
  - avg_abs_weight      = 0.035895
  - avg_abs_weight_diff = 3.85336e-05
INFO (check_health): layer 3 - ClippedReLU[32]
  - largest min activation = 0.280741 , smallest max activation = 0.718176
  - clipped 98.2333% of outputs
INFO (check_health): layer 4 - AffineTransform[32<-32]
  - avg_abs_bias        = 0.467714
  - avg_abs_bias_diff   = 0.000259837
  - avg_abs_weight      = 0.140762
  - avg_abs_weight_diff = 0.000124379
INFO (check_health): layer 5 - ClippedReLU[32]
  - largest min activation = 0 , smallest max activation = 0.694149
  - clipped 94.9137% of outputs
INFO (check_health): layer 6 - AffineTransform[1<-32]
  - avg_abs_bias        = 0.0337131
  - avg_abs_bias_diff   = 0.0272825
  - avg_abs_weight      = 0.0585321
  - avg_abs_weight_diff = 0.00132287
.INFO (save_eval): Saving current evaluation file in fresh-network/1
INFO (save_eval): Finished saving evaluation file in fresh-network/1
INFO (learning_rate):
  - loss = 0.416501 >= best (0.416409), rejected
  - reducing learning rate from 1 to 0.5 (3 more trials)

PROGRESS (calc_loss): Tue Mar 09 18:09:33 2021, 400000 sfens, 35896 sfens/second, epoch 2
  - learning rate = 0.5
  - startpos eval = 3
  - val_loss       = 0.416373
  - train_loss       = 0.395883
  - train_grad_norm  = 0.597733
  - norm = 5996
  - move accuracy = 0.45%
INFO (check_health): layer 0 - HalfKP(Friend)[41024->256x2]
  - observed 34084 (out of 43979) features
  - (min, max) of pre-activations = -0.0613348, 1.03045 (limit = 258.008)
  - largest min activation = 0.271284 , smallest max activation = 0.725052
  - avg_abs_bias   = 0.500576
  - avg_abs_weight = 0.0137934
  - clipped 3.80859e-05% of outputs
INFO (check_health): layer 2 - AffineTransform[32<-512]
  - avg_abs_bias        = 0.585489
  - avg_abs_bias_diff   = 3.13135e-07
  - avg_abs_weight      = 0.0358948
  - avg_abs_weight_diff = 1.58586e-07
INFO (check_health): layer 3 - ClippedReLU[32]
  - largest min activation = 1 , smallest max activation = 0
  - clipped 99.7783% of outputs
INFO (check_health): layer 4 - AffineTransform[32<-32]
  - avg_abs_bias        = 0.467766
  - avg_abs_bias_diff   = 1.85801e-06
  - avg_abs_weight      = 0.140762
  - avg_abs_weight_diff = 8.66296e-07
INFO (check_health): layer 5 - ClippedReLU[32]
  - largest min activation = 0 , smallest max activation = 0
  - clipped 99.608% of outputs
INFO (check_health): layer 6 - AffineTransform[1<-32]
  - avg_abs_bias        = 0.00655166
  - avg_abs_bias_diff   = 0.0177786
  - avg_abs_weight      = 0.0585996
  - avg_abs_weight_diff = 8.45999e-07
.INFO (save_eval): Saving current evaluation file in fresh-network/2
INFO (save_eval): Finished saving evaluation file in fresh-network/2
INFO (learning_rate):
  - loss = 0.416373 < best (0.416409), accepted

PROGRESS (calc_loss): Tue Mar 09 18:09:35 2021, 600000 sfens, 46129 sfens/second, epoch 3
  - learning rate = 0.5
  - startpos eval = 17
  - val_loss       = 0.405943
  - train_loss       = 0.395553
  - train_grad_norm  = 0.596937
  - norm = 51284
  - move accuracy = 6.7%
INFO (check_health): layer 0 - HalfKP(Friend)[41024->256x2]
  - observed 35236 (out of 43979) features
  - (min, max) of pre-activations = -0.0699659, 1.07201 (limit = 258.008)
  - largest min activation = 0.246764 , smallest max activation = 0.729006
  - avg_abs_bias   = 0.499179
  - avg_abs_weight = 0.0144284
  - clipped 3.90625e-05% of outputs
INFO (check_health): layer 2 - AffineTransform[32<-512]
  - avg_abs_bias        = 0.585174
  - avg_abs_bias_diff   = 6.38492e-05
  - avg_abs_weight      = 0.0361257
  - avg_abs_weight_diff = 3.20849e-05
INFO (check_health): layer 3 - ClippedReLU[32]
  - largest min activation = 1 , smallest max activation = 0
  - clipped 97.1364% of outputs
INFO (check_health): layer 4 - AffineTransform[32<-32]
  - avg_abs_bias        = 0.467965
  - avg_abs_bias_diff   = 0.000224271
  - avg_abs_weight      = 0.141104
  - avg_abs_weight_diff = 0.00010382
INFO (check_health): layer 5 - ClippedReLU[32]
  - largest min activation = 0 , smallest max activation = 0
  - clipped 87.8029% of outputs
INFO (check_health): layer 6 - AffineTransform[1<-32]
  - avg_abs_bias        = 0.162805
  - avg_abs_bias_diff   = 0.0150043
  - avg_abs_weight      = 0.0755494
  - avg_abs_weight_diff = 0.000463723
.INFO (save_eval): Saving current evaluation file in fresh-network/3
INFO (save_eval): Finished saving evaluation file in fresh-network/3
INFO (learning_rate):
  - loss = 0.405943 < best (0.416373), accepted

PROGRESS (calc_loss): Tue Mar 09 18:09:37 2021, 800000 sfens, 53799 sfens/second, epoch 4
  - learning rate = 0.5
  - startpos eval = 39
  - val_loss       = 0.417296
  - train_loss       = 0.395754
  - train_grad_norm  = 0.597574
  - norm = 78000
  - move accuracy = 0.45%
INFO (check_health): layer 0 - HalfKP(Friend)[41024->256x2]
  - observed 35979 (out of 43979) features
  - (min, max) of pre-activations = -0.100556, 1.05062 (limit = 258.008)
  - largest min activation = 0.221421 , smallest max activation = 0.752297
  - avg_abs_bias   = 0.498258
  - avg_abs_weight = 0.0145998
  - clipped 0.000352539% of outputs
INFO (check_health): layer 2 - AffineTransform[32<-512]
  - avg_abs_bias        = 0.585736
  - avg_abs_bias_diff   = 2.5246e-05
  - avg_abs_weight      = 0.0362498
  - avg_abs_weight_diff = 1.24944e-05
INFO (check_health): layer 3 - ClippedReLU[32]
  - largest min activation = 1 , smallest max activation = 0
  - clipped 99.8195% of outputs
INFO (check_health): layer 4 - AffineTransform[32<-32]
  - avg_abs_bias        = 0.46637
  - avg_abs_bias_diff   = 9.8387e-05
  - avg_abs_weight      = 0.141583
  - avg_abs_weight_diff = 4.55471e-05
INFO (check_health): layer 5 - ClippedReLU[32]
  - largest min activation = 0 , smallest max activation = 0
  - clipped 94.6775% of outputs
INFO (check_health): layer 6 - AffineTransform[1<-32]
  - avg_abs_bias        = 0.0646924
  - avg_abs_bias_diff   = 0.0118451
  - avg_abs_weight      = 0.0766396
  - avg_abs_weight_diff = 0.000181719
.INFO (save_eval): Saving current evaluation file in fresh-network/4
INFO (save_eval): Finished saving evaluation file in fresh-network/4
INFO (learning_rate):
  - loss = 0.417296 >= best (0.405943), rejected
  - reducing learning rate from 0.5 to 0.25 (3 more trials)

PROGRESS (calc_loss): Tue Mar 09 18:09:38 2021, 1000000 sfens, 59530 sfens/second, epoch 5
  - learning rate = 0.25
  - startpos eval = 28
  - val_loss       = 0.416752
  - train_loss       = 0.395499
  - train_grad_norm  = 0.597595
  - norm = 56000
  - move accuracy = 0.45%
INFO (check_health): layer 0 - HalfKP(Friend)[41024->256x2]
  - observed 36483 (out of 43979) features
  - (min, max) of pre-activations = -0.161064, 1.04624 (limit = 258.008)
  - largest min activation = 0.231287 , smallest max activation = 0.750544
  - avg_abs_bias   = 0.498243
  - avg_abs_weight = 0.0146011
  - clipped 0.000413086% of outputs
INFO (check_health): layer 2 - AffineTransform[32<-512]
  - avg_abs_bias        = 0.585711
  - avg_abs_bias_diff   = 1.96852e-07
  - avg_abs_weight      = 0.0362499
  - avg_abs_weight_diff = 9.53579e-08
INFO (check_health): layer 3 - ClippedReLU[32]
  - largest min activation = 1 , smallest max activation = 0
  - clipped 99.9939% of outputs
INFO (check_health): layer 4 - AffineTransform[32<-32]
  - avg_abs_bias        = 0.466457
  - avg_abs_bias_diff   = 7.97229e-06
  - avg_abs_weight      = 0.14158
  - avg_abs_weight_diff = 3.73425e-06
INFO (check_health): layer 5 - ClippedReLU[32]
  - largest min activation = 0 , smallest max activation = 0
  - clipped 96.8746% of outputs
INFO (check_health): layer 6 - AffineTransform[1<-32]
  - avg_abs_bias        = 0.0473122
  - avg_abs_bias_diff   = 0.00914914
  - avg_abs_weight      = 0.076385
  - avg_abs_weight_diff = 6.35041e-06
.INFO (save_eval): Saving current evaluation file in fresh-network/5
INFO (save_eval): Finished saving evaluation file in fresh-network/5
INFO (learning_rate):
  - loss = 0.416752 >= best (0.405943), rejected
  - reducing learning rate from 0.25 to 0.125 (2 more trials)

PROGRESS (calc_loss): Tue Mar 09 18:09:40 2021, 1200000 sfens, 63911 sfens/second, epoch 6
  - learning rate = 0.125
  - startpos eval = 120
  - val_loss       = 0.379456
  - train_loss       = 0.389374
  - train_grad_norm  = 0.589321
  - norm = 231767
  - move accuracy = 4.25%
INFO (check_health): layer 0 - HalfKP(Friend)[41024->256x2]
  - observed 36839 (out of 43979) features
  - (min, max) of pre-activations = -0.219392, 1.0801 (limit = 258.008)
  - largest min activation = 0.235672 , smallest max activation = 0.74145
  - avg_abs_bias   = 0.494117
  - avg_abs_weight = 0.015312
  - clipped 0.00141113% of outputs
INFO (check_health): layer 2 - AffineTransform[32<-512]
  - avg_abs_bias        = 0.585581
  - avg_abs_bias_diff   = 9.92744e-05
  - avg_abs_weight      = 0.0366705
  - avg_abs_weight_diff = 4.79341e-05
INFO (check_health): layer 3 - ClippedReLU[32]
  - largest min activation = 1 , smallest max activation = 0
  - clipped 99.3282% of outputs
INFO (check_health): layer 4 - AffineTransform[32<-32]
  - avg_abs_bias        = 0.468666
  - avg_abs_bias_diff   = 0.000355264
  - avg_abs_weight      = 0.142015
  - avg_abs_weight_diff = 0.000162865
INFO (check_health): layer 5 - ClippedReLU[32]
  - largest min activation = 0 , smallest max activation = 0
  - clipped 90.8695% of outputs
INFO (check_health): layer 6 - AffineTransform[1<-32]
  - avg_abs_bias        = 0.180674
  - avg_abs_bias_diff   = 0.0119275
  - avg_abs_weight      = 0.0968762
  - avg_abs_weight_diff = 0.000349115
.INFO (save_eval): Saving current evaluation file in fresh-network/6
INFO (save_eval): Finished saving evaluation file in fresh-network/6
INFO (learning_rate):
  - loss = 0.379456 < best (0.405943), accepted

PROGRESS (calc_loss): Tue Mar 09 18:09:42 2021, 1400000 sfens, 67915 sfens/second, epoch 7
  - learning rate = 0.125
  - startpos eval = -210
  - val_loss       = 0.267728
  - train_loss       = 0.325339
  - train_grad_norm  = 0.493963
  - norm = 735858
  - move accuracy = 7.75%
INFO (check_health): layer 0 - HalfKP(Friend)[41024->256x2]
  - observed 37149 (out of 43979) features
  - (min, max) of pre-activations = -0.205239, 1.04001 (limit = 258.008)
  - largest min activation = 0.210378 , smallest max activation = 0.712174
  - avg_abs_bias   = 0.484659
  - avg_abs_weight = 0.0168286
  - clipped 0.00717188% of outputs
INFO (check_health): layer 2 - AffineTransform[32<-512]
  - avg_abs_bias        = 0.58554
  - avg_abs_bias_diff   = 0.000303167
  - avg_abs_weight      = 0.0373809
  - avg_abs_weight_diff = 0.000137568
INFO (check_health): layer 3 - ClippedReLU[32]
  - largest min activation = 1 , smallest max activation = 0
  - clipped 97.5786% of outputs
INFO (check_health): layer 4 - AffineTransform[32<-32]
  - avg_abs_bias        = 0.472723
  - avg_abs_bias_diff   = 0.000706124
  - avg_abs_weight      = 0.142515
  - avg_abs_weight_diff = 0.00033628
INFO (check_health): layer 5 - ClippedReLU[32]
  - largest min activation = 0.060689 , smallest max activation = 0
  - clipped 84.2233% of outputs
INFO (check_health): layer 6 - AffineTransform[1<-32]
  - avg_abs_bias        = 0.462488
  - avg_abs_bias_diff   = 0.0125747
  - avg_abs_weight      = 0.121592
  - avg_abs_weight_diff = 0.000759557
.INFO (save_eval): Saving current evaluation file in fresh-network/7
INFO (save_eval): Finished saving evaluation file in fresh-network/7
INFO (learning_rate):
  - loss = 0.267728 < best (0.379456), accepted

PROGRESS (calc_loss): Tue Mar 09 18:09:44 2021, 1600000 sfens, 71317 sfens/second, epoch 8
  - learning rate = 0.125
  - startpos eval = 163
  - val_loss       = 0.261218
  - train_loss       = 0.261386
  - train_grad_norm  = 0.403635
  - norm = 801397
  - move accuracy = 10.05%
INFO (check_health): layer 0 - HalfKP(Friend)[41024->256x2]
  - observed 37405 (out of 43979) features
  - (min, max) of pre-activations = -0.294544, 0.951142 (limit = 258.008)
  - largest min activation = 0.155974 , smallest max activation = 0.659514
  - avg_abs_bias   = 0.471469
  - avg_abs_weight = 0.0184298
  - clipped 0.109976% of outputs
INFO (check_health): layer 2 - AffineTransform[32<-512]
  - avg_abs_bias        = 0.58633
  - avg_abs_bias_diff   = 0.000437064
  - avg_abs_weight      = 0.0381593
  - avg_abs_weight_diff = 0.000173221
INFO (check_health): layer 3 - ClippedReLU[32]
  - largest min activation = 1 , smallest max activation = 0
  - clipped 94.9548% of outputs
INFO (check_health): layer 4 - AffineTransform[32<-32]
  - avg_abs_bias        = 0.473089
  - avg_abs_bias_diff   = 0.0010488
  - avg_abs_weight      = 0.143618
  - avg_abs_weight_diff = 0.000486951
INFO (check_health): layer 5 - ClippedReLU[32]
  - largest min activation = 0.41457 , smallest max activation = 0
  - clipped 82.4149% of outputs
INFO (check_health): layer 6 - AffineTransform[1<-32]
  - avg_abs_bias        = 0.606025
  - avg_abs_bias_diff   = 0.0145985
  - avg_abs_weight      = 0.135376
  - avg_abs_weight_diff = 0.00126558
.INFO (save_eval): Saving current evaluation file in fresh-network/8
INFO (save_eval): Finished saving evaluation file in fresh-network/8
INFO (learning_rate):
  - loss = 0.261218 < best (0.267728), accepted

PROGRESS (calc_loss): Tue Mar 09 18:09:46 2021, 1800000 sfens, 74177 sfens/second, epoch 9
  - learning rate = 0.125
  - startpos eval = -319
  - val_loss       = 0.159683
  - train_loss       = 0.206607
  - train_grad_norm  = 0.337158
  - norm = 1.07094e+06
  - move accuracy = 14.15%
INFO (check_health): layer 0 - HalfKP(Friend)[41024->256x2]
  - observed 37577 (out of 43979) features
  - (min, max) of pre-activations = -0.420278, 0.850462 (limit = 258.008)
  - largest min activation = 0.0575945 , smallest max activation = 0.62396
  - avg_abs_bias   = 0.45627
  - avg_abs_weight = 0.0199819
  - clipped 1.10194% of outputs
INFO (check_health): layer 2 - AffineTransform[32<-512]
  - avg_abs_bias        = 0.584842
  - avg_abs_bias_diff   = 0.000637732
  - avg_abs_weight      = 0.0388539
  - avg_abs_weight_diff = 0.000194225
INFO (check_health): layer 3 - ClippedReLU[32]
  - largest min activation = 1 , smallest max activation = 0
  - clipped 88.4383% of outputs
INFO (check_health): layer 4 - AffineTransform[32<-32]
  - avg_abs_bias        = 0.476298
  - avg_abs_bias_diff   = 0.00123829
  - avg_abs_weight      = 0.144648
  - avg_abs_weight_diff = 0.000550119
INFO (check_health): layer 5 - ClippedReLU[32]
  - largest min activation = 0 , smallest max activation = 0
  - clipped 80.9084% of outputs
INFO (check_health): layer 6 - AffineTransform[1<-32]
  - avg_abs_bias        = 0.769338
  - avg_abs_bias_diff   = 0.0146618
  - avg_abs_weight      = 0.152401
  - avg_abs_weight_diff = 0.00131879
.INFO (save_eval): Saving current evaluation file in fresh-network/9
INFO (save_eval): Finished saving evaluation file in fresh-network/9
INFO (learning_rate):
  - loss = 0.159683 < best (0.261218), accepted

PROGRESS (calc_loss): Tue Mar 09 18:09:48 2021, 2000000 sfens, 76645 sfens/second, epoch 10
  - learning rate = 0.125
  - startpos eval = -184
  - val_loss       = 0.125427
  - train_loss       = 0.136081
  - train_grad_norm  = 0.253947
  - norm = 1.40338e+06
  - move accuracy = 14.6%
INFO (check_health): layer 0 - HalfKP(Friend)[41024->256x2]
  - observed 37761 (out of 43979) features
  - (min, max) of pre-activations = -0.561741, 0.765331 (limit = 258.008)
  - largest min activation = 0 , smallest max activation = 0.592094
  - avg_abs_bias   = 0.443094
  - avg_abs_weight = 0.0212794
  - clipped 5.52607% of outputs
INFO (check_health): layer 2 - AffineTransform[32<-512]
  - avg_abs_bias        = 0.582962
  - avg_abs_bias_diff   = 0.000829241
  - avg_abs_weight      = 0.0393626
  - avg_abs_weight_diff = 0.000190581
INFO (check_health): layer 3 - ClippedReLU[32]
  - largest min activation = 1 , smallest max activation = 0
  - clipped 77.9441% of outputs
INFO (check_health): layer 4 - AffineTransform[32<-32]
  - avg_abs_bias        = 0.478039
  - avg_abs_bias_diff   = 0.00124159
  - avg_abs_weight      = 0.145515
  - avg_abs_weight_diff = 0.000530369
INFO (check_health): layer 5 - ClippedReLU[32]
  - largest min activation = 0 , smallest max activation = 0
  - clipped 73.1073% of outputs
INFO (check_health): layer 6 - AffineTransform[1<-32]
  - avg_abs_bias        = 0.884985
  - avg_abs_bias_diff   = 0.0124117
  - avg_abs_weight      = 0.170559
  - avg_abs_weight_diff = 0.00132573
.INFO (save_eval): Saving current evaluation file in fresh-network/10
INFO (save_eval): Finished saving evaluation file in fresh-network/10
INFO (learning_rate):
  - loss = 0.125427 < best (0.159683), accepted

PROGRESS (calc_loss): Tue Mar 09 18:09:50 2021, 2200000 sfens, 78827 sfens/second, epoch 11
  - learning rate = 0.125
  - startpos eval = -345
  - val_loss       = 0.0865551
  - train_loss       = 0.0963315
  - train_grad_norm  = 0.203269
  - norm = 1.57992e+06
  - move accuracy = 14.55%
INFO (check_health): layer 0 - HalfKP(Friend)[41024->256x2]
  - observed 37887 (out of 43979) features
  - (min, max) of pre-activations = -0.579598, 0.824825 (limit = 258.008)
  - largest min activation = 0 , smallest max activation = 0.569385
  - avg_abs_bias   = 0.436094
  - avg_abs_weight = 0.0220826
  - clipped 9.94394% of outputs
INFO (check_health): layer 2 - AffineTransform[32<-512]
  - avg_abs_bias        = 0.581345
  - avg_abs_bias_diff   = 0.000691517
  - avg_abs_weight      = 0.0396269
  - avg_abs_weight_diff = 0.000138613
INFO (check_health): layer 3 - ClippedReLU[32]
  - largest min activation = 1 , smallest max activation = 0
  - clipped 72.4492% of outputs
INFO (check_health): layer 4 - AffineTransform[32<-32]
  - avg_abs_bias        = 0.47895
  - avg_abs_bias_diff   = 0.000930835
  - avg_abs_weight      = 0.146011
  - avg_abs_weight_diff = 0.000390152
INFO (check_health): layer 5 - ClippedReLU[32]
  - largest min activation = 0 , smallest max activation = 0
  - clipped 69.0309% of outputs
INFO (check_health): layer 6 - AffineTransform[1<-32]
  - avg_abs_bias        = 0.980369
  - avg_abs_bias_diff   = 0.00809311
  - avg_abs_weight      = 0.186263
  - avg_abs_weight_diff = 0.00104845
.INFO (save_eval): Saving current evaluation file in fresh-network/11
INFO (save_eval): Finished saving evaluation file in fresh-network/11
INFO (learning_rate):
  - loss = 0.0865551 < best (0.125427), accepted

PROGRESS (calc_loss): Tue Mar 09 18:09:51 2021, 2400000 sfens, 80577 sfens/second, epoch 12
  - learning rate = 0.125
  - startpos eval = 241
  - val_loss       = 0.0960863
  - train_loss       = 0.0774279
  - train_grad_norm  = 0.175477
  - norm = 1.64095e+06
  - move accuracy = 16%
INFO (check_health): layer 0 - HalfKP(Friend)[41024->256x2]
  - observed 37999 (out of 43979) features
  - (min, max) of pre-activations = -0.609198, 0.855343 (limit = 258.008)
  - largest min activation = 0 , smallest max activation = 0.563336
  - avg_abs_bias   = 0.431323
  - avg_abs_weight = 0.0226676
  - clipped 12.2125% of outputs
INFO (check_health): layer 2 - AffineTransform[32<-512]
  - avg_abs_bias        = 0.581352
  - avg_abs_bias_diff   = 0.000641459
  - avg_abs_weight      = 0.0397609
  - avg_abs_weight_diff = 0.000116381
INFO (check_health): layer 3 - ClippedReLU[32]
  - largest min activation = 1 , smallest max activation = 0
  - clipped 70.5055% of outputs
INFO (check_health): layer 4 - AffineTransform[32<-32]
  - avg_abs_bias        = 0.479023
  - avg_abs_bias_diff   = 0.000776562
  - avg_abs_weight      = 0.14622
  - avg_abs_weight_diff = 0.000321951
INFO (check_health): layer 5 - ClippedReLU[32]
  - largest min activation = 0.0982055 , smallest max activation = 0
  - clipped 67.4807% of outputs
INFO (check_health): layer 6 - AffineTransform[1<-32]
  - avg_abs_bias        = 1.03502
  - avg_abs_bias_diff   = 0.00619873
  - avg_abs_weight      = 0.196501
  - avg_abs_weight_diff = 0.000932214
.INFO (save_eval): Saving current evaluation file in fresh-network/12
INFO (save_eval): Finished saving evaluation file in fresh-network/12
INFO (learning_rate):
  - loss = 0.0960863 >= best (0.0865551), rejected
  - reducing learning rate from 0.125 to 0.0625 (3 more trials)

PROGRESS (calc_loss): Tue Mar 09 18:09:53 2021, 2600000 sfens, 82223 sfens/second, epoch 13
  - learning rate = 0.0625
  - startpos eval = 70
  - val_loss       = 0.0618871
  - train_loss       = 0.068422
  - train_grad_norm  = 0.161779
  - norm = 1.64347e+06
  - move accuracy = 16.65%
INFO (check_health): layer 0 - HalfKP(Friend)[41024->256x2]
  - observed 38101 (out of 43979) features
  - (min, max) of pre-activations = -0.636511, 0.885022 (limit = 258.008)
  - largest min activation = 0 , smallest max activation = 0.558438
  - avg_abs_bias   = 0.428319
  - avg_abs_weight = 0.0231015
  - clipped 13.7755% of outputs
INFO (check_health): layer 2 - AffineTransform[32<-512]
  - avg_abs_bias        = 0.580535
  - avg_abs_bias_diff   = 0.00055079
  - avg_abs_weight      = 0.0398027
  - avg_abs_weight_diff = 9.24211e-05
INFO (check_health): layer 3 - ClippedReLU[32]
  - largest min activation = 1 , smallest max activation = 0
  - clipped 69.0439% of outputs
INFO (check_health): layer 4 - AffineTransform[32<-32]
  - avg_abs_bias        = 0.479244
  - avg_abs_bias_diff   = 0.000614391
  - avg_abs_weight      = 0.146297
  - avg_abs_weight_diff = 0.000254153
INFO (check_health): layer 5 - ClippedReLU[32]
  - largest min activation = 0.192297 , smallest max activation = 0
  - clipped 66.3492% of outputs
INFO (check_health): layer 6 - AffineTransform[1<-32]
  - avg_abs_bias        = 1.07182
  - avg_abs_bias_diff   = 0.00464471
  - avg_abs_weight      = 0.20347
  - avg_abs_weight_diff = 0.000759814
.INFO (save_eval): Saving current evaluation file in fresh-network/13
INFO (save_eval): Finished saving evaluation file in fresh-network/13
INFO (learning_rate):
  - loss = 0.0618871 < best (0.0865551), accepted

PROGRESS (calc_loss): Tue Mar 09 18:09:55 2021, 2800000 sfens, 83722 sfens/second, epoch 14
  - learning rate = 0.0625
  - startpos eval = -81
  - val_loss       = 0.064333
  - train_loss       = 0.0333138
  - train_grad_norm  = 0.104749
  - norm = 1.78335e+06
  - move accuracy = 15.7%
INFO (check_health): layer 0 - HalfKP(Friend)[41024->256x2]
  - observed 38202 (out of 43979) features
  - (min, max) of pre-activations = -0.651166, 0.750784 (limit = 258.008)
  - largest min activation = 0 , smallest max activation = 0.544983
  - avg_abs_bias   = 0.429044
  - avg_abs_weight = 0.023067
  - clipped 12.7396% of outputs
INFO (check_health): layer 2 - AffineTransform[32<-512]
  - avg_abs_bias        = 0.580056
  - avg_abs_bias_diff   = 6.91663e-05
  - avg_abs_weight      = 0.0397866
  - avg_abs_weight_diff = 1.2565e-05
INFO (check_health): layer 3 - ClippedReLU[32]
  - largest min activation = 1 , smallest max activation = 0
  - clipped 70.2084% of outputs
INFO (check_health): layer 4 - AffineTransform[32<-32]
  - avg_abs_bias        = 0.479494
  - avg_abs_bias_diff   = 6.29022e-05
  - avg_abs_weight      = 0.146309
  - avg_abs_weight_diff = 2.70082e-05
INFO (check_health): layer 5 - ClippedReLU[32]
  - largest min activation = 0.262042 , smallest max activation = 0
  - clipped 66.9023% of outputs
INFO (check_health): layer 6 - AffineTransform[1<-32]
  - avg_abs_bias        = 1.09208
  - avg_abs_bias_diff   = 0.000426594
  - avg_abs_weight      = 0.211248
  - avg_abs_weight_diff = 8.71273e-05
.INFO (save_eval): Saving current evaluation file in fresh-network/14
INFO (save_eval): Finished saving evaluation file in fresh-network/14
INFO (learning_rate):
  - loss = 0.064333 >= best (0.0618871), rejected
  - reducing learning rate from 0.0625 to 0.03125 (3 more trials)

PROGRESS (calc_loss): Tue Mar 09 18:09:57 2021, 3000000 sfens, 84981 sfens/second, epoch 15
  - learning rate = 0.03125
  - startpos eval = 338
  - val_loss       = 0.0707723
  - train_loss       = 0.0323261
  - train_grad_norm  = 0.103057
  - norm = 1.82902e+06
  - move accuracy = 16.2%
INFO (check_health): layer 0 - HalfKP(Friend)[41024->256x2]
  - observed 38281 (out of 43979) features
  - (min, max) of pre-activations = -0.645263, 0.767126 (limit = 258.008)
  - largest min activation = 0 , smallest max activation = 0.546313
  - avg_abs_bias   = 0.429273
  - avg_abs_weight = 0.0230761
  - clipped 11.9205% of outputs
INFO (check_health): layer 2 - AffineTransform[32<-512]
  - avg_abs_bias        = 0.579621
  - avg_abs_bias_diff   = 7.83599e-05
  - avg_abs_weight      = 0.0397591
  - avg_abs_weight_diff = 1.41908e-05
INFO (check_health): layer 3 - ClippedReLU[32]
  - largest min activation = 1 , smallest max activation = 0
  - clipped 72.254% of outputs
INFO (check_health): layer 4 - AffineTransform[32<-32]
  - avg_abs_bias        = 0.479458
  - avg_abs_bias_diff   = 7.24238e-05
  - avg_abs_weight      = 0.146289
  - avg_abs_weight_diff = 3.01774e-05
INFO (check_health): layer 5 - ClippedReLU[32]
  - largest min activation = 0.300609 , smallest max activation = 0
  - clipped 67.6739% of outputs
INFO (check_health): layer 6 - AffineTransform[1<-32]
  - avg_abs_bias        = 1.10851
  - avg_abs_bias_diff   = 0.00046963
  - avg_abs_weight      = 0.216324
  - avg_abs_weight_diff = 8.73834e-05
.INFO (save_eval): Saving current evaluation file in fresh-network/15
INFO (save_eval): Finished saving evaluation file in fresh-network/15
INFO (learning_rate):
  - loss = 0.0707723 >= best (0.0618871), rejected
  - reducing learning rate from 0.03125 to 0.015625 (2 more trials)

PROGRESS (calc_loss): Tue Mar 09 18:09:59 2021, 3200000 sfens, 86183 sfens/second, epoch 16
  - learning rate = 0.015625
  - startpos eval = 16
  - val_loss       = 0.0620044
  - train_loss       = 0.0259137
  - train_grad_norm  = 0.0884716
  - norm = 1.836e+06
  - move accuracy = 16.4%
INFO (check_health): layer 0 - HalfKP(Friend)[41024->256x2]
  - observed 38360 (out of 43979) features
  - (min, max) of pre-activations = -0.646309, 0.760772 (limit = 258.008)
  - largest min activation = 0 , smallest max activation = 0.557711
  - avg_abs_bias   = 0.429469
  - avg_abs_weight = 0.0230555
  - clipped 11.4834% of outputs
INFO (check_health): layer 2 - AffineTransform[32<-512]
  - avg_abs_bias        = 0.579492
  - avg_abs_bias_diff   = 2.58113e-05
  - avg_abs_weight      = 0.0397463
  - avg_abs_weight_diff = 5.13404e-06
INFO (check_health): layer 3 - ClippedReLU[32]
  - largest min activation = 1 , smallest max activation = 0
  - clipped 73.1835% of outputs
INFO (check_health): layer 4 - AffineTransform[32<-32]
  - avg_abs_bias        = 0.479408
  - avg_abs_bias_diff   = 2.46542e-05
  - avg_abs_weight      = 0.146274
  - avg_abs_weight_diff = 1.02848e-05
INFO (check_health): layer 5 - ClippedReLU[32]
  - largest min activation = 0.316902 , smallest max activation = 0
  - clipped 68.001% of outputs
INFO (check_health): layer 6 - AffineTransform[1<-32]
  - avg_abs_bias        = 1.11675
  - avg_abs_bias_diff   = 0.000159962
  - avg_abs_weight      = 0.218584
  - avg_abs_weight_diff = 3.00839e-05
.INFO (save_eval): Saving current evaluation file in fresh-network/16
INFO (save_eval): Finished saving evaluation file in fresh-network/16
INFO (learning_rate):
  - loss = 0.0620044 >= best (0.0618871), rejected
  - reducing learning rate from 0.015625 to 0.0078125 (1 more trials)

PROGRESS (calc_loss): Tue Mar 09 18:10:01 2021, 3400000 sfens, 87313 sfens/second, epoch 17
  - learning rate = 0.0078125
  - startpos eval = 71
  - val_loss       = 0.0619761
  - train_loss       = 0.0251817
  - train_grad_norm  = 0.0861851
  - norm = 1.85571e+06
  - move accuracy = 16.65%
INFO (check_health): layer 0 - HalfKP(Friend)[41024->256x2]
  - observed 38434 (out of 43979) features
  - (min, max) of pre-activations = -0.654235, 0.736818 (limit = 258.008)
  - largest min activation = 0 , smallest max activation = 0.555133
  - avg_abs_bias   = 0.429525
  - avg_abs_weight = 0.023046
  - clipped 11.303% of outputs
INFO (check_health): layer 2 - AffineTransform[32<-512]
  - avg_abs_bias        = 0.579507
  - avg_abs_bias_diff   = 1.18133e-05
  - avg_abs_weight      = 0.0397382
  - avg_abs_weight_diff = 2.39722e-06
INFO (check_health): layer 3 - ClippedReLU[32]
  - largest min activation = 1 , smallest max activation = 0
  - clipped 73.7253% of outputs
INFO (check_health): layer 4 - AffineTransform[32<-32]
  - avg_abs_bias        = 0.47944
  - avg_abs_bias_diff   = 1.14416e-05
  - avg_abs_weight      = 0.146266
  - avg_abs_weight_diff = 4.76686e-06
INFO (check_health): layer 5 - ClippedReLU[32]
  - largest min activation = 0.322341 , smallest max activation = 0
  - clipped 68.1017% of outputs
INFO (check_health): layer 6 - AffineTransform[1<-32]
  - avg_abs_bias        = 1.12038
  - avg_abs_bias_diff   = 7.326e-05
  - avg_abs_weight      = 0.219646
  - avg_abs_weight_diff = 1.36197e-05
.INFO (save_eval): Saving current evaluation file in fresh-network/17
INFO (save_eval): Finished saving evaluation file in fresh-network/17
INFO (learning_rate):
  - loss = 0.0619761 >= best (0.0618871), rejected
  - converged
INFO (save_eval): Saving current evaluation file in fresh-network/final
INFO (save_eval): Finished saving evaluation file in fresh-network/final

C:\Users\pecas\Ajedrez\nnue\NNUE>POPD

C:\Users\pecas\Ajedrez\nnue\NNUE>PAUSE
Presione una tecla para continuar . . .
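The accept/reject decisions in the log above follow the "newbob with decay" schedule (newbob_decay = 0.5, newbob_num_trials = 4): a validation loss that fails to beat the best so far is "rejected", halving the learning rate and consuming a trial, while an improvement resets the trial budget; training stops when the trials run out. A minimal sketch of that logic (function and variable names here are illustrative, not the trainer's actual API):

```python
def newbob_step(val_loss, best_loss, lr, trials,
                decay=0.5, num_trials=4):
    """One scheduling decision; returns (best_loss, lr, trials, converged)."""
    if val_loss < best_loss:
        # accepted: new best, learning rate unchanged, trial budget reset
        return val_loss, lr, num_trials, False
    trials -= 1  # rejected: consume a trial
    if trials <= 0:
        return best_loss, lr, trials, True   # out of trials -> converged
    return best_loss, lr * decay, trials, False

# Replaying the first rejection from the log (epoch 1):
best, lr, trials, done = newbob_step(0.416501, 0.416409, 1.0, 4)
print(lr, trials)  # 0.5 3 -- matches "reducing learning rate from 1 to 0.5 (3 more trials)"
```

This also explains why the final rejection at epoch 17 prints "converged" instead of a further reduction: the last remaining trial is consumed before any halving would apply.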
However, I have a problem: if you run the script again, you may well end up with a network that has not actually learned anything. I have probably run the same script 100 times; about 3 times it learned, and the rest of the time it didn't. The usual result is:

Code:

C:\Users\pecas\Ajedrez\nnue\NNUE>PUSHD "C:\Users\pecas\Ajedrez\nnue\NNUE\"

C:\Users\pecas\Ajedrez\nnue\NNUE>TITLE Learning - Fresh Network

C:\Users\pecas\Ajedrez\nnue\NNUE>rd /q /s fresh-network

C:\Users\pecas\Ajedrez\nnue\NNUE>md fresh-network

C:\Users\pecas\Ajedrez\nnue\NNUE>SET OMP_NUM_THREADS=1

C:\Users\pecas\Ajedrez\nnue\NNUE>type "step 3a - learn_fresh_network.txt"   | "Stockfish x86-64-avx2.exe"
Stockfish 260221 by the Stockfish developers (see AUTHORS file)
info string Loaded eval file nn-c3ca321c51c9.nnue
id name Stockfish 260221
id author the Stockfish developers (see AUTHORS file)

option name Debug Log File type string default
option name Contempt type spin default 24 min -100 max 100
option name Analysis Contempt type combo default Both var Off var White var Black var Both
option name Threads type spin default 1 min 1 max 512
option name Hash type spin default 16 min 1 max 33554432
option name Clear Hash type button
option name Ponder type check default false
option name MultiPV type spin default 1 min 1 max 500
option name Skill Level type spin default 20 min 0 max 20
option name Move Overhead type spin default 10 min 0 max 5000
option name Slow Mover type spin default 100 min 10 max 1000
option name nodestime type spin default 0 min 0 max 10000
option name UCI_Chess960 type check default false
option name UCI_AnalyseMode type check default false
option name UCI_LimitStrength type check default false
option name UCI_Elo type spin default 1350 min 1350 max 2850
option name UCI_ShowWDL type check default false
option name SyzygyPath type string default <empty>
option name SyzygyProbeDepth type spin default 1 min 1 max 100
option name Syzygy50MoveRule type check default true
option name SyzygyProbeLimit type spin default 7 min 0 max 7
option name Use NNUE type combo default true var true var false var pure
option name EvalFile type string default nn-c3ca321c51c9.nnue
option name SkipLoadingEval type check default false
option name EvalSaveDir type string default evalsave
option name PruneAtShallowDepth type check default true
option name EnableTranspositionTable type check default true
uciok
readyok
INFO: Executing learn command
INFO: Input files:
  - training_data/data.binpack
INFO: Parameters:
  - validation set           : validation_data\data.binpack
  - validation count         : 2000
  - epochs                   : 1000
  - epochs * minibatch size  : 200000000
  - eval_limit               : 32000
  - save_only_once           : false
  - shuffle on read          : true
  - Loss Function            : ELMO_METHOD(WCSC27)
  - minibatch size           : 200000
  - nn_batch_size            : 1000
  - nn_options               :
  - learning rate            : 1
  - max_grad                 : 1
  - use draws in training    : 1
  - use draws in validation  : 1
  - skip repeated positions  : 1
  - winning prob coeff       : 0.00276753
  - use_wdl                  : 0
  - src_score_min_value      : 0
  - src_score_max_value      : 1
  - dest_score_min_value     : 0
  - dest_score_max_value     : 1
  - reduction_gameply        : 1
  - elmo_lambda_low          : 1
  - elmo_lambda_high         : 1
  - elmo_lambda_limit        : 32000
  - eval_save_interval       : 200000 sfens
  - loss_output_interval     : 200000 sfens
  - sfen_read_size           : 10000000
  - thread_buffer_size       : 10000
  - smart_fen_skipping       : 0
  - smart_fen_skipping_val   : 0
  - seed                     :
  - verbose                  : false
  - learning rate scheduling : newbob with decay
  - newbob_decay             : 0.5
  - newbob_num_trials        : 4

INFO: Started initialization.
INFO (initialize_training): Initializing NN training for Features=HalfKP(Friend)[41024->256x2],Network=AffineTransform[1<-32](ClippedReLU[32](AffineTransform[32<-32](ClippedReLU[32](AffineTransform[32<-512](InputSlice[512(0:512)])))))

Layers:
  - 0 - HalfKP(Friend)[41024->256x2]
  - 1 - InputSlice[512(0:512)]
  - 2 - AffineTransform[32<-512]
  - 3 - ClippedReLU[32]
  - 4 - AffineTransform[32<-32]
  - 5 - ClippedReLU[32]
  - 6 - AffineTransform[1<-32]

Factorizers:
  - Factorizer<HalfKP(Friend)> -> HalfK, P, HalfRelativeKP

INFO (initialize_training): Performing random net initialization.
Finished initialization.
info string NNUE evaluation using  enabled
INFO (sfen_reader): Opened file for reading: training_data/data.binpack
INFO (sfen_reader): Opened file for reading: validation_data\data.binpack
INFO (sfen_reader): Opened file for reading: validation_data\data.binpack

PROGRESS (calc_loss): Tue Mar 09 18:47:12 2021, 0 sfens, 0 sfens/second, epoch 0
  - learning rate = 1
  - startpos eval = 0
  - val_loss       = 0.415104
  - norm = 0
  - move accuracy = 0.3%
INFO (learn): initial loss = 0.415104
.INFO (save_eval): Saving current evaluation file in fresh-network/0
INFO (save_eval): Finished saving evaluation file in fresh-network/0

PROGRESS (calc_loss): Tue Mar 09 18:47:21 2021, 200000 sfens, 21788 sfens/second, epoch 1
  - learning rate = 1
  - startpos eval = 19
  - val_loss       = 0.415042
  - train_loss       = 0.398409
  - train_grad_norm  = 0.599358
  - norm = 38000
  - move accuracy = 0.3%
INFO (check_health): layer 0 - HalfKP(Friend)[41024->256x2]
  - observed 31693 (out of 43979) features
  - (min, max) of pre-activations = -0.105013, 1.03203 (limit = 258.008)
  - largest min activation = 0.248225 , smallest max activation = 0.764855
  - avg_abs_bias   = 0.500001
  - avg_abs_weight = 0.0137394
  - clipped 2.83203e-05% of outputs
INFO (check_health): layer 2 - AffineTransform[32<-512]
  - avg_abs_bias        = 0.463809
  - avg_abs_bias_diff   = 6.37289e-05
  - avg_abs_weight      = 0.035838
  - avg_abs_weight_diff = 3.18667e-05
INFO (check_health): layer 3 - ClippedReLU[32]
  - largest min activation = 0.270045 , smallest max activation = 0.719873
  - clipped 98.1911% of outputs
INFO (check_health): layer 4 - AffineTransform[32<-32]
  - avg_abs_bias        = 0.6231
  - avg_abs_bias_diff   = 0.000173892
  - avg_abs_weight      = 0.146059
  - avg_abs_weight_diff = 0.000100077
INFO (check_health): layer 5 - ClippedReLU[32]
  - largest min activation = 0 , smallest max activation = 0.714195
  - clipped 94.7478% of outputs
INFO (check_health): layer 6 - AffineTransform[1<-32]
  - avg_abs_bias        = 0.0331538
  - avg_abs_bias_diff   = 0.0240333
  - avg_abs_weight      = 0.0446703
  - avg_abs_weight_diff = 0.00119196
.INFO (save_eval): Saving current evaluation file in fresh-network/1
INFO (save_eval): Finished saving evaluation file in fresh-network/1
INFO (learning_rate):
  - loss = 0.415042 < best (0.415104), accepted

PROGRESS (calc_loss): Tue Mar 09 18:47:23 2021, 400000 sfens, 36218 sfens/second, epoch 2
  - learning rate = 1
  - startpos eval = 49
  - val_loss       = 0.41635
  - train_loss       = 0.396185
  - train_grad_norm  = 0.598233
  - norm = 98000
  - move accuracy = 0.3%
INFO (check_health): layer 0 - HalfKP(Friend)[41024->256x2]
  - observed 34082 (out of 43979) features
  - (min, max) of pre-activations = -0.0622003, 1.08296 (limit = 258.008)
  - largest min activation = 0.225561 , smallest max activation = 0.746496
  - avg_abs_bias   = 0.500001
  - avg_abs_weight = 0.0137394
  - clipped 2.73437e-05% of outputs
INFO (check_health): layer 2 - AffineTransform[32<-512]
  - avg_abs_bias        = 0.463809
  - avg_abs_bias_diff   = 6.13744e-09
  - avg_abs_weight      = 0.035838
  - avg_abs_weight_diff = 3.06497e-09
INFO (check_health): layer 3 - ClippedReLU[32]
  - largest min activation = 1 , smallest max activation = 0
  - clipped 99.9702% of outputs
INFO (check_health): layer 4 - AffineTransform[32<-32]
  - avg_abs_bias        = 0.623074
  - avg_abs_bias_diff   = 1.17931e-06
  - avg_abs_weight      = 0.146058
  - avg_abs_weight_diff = 7.00162e-07
INFO (check_health): layer 5 - ClippedReLU[32]
  - largest min activation = 0 , smallest max activation = 0
  - clipped 96.8749% of outputs
INFO (check_health): layer 6 - AffineTransform[1<-32]
  - avg_abs_bias        = 0.0828251
  - avg_abs_bias_diff   = 0.0201302
  - avg_abs_weight      = 0.0445773
  - avg_abs_weight_diff = 6.42917e-06
.INFO (save_eval): Saving current evaluation file in fresh-network/2
INFO (save_eval): Finished saving evaluation file in fresh-network/2
INFO (learning_rate):
  - loss = 0.41635 >= best (0.415042), rejected
  - reducing learning rate from 1 to 0.5 (3 more trials)

PROGRESS (calc_loss): Tue Mar 09 18:47:25 2021, 600000 sfens, 46375 sfens/second, epoch 3
  - learning rate = 0.5
  - startpos eval = 14
  - val_loss       = 0.414991
  - train_loss       = 0.396866
  - train_grad_norm  = 0.598888
  - norm = 28000
  - move accuracy = 0.3%
INFO (check_health): layer 0 - HalfKP(Friend)[41024->256x2]
  - observed 35258 (out of 43979) features
  - (min, max) of pre-activations = -0.0725056, 1.01159 (limit = 258.008)
  - largest min activation = 0.251366 , smallest max activation = 0.736929
  - avg_abs_bias   = 0.500001
  - avg_abs_weight = 0.0137394
  - clipped 2.63672e-05% of outputs
INFO (check_health): layer 2 - AffineTransform[32<-512]
  - avg_abs_bias        = 0.46381
  - avg_abs_bias_diff   = 1.71243e-09
  - avg_abs_weight      = 0.035838
  - avg_abs_weight_diff = 8.5003e-10
INFO (check_health): layer 3 - ClippedReLU[32]
  - largest min activation = 1 , smallest max activation = 0
  - clipped 99.9686% of outputs
INFO (check_health): layer 4 - AffineTransform[32<-32]
  - avg_abs_bias        = 0.62307
  - avg_abs_bias_diff   = 1.68173e-07
  - avg_abs_weight      = 0.146058
  - avg_abs_weight_diff = 9.97987e-08
INFO (check_health): layer 5 - ClippedReLU[32]
  - largest min activation = 0 , smallest max activation = 0
  - clipped 96.8773% of outputs
INFO (check_health): layer 6 - AffineTransform[1<-32]
  - avg_abs_bias        = 0.0241509
  - avg_abs_bias_diff   = 0.0188792
  - avg_abs_weight      = 0.0445574
  - avg_abs_weight_diff = 8.99084e-07
.INFO (save_eval): Saving current evaluation file in fresh-network/3
INFO (save_eval): Finished saving evaluation file in fresh-network/3
INFO (learning_rate):
  - loss = 0.414991 < best (0.415042), accepted

PROGRESS (calc_loss): Tue Mar 09 18:47:27 2021, 800000 sfens, 54054 sfens/second, epoch 4
  - learning rate = 0.5
  - startpos eval = 25
  - val_loss       = 0.415166
  - train_loss       = 0.396834
  - train_grad_norm  = 0.599195
  - norm = 50000
  - move accuracy = 0.3%
INFO (check_health): layer 0 - HalfKP(Friend)[41024->256x2]
  - observed 35963 (out of 43979) features
  - (min, max) of pre-activations = -0.0564605, 1.02667 (limit = 258.008)
  - largest min activation = 0.247446 , smallest max activation = 0.720379
  - avg_abs_bias   = 0.500001
  - avg_abs_weight = 0.0137394
  - clipped 2.73437e-05% of outputs
INFO (check_health): layer 2 - AffineTransform[32<-512]
  - avg_abs_bias        = 0.46381
  - avg_abs_bias_diff   = 9.7853e-10
  - avg_abs_weight      = 0.035838
  - avg_abs_weight_diff = 4.85001e-10
INFO (check_health): layer 3 - ClippedReLU[32]
  - largest min activation = 1 , smallest max activation = 0
  - clipped 99.9699% of outputs
INFO (check_health): layer 4 - AffineTransform[32<-32]
  - avg_abs_bias        = 0.62307
  - avg_abs_bias_diff   = 2.88776e-08
  - avg_abs_weight      = 0.146058
  - avg_abs_weight_diff = 1.71088e-08
INFO (check_health): layer 5 - ClippedReLU[32]
  - largest min activation = 0 , smallest max activation = 0
  - clipped 96.8831% of outputs
INFO (check_health): layer 6 - AffineTransform[1<-32]
  - avg_abs_bias        = 0.0427103
  - avg_abs_bias_diff   = 0.00870472
  - avg_abs_weight      = 0.0445576
  - avg_abs_weight_diff = 1.39592e-07
.INFO (save_eval): Saving current evaluation file in fresh-network/4
INFO (save_eval): Finished saving evaluation file in fresh-network/4
INFO (learning_rate):
  - loss = 0.415166 >= best (0.414991), rejected
  - reducing learning rate from 0.5 to 0.25 (3 more trials)

PROGRESS (calc_loss): Tue Mar 09 18:47:29 2021, 1000000 sfens, 59797 sfens/second, epoch 5
  - learning rate = 0.25
  - startpos eval = 29
  - val_loss       = 0.415287
  - train_loss       = 0.395601
  - train_grad_norm  = 0.597181
  - norm = 58000
  - move accuracy = 0.3%
INFO (check_health): layer 0 - HalfKP(Friend)[41024->256x2]
  - observed 36442 (out of 43979) features
  - (min, max) of pre-activations = -0.0666179, 1.01982 (limit = 258.008)
  - largest min activation = 0.251885 , smallest max activation = 0.759496
  - avg_abs_bias   = 0.500001
  - avg_abs_weight = 0.0137394
  - clipped 2.53906e-05% of outputs
INFO (check_health): layer 2 - AffineTransform[32<-512]
  - avg_abs_bias        = 0.46381
  - avg_abs_bias_diff   = 6.84443e-10
  - avg_abs_weight      = 0.035838
  - avg_abs_weight_diff = 3.38364e-10
INFO (check_health): layer 3 - ClippedReLU[32]
  - largest min activation = 1 , smallest max activation = 0
  - clipped 99.9694% of outputs
INFO (check_health): layer 4 - AffineTransform[32<-32]
  - avg_abs_bias        = 0.623069
  - avg_abs_bias_diff   = 1.83669e-08
  - avg_abs_weight      = 0.146058
  - avg_abs_weight_diff = 1.08793e-08
INFO (check_health): layer 5 - ClippedReLU[32]
  - largest min activation = 0 , smallest max activation = 0
  - clipped 96.8876% of outputs
INFO (check_health): layer 6 - AffineTransform[1<-32]
  - avg_abs_bias        = 0.0495948
  - avg_abs_bias_diff   = 0.0110125
  - avg_abs_weight      = 0.0445562
  - avg_abs_weight_diff = 1.24192e-07
.INFO (save_eval): Saving current evaluation file in fresh-network/5
INFO (save_eval): Finished saving evaluation file in fresh-network/5
INFO (learning_rate):
  - loss = 0.415287 >= best (0.414991), rejected
  - reducing learning rate from 0.25 to 0.125 (2 more trials)

PROGRESS (calc_loss): Tue Mar 09 18:47:30 2021, 1200000 sfens, 64325 sfens/second, epoch 6
  - learning rate = 0.125
  - startpos eval = 23
  - val_loss       = 0.415117
  - train_loss       = 0.396437
  - train_grad_norm  = 0.598531
  - norm = 46000
  - move accuracy = 0.3%
INFO (check_health): layer 0 - HalfKP(Friend)[41024->256x2]
  - observed 36817 (out of 43979) features
  - (min, max) of pre-activations = -0.0753079, 1.02287 (limit = 258.008)
  - largest min activation = 0.259006 , smallest max activation = 0.760966
  - avg_abs_bias   = 0.500001
  - avg_abs_weight = 0.0137394
  - clipped 2.24609e-05% of outputs
INFO (check_health): layer 2 - AffineTransform[32<-512]
  - avg_abs_bias        = 0.46381
  - avg_abs_bias_diff   = 3.93934e-10
  - avg_abs_weight      = 0.035838
  - avg_abs_weight_diff = 1.94706e-10
INFO (check_health): layer 3 - ClippedReLU[32]
  - largest min activation = 1 , smallest max activation = 0
  - clipped 99.9693% of outputs
INFO (check_health): layer 4 - AffineTransform[32<-32]
  - avg_abs_bias        = 0.62307
  - avg_abs_bias_diff   = 6.12394e-09
  - avg_abs_weight      = 0.146058
  - avg_abs_weight_diff = 3.62205e-09
INFO (check_health): layer 5 - ClippedReLU[32]
  - largest min activation = 0 , smallest max activation = 0
  - clipped 96.8901% of outputs
INFO (check_health): layer 6 - AffineTransform[1<-32]
  - avg_abs_bias        = 0.0387856
  - avg_abs_bias_diff   = 0.00397188
  - avg_abs_weight      = 0.0445561
  - avg_abs_weight_diff = 3.42702e-08
.INFO (save_eval): Saving current evaluation file in fresh-network/6
INFO (save_eval): Finished saving evaluation file in fresh-network/6
INFO (learning_rate):
  - loss = 0.415117 >= best (0.414991), rejected
  - reducing learning rate from 0.125 to 0.0625 (1 more trials)

PROGRESS (calc_loss): Tue Mar 09 18:47:32 2021, 1400000 sfens, 68346 sfens/second, epoch 7
  - learning rate = 0.0625
  - startpos eval = 29
  - val_loss       = 0.415287
  - train_loss       = 0.395858
  - train_grad_norm  = 0.597981
  - norm = 58000
  - move accuracy = 0.3%
INFO (check_health): layer 0 - HalfKP(Friend)[41024->256x2]
  - observed 37092 (out of 43979) features
  - (min, max) of pre-activations = -0.0402039, 1.01587 (limit = 258.008)
  - largest min activation = 0.259212 , smallest max activation = 0.741486
  - avg_abs_bias   = 0.500001
  - avg_abs_weight = 0.0137394
  - clipped 2.53906e-05% of outputs
INFO (check_health): layer 2 - AffineTransform[32<-512]
  - avg_abs_bias        = 0.46381
  - avg_abs_bias_diff   = 2.36028e-10
  - avg_abs_weight      = 0.035838
  - avg_abs_weight_diff = 1.17054e-10
INFO (check_health): layer 3 - ClippedReLU[32]
  - largest min activation = 1 , smallest max activation = 0
  - clipped 99.9691% of outputs
INFO (check_health): layer 4 - AffineTransform[32<-32]
  - avg_abs_bias        = 0.62307
  - avg_abs_bias_diff   = 3.1536e-09
  - avg_abs_weight      = 0.146058
  - avg_abs_weight_diff = 1.8633e-09
INFO (check_health): layer 5 - ClippedReLU[32]
  - largest min activation = 0 , smallest max activation = 0
  - clipped 96.8912% of outputs
INFO (check_health): layer 6 - AffineTransform[1<-32]
  - avg_abs_bias        = 0.0494999
  - avg_abs_bias_diff   = 0.00207748
  - avg_abs_weight      = 0.044556
  - avg_abs_weight_diff = 1.74314e-08
.INFO (save_eval): Saving current evaluation file in fresh-network/7
INFO (save_eval): Finished saving evaluation file in fresh-network/7
INFO (learning_rate):
  - loss = 0.415287 >= best (0.414991), rejected
  - converged
INFO (save_eval): Saving current evaluation file in fresh-network/final
INFO (save_eval): Finished saving evaluation file in fresh-network/final

C:\Users\pecas\Ajedrez\nnue\NNUE>POPD

C:\Users\pecas\Ajedrez\nnue\NNUE>PAUSE
Presione una tecla para continuar . . .
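Since a fresh run only learns some of the time, it helps to check the log mechanically before keeping a network. A minimal Python sketch: `run_learned` is a hypothetical helper (not part of the trainer) that scans the "move accuracy = X%" lines printed by calc_loss in the logs above; the 2% threshold is an assumption chosen to separate runs stuck at random-init accuracy (0.3–0.45%) from runs that actually improved.

```python
import re

def run_learned(log_text: str, min_accuracy: float = 2.0) -> bool:
    """Return True if the training log shows the net actually learned.

    A run that never moves past its random-initialization move accuracy
    (about 0.3-0.45% in the logs above) is considered a dead run.
    """
    accs = [float(m) for m in re.findall(r"move accuracy = ([0-9.]+)%", log_text)]
    if not accs:
        return False
    # Compare post-initialization accuracies against the threshold.
    return max(accs[1:], default=accs[0]) >= min_accuracy

# Fragments shaped like the two logs in this thread:
dead = "  - move accuracy = 0.3%\n  - move accuracy = 0.3%\n"
alive = "  - move accuracy = 0.45%\n  - move accuracy = 6.7%\n  - move accuracy = 7.75%\n"
```

A retry wrapper could then rerun the whole batch script while this check fails, keeping only a fresh-network/final directory from a run that passed.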

Re: A random walk down NNUE street ….

Post by MikeB »

pedrox wrote: Tue Mar 09, 2021 6:52 pm After reading one of your posts I decided to try training an NNUE network. For this I generated 300M positions at depth 4 for training and 1M at depth 8 for validation.

I used the scripts and software shown in these YouTube videos.
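For reference, training data like this is typically produced with the same trainer build's gensfen command. A minimal sketch only, assuming nodchip-style option names (depth, loop, output_file_name); verify the exact options against your build's gensfen help, and the paths here are placeholders:

```
uci
setoption name Threads value 10
isready
gensfen depth 4 loop 300000000 output_file_name training_data\data.binpack
gensfen depth 8 loop 1000000 output_file_name validation_data\data.binpack
quit
```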

To create a fresh network use:

Code: Select all

uci
setoption name EnableTranspositionTable value false
setoption name PruneAtShallowDepth value false
setoption name SkipLoadingEval value true
setoption name Use NNUE value pure
setoption name EvalSaveDir value fresh-network
setoption name Threads value 10
isready
learn targetdir training_data validation_set_file_name validation_data\data.binpack set_recommended_uci_options use_draw_in_training 1 use_draw_in_validation 1 eval_limit 32000 epochs 1000 lr 1.0 lambda 1.0 nn_batch_size 1000 batchsize 200000 eval_save_interval 200000  loss_output_interval 200000 newbob_decay 0.5 newbob_num_trials 4
quit
In a few minutes I managed to create a network that I have tested, and it seems to play amazingly well, over 2500 Elo, with my engine.

You can see in the log how the network starts from random weights with a move accuracy of 0.45%, and how with each epoch that move accuracy grows to just over 16%.

Code: Select all

C:\Users\pecas\Ajedrez\nnue\NNUE>PUSHD "C:\Users\pecas\Ajedrez\nnue\NNUE\"

C:\Users\pecas\Ajedrez\nnue\NNUE>TITLE Learning - Fresh Network

C:\Users\pecas\Ajedrez\nnue\NNUE>rd /q /s fresh-network

C:\Users\pecas\Ajedrez\nnue\NNUE>md fresh-network

C:\Users\pecas\Ajedrez\nnue\NNUE>SET OMP_NUM_THREADS=1

C:\Users\pecas\Ajedrez\nnue\NNUE>type "step 3a - learn_fresh_network.txt"   | "Stockfish x86-64-avx2.exe"
Stockfish 260221 by the Stockfish developers (see AUTHORS file)
info string Loaded eval file nn-c3ca321c51c9.nnue
id name Stockfish 260221
id author the Stockfish developers (see AUTHORS file)

option name Debug Log File type string default
option name Contempt type spin default 24 min -100 max 100
option name Analysis Contempt type combo default Both var Off var White var Black var Both
option name Threads type spin default 1 min 1 max 512
option name Hash type spin default 16 min 1 max 33554432
option name Clear Hash type button
option name Ponder type check default false
option name MultiPV type spin default 1 min 1 max 500
option name Skill Level type spin default 20 min 0 max 20
option name Move Overhead type spin default 10 min 0 max 5000
option name Slow Mover type spin default 100 min 10 max 1000
option name nodestime type spin default 0 min 0 max 10000
option name UCI_Chess960 type check default false
option name UCI_AnalyseMode type check default false
option name UCI_LimitStrength type check default false
option name UCI_Elo type spin default 1350 min 1350 max 2850
option name UCI_ShowWDL type check default false
option name SyzygyPath type string default <empty>
option name SyzygyProbeDepth type spin default 1 min 1 max 100
option name Syzygy50MoveRule type check default true
option name SyzygyProbeLimit type spin default 7 min 0 max 7
option name Use NNUE type combo default true var true var false var pure
option name EvalFile type string default nn-c3ca321c51c9.nnue
option name SkipLoadingEval type check default false
option name EvalSaveDir type string default evalsave
option name PruneAtShallowDepth type check default true
option name EnableTranspositionTable type check default true
uciok
readyok
INFO: Executing learn command
INFO: Input files:
  - training_data/data.binpack
INFO: Parameters:
  - validation set           : validation_data\data.binpack
  - validation count         : 2000
  - epochs                   : 1000
  - epochs * minibatch size  : 200000000
  - eval_limit               : 32000
  - save_only_once           : false
  - shuffle on read          : true
  - Loss Function            : ELMO_METHOD(WCSC27)
  - minibatch size           : 200000
  - nn_batch_size            : 1000
  - nn_options               :
  - learning rate            : 1
  - max_grad                 : 1
  - use draws in training    : 1
  - use draws in validation  : 1
  - skip repeated positions  : 1
  - winning prob coeff       : 0.00276753
  - use_wdl                  : 0
  - src_score_min_value      : 0
  - src_score_max_value      : 1
  - dest_score_min_value     : 0
  - dest_score_max_value     : 1
  - reduction_gameply        : 1
  - elmo_lambda_low          : 1
  - elmo_lambda_high         : 1
  - elmo_lambda_limit        : 32000
  - eval_save_interval       : 200000 sfens
  - loss_output_interval     : 200000 sfens
  - sfen_read_size           : 10000000
  - thread_buffer_size       : 10000
  - smart_fen_skipping       : 0
  - smart_fen_skipping_val   : 0
  - seed                     :
  - verbose                  : false
  - learning rate scheduling : newbob with decay
  - newbob_decay             : 0.5
  - newbob_num_trials        : 4

INFO: Started initialization.
INFO (initialize_training): Initializing NN training for Features=HalfKP(Friend)[41024->256x2],Network=AffineTransform[1<-32](ClippedReLU[32](AffineTransform[32<-32](ClippedReLU[32](AffineTransform[32<-512](InputSlice[512(0:512)])))))

Layers:
  - 0 - HalfKP(Friend)[41024->256x2]
  - 1 - InputSlice[512(0:512)]
  - 2 - AffineTransform[32<-512]
  - 3 - ClippedReLU[32]
  - 4 - AffineTransform[32<-32]
  - 5 - ClippedReLU[32]
  - 6 - AffineTransform[1<-32]

Factorizers:
  - Factorizer<HalfKP(Friend)> -> HalfK, P, HalfRelativeKP

INFO (initialize_training): Performing random net initialization.
Finished initialization.
info string NNUE evaluation using  enabled
INFO (sfen_reader): Opened file for reading: training_data/data.binpack
INFO (sfen_reader): Opened file for reading: validation_data\data.binpack

PROGRESS (calc_loss): Tue Mar 09 18:09:22 2021, 0 sfens, 0 sfens/second, epoch 0
  - learning rate = 1
  - startpos eval = 0
  - val_loss       = 0.416409
  - norm = 0
  - move accuracy = 0.45%
INFO (learn): initial loss = 0.416409
.INFO (save_eval): Saving current evaluation file in fresh-network/0
INFO (save_eval): Finished saving evaluation file in fresh-network/0

PROGRESS (calc_loss): Tue Mar 09 18:09:31 2021, 200000 sfens, 21563 sfens/second, epoch 1
  - learning rate = 1
  - startpos eval = 20
  - val_loss       = 0.416501
  - train_loss       = 0.398687
  - train_grad_norm  = 0.598973
  - norm = 40000
  - move accuracy = 0.45%
INFO (check_health): layer 0 - HalfKP(Friend)[41024->256x2]
  - observed 31753 (out of 43979) features
  - (min, max) of pre-activations = -0.101404, 1.0322 (limit = 258.008)
  - largest min activation = 0.252439 , smallest max activation = 0.768682
  - avg_abs_bias   = 0.500579
  - avg_abs_weight = 0.0137935
  - clipped 3.41797e-05% of outputs
INFO (check_health): layer 2 - AffineTransform[32<-512]
  - avg_abs_bias        = 0.585499
  - avg_abs_bias_diff   = 7.66894e-05
  - avg_abs_weight      = 0.035895
  - avg_abs_weight_diff = 3.85336e-05
INFO (check_health): layer 3 - ClippedReLU[32]
  - largest min activation = 0.280741 , smallest max activation = 0.718176
  - clipped 98.2333% of outputs
INFO (check_health): layer 4 - AffineTransform[32<-32]
  - avg_abs_bias        = 0.467714
  - avg_abs_bias_diff   = 0.000259837
  - avg_abs_weight      = 0.140762
  - avg_abs_weight_diff = 0.000124379
INFO (check_health): layer 5 - ClippedReLU[32]
  - largest min activation = 0 , smallest max activation = 0.694149
  - clipped 94.9137% of outputs
INFO (check_health): layer 6 - AffineTransform[1<-32]
  - avg_abs_bias        = 0.0337131
  - avg_abs_bias_diff   = 0.0272825
  - avg_abs_weight      = 0.0585321
  - avg_abs_weight_diff = 0.00132287
.INFO (save_eval): Saving current evaluation file in fresh-network/1
INFO (save_eval): Finished saving evaluation file in fresh-network/1
INFO (learning_rate):
  - loss = 0.416501 >= best (0.416409), rejected
  - reducing learning rate from 1 to 0.5 (3 more trials)

PROGRESS (calc_loss): Tue Mar 09 18:09:33 2021, 400000 sfens, 35896 sfens/second, epoch 2
  - learning rate = 0.5
  - startpos eval = 3
  - val_loss       = 0.416373
  - train_loss       = 0.395883
  - train_grad_norm  = 0.597733
  - norm = 5996
  - move accuracy = 0.45%
INFO (check_health): layer 0 - HalfKP(Friend)[41024->256x2]
  - observed 34084 (out of 43979) features
  - (min, max) of pre-activations = -0.0613348, 1.03045 (limit = 258.008)
  - largest min activation = 0.271284 , smallest max activation = 0.725052
  - avg_abs_bias   = 0.500576
  - avg_abs_weight = 0.0137934
  - clipped 3.80859e-05% of outputs
INFO (check_health): layer 2 - AffineTransform[32<-512]
  - avg_abs_bias        = 0.585489
  - avg_abs_bias_diff   = 3.13135e-07
  - avg_abs_weight      = 0.0358948
  - avg_abs_weight_diff = 1.58586e-07
INFO (check_health): layer 3 - ClippedReLU[32]
  - largest min activation = 1 , smallest max activation = 0
  - clipped 99.7783% of outputs
INFO (check_health): layer 4 - AffineTransform[32<-32]
  - avg_abs_bias        = 0.467766
  - avg_abs_bias_diff   = 1.85801e-06
  - avg_abs_weight      = 0.140762
  - avg_abs_weight_diff = 8.66296e-07
INFO (check_health): layer 5 - ClippedReLU[32]
  - largest min activation = 0 , smallest max activation = 0
  - clipped 99.608% of outputs
INFO (check_health): layer 6 - AffineTransform[1<-32]
  - avg_abs_bias        = 0.00655166
  - avg_abs_bias_diff   = 0.0177786
  - avg_abs_weight      = 0.0585996
  - avg_abs_weight_diff = 8.45999e-07
.INFO (save_eval): Saving current evaluation file in fresh-network/2
INFO (save_eval): Finished saving evaluation file in fresh-network/2
INFO (learning_rate):
  - loss = 0.416373 < best (0.416409), accepted

PROGRESS (calc_loss): Tue Mar 09 18:09:35 2021, 600000 sfens, 46129 sfens/second, epoch 3
  - learning rate = 0.5
  - startpos eval = 17
  - val_loss       = 0.405943
  - train_loss       = 0.395553
  - train_grad_norm  = 0.596937
  - norm = 51284
  - move accuracy = 6.7%
INFO (check_health): layer 0 - HalfKP(Friend)[41024->256x2]
  - observed 35236 (out of 43979) features
  - (min, max) of pre-activations = -0.0699659, 1.07201 (limit = 258.008)
  - largest min activation = 0.246764 , smallest max activation = 0.729006
  - avg_abs_bias   = 0.499179
  - avg_abs_weight = 0.0144284
  - clipped 3.90625e-05% of outputs
INFO (check_health): layer 2 - AffineTransform[32<-512]
  - avg_abs_bias        = 0.585174
  - avg_abs_bias_diff   = 6.38492e-05
  - avg_abs_weight      = 0.0361257
  - avg_abs_weight_diff = 3.20849e-05
INFO (check_health): layer 3 - ClippedReLU[32]
  - largest min activation = 1 , smallest max activation = 0
  - clipped 97.1364% of outputs
INFO (check_health): layer 4 - AffineTransform[32<-32]
  - avg_abs_bias        = 0.467965
  - avg_abs_bias_diff   = 0.000224271
  - avg_abs_weight      = 0.141104
  - avg_abs_weight_diff = 0.00010382
INFO (check_health): layer 5 - ClippedReLU[32]
  - largest min activation = 0 , smallest max activation = 0
  - clipped 87.8029% of outputs
INFO (check_health): layer 6 - AffineTransform[1<-32]
  - avg_abs_bias        = 0.162805
  - avg_abs_bias_diff   = 0.0150043
  - avg_abs_weight      = 0.0755494
  - avg_abs_weight_diff = 0.000463723
.INFO (save_eval): Saving current evaluation file in fresh-network/3
INFO (save_eval): Finished saving evaluation file in fresh-network/3
INFO (learning_rate):
  - loss = 0.405943 < best (0.416373), accepted

PROGRESS (calc_loss): Tue Mar 09 18:09:37 2021, 800000 sfens, 53799 sfens/second, epoch 4
  - learning rate = 0.5
  - startpos eval = 39
  - val_loss       = 0.417296
  - train_loss       = 0.395754
  - train_grad_norm  = 0.597574
  - norm = 78000
  - move accuracy = 0.45%
INFO (check_health): layer 0 - HalfKP(Friend)[41024->256x2]
  - observed 35979 (out of 43979) features
  - (min, max) of pre-activations = -0.100556, 1.05062 (limit = 258.008)
  - largest min activation = 0.221421 , smallest max activation = 0.752297
  - avg_abs_bias   = 0.498258
  - avg_abs_weight = 0.0145998
  - clipped 0.000352539% of outputs
INFO (check_health): layer 2 - AffineTransform[32<-512]
  - avg_abs_bias        = 0.585736
  - avg_abs_bias_diff   = 2.5246e-05
  - avg_abs_weight      = 0.0362498
  - avg_abs_weight_diff = 1.24944e-05
INFO (check_health): layer 3 - ClippedReLU[32]
  - largest min activation = 1 , smallest max activation = 0
  - clipped 99.8195% of outputs
INFO (check_health): layer 4 - AffineTransform[32<-32]
  - avg_abs_bias        = 0.46637
  - avg_abs_bias_diff   = 9.8387e-05
  - avg_abs_weight      = 0.141583
  - avg_abs_weight_diff = 4.55471e-05
INFO (check_health): layer 5 - ClippedReLU[32]
  - largest min activation = 0 , smallest max activation = 0
  - clipped 94.6775% of outputs
INFO (check_health): layer 6 - AffineTransform[1<-32]
  - avg_abs_bias        = 0.0646924
  - avg_abs_bias_diff   = 0.0118451
  - avg_abs_weight      = 0.0766396
  - avg_abs_weight_diff = 0.000181719
.INFO (save_eval): Saving current evaluation file in fresh-network/4
INFO (save_eval): Finished saving evaluation file in fresh-network/4
INFO (learning_rate):
  - loss = 0.417296 >= best (0.405943), rejected
  - reducing learning rate from 0.5 to 0.25 (3 more trials)

PROGRESS (calc_loss): Tue Mar 09 18:09:38 2021, 1000000 sfens, 59530 sfens/second, epoch 5
  - learning rate = 0.25
  - startpos eval = 28
  - val_loss       = 0.416752
  - train_loss       = 0.395499
  - train_grad_norm  = 0.597595
  - norm = 56000
  - move accuracy = 0.45%
INFO (check_health): layer 0 - HalfKP(Friend)[41024->256x2]
  - observed 36483 (out of 43979) features
  - (min, max) of pre-activations = -0.161064, 1.04624 (limit = 258.008)
  - largest min activation = 0.231287 , smallest max activation = 0.750544
  - avg_abs_bias   = 0.498243
  - avg_abs_weight = 0.0146011
  - clipped 0.000413086% of outputs
INFO (check_health): layer 2 - AffineTransform[32<-512]
  - avg_abs_bias        = 0.585711
  - avg_abs_bias_diff   = 1.96852e-07
  - avg_abs_weight      = 0.0362499
  - avg_abs_weight_diff = 9.53579e-08
INFO (check_health): layer 3 - ClippedReLU[32]
  - largest min activation = 1 , smallest max activation = 0
  - clipped 99.9939% of outputs
INFO (check_health): layer 4 - AffineTransform[32<-32]
  - avg_abs_bias        = 0.466457
  - avg_abs_bias_diff   = 7.97229e-06
  - avg_abs_weight      = 0.14158
  - avg_abs_weight_diff = 3.73425e-06
INFO (check_health): layer 5 - ClippedReLU[32]
  - largest min activation = 0 , smallest max activation = 0
  - clipped 96.8746% of outputs
INFO (check_health): layer 6 - AffineTransform[1<-32]
  - avg_abs_bias        = 0.0473122
  - avg_abs_bias_diff   = 0.00914914
  - avg_abs_weight      = 0.076385
  - avg_abs_weight_diff = 6.35041e-06
.INFO (save_eval): Saving current evaluation file in fresh-network/5
INFO (save_eval): Finished saving evaluation file in fresh-network/5
INFO (learning_rate):
  - loss = 0.416752 >= best (0.405943), rejected
  - reducing learning rate from 0.25 to 0.125 (2 more trials)

PROGRESS (calc_loss): Tue Mar 09 18:09:40 2021, 1200000 sfens, 63911 sfens/second, epoch 6
  - learning rate = 0.125
  - startpos eval = 120
  - val_loss       = 0.379456
  - train_loss       = 0.389374
  - train_grad_norm  = 0.589321
  - norm = 231767
  - move accuracy = 4.25%
INFO (check_health): layer 0 - HalfKP(Friend)[41024->256x2]
  - observed 36839 (out of 43979) features
  - (min, max) of pre-activations = -0.219392, 1.0801 (limit = 258.008)
  - largest min activation = 0.235672 , smallest max activation = 0.74145
  - avg_abs_bias   = 0.494117
  - avg_abs_weight = 0.015312
  - clipped 0.00141113% of outputs
INFO (check_health): layer 2 - AffineTransform[32<-512]
  - avg_abs_bias        = 0.585581
  - avg_abs_bias_diff   = 9.92744e-05
  - avg_abs_weight      = 0.0366705
  - avg_abs_weight_diff = 4.79341e-05
INFO (check_health): layer 3 - ClippedReLU[32]
  - largest min activation = 1 , smallest max activation = 0
  - clipped 99.3282% of outputs
INFO (check_health): layer 4 - AffineTransform[32<-32]
  - avg_abs_bias        = 0.468666
  - avg_abs_bias_diff   = 0.000355264
  - avg_abs_weight      = 0.142015
  - avg_abs_weight_diff = 0.000162865
INFO (check_health): layer 5 - ClippedReLU[32]
  - largest min activation = 0 , smallest max activation = 0
  - clipped 90.8695% of outputs
INFO (check_health): layer 6 - AffineTransform[1<-32]
  - avg_abs_bias        = 0.180674
  - avg_abs_bias_diff   = 0.0119275
  - avg_abs_weight      = 0.0968762
  - avg_abs_weight_diff = 0.000349115
.INFO (save_eval): Saving current evaluation file in fresh-network/6
INFO (save_eval): Finished saving evaluation file in fresh-network/6
INFO (learning_rate):
  - loss = 0.379456 < best (0.405943), accepted

PROGRESS (calc_loss): Tue Mar 09 18:09:42 2021, 1400000 sfens, 67915 sfens/second, epoch 7
  - learning rate = 0.125
  - startpos eval = -210
  - val_loss       = 0.267728
  - train_loss       = 0.325339
  - train_grad_norm  = 0.493963
  - norm = 735858
  - move accuracy = 7.75%
INFO (check_health): layer 0 - HalfKP(Friend)[41024->256x2]
  - observed 37149 (out of 43979) features
  - (min, max) of pre-activations = -0.205239, 1.04001 (limit = 258.008)
  - largest min activation = 0.210378 , smallest max activation = 0.712174
  - avg_abs_bias   = 0.484659
  - avg_abs_weight = 0.0168286
  - clipped 0.00717188% of outputs
INFO (check_health): layer 2 - AffineTransform[32<-512]
  - avg_abs_bias        = 0.58554
  - avg_abs_bias_diff   = 0.000303167
  - avg_abs_weight      = 0.0373809
  - avg_abs_weight_diff = 0.000137568
INFO (check_health): layer 3 - ClippedReLU[32]
  - largest min activation = 1 , smallest max activation = 0
  - clipped 97.5786% of outputs
INFO (check_health): layer 4 - AffineTransform[32<-32]
  - avg_abs_bias        = 0.472723
  - avg_abs_bias_diff   = 0.000706124
  - avg_abs_weight      = 0.142515
  - avg_abs_weight_diff = 0.00033628
INFO (check_health): layer 5 - ClippedReLU[32]
  - largest min activation = 0.060689 , smallest max activation = 0
  - clipped 84.2233% of outputs
INFO (check_health): layer 6 - AffineTransform[1<-32]
  - avg_abs_bias        = 0.462488
  - avg_abs_bias_diff   = 0.0125747
  - avg_abs_weight      = 0.121592
  - avg_abs_weight_diff = 0.000759557
.INFO (save_eval): Saving current evaluation file in fresh-network/7
INFO (save_eval): Finished saving evaluation file in fresh-network/7
INFO (learning_rate):
  - loss = 0.267728 < best (0.379456), accepted

PROGRESS (calc_loss): Tue Mar 09 18:09:44 2021, 1600000 sfens, 71317 sfens/second, epoch 8
  - learning rate = 0.125
  - startpos eval = 163
  - val_loss       = 0.261218
  - train_loss       = 0.261386
  - train_grad_norm  = 0.403635
  - norm = 801397
  - move accuracy = 10.05%
INFO (check_health): layer 0 - HalfKP(Friend)[41024->256x2]
  - observed 37405 (out of 43979) features
  - (min, max) of pre-activations = -0.294544, 0.951142 (limit = 258.008)
  - largest min activation = 0.155974 , smallest max activation = 0.659514
  - avg_abs_bias   = 0.471469
  - avg_abs_weight = 0.0184298
  - clipped 0.109976% of outputs
INFO (check_health): layer 2 - AffineTransform[32<-512]
  - avg_abs_bias        = 0.58633
  - avg_abs_bias_diff   = 0.000437064
  - avg_abs_weight      = 0.0381593
  - avg_abs_weight_diff = 0.000173221
INFO (check_health): layer 3 - ClippedReLU[32]
  - largest min activation = 1 , smallest max activation = 0
  - clipped 94.9548% of outputs
INFO (check_health): layer 4 - AffineTransform[32<-32]
  - avg_abs_bias        = 0.473089
  - avg_abs_bias_diff   = 0.0010488
  - avg_abs_weight      = 0.143618
  - avg_abs_weight_diff = 0.000486951
INFO (check_health): layer 5 - ClippedReLU[32]
  - largest min activation = 0.41457 , smallest max activation = 0
  - clipped 82.4149% of outputs
INFO (check_health): layer 6 - AffineTransform[1<-32]
  - avg_abs_bias        = 0.606025
  - avg_abs_bias_diff   = 0.0145985
  - avg_abs_weight      = 0.135376
  - avg_abs_weight_diff = 0.00126558
.INFO (save_eval): Saving current evaluation file in fresh-network/8
INFO (save_eval): Finished saving evaluation file in fresh-network/8
INFO (learning_rate):
  - loss = 0.261218 < best (0.267728), accepted

PROGRESS (calc_loss): Tue Mar 09 18:09:46 2021, 1800000 sfens, 74177 sfens/second, epoch 9
  - learning rate = 0.125
  - startpos eval = -319
  - val_loss       = 0.159683
  - train_loss       = 0.206607
  - train_grad_norm  = 0.337158
  - norm = 1.07094e+06
  - move accuracy = 14.15%
INFO (check_health): layer 0 - HalfKP(Friend)[41024->256x2]
  - observed 37577 (out of 43979) features
  - (min, max) of pre-activations = -0.420278, 0.850462 (limit = 258.008)
  - largest min activation = 0.0575945 , smallest max activation = 0.62396
  - avg_abs_bias   = 0.45627
  - avg_abs_weight = 0.0199819
  - clipped 1.10194% of outputs
INFO (check_health): layer 2 - AffineTransform[32<-512]
  - avg_abs_bias        = 0.584842
  - avg_abs_bias_diff   = 0.000637732
  - avg_abs_weight      = 0.0388539
  - avg_abs_weight_diff = 0.000194225
INFO (check_health): layer 3 - ClippedReLU[32]
  - largest min activation = 1 , smallest max activation = 0
  - clipped 88.4383% of outputs
INFO (check_health): layer 4 - AffineTransform[32<-32]
  - avg_abs_bias        = 0.476298
  - avg_abs_bias_diff   = 0.00123829
  - avg_abs_weight      = 0.144648
  - avg_abs_weight_diff = 0.000550119
INFO (check_health): layer 5 - ClippedReLU[32]
  - largest min activation = 0 , smallest max activation = 0
  - clipped 80.9084% of outputs
INFO (check_health): layer 6 - AffineTransform[1<-32]
  - avg_abs_bias        = 0.769338
  - avg_abs_bias_diff   = 0.0146618
  - avg_abs_weight      = 0.152401
  - avg_abs_weight_diff = 0.00131879
.INFO (save_eval): Saving current evaluation file in fresh-network/9
INFO (save_eval): Finished saving evaluation file in fresh-network/9
INFO (learning_rate):
  - loss = 0.159683 < best (0.261218), accepted

PROGRESS (calc_loss): Tue Mar 09 18:09:48 2021, 2000000 sfens, 76645 sfens/second, epoch 10
  - learning rate = 0.125
  - startpos eval = -184
  - val_loss       = 0.125427
  - train_loss       = 0.136081
  - train_grad_norm  = 0.253947
  - norm = 1.40338e+06
  - move accuracy = 14.6%
INFO (check_health): layer 0 - HalfKP(Friend)[41024->256x2]
  - observed 37761 (out of 43979) features
  - (min, max) of pre-activations = -0.561741, 0.765331 (limit = 258.008)
  - largest min activation = 0 , smallest max activation = 0.592094
  - avg_abs_bias   = 0.443094
  - avg_abs_weight = 0.0212794
  - clipped 5.52607% of outputs
INFO (check_health): layer 2 - AffineTransform[32<-512]
  - avg_abs_bias        = 0.582962
  - avg_abs_bias_diff   = 0.000829241
  - avg_abs_weight      = 0.0393626
  - avg_abs_weight_diff = 0.000190581
INFO (check_health): layer 3 - ClippedReLU[32]
  - largest min activation = 1 , smallest max activation = 0
  - clipped 77.9441% of outputs
INFO (check_health): layer 4 - AffineTransform[32<-32]
  - avg_abs_bias        = 0.478039
  - avg_abs_bias_diff   = 0.00124159
  - avg_abs_weight      = 0.145515
  - avg_abs_weight_diff = 0.000530369
INFO (check_health): layer 5 - ClippedReLU[32]
  - largest min activation = 0 , smallest max activation = 0
  - clipped 73.1073% of outputs
INFO (check_health): layer 6 - AffineTransform[1<-32]
  - avg_abs_bias        = 0.884985
  - avg_abs_bias_diff   = 0.0124117
  - avg_abs_weight      = 0.170559
  - avg_abs_weight_diff = 0.00132573
.INFO (save_eval): Saving current evaluation file in fresh-network/10
INFO (save_eval): Finished saving evaluation file in fresh-network/10
INFO (learning_rate):
  - loss = 0.125427 < best (0.159683), accepted

PROGRESS (calc_loss): Tue Mar 09 18:09:50 2021, 2200000 sfens, 78827 sfens/second, epoch 11
  - learning rate = 0.125
  - startpos eval = -345
  - val_loss       = 0.0865551
  - train_loss       = 0.0963315
  - train_grad_norm  = 0.203269
  - norm = 1.57992e+06
  - move accuracy = 14.55%
INFO (check_health): layer 0 - HalfKP(Friend)[41024->256x2]
  - observed 37887 (out of 43979) features
  - (min, max) of pre-activations = -0.579598, 0.824825 (limit = 258.008)
  - largest min activation = 0 , smallest max activation = 0.569385
  - avg_abs_bias   = 0.436094
  - avg_abs_weight = 0.0220826
  - clipped 9.94394% of outputs
INFO (check_health): layer 2 - AffineTransform[32<-512]
  - avg_abs_bias        = 0.581345
  - avg_abs_bias_diff   = 0.000691517
  - avg_abs_weight      = 0.0396269
  - avg_abs_weight_diff = 0.000138613
INFO (check_health): layer 3 - ClippedReLU[32]
  - largest min activation = 1 , smallest max activation = 0
  - clipped 72.4492% of outputs
INFO (check_health): layer 4 - AffineTransform[32<-32]
  - avg_abs_bias        = 0.47895
  - avg_abs_bias_diff   = 0.000930835
  - avg_abs_weight      = 0.146011
  - avg_abs_weight_diff = 0.000390152
INFO (check_health): layer 5 - ClippedReLU[32]
  - largest min activation = 0 , smallest max activation = 0
  - clipped 69.0309% of outputs
INFO (check_health): layer 6 - AffineTransform[1<-32]
  - avg_abs_bias        = 0.980369
  - avg_abs_bias_diff   = 0.00809311
  - avg_abs_weight      = 0.186263
  - avg_abs_weight_diff = 0.00104845
.INFO (save_eval): Saving current evaluation file in fresh-network/11
INFO (save_eval): Finished saving evaluation file in fresh-network/11
INFO (learning_rate):
  - loss = 0.0865551 < best (0.125427), accepted

PROGRESS (calc_loss): Tue Mar 09 18:09:51 2021, 2400000 sfens, 80577 sfens/second, epoch 12
  - learning rate = 0.125
  - startpos eval = 241
  - val_loss       = 0.0960863
  - train_loss       = 0.0774279
  - train_grad_norm  = 0.175477
  - norm = 1.64095e+06
  - move accuracy = 16%
INFO (check_health): layer 0 - HalfKP(Friend)[41024->256x2]
  - observed 37999 (out of 43979) features
  - (min, max) of pre-activations = -0.609198, 0.855343 (limit = 258.008)
  - largest min activation = 0 , smallest max activation = 0.563336
  - avg_abs_bias   = 0.431323
  - avg_abs_weight = 0.0226676
  - clipped 12.2125% of outputs
INFO (check_health): layer 2 - AffineTransform[32<-512]
  - avg_abs_bias        = 0.581352
  - avg_abs_bias_diff   = 0.000641459
  - avg_abs_weight      = 0.0397609
  - avg_abs_weight_diff = 0.000116381
INFO (check_health): layer 3 - ClippedReLU[32]
  - largest min activation = 1 , smallest max activation = 0
  - clipped 70.5055% of outputs
INFO (check_health): layer 4 - AffineTransform[32<-32]
  - avg_abs_bias        = 0.479023
  - avg_abs_bias_diff   = 0.000776562
  - avg_abs_weight      = 0.14622
  - avg_abs_weight_diff = 0.000321951
INFO (check_health): layer 5 - ClippedReLU[32]
  - largest min activation = 0.0982055 , smallest max activation = 0
  - clipped 67.4807% of outputs
INFO (check_health): layer 6 - AffineTransform[1<-32]
  - avg_abs_bias        = 1.03502
  - avg_abs_bias_diff   = 0.00619873
  - avg_abs_weight      = 0.196501
  - avg_abs_weight_diff = 0.000932214
.INFO (save_eval): Saving current evaluation file in fresh-network/12
INFO (save_eval): Finished saving evaluation file in fresh-network/12
INFO (learning_rate):
  - loss = 0.0960863 >= best (0.0865551), rejected
  - reducing learning rate from 0.125 to 0.0625 (3 more trials)

PROGRESS (calc_loss): Tue Mar 09 18:09:53 2021, 2600000 sfens, 82223 sfens/second, epoch 13
  - learning rate = 0.0625
  - startpos eval = 70
  - val_loss       = 0.0618871
  - train_loss       = 0.068422
  - train_grad_norm  = 0.161779
  - norm = 1.64347e+06
  - move accuracy = 16.65%
INFO (check_health): layer 0 - HalfKP(Friend)[41024->256x2]
  - observed 38101 (out of 43979) features
  - (min, max) of pre-activations = -0.636511, 0.885022 (limit = 258.008)
  - largest min activation = 0 , smallest max activation = 0.558438
  - avg_abs_bias   = 0.428319
  - avg_abs_weight = 0.0231015
  - clipped 13.7755% of outputs
INFO (check_health): layer 2 - AffineTransform[32<-512]
  - avg_abs_bias        = 0.580535
  - avg_abs_bias_diff   = 0.00055079
  - avg_abs_weight      = 0.0398027
  - avg_abs_weight_diff = 9.24211e-05
INFO (check_health): layer 3 - ClippedReLU[32]
  - largest min activation = 1 , smallest max activation = 0
  - clipped 69.0439% of outputs
INFO (check_health): layer 4 - AffineTransform[32<-32]
  - avg_abs_bias        = 0.479244
  - avg_abs_bias_diff   = 0.000614391
  - avg_abs_weight      = 0.146297
  - avg_abs_weight_diff = 0.000254153
INFO (check_health): layer 5 - ClippedReLU[32]
  - largest min activation = 0.192297 , smallest max activation = 0
  - clipped 66.3492% of outputs
INFO (check_health): layer 6 - AffineTransform[1<-32]
  - avg_abs_bias        = 1.07182
  - avg_abs_bias_diff   = 0.00464471
  - avg_abs_weight      = 0.20347
  - avg_abs_weight_diff = 0.000759814
.INFO (save_eval): Saving current evaluation file in fresh-network/13
INFO (save_eval): Finished saving evaluation file in fresh-network/13
INFO (learning_rate):
  - loss = 0.0618871 < best (0.0865551), accepted

PROGRESS (calc_loss): Tue Mar 09 18:09:55 2021, 2800000 sfens, 83722 sfens/second, epoch 14
  - learning rate = 0.0625
  - startpos eval = -81
  - val_loss       = 0.064333
  - train_loss       = 0.0333138
  - train_grad_norm  = 0.104749
  - norm = 1.78335e+06
  - move accuracy = 15.7%
INFO (check_health): layer 0 - HalfKP(Friend)[41024->256x2]
  - observed 38202 (out of 43979) features
  - (min, max) of pre-activations = -0.651166, 0.750784 (limit = 258.008)
  - largest min activation = 0 , smallest max activation = 0.544983
  - avg_abs_bias   = 0.429044
  - avg_abs_weight = 0.023067
  - clipped 12.7396% of outputs
INFO (check_health): layer 2 - AffineTransform[32<-512]
  - avg_abs_bias        = 0.580056
  - avg_abs_bias_diff   = 6.91663e-05
  - avg_abs_weight      = 0.0397866
  - avg_abs_weight_diff = 1.2565e-05
INFO (check_health): layer 3 - ClippedReLU[32]
  - largest min activation = 1 , smallest max activation = 0
  - clipped 70.2084% of outputs
INFO (check_health): layer 4 - AffineTransform[32<-32]
  - avg_abs_bias        = 0.479494
  - avg_abs_bias_diff   = 6.29022e-05
  - avg_abs_weight      = 0.146309
  - avg_abs_weight_diff = 2.70082e-05
INFO (check_health): layer 5 - ClippedReLU[32]
  - largest min activation = 0.262042 , smallest max activation = 0
  - clipped 66.9023% of outputs
INFO (check_health): layer 6 - AffineTransform[1<-32]
  - avg_abs_bias        = 1.09208
  - avg_abs_bias_diff   = 0.000426594
  - avg_abs_weight      = 0.211248
  - avg_abs_weight_diff = 8.71273e-05
.INFO (save_eval): Saving current evaluation file in fresh-network/14
INFO (save_eval): Finished saving evaluation file in fresh-network/14
INFO (learning_rate):
  - loss = 0.064333 >= best (0.0618871), rejected
  - reducing learning rate from 0.0625 to 0.03125 (3 more trials)

PROGRESS (calc_loss): Tue Mar 09 18:09:57 2021, 3000000 sfens, 84981 sfens/second, epoch 15
  - learning rate = 0.03125
  - startpos eval = 338
  - val_loss       = 0.0707723
  - train_loss       = 0.0323261
  - train_grad_norm  = 0.103057
  - norm = 1.82902e+06
  - move accuracy = 16.2%
INFO (check_health): layer 0 - HalfKP(Friend)[41024->256x2]
  - observed 38281 (out of 43979) features
  - (min, max) of pre-activations = -0.645263, 0.767126 (limit = 258.008)
  - largest min activation = 0 , smallest max activation = 0.546313
  - avg_abs_bias   = 0.429273
  - avg_abs_weight = 0.0230761
  - clipped 11.9205% of outputs
INFO (check_health): layer 2 - AffineTransform[32<-512]
  - avg_abs_bias        = 0.579621
  - avg_abs_bias_diff   = 7.83599e-05
  - avg_abs_weight      = 0.0397591
  - avg_abs_weight_diff = 1.41908e-05
INFO (check_health): layer 3 - ClippedReLU[32]
  - largest min activation = 1 , smallest max activation = 0
  - clipped 72.254% of outputs
INFO (check_health): layer 4 - AffineTransform[32<-32]
  - avg_abs_bias        = 0.479458
  - avg_abs_bias_diff   = 7.24238e-05
  - avg_abs_weight      = 0.146289
  - avg_abs_weight_diff = 3.01774e-05
INFO (check_health): layer 5 - ClippedReLU[32]
  - largest min activation = 0.300609 , smallest max activation = 0
  - clipped 67.6739% of outputs
INFO (check_health): layer 6 - AffineTransform[1<-32]
  - avg_abs_bias        = 1.10851
  - avg_abs_bias_diff   = 0.00046963
  - avg_abs_weight      = 0.216324
  - avg_abs_weight_diff = 8.73834e-05
.INFO (save_eval): Saving current evaluation file in fresh-network/15
INFO (save_eval): Finished saving evaluation file in fresh-network/15
INFO (learning_rate):
  - loss = 0.0707723 >= best (0.0618871), rejected
  - reducing learning rate from 0.03125 to 0.015625 (2 more trials)

PROGRESS (calc_loss): Tue Mar 09 18:09:59 2021, 3200000 sfens, 86183 sfens/second, epoch 16
  - learning rate = 0.015625
  - startpos eval = 16
  - val_loss       = 0.0620044
  - train_loss       = 0.0259137
  - train_grad_norm  = 0.0884716
  - norm = 1.836e+06
  - move accuracy = 16.4%
INFO (check_health): layer 0 - HalfKP(Friend)[41024->256x2]
  - observed 38360 (out of 43979) features
  - (min, max) of pre-activations = -0.646309, 0.760772 (limit = 258.008)
  - largest min activation = 0 , smallest max activation = 0.557711
  - avg_abs_bias   = 0.429469
  - avg_abs_weight = 0.0230555
  - clipped 11.4834% of outputs
INFO (check_health): layer 2 - AffineTransform[32<-512]
  - avg_abs_bias        = 0.579492
  - avg_abs_bias_diff   = 2.58113e-05
  - avg_abs_weight      = 0.0397463
  - avg_abs_weight_diff = 5.13404e-06
INFO (check_health): layer 3 - ClippedReLU[32]
  - largest min activation = 1 , smallest max activation = 0
  - clipped 73.1835% of outputs
INFO (check_health): layer 4 - AffineTransform[32<-32]
  - avg_abs_bias        = 0.479408
  - avg_abs_bias_diff   = 2.46542e-05
  - avg_abs_weight      = 0.146274
  - avg_abs_weight_diff = 1.02848e-05
INFO (check_health): layer 5 - ClippedReLU[32]
  - largest min activation = 0.316902 , smallest max activation = 0
  - clipped 68.001% of outputs
INFO (check_health): layer 6 - AffineTransform[1<-32]
  - avg_abs_bias        = 1.11675
  - avg_abs_bias_diff   = 0.000159962
  - avg_abs_weight      = 0.218584
  - avg_abs_weight_diff = 3.00839e-05
.INFO (save_eval): Saving current evaluation file in fresh-network/16
INFO (save_eval): Finished saving evaluation file in fresh-network/16
INFO (learning_rate):
  - loss = 0.0620044 >= best (0.0618871), rejected
  - reducing learning rate from 0.015625 to 0.0078125 (1 more trials)

PROGRESS (calc_loss): Tue Mar 09 18:10:01 2021, 3400000 sfens, 87313 sfens/second, epoch 17
  - learning rate = 0.0078125
  - startpos eval = 71
  - val_loss       = 0.0619761
  - train_loss       = 0.0251817
  - train_grad_norm  = 0.0861851
  - norm = 1.85571e+06
  - move accuracy = 16.65%
INFO (check_health): layer 0 - HalfKP(Friend)[41024->256x2]
  - observed 38434 (out of 43979) features
  - (min, max) of pre-activations = -0.654235, 0.736818 (limit = 258.008)
  - largest min activation = 0 , smallest max activation = 0.555133
  - avg_abs_bias   = 0.429525
  - avg_abs_weight = 0.023046
  - clipped 11.303% of outputs
INFO (check_health): layer 2 - AffineTransform[32<-512]
  - avg_abs_bias        = 0.579507
  - avg_abs_bias_diff   = 1.18133e-05
  - avg_abs_weight      = 0.0397382
  - avg_abs_weight_diff = 2.39722e-06
INFO (check_health): layer 3 - ClippedReLU[32]
  - largest min activation = 1 , smallest max activation = 0
  - clipped 73.7253% of outputs
INFO (check_health): layer 4 - AffineTransform[32<-32]
  - avg_abs_bias        = 0.47944
  - avg_abs_bias_diff   = 1.14416e-05
  - avg_abs_weight      = 0.146266
  - avg_abs_weight_diff = 4.76686e-06
INFO (check_health): layer 5 - ClippedReLU[32]
  - largest min activation = 0.322341 , smallest max activation = 0
  - clipped 68.1017% of outputs
INFO (check_health): layer 6 - AffineTransform[1<-32]
  - avg_abs_bias        = 1.12038
  - avg_abs_bias_diff   = 7.326e-05
  - avg_abs_weight      = 0.219646
  - avg_abs_weight_diff = 1.36197e-05
.INFO (save_eval): Saving current evaluation file in fresh-network/17
INFO (save_eval): Finished saving evaluation file in fresh-network/17
INFO (learning_rate):
  - loss = 0.0619761 >= best (0.0618871), rejected
  - converged
INFO (save_eval): Saving current evaluation file in fresh-network/final
INFO (save_eval): Finished saving evaluation file in fresh-network/final

C:\Users\pecas\Ajedrez\nnue\NNUE>POPD

C:\Users\pecas\Ajedrez\nnue\NNUE>PAUSE
Presione una tecla para continuar . . .
<snip>
However, I have a problem: if you run the script again, it is possible that the network you get is one that has not learned. I may have run the same script 100 times; 3 times it learned and the rest of the time it didn't. The usual result is:
<snip>
You only want to build a fresh network once.
Use this setting for the first net you create: setoption name SkipLoadingEval value true

For all other nets, use the last net created as your base:
setoption name SkipLoadingEval value false
setoption name EvalFile value nn.bin  (the last nn.bin created by the previous training run)
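The bootstrap rule above can be sketched as a small helper that prints the right options for each kind of run (a minimal sketch; the helper name and layout are mine, only the two setoption lines come from the post):

```shell
# Print the UCI options for a training run: a fresh net is built only once,
# every later run loads the last net produced as its base.
train_options() {
  if [ "$1" = "first" ]; then
    echo "setoption name SkipLoadingEval value true"
  else
    echo "setoption name SkipLoadingEval value false"
    echo "setoption name EvalFile value $2"   # last nn.bin from the previous run
  fi
}

train_options first          # fresh network
train_options next nn.bin    # continue from the previous net
```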

I have a script that automates this whole process.
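One way such automation can guard against the "did not learn" runs described above is to check the trainer log for at least one accepted epoch (a hypothetical check; it assumes the "INFO (learning_rate)" lines in the format of the pasted log):

```shell
# A run that actually learned has at least one epoch where the loss improved,
# which the trainer reports as "... accepted" in its log.
learned() {
  grep -q "accepted" "$1"
}

# Example against a fabricated two-line log in the format shown above:
printf 'INFO (learning_rate):\n  - loss = 0.26 < best (0.37), accepted\n' > /tmp/run.log
if learned /tmp/run.log; then echo "run learned"; else echo "retrain"; fi   # prints "run learned"
```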
JohnS
Posts: 215
Joined: Sun Feb 24, 2008 2:08 am

Re: A random walk down NNUE street ….

Post by JohnS »

MikeB wrote: Mon Mar 08, 2021 7:02 am
call this by using "source whatyounameit.sh" on mingw

Code: Select all

########################################################################
#                       Reinforcement Training                         #
########################################################################
for p in `seq 1 $bigloop`;
do
########################################################################
#                                                                      #
#                    Cycle Through Pure and Combo                      #

  NT=( "pure" "combo" )                                                #
########################################################################
#                                                                      #
#                    Cycle Through Lambda 0.5 and 1.0                  #

  LA=( "0.5" "1.0"  )                                                  #
########################################################################
## rm output directories from previous run
  let ntcycle=2 ## set to zero to skip
  for k in `seq 1 $ntcycle`;
  do
    let lacycle=2 ## set to zero to skip
    for m in `seq 1 $lacycle`;
    do
      let loops=2 ## set to zero to skip
      z=0
      for i in `seq 1 $loops`;
      do
        z=$((z+1))

        echo -e "\n  Cycle:         $p\n  NNUE Value: ${NT[${k}-1]}\n  Lambda:      ${LA[${m}-1]}\n  Round:         $z\n"
        cd "/c/Users/MichaelB7/home/nnue-gui.1.5/reinforce-network/final/"
        nnbin=nn.bin
        name=nn-$(sha256sum ${nnbin} | cut -c1-12).nnue
        echo ${name}
        mv ${nnbin} ${name}
        cp ${name} ../../
        cd ../../
        sleep 3 # to pause

        threads=50
        valfile=M1_D8
        options="uci
        setoption name Use NNUE value ${NT[${k}-1]}
        setoption name Hash value 10240
        setoption name Threads value $threads
        setoption name EvalSaveDir value reinforce-network
        setoption name SkipLoadingEval value false
        setoption name EvalFile value $name
        setoption name SyzygyPath value c:/syzygy
        isready
        learn targetdir training validation_set_file_name validation/$valfile.binpack set_recommended_uci_options use_draw_in_training 1 use_draw_in_validation 1 eval_limit 32000 epochs 1000 lr 1 lambda ${LA[${m}-1]} nn_batch_size 1000 batchsize 200000 eval_save_interval 200000 loss_output_interval 200000 newbob_decay 0.5 newbob_num_trials 4
        quit"

        printf "$options" | ./stockfish
      done
    done
  done
done
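The `newbob_decay 0.5 newbob_num_trials 4` options in the learn command correspond to the accept/reject schedule visible in the training log earlier in the thread: keep a net whose validation loss improves, otherwise halve the learning rate, and report convergence once a limited budget of consecutive rejections is used up. A minimal standalone sketch of that logic (function and variable names are mine; the real trainer's bookkeeping is more involved):

```shell
# newbob-style schedule: accept when the loss improves (and reset the trial
# budget), otherwise multiply the learning rate by the decay and spend one
# trial; report convergence when the budget runs out.
newbob() {
  losses=$1; decay=$2; budget=$3
  lr=0.25; best=""; left=$budget
  for loss in $losses; do
    if [ -z "$best" ] || awk -v l="$loss" -v b="$best" 'BEGIN{exit !(l<b)}'; then
      best=$loss; left=$budget
      echo "loss $loss accepted"
    else
      lr=$(awk -v r="$lr" -v d="$decay" 'BEGIN{printf "%g", r*d}')
      left=$((left-1))
      echo "loss $loss rejected, lr now $lr ($left more trials)"
      [ "$left" -le 0 ] && { echo "converged"; return; }
    fi
  done
}

newbob "0.417 0.268 0.096 0.064 0.071 0.070" 0.5 2
```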
This looks great, Mike. What do you use to generate the first net before calling the script? And can you just keep running the script over and over again to get a hopefully better net? Interested to try this just for fun, but I don't expect to push Stockfish off the number 1 spot :D :D I'll be happy if it plays a decent game.
jp
Posts: 1470
Joined: Mon Apr 23, 2018 7:54 am

Re: A random walk down NNUE street ….

Post by jp »

smatovic wrote: Sun Mar 07, 2021 10:32 am
jp wrote: Sun Mar 07, 2021 4:30 am
MikeB wrote: Sat Mar 06, 2021 5:28 pm Quantum computing for chess seems to be a long way off, if ever. I'm not expecting to see anything with my remaining days on earth, but who knows, maybe I will.
If you look at that blog post, you'll see that it has nothing to do with chess. The author is just using a bizarre analogy.

As I've said before, there is no [known] quantum algorithm for chess, and there is no reason to believe that one would exist.
Give me a billion qubits and I will do it ;)
No, you won't. :!: 8-)

We're not talking about implementing the program on hardware. We're just talking about writing correct code that would do the job given the hardware. There is no known algorithm.
smatovic
Posts: 2645
Joined: Wed Mar 10, 2010 10:18 pm
Location: Hamburg, Germany
Full name: Srdja Matovic

Re: A random walk down NNUE street ….

Post by smatovic »

jp wrote: Wed Mar 10, 2021 4:10 am
smatovic wrote: Sun Mar 07, 2021 10:32 am
jp wrote: Sun Mar 07, 2021 4:30 am
MikeB wrote: Sat Mar 06, 2021 5:28 pm Quantum computing for chess seems to be a long way off, if ever. I'm not expecting to see anything with my remaining days on earth, but who knows, maybe I will.
If you look at that blog post, you'll see that it has nothing to do with chess. The author is just using a bizarre analogy.

As I've said before, there is no [known] quantum algorithm for chess, and there is no reason to believe that one would exist.
Give me a billion qubits and I will do it ;)
No, you won't. :!: 8-)

We're not talking about implementing the program on hardware. We're just talking about writing correct code that would do the job given the hardware. There is no known algorithm.
We still have no von Neumann architecture for quantum computers, so your
analogy between code/program and hardware falls short. Quantum algorithms are
implemented as quantum circuits; there is currently no quantum code apart from
languages that describe these circuits. And yes - no, I will not do a paper
machine with billions of qubits ;)

--
Srdja
smatovic
Posts: 2645
Joined: Wed Mar 10, 2010 10:18 pm
Location: Hamburg, Germany
Full name: Srdja Matovic

Re: A random walk down NNUE street ….

Post by smatovic »

smatovic wrote: Wed Mar 10, 2021 8:22 am
jp wrote: Wed Mar 10, 2021 4:10 am
smatovic wrote: Sun Mar 07, 2021 10:32 am
jp wrote: Sun Mar 07, 2021 4:30 am
MikeB wrote: Sat Mar 06, 2021 5:28 pm Quantum computing for chess seems to be a long way off, if ever. I'm not expecting to see anything with my remaining days on earth, but who knows, maybe I will.
If you look at that blog post, you'll see that it has nothing to do with chess. The author is just using a bizarre analogy.

As I've said before, there is no [known] quantum algorithm for chess, and there is no reason to believe that one would exist.
Give me a billion qubits and I will do it ;)
No, you won't. :!: 8-)

We're not talking about implementing the program on hardware. We're just talking about writing correct code that would do the job given the hardware. There is no known algorithm.
We still have no von Neumann architecture for quantum computers, so your
analogy with code/program and hardware lacks, quantum-algorithms are
implemented as quantum-circuits, there is currently no quantum-code beside
languages which describe these circuits, and yes - no, I will not do a paper
machine with billions of qubits ;)

--
Srdja
Alright, seems I have to dig deeper....

"Implementing the Quantum von Neumann Architecture with Superconducting Circuits"
...
The ability to store entanglement in the memories, which are characterized by much longer coherence times
than the qubits, is key to the quantum von Neumann architecture.
...
https://arxiv.org/pdf/1109.3743.pdf

maybe I will give it a try for TicTacToe some day.

--
Srdja
User avatar
towforce
Posts: 11572
Joined: Thu Mar 09, 2006 12:57 am
Location: Birmingham UK

Re: A random walk down NNUE street ….

Post by towforce »

smatovic wrote: Wed Mar 10, 2021 11:04 am
smatovic wrote: Wed Mar 10, 2021 8:22 am
jp wrote: Wed Mar 10, 2021 4:10 am
smatovic wrote: Sun Mar 07, 2021 10:32 am
jp wrote: Sun Mar 07, 2021 4:30 am
MikeB wrote: Sat Mar 06, 2021 5:28 pm Quantum computing for chess seems to be a long way off, if ever. I'm not expecting to see anything with my remaining days on earth, but who knows, maybe I will.
If you look at that blog post, you'll see that it has nothing to do with chess. The author is just using a bizarre analogy.

As I've said before, there is no [known] quantum algorithm for chess, and there is no reason to believe that one would exist.
Give me a billion qubits and I will do it ;)
No, you won't. :!: 8-)

We're not talking about implementing the program on hardware. We're just talking about writing correct code that would do the job given the hardware. There is no known algorithm.
We still have no von Neumann architecture for quantum computers, so your
analogy with code/program and hardware lacks, quantum-algorithms are
implemented as quantum-circuits, there is currently no quantum-code beside
languages which describe these circuits, and yes - no, I will not do a paper
machine with billions of qubits ;)

--
Srdja
Alright, seems I have to dig deeper....

"Implementing the Quantum von Neumann Architecture with Superconducting Circuits"
...
The ability to store entanglement in the memories, which are characterized by much longer coherence times
than the qubits, is key to the quantum von Neumann architecture.
...
https://arxiv.org/pdf/1109.3743.pdf

maybe I will give it a try for TicTacToe some day.

--
Srdja

I don't understand why you'd want a von Neumann architecture: you already have one. You're using it right now!

My understanding is that the strength of a quantum computer is its ability to simultaneously set a large number of variables to values which collectively meet a condition. If you can come up with an algorithm that applies this to chess, then there are algorithms that will enable you to do that better on a current computer. If quantum computers grow in power in the same way current computers have, then one day they will be better at solving this kind of problem than current computers are.
Writing is the antidote to confusion.
It's not "how smart you are", it's "how are you smart".
Your brain doesn't work the way you want, so train it!
smatovic
Posts: 2645
Joined: Wed Mar 10, 2010 10:18 pm
Location: Hamburg, Germany
Full name: Srdja Matovic

Re: A random walk down NNUE street ….

Post by smatovic »

towforce wrote: Wed Mar 10, 2021 11:31 am
smatovic wrote: Wed Mar 10, 2021 11:04 am
smatovic wrote: Wed Mar 10, 2021 8:22 am
jp wrote: Wed Mar 10, 2021 4:10 am
smatovic wrote: Sun Mar 07, 2021 10:32 am
jp wrote: Sun Mar 07, 2021 4:30 am
MikeB wrote: Sat Mar 06, 2021 5:28 pm Quantum computing for chess seems to be a long way off, if ever. I'm not expecting to see anything with my remaining days on earth, but who knows, maybe I will.
If you look at that blog post, you'll see that it has nothing to do with chess. The author is just using a bizarre analogy.

As I've said before, there is no [known] quantum algorithm for chess, and there is no reason to believe that one would exist.
Give me a billion qubits and I will do it ;)
No, you won't. :!: 8-)

We're not talking about implementing the program on hardware. We're just talking about writing correct code that would do the job given the hardware. There is no known algorithm.
We still have no von Neumann architecture for quantum computers, so your
analogy between code/program and hardware falls short. Quantum algorithms are
implemented as quantum circuits; there is currently no quantum code apart from
languages that describe these circuits. And yes - no, I will not write a paper
machine with billions of qubits ;)

--
Srdja
Alright, seems I have to dig deeper....

"Implementing the Quantum von Neumann Architecture with Superconducting Circuits"
...
The ability to store entanglement in the memories, which are characterized by much longer coherence times
than the qubits, is key to the quantum von Neumann architecture.
...
https://arxiv.org/pdf/1109.3743.pdf

maybe I will give it a try for TicTacToe some day.

--
Srdja

I don't understand why you'd want a von Neumann architecture? You already have that. You're using it right now!

My understanding is that the strength of a quantum computer is its ability to simultaneously set a large number of variables to values which collectively meet a condition. If you can come up with an algorithm that applies this to chess, then there are algorithms that will enable you to do that better on a current computer. If quantum computers grow in power in the same way current computers have, then one day they will be better at solving this kind of problem than current computers are.
Quantum computers can run classic (von Neumann) algorithms via Toffoli gates in a non-quantum-supremacy manner; it is well known that they can emulate classic computing, so to speak. Up to now my understanding was that you need to build parallel quantum circuits from qubits to implement a quantum algorithm, an ASIC with qubits so to speak. With a von Neumann architecture for (not in) quantum computers I have to rethink my estimate of billions of qubits for chess, and a perfect-play TicTacToe engine might be in reach during my lifetime, maybe....there are other quantum developments going on, like qudits, NNs on quantum computers, and hyper-Turing machines, maybe also of interest for game-tree search.
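The classical-emulation point above rests on the Toffoli gate being universal for classical logic: with its target wire set to 1, it computes NAND of the two control wires, and NAND alone suffices to build any classical circuit. A minimal classical simulation sketch in Python (the gate's truth table is standard; the function names here are just illustrative):

```python
def toffoli(a, b, c):
    """Toffoli (CCNOT) gate: flips the target bit c iff both controls a, b are 1."""
    return a, b, c ^ (a & b)

def nand(a, b):
    """NAND built from a single Toffoli with the target initialized to 1."""
    return toffoli(a, b, 1)[2]

# NAND truth table: only (1, 1) yields 0 - and NAND is universal,
# so any classical circuit can be composed from Toffoli gates alone.
table = [nand(a, b) for a, b in [(0, 0), (0, 1), (1, 0), (1, 1)]]
print(table)  # [1, 1, 1, 0]
```

This only shows the classical logic side, of course; reversibility (the gate returns all three wires) is what lets the same construction run on quantum hardware.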

--
Srdja
User avatar
MikeB
Posts: 4889
Joined: Thu Mar 09, 2006 6:34 am
Location: Pen Argyl, Pennsylvania

Re: A random walk down NNUE street ….

Post by MikeB »

In any event, after completing training to depth 11, I was still falling short of Zappa Mexico II by about 40 Elo. That's at a 1 min + 1 sec TC. At short time controls, 20 sec + 0.333 inc, Zappa still rules all over my net, scoring over 80%; but at 1 min + 1.0 sec inc it's almost dead even, with me about 40 Elo short. So I am going back and redoing depths 4, 6, 8 and 10 for a little extra reinforcement training; it looks like that might pick up an additional 20 Elo or so - not much, probably not worth the time.

Code: Select all

PGN File: c:/cluster.mfb/pgn/03100655.pgn
Time Control: Time Control-> base+inc: 20+0.333
Games: 2000
Threads: 1
Hash: 128

Current date : time (EDST)
Date: 03/10/21 : 07:52:35

Projected-> Time: 1h:2m:11s
     Run -> Time: 0h:57m:33s

2000 game(s) loaded
Rank Name                       Rating   Δ     +    -     #     Σ    Σ%     W    L    D   W%    =%   OppR
---------------------------------------------------------------------------------------------------------

   1 Stockfish-13-13c00d15d4ad   3509   0.0   13   13  2000 1053.0  52.6  788  682  530  39.4  26.5  3491
   2 Stockfish-13-6228043f44f7   3491  18.4   13   13  2000  947.0  47.3  682  788  530  34.1  26.5  3509
---------------------------------------------------------------------------------------------------------

  Δ = delta from the next higher rated opponent
  # = number of games played
  Σ = total score, 1 point for win, 1/2 point for draw

LOS:
                           St St
Stockfish-13-13c00d15d4ad     99
Stockfish-13-6228043f44f7   0

#########################################################################################################
###                                                End                                                ###
#########################################################################################################


2000 game(s) loaded
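The Δ and LOS columns in the output above follow directly from the raw score. A quick sketch using the standard logistic Elo formula and a normal approximation for likelihood of superiority, with the numbers taken from the table (the testing tool may compute LOS slightly differently, so treat this as a cross-check, not its exact method):

```python
import math

def elo_diff(score):
    """Elo difference implied by an expected score under the logistic model."""
    return -400 * math.log10(1 / score - 1)

# From the table: 1053.0 points out of 2000 games (52.6%).
score = 1053.0 / 2000
print(round(elo_diff(score), 1))  # 18.4 - matches the Δ column

# Likelihood of superiority via a normal approximation on the
# win/loss difference (draws cancel out): W = 788, L = 682.
W, L = 788, 682
z = (W - L) / math.sqrt(W + L)
los = 0.5 * (1 + math.erf(z / math.sqrt(2)))
print(round(los * 100, 1))  # about 99.7 - consistent with the 99 in the LOS table
```

Running the same formula backwards, a 40 Elo deficit corresponds to an expected score of about 44%, and the 80% scored at short TC corresponds to roughly a 240 Elo gap.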
Once that completes I will start the longer journey of training at the higher depths of 12, 14 and 16. Depth 12 will take at least one full day on a Threadripper 3970X, depth 14 at least 48 hours, and depth 16 at least 96 hours; one can see how the math is going. One reason it's taking longer, besides the obvious increase for the greater depth, is that I cannot tie up the computer for just one task. Whereas before I could utilize the PC 100% for the shorter depths, now I'm allocating half of the cores to net generation and the rest to business as usual.

After depth 16 I will keep it going, depth 18 etc., but before long we will not be talking weeks but months, and eventually several months, per depth. The completed net at depth 12 will be shared exclusively through the Honey engines and also be available as a separate download. The higher-depth nets will be released, but more slowly; as an example, after I complete D18 I will release D14 publicly, etc. I have a script that does all the work from beginning to end, but it's a bit like Bitcoin: each depth gets harder and harder. I am looking to upgrade my hardware this summer to help keep it going. The nets may never get as strong as other nets, but that is not the point; the net is being developed totally independently of any other net and hopefully has its own unique style for human play. I will not be using the millions of games or FENs available for training; I will be keeping a pure self-taught style of play.

Not that anybody has rated Honey in the past - but I will request that any rating by CCRL be kept separate from Stockfish as long as they are keeping FF2 separate, since under their rules I would qualify for a separate rating.
(Totally tongue-in-cheek - they will never test my engine with a hand-built net. There is nothing in it for them, plus they do not like me, which I am OK with.)