
Re: Scorpio 2.8.7 MCTS+NN windows version

Posted: Sat Sep 29, 2018 7:48 pm
by Daniel Shawul
Joerg Oster wrote: Sat Sep 29, 2018 7:33 pm Arghh, stupid me! :shock:
The VC Redistributable package was missing.

Now it works, finally!

Code:

C:\Users\XXXXX\Downloads\scorpio288-mcts-nn\bin\Windows>scorpio go quit
feature done=0
ht 33554432 X 16 = 512.0 MB
eht 524288 X 8 = 8.0 MB
pht 32768 X 24 = 0.8 MB
treeht 26843136 X 40 = 1024.0 MB
processors [3]
EgbbProbe 4.2 by Daniel Shawul
70 egbbs loaded !
Loading neural network ...
Loading graph on /cpu:0
Neural network loaded !
loading_time = 0s
[st = 11114ms, mt = 29250ms , hply = 0 , moves_left 10]
63 25 112 4389  d2-d4 d7-d5 Nb1-c3 Ng8-f6 Ng1-f3
64 18 225 9502  d2-d4 d7-d5 Ng1-f3 Ng8-f6 Nb1-c3
65 17 337 14453  d2-d4 d7-d5 Ng1-f3 Ng8-f6 Nb1-c3
66 18 450 19424  d2-d4 d7-d5 Ng1-f3 Ng8-f6 Nb1-c3 Nb8-c6
67 18 562 24067  d2-d4 d7-d5 Ng1-f3 Ng8-f6 g2-g3 Nb8-c6
68 18 675 28520  d2-d4 d7-d5 Nb1-c3 Nb8-c6 g2-g3 Ng8-f6 Bf1-g2 e7-e6
69 18 787 32639  d2-d4 d7-d5 Nb1-c3 Ng8-f6 Ng1-f3 Nb8-c6 Bc1-e3 h7-h6 h2-h3
70 18 900 36410  d2-d4 d7-d5 Nb1-c3 Ng8-f6 Bc1-f4 Bc8-d7 f2-f3 e7-e5 d4xe5 Nf6-h5 e2-e3 Qd8-h4 g2-g3 Qh4-d8 Qd1xd5 Nh5xf4 e3xf4
71 18 1012 40446  d2-d4 d7-d5 Nb1-c3 Ng8-f6 Bc1-f4 Bc8-f5 a2-a3 Nb8-c6 e2-e3 e7-e6 Ng1-f3 Bf8-d6 Bf1-e2 a7-a6 Ke1-g1 Ke8-g8 Bf4xd6 c7xd6 Be2-d3 Bf5-e4 h2-h3 Be4xd3 Qd1xd3

#  1      9   10311 e2-e4 d7-d5 d2-d4 d5xe4 Bc1-e3 Ng8-f6 Nb1-c3 Bc8-e6 a2-a3 Nb8-c6 Ng1-e2 Qd8-d7 f2-f3 Nf6-d5 Be3-f2 e4xf3 g2xf3 Ke8-c8 Bf1-g2 Be6-f5 Ke1-g1 e7-e6 Rf1-e1 Bf8-e7 Ne2-g3 Be7-h4 Qd1-d2 g7-g6 f3-f4 Rh8-g8 Ra1-d1 Bf5-g4 Rd1-c1 Kc8-b8 h2-h3 Bg4-f5 Nc3xd5 e6xd5 Ng3xf5 Bh4xf2 Kg1xf2 g6xf5 c2-c4 d5xc4 d4-d5 Rd8-e8 Bg2-f3 Re8-e6 Rc1xc4 Qd7-d6 Re1-g1 Re6-g6 Rg1-g5 a7-a6 Rc4-c1 f7-f6 Rg5xf5 Rg8-e8 Kf2-f1 Rg6-g3
#  2     19   24686 d2-d4 d7-d5 Nb1-c3 Ng8-f6 Bc1-f4 Bc8-f5 Nc3-b5 Nb8-a6 c2-c3 e7-e6 e2-e3 Bf8-e7 Ng1-e2 h7-h6 f2-f3 Ke8-g8 Bf4-e5 c7-c6 g2-g4 Bf5-g6 g4-g5 h6xg5 Nb5-a3 Be7-d6 Bf1-g2 Qd8-d7 Na3-c2 Na6-c7 e3-e4 Ra8-d8 Ke1-g1 Qd7-e7 Ne2-g3 d5xe4 Be5xd6 Rd8xd6 f3xe4 Rd6-d8
#  3      4     666 Nb1-c3 e7-e5 d2-d4 d7-d6 e2-e4 e5xd4
#  4      8    2599 Ng1-f3 d7-d5 d2-d4 Ng8-f6 g2-g3 c7-c5 e2-e3 c5xd4 e3xd4 Nb8-c6 Nb1-c3 a7-a6 Bf1-g2 e7-e6 Ke1-g1 h7-h6 Rf1-e1 Bf8-d6 Bc1-d2 Ke8-g8 a2-a3 Bc8-d7 h2-h3 Qd8-c7 Ra1-c1 Rf8-e8 Rc1-a1 Re8-f8
#  5      0     447 e2-e3 d7-d5 c2-c4 Ng8-f6 c4xd5
#  6      3     590 d2-d3 e7-e5 e2-e4 Nb8-c6 Ng1-f3
#  7      0     413 g2-g3 e7-e5 Bf1-g2 d7-d5 d2-d3
#  8     -4     303 b2-b3 e7-e5 e2-e4 Ng8-f6 Nb1-c3
#  9     -7     235 f2-f3 e7-e5 e2-e4 Bf8-c5 Ng1-h3
# 10     -9     215 h2-h3 d7-d5 d2-d4 Ng8-f6 Ng1-f3
# 11     -9     215 c2-c3 d7-d5 d2-d4 Ng8-f6 Ng1-f3
# 12     -8     218 a2-a3 d7-d5 d2-d4 Ng8-f6
# 13     -7     233 Nb1-a3 d7-d5 d2-d4 Ng8-f6
# 14     -9     207 Ng1-h3 d7-d5 d2-d4 e7-e6
# 15     -4     292 f2-f4 d7-d5 Ng1-f3 Ng8-f6
# 16     -1     386 c2-c4 e7-e5 d2-d3 d7-d6 e2-e4
# 17    -10     201 h2-h4 e7-e5 e2-e4 Ng8-f6 Nb1-c3
# 18     -8     229 a2-a4 e7-e5 e2-e4 d7-d6 Nb1-c3 Ng8-f6
# 19    -14     157 g2-g4 d7-d5 d2-d4 Bc8xg4
# 20    -10     194 b2-b4 e7-e5 Bc1-b2 Bf8xb4 Bb2xe5

nodes = 7160847 <90% qnodes> time = 11187ms nps = 640104 eps = 491450 nneps = 3185
Tree: nodes = 1312836 depth = 62 pps = 3823 visits = 42778
      qsearch_calls = 426627 search_calls = 0
move d2d4
Bye Bye
I am glad it works now!

Btw, you need to launch multiple threads (at least mt=16) so that it can do batching. I am using a multi-threaded
batching approach that works for any kind of search algorithm, including alpha-beta.
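The idea behind multi-threaded batching, roughly: each search thread submits its position for neural-network evaluation and blocks; once enough requests have accumulated, the whole batch is pushed through the network in one call and all waiting threads are released. Here is a minimal Python sketch of that scheme (all names hypothetical; Scorpio itself is C++ and its actual implementation differs):

```python
import threading

class BatchedEvaluator:
    """Collect NN evaluation requests from many search threads and run
    them through the network in one batch. Hypothetical sketch, not
    Scorpio's actual code."""

    def __init__(self, net_fn, batch_size):
        self.net_fn = net_fn          # evaluates a whole list of positions at once
        self.batch_size = batch_size
        self.cond = threading.Condition()
        self.pending = []             # (request id, position) awaiting evaluation
        self.results = {}

    def evaluate(self, req_id, position):
        with self.cond:
            self.pending.append((req_id, position))
            if len(self.pending) >= self.batch_size:
                # This thread completes the batch: evaluate and wake everyone.
                batch, self.pending = self.pending, []
                outputs = self.net_fn([pos for _, pos in batch])
                for (rid, _), out in zip(batch, outputs):
                    self.results[rid] = out
                self.cond.notify_all()
            else:
                # Batch not full yet: block until someone evaluates it.
                while req_id not in self.results:
                    self.cond.wait()
            return self.results.pop(req_id)
```

This also shows why too few threads hurts: with fewer concurrent search threads than the batch size, requests sit waiting and the network is never fed a full batch, which is presumably why mt=16 or more is recommended here.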

Do you still need a Linux binary? I can provide one, at least for the CPU.

Daniel

Re: Scorpio 2.8.7 MCTS+NN windows version

Posted: Sat Sep 29, 2018 8:28 pm
by Joerg Oster
Daniel Shawul wrote: Sat Sep 29, 2018 7:48 pm
I am glad it works now!

Btw, you need to launch multiple threads (at least mt=16) so that it can do batching. I am using a multi-threaded
batching approach that works for any kind of search algorithm, including alpha-beta.

Do you still need a Linux binary? I can provide one, at least for the CPU.

Daniel
Thanks!
A Linux binary would still be very welcome for my other box.

Raising mt certainly helps.
The same small net as before:

Code:

feature done=0
ht 33554432 X 16 = 512.0 MB
eht 524288 X 8 = 8.0 MB
pht 32768 X 24 = 0.8 MB
treeht 26843136 X 40 = 1024.0 MB
processors [16]
EgbbProbe 4.2 by Daniel Shawul
70 egbbs loaded !
Loading neural network ...
Loading graph on /cpu:0
Neural network loaded !
loading_time = 0s
[st = 11114ms, mt = 29250ms , hply = 0 , moves_left 10]
63 19 112 6500  d2-d4 d7-d5 Nb1-c3 Ng8-f6 Ng1-f3
64 16 224 15731  d2-d4 d7-d5 e2-e4 d5xe4 Nb1-c3 Ng8-f6
65 17 337 25420  d2-d4 d7-d5 Ng1-f3 Ng8-f6 g2-g3 Nb8-c6
66 17 449 34767  d2-d4 d7-d5 Ng1-f3 Ng8-f6 g2-g3 Nb8-c6
67 18 562 43662  d2-d4 d7-d5 Ng1-f3 Ng8-f6 a2-a3 Nb8-c6 Nb1-c3
68 18 674 52518  d2-d4 d7-d5 Nb1-c3 Ng8-f6 Bc1-f4 Bc8-g4 f2-f3 Bg4-d7 e2-e4
69 18 787 60910  d2-d4 d7-d5 Nb1-c3 Ng8-f6 Bc1-f4 c7-c5 e2-e3 c5xd4 Bf1-b5 Nb8-c6 e3xd4
70 18 899 68397  d2-d4 d7-d5 Nb1-c3 Ng8-f6 Bc1-f4 c7-c5 e2-e3 c5-c4 Bf4-e5 e7-e6 Ng1-f3 Nb8-c6
71 18 1012 75071  d2-d4 d7-d5 Nb1-c3 Ng8-f6 Bc1-f4 e7-e6 e2-e3 Bf8-d6 Bf1-d3 Nb8-c6 Ng1-f3 a7-a6 Ke1-g1 Ke8-g8 a2-a3 Bc8-d7 h2-h3 h7-h6 Nf3-e5 Nc6xe5 d4xe5

#  1     10    6765 e2-e4 d7-d5 d2-d4 d5xe4 Nb1-c3 Ng8-f6 a2-a3 Bc8-e6 Bc1-e3 Nb8-d7 Ng1-e2 Ra8-c8 h2-h3 h7-h6 Ne2-g3 c7-c5 Ng3xe4 Nf6xe4 Nc3xe4 Qd8-a5 c2-c3 c5xd4 Qd1xd4 Be6-d5 Ra1-d1 e7-e6 Be3-f4 Qa5-d8 Bf1-b5
#  2     22   52981 d2-d4 d7-d5 Nb1-c3 Ng8-f6 Bc1-f4 e7-e6 e2-e3 Bf8-d6 Bf1-d3 Bc8-d7 Ng1-f3 Nb8-c6 Bf4xd6 c7xd6 Ke1-g1 Ke8-g8 Nf3-g5 h7-h6 Ng5-f3 e6-e5 Qd1-e2 e5-e4 Nc3xd5
#  3      3    4437 Nb1-c3 g7-g6 d2-d4 d7-d5 Bc1-f4 Bf8-g7 e2-e3 Ng8-f6 Bf1-b5 Nb8-c6 Ng1-f3 Ke8-g8 Ke1-g1 e7-e6 Nf3-e5 Bc8-d7 Bb5-d3 Qd8-e7 a2-a3 a7-a6 h2-h3 h7-h6 Rf1-e1 g6-g5 Ne5xc6 Bd7xc6 Bf4-e5 Nf6-e4 Nc3xe4 d5xe4 Bd3-e2 Rf8-d8 Be5xg7 Kg8xg7 c2-c3 e6-e5 Qd1-b3 e5xd4 e3xd4 e4-e3 Be2-d3 e3xf2 Kg1xf2 Qe7-f6 Kf2-g1 Qf6-f4 Re1-e7 Bc6-d5 Qb3-d1 Kg7-g8 Qd1-h5 Kg8-f8 Ra1-e1 Bd5-e6 Re1xe6
#  4     11    6291 Ng1-f3 d7-d5 d2-d4 Ng8-f6 Bc1-f4 Bc8-d7 Nb1-c3 Nb8-c6 h2-h3 e7-e6 e2-e3 a7-a6 a2-a3 h7-h6 Nf3-e5 Bf8-d6 Bf1-e2 Nc6-e7 Bf4-g3 Ke8-g8 Ke1-g1 Ne7-f5 Qd1-d2 Bd7-b5 Be2xb5 a6xb5 Nc3xb5 Nf6-e4 Qd2-d3 Ne4xg3 f2xg3 Nf5xg3 Rf1-f4 Ng3-e4 Ra1-f1 Bd6xe5 d4xe5 c7-c6 Nb5-d6 Ne4xd6 e5xd6 Qd8xd6 Qd3-d4
#  5    -11     770 e2-e3 d7-d5 f2-f4 Ng8-f6 Ng1-f3 Nb8-c6 Nb1-c3 e7-e6 d2-d4 h7-h6 a2-a3 a7-a6
#  6     -3    2915 d2-d3 d7-d5 Ng1-f3 Ng8-f6 c2-c4 d5xc4 d3xc4 Nb8-c6 Nb1-c3 e7-e6 e2-e4 h7-h6 h2-h3 a7-a6 a2-a3 Bf8-c5 Bc1-e3 Bc5-b6 Qd1xd8 Ke8xd8
#  7     -6    1479 g2-g3 e7-e5 c2-c4 d7-d5 c4xd5 Qd8xd5 Ng1-f3 Ng8-f6 Nb1-c3 Qd5-c5 Bf1-g2 Nb8-c6 Ke1-g1 Bc8-e6 d2-d4 e5xd4 Nc3-a4 Qc5-b4 b2-b3 Ke8-c8 Bc1-b2 Bf8-e7 Ra1-c1 Kc8-b8 Nf3-g5 Be6-d5 Ng5-f3 h7-h6 h2-h3 a7-a6
#  8    -14     363 b2-b3 e7-e5 e2-e4 Ng8-f6 Ng1-f3 Nb8-c6
#  9    -14     324 f2-f3 e7-e5 e2-e4 Bf8-c5 Ng1-h3 d7-d5
# 10    -16     279 h2-h3 e7-e5 e2-e4 Ng8-f6 d2-d3
# 11    -15     322 c2-c3 d7-d5 d2-d4 Ng8-f6 Ng1-f3 Nb8-c6
# 12    -15     309 a2-a3 d7-d5 d2-d4 Ng8-f6 Ng1-f3
# 13    -14     331 Nb1-a3 d7-d5 d2-d4 Ng8-f6
# 14    -17     265 Ng1-h3 d7-d5 d2-d4 Ng8-f6 Nh3-g5
# 15    -12     482 f2-f4 d7-d5 Ng1-f3 Ng8-f6 e2-e3 Nb8-c6 Nb1-c3
# 16     -9     959 c2-c4 d7-d5 c4xd5 e7-e6 Qd1-a4 Bc8-d7 Qa4-b3 e6xd5 Qb3xd5 Nb8-c6 d2-d4 Ng8-f6 Qd5-c4 Bf8-d6 Ng1-f3 Bd7-e6 Qc4-c2 Ke8-g8
# 17    -16     275 h2-h4 e7-e5 e2-e4 Ng8-f6 Nb1-c3
# 18    -15     302 a2-a4 e7-e5 e2-e4 Nb8-c6
# 19    -20     200 g2-g4 d7-d5 d2-d4 Bc8xg4 Ng1-f3
# 20    -17     253 b2-b4 e7-e5 Bc1-b2 Bf8xb4 Bb2xe5

nodes = 12026511 <91% qnodes> time = 11202ms nps = 1073603 eps = 843073 nneps = 6806
Tree: nodes = 2452332 depth = 54 pps = 7166 visits = 80283
      qsearch_calls = 150239 search_calls = 0
move d2d4
Bye Bye
More than double the nneps, and almost double the visits.
I'll have to figure out optimal settings.
However, the larger nets are really slow, even slower than I expected.
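The speedup can be read straight off the stats lines of the two runs above (mt=3 vs. mt=16); a quick sanity check of the "more than double / almost double" claim:

```python
# Stats copied from the two logs above (3-thread run vs. 16-thread run).
run_mt3 = {"nneps": 3185, "visits": 42778, "nps": 640104}
run_mt16 = {"nneps": 6806, "visits": 80283, "nps": 1073603}

for key in run_mt3:
    ratio = run_mt16[key] / run_mt3[key]
    print(f"{key}: x{ratio:.2f}")
# nneps is a bit over 2.1x, visits close to 1.9x
```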

The next step will be figuring out how to create and train a net.
I'm almost sure more questions will pop up.