Any news of a Komodo update in sight?

Discussion of anything and everything relating to chess playing software and machines.

Moderators: hgm, Dann Corbit, Harvey Williamson

Forum rules
This textbox is used to restore diagrams posted with the [d] tag before the upgrade.
jpqy
Posts: 532
Joined: Thu Apr 24, 2008 7:31 am
Location: Belgium

Re: Any news of a Komodo update in sight?

Post by jpqy » Tue Dec 12, 2017 7:05 pm

I asked Ipman to run this position on his system!
Well, what do you think of this one ;)
In zero seconds!!

rnb2r2/p3bpkp/1ppq3N/6p1/Q7/2P3P1/P4PBP/R1B2RK1 w - - 0 1

Analysis by Komodo 1964.00 64-bit bmi2 8c:

19.Te1 Pa6 20.Dc2 Dxh6 21.Txe7 Df6 22.La3 Td8 23.Tae1 Lf5 24.Dc1 Pc5 25.Lxc5 bxc5
+- (2.77 --) Diepte: 13 00:00:00 776KN
19.Te1 Pa6 20.h4
+- (2.81 ++) Diepte: 13 00:00:00 960KN
19.Te1 Pa6 20.h4
+- (2.93 ++) Diepte: 13 00:00:00 968KN
19.Te1 Pa6 20.h4
+- (3.09 ++) Diepte: 13 00:00:00 971KN
19.Te1 Pa6 20.h4
+- (3.31 ++) Diepte: 13 00:00:00 983KN
19.Te1 Pa6 20.h4
+- (3.16 --) Diepte: 13 00:00:00 991KN
19.Te1 Pa6 20.h4
+- (3.38 ++) Diepte: 13 00:00:00 996KN
19.Te1 Pa6 20.h4
+- (3.96 ++) Diepte: 13 00:00:00 1055KN
19.Te1 Pa6 20.h4 Pc5 21.Dc2 Pd3 22.Lxg5 Lxg5 23.hxg5 Le6 24.Te3 Tfd8 25.Td1 Pb4 26.Txd6 Pxc2 27.Texe6
+- (3.78) Diepte: 13 00:00:00 1063KN
19.Te1 Pa6 20.h4 Pc5 21.Dc2 Pd3 22.Lxg5 Lxg5 23.hxg5 Le6 24.Te3 Tfd8 25.Td1 Pb4 26.Txd6 Pxc2 27.Texe6
+- (3.72 --) Diepte: 14 00:00:00 1096KN
19.Te1 Pa6 20.h4 Pc5 21.Dc2 Pd3 22.Lxg5 Lxg5 23.hxg5 Le6 24.Te3 Tfd8 25.Td1 Pb4 26.Txd6 Pxc2 27.Texe6
+- (3.63 --) Diepte: 14 00:00:00 1123KN
19.Te1 Pa6 20.h4 Pc5 21.Dc2 Pd3 22.Lxg5 Lxg5 23.hxg5 Le6 24.Te3 Tfd8 25.Td1 Pb4 26.Txd6 Pxc2 27.Texe6
+- (3.51 --) Diepte: 14 00:00:00 1127KN
19.Te1 Pa6 20.h4 Pc5 21.Dc2 Pd3 22.Lxg5 Lxg5 23.hxg5 Le6 24.Te3 Tfd8 25.Td1 Pb4 26.Txd6 Pxc2 27.Texe6
+- (3.35 --) Diepte: 14 00:00:00 1133KN
19.Te1 Pa6 20.h4 Pc5 21.Dc2 Pd3 22.Lxg5 Lxg5 23.hxg5 Le6 24.Te3 Tfd8 25.Td1 Pb4 26.Txd6 Pxc2 27.Texe6
+- (3.13 --) Diepte: 14 00:00:00 1149KN
19.Te1 Pa6 20.h4 Pc5 21.Dc2 Pd3 22.Lxg5 Lxg5 23.hxg5 Le6 24.Te3 Tfd8 25.Td1 Pb4 26.Txd6 Pxc2 27.Texe6
+- (2.83 --) Diepte: 14 00:00:00 1189KN
19.Te1 Pa6 20.h4 Pc5 21.Dc2 Pd3 22.Lxg5 Lxg5 23.hxg5 Le6 24.Te3 Tfd8 25.Td1 Pb4 26.Txd6 Pxc2 27.Texe6
+- (2.41 --) Diepte: 14 00:00:00 1232KN
19.Te1 Pa6 20.h4 Pc5 21.Dc2 Pd3 22.Lxg5 Lxg5 23.hxg5 Le6 24.Te3 Tfd8 25.Td1 Pb4 26.Txd6 Pxc2 27.Texe6
+- (2.70 ++) Diepte: 14 00:00:00 1525KN
19.Te1 Pa6 20.h4 Pc5 21.Dc2 Dxh6 22.Txe7 Dg6 23.Dd2 Kg8 24.Dxg5 Dxg5 25.Lxg5 Le6 26.Td1 Tfd8 27.Td4 Tdc8
+- (2.64) Diepte: 14 00:00:00 1687KN
19.Te1 Td8 20.h4
+- (2.72 ++) Diepte: 15 00:00:00 1716KN
19.Te1 Dxh6 20.Txe7
+- (2.81 ++) Diepte: 15 00:00:00 1731KN
19.Te1 Pa6 20.h4
+- (2.93 ++) Diepte: 15 00:00:00 1757KN
19.Te1 Pa6 20.h4
+- (3.09 ++) Diepte: 15 00:00:00 1775KN
19.Te1 Pa6 20.h4
+- (2.98 --) Diepte: 15 00:00:00 1845KN
19.Te1 Pa6 20.h4
+- (2.68 --) Diepte: 15 00:00:00 1952KN
19.Te1 Pa6 20.h4 Pc5 21.Dc2 Dxh6 22.Txe7 Dg6 23.De2 Kg8 24.Lxg5 Le6 25.Td1 Tad8 26.Txd8 Txd8 27.Lf1 Dg7 28.Txe6
+- (2.37) Diepte: 15 00:00:00 2015KN
19.Te1 a5 20.De4
+- (2.43 ++) Diepte: 16 00:00:00 2109KN
19.Te1 Pa6 20.h4
+- (2.51 ++) Diepte: 16 00:00:00 2188KN
19.Te1 Pa6 20.h4
+- (2.46 --) Diepte: 16 00:00:00 2392KN
19.Te1 Pa6 20.h4
+- (2.31 --) Diepte: 16 00:00:00 2727KN
19.Te1 Pa6 20.h4
+- (2.10 --) Diepte: 16 00:00:00 3095KN
19.Te1 Pa6 20.h4
+- (2.25 ++) Diepte: 16 00:00:00 3376KN
19.Te1 Pa6 20.h4
+- (2.05 --) Diepte: 16 00:00:00 3402KN
19.Te1 Pa6 20.Dc2
+- (2.34 ++) Diepte: 16 00:00:00 4152KN
19.Te1 Pa6 20.Dc2
+- (1.95 --) Diepte: 16 00:00:00 4254KN
19.Te1 Pa6 20.h4 Pc5 21.Dc2 Dxh6 22.Txe7 Dg6 23.De2 Kg8 24.Lxg5 h6 25.Ld2 Le6 26.Te1 Tfd8 27.Lf4 Pd3 28.h5 Dg7
+- (2.44) Diepte: 16 00:00:00 5918KN
19.Te1 Pa6 20.h4 Pc5 21.Dc2 Dxh6 22.Txe7 Dg6 23.De2 Kg8 24.Lxg5 h6 25.Ld2 Le6 26.Te1 Tfd8 27.Lf4 Pd3 28.h5 Dg7
+- (2.38 --) Diepte: 17 00:00:00 6039KN
19.Te1 Pa6 20.h4 Pc5 21.Dc2 Dxh6 22.Txe7 Dg6 23.De2 Kg8 24.Lxg5 h6 25.Ld2 Le6 26.Te1 Tfd8 27.Lf4 Pd3 28.h5 Dg7
+- (2.30 --) Diepte: 17 00:00:00 6649KN
19.Te1 Pa6 20.h4 Pc5 21.Dc2 Dxh6 22.Txe7 Dg6 23.De2 Kg8 24.Lxg5 h6 25.Ld2 Le6 26.Te1 Tfd8 27.Lf4 Pd3 28.h5 Dg7
+- (2.19 --) Diepte: 17 00:00:00 6745KN
19.Te1 Pa6 20.h4 Pc5 21.Dc2 Dxh6 22.Txe7 Dg6 23.De2 Kg8 24.Lxg5 h6 25.Ld2 Le6 26.Te1 Tfd8 27.Lf4 Pd3 28.h5 Dg7
+- (2.04 --) Diepte: 17 00:00:00 7031KN
19.Te1 Pa6 20.Lxc6
+- (2.14 ++) Diepte: 17 00:00:00 7754KN
19.Te1 Pa6 20.Lxc6
+- (2.43 ++) Diepte: 17 00:00:00 8197KN
19.Te1 Pa6 20.Lxc6 Pc5 21.Dd4+ Dxd4 22.cxd4 Lf6 23.Lxa8 Le6 24.Lg2 Lxd4 25.Lxg5 Lxa1 26.Txa1 Kg6 27.Lf4 Td8 28.h4 Pd3 29.Le4+ f5 30.Lxd3 Txd3 31.Te1 Ld5 32.h5+ Kxh5 33.Pxf5
+- (2.59) Diepte: 17 00:00:00 8315KN
19.Te1 Pa6 20.Lxc6 Pc5 21.Dd4+ Dxd4 22.cxd4 Lf6 23.Lxa8 Le6 24.Lg2 Lxd4 25.Lxg5 Lxa1 26.Txa1 Kg6 27.Lf4 Td8 28.h4 Pd3 29.Le4+ f5 30.Lxd3 Txd3 31.Te1 Ld5 32.h5+ Kxh5 33.Pxf5
+- (2.53 --) Diepte: 18 00:00:01 8526KN
19.Te1 Pa6 20.Lxc6 Pc5 21.Dd4+ Dxd4 22.cxd4 Lf6 23.Lxa8 Le6 24.Lg2 Lxd4 25.Lxg5 Lxa1 26.Txa1 Kg6 27.Lf4 Td8 28.h4 Pd3 29.Le4+ f5 30.Lxd3 Txd3 31.Te1 Ld5 32.h5+ Kxh5 33.Pxf5
+- (2.45 --) Diepte: 18 00:00:01 8700KN
19.Te1 Pa6 20.Lxc6 Pc5 21.Dd4+ Dxd4 22.cxd4 Lf6 23.Lxa8 Le6 24.Lg2 Lxd4 25.Lxg5 Lxa1 26.Txa1 Kg6 27.Lf4 Td8 28.h4 Pd3 29.Le4+ f5 30.Lxd3 Txd3 31.Te1 Ld5 32.h5+ Kxh5 33.Pxf5
+- (2.34 --) Diepte: 18 00:00:01 8878KN
19.Te1 Pa6 20.Lxc6 Pc5 21.Dd4+ Dxd4 22.cxd4 Lf6 23.Lxa8 Le6 24.Lg2 Lxd4 25.Lxg5 Lxa1 26.Txa1 Kg6 27.Lf4 Td8 28.h4 Pd3 29.Le4+ f5 30.Lxd3 Txd3 31.Te1 Ld5 32.h5+ Kxh5 33.Pxf5
+- (2.41 ++) Diepte: 18 00:00:01 9169KN
19.Te1 Pa6 20.Lxc6 Pc5 21.Dd4+ Dxd4 22.cxd4 Lf6 23.Lxa8 Le6 24.Lg2 Lxd4 25.Lxg5 Lxa1 26.Txa1 Kg6 27.Lf4 Td8 28.h4 Pd3 29.Le4+ f5 30.Lxd3 Txd3 31.Te1 Ld5 32.h5+ Kxh5 33.Pxf5
+- (2.31 --) Diepte: 18 00:00:01 9810KN
19.Te1 Pa6 20.Lxc6 Pc5 21.Dd4+ Dxd4 22.cxd4 Lf6 23.Lxa8 Le6 24.Lg2 Lxd4 25.Lxg5 Lxa1 26.Txa1 Kg6 27.Lf4 Td8 28.h4 Pd3 29.Le4+ f5 30.Lxd3 Txd3 31.Te1 Ld5 32.h5+ Kxh5 33.Pxf5
+- (2.45 ++) Diepte: 18 00:00:01 10102KN
19.Te1 Pa6 20.Lxc6 Pc5 21.Dd4+ Dxd4 22.cxd4 Lf6 23.Lxa8 Le6 24.Lg2 Lxd4 25.Lxg5 Lxa1 26.Txa1 Kg6 27.Lf4 Td8 28.h4 Pd3 29.Le4+ f5 30.Lxd3 Txd3 31.Te1 Ld5 32.h5+ Kxh5 33.Pxf5
+- (2.25 --) Diepte: 18 00:00:01 10231KN
19.Te1 Pa6 20.Lxc6 Pc5 21.Dd4+ Dxd4 22.cxd4 Le6 23.Lxa8 Txa8 24.Pf5+ Lxf5 25.Txe7 Kf6 26.Lxg5+ Kxg5 27.dxc5 bxc5 28.Tc1 Tc8 29.Txf7 c4 30.Tc3 a5 31.Kg2 h6 32.f4+ Kg6 33.Ta7 Tc5 34.Kf2
+- (2.39) Diepte: 18 00:00:01 10255KN
19.Te1 Pa6 20.Lxc6 Pc5 21.Dd4+ Dxd4 22.cxd4 Le6 23.Lxa8 Txa8 24.Pf5+ Lxf5 25.Txe7 Kf6 26.Lxg5+ Kxg5 27.dxc5 bxc5 28.Tc1 Tc8 29.Txf7 c4 30.Tc3 a5 31.Kg2 h6 32.f4+ Kg6 33.Ta7 Tc5 34.Kf2
+- (2.33 --) Diepte: 19 00:00:01 10963KN
19.Te1 Pa6 20.Lxc6 Pc5 21.Dd4+ Dxd4 22.cxd4 Le6 23.Lxa8 Txa8 24.Pf5+ Lxf5 25.Txe7 Kf6 26.Lxg5+ Kxg5 27.dxc5 bxc5 28.Tc1 Tc8 29.Txf7 c4 30.Tc3 a5 31.Kg2 h6 32.f4+ Kg6 33.Ta7 Tc5 34.Kf2
+- (2.45 ++) Diepte: 19 00:00:01 11884KN
19.Te1 Pa6 20.Lxc6 Pc5 21.Dd4+ Dxd4 22.cxd4 Le6 23.Lxa8 Txa8 24.Pf5+ Lxf5 25.Txe7 Kf6 26.Lxg5+ Kxg5 27.dxc5 bxc5 28.Tc1 Tc8 29.Txf7 c4 30.Tc3 a5 31.Kg2 Kg6 32.Ta7 Tc6 33.Txa5 Ld3
+- (2.45) Diepte: 19 00:00:01 11891KN
19.Te1 Pa6 20.Lxc6 Pc5 21.Dd4+ Dxd4 22.cxd4 Le6 23.Lxa8 Txa8 24.Pf5+ Lxf5 25.Txe7 Kf6 26.Lxg5+ Kxg5 27.dxc5 bxc5 28.Tc1 Tc8 29.Txf7 c4 30.Tc3 a5 31.Kg2 Kg6 32.Ta7 Tc6 33.Txa5 Ld3
+- (2.39 --) Diepte: 20 00:00:01 13472KN
19.Te1 Pa6 20.Lxc6 Pc5 21.Dd4+ Dxd4 22.cxd4 Le6 23.Lxa8 Txa8 24.Pf5+ Lxf5 25.Txe7 Kf6 26.Lxg5+ Kxg5 27.dxc5 bxc5 28.Tc1 Tc8 29.Txf7 Kg6 30.Txa7 c4 31.Tc3 Kf6 32.f4 Ld3 33.Kf2 Te8
+- (2.40) Diepte: 20 00:00:01 15574KN
19.Te1 Pa6 20.Lxc6 Pc5 21.Dd4+ Dxd4 22.cxd4 Le6 23.Lxa8 Txa8 24.Pf5+ Lxf5 25.Txe7 Kf6 26.Lxg5+ Kxg5 27.dxc5 bxc5 28.Tc1 Tc8 29.Txf7 Kg6 30.Txa7 c4 31.Tc3 Kf6 32.f4 Ld3 33.Kf2 Te8
+- (2.34 --) Diepte: 21 00:00:01 19397KN
19.Te1 Pa6 20.Lxc6 Pc5 21.Dd4+ Dxd4 22.cxd4 Le6 23.Lxa8 Txa8 24.Pf5+ Lxf5 25.Txe7 Kf6 26.Lxg5+ Kxg5 27.dxc5 bxc5 28.Tc1 Tc8 29.Txf7 Kg6 30.Txa7 c4 31.Tc3 Kf6 32.f4 Ld3 33.Kf2 Te8
+- (2.26 --) Diepte: 21 00:00:01 21638KN
19.Te1 Pa6 20.Lxc6 Pc5 21.Dd4+ Dxd4 22.cxd4 Le6 23.Lxa8 Txa8 24.Pf5+ Lxf5 25.Txe7 Kf6 26.Lxg5+ Kxg5 27.dxc5 bxc5 28.Tc1 Tc8 29.Txf7 Kg6 30.Txa7 c4 31.Tc3 Kf6 32.f4 Ld3 33.Kf2 Te8
+- (2.15 --) Diepte: 21 00:00:02 26021KN
19.Te1 Pa6 20.Lxc6 Pc5 21.Dd4+ Dxd4 22.cxd4 Le6 23.Lxa8 Txa8 24.Pf5+ Lxf5 25.Txe7 Kf6 26.Lxg5+ Kxg5 27.dxc5 bxc5 28.Tc1 Tc8 29.Txf7 Kg6 30.Txa7 c4 31.Tc3 Kf6 32.f4 Ld3 33.Kf2 Te8
+- (2.22 ++) Diepte: 21 00:00:02 28124KN
19.Te1 Pa6 20.Lxc6 Pc5 21.Dd4+ Dxd4 22.cxd4 Le6 23.Lxa8 Txa8 24.Pf5+ Lxf5 25.Txe7 Kf6 26.Lxg5+ Kxg5 27.dxc5 bxc5 28.Tc1 Tc8 29.Txf7 Kg6 30.Txa7 c4 31.Tc3 Kf6 32.f4 Ld3 33.Kf2 Te8
+- (2.43 ++) Diepte: 21 00:00:02 30470KN, tb=12
19.Te1 Pa6 20.Lxc6 Pc5 21.Dd4+ Dxd4 22.cxd4 Lf6 23.Lxa8 Le6 24.Lg2 Lxd4 25.Lxg5 Pd3 26.Te3 Pxf2 27.Tae1 Ph3+ 28.Lxh3 Lxh3 29.g4 Kg6 30.Lf4 Te8 31.Kf2 Lxe3+ 32.Lxe3 Te6 33.Kg3 f5 34.Kxh3 f4 35.Pf5 fxe3 36.Kg3
+- (2.58) Diepte: 21 00:00:02 31093KN, tb=12
19.Te1 Pa6 20.Lxc6 Pc5 21.Dd4+ Dxd4 22.cxd4 Lf6 23.Lxa8 Le6 24.Lg2 Lxd4 25.Lxg5 Pd3 26.Te3 Pxf2 27.Tae1 Ph3+ 28.Lxh3 Lxh3 29.g4 Kg6 30.Lf4 Te8 31.Kf2 Lxe3+ 32.Lxe3 Te6 33.Kg3 f5 34.Kxh3 f4 35.Pf5 fxe3 36.Kg3
+- (2.52 --) Diepte: 22 00:00:02 38831KN, tb=17
19.Te1 Pa6 20.Lxc6 Pc5 21.Dd4+ Dxd4 22.cxd4 Lf6 23.Lxa8 Le6 24.Lg2 Lxd4 25.Lxg5 Pd3 26.Te3 Pxf2 27.Tae1 Ph3+ 28.Lxh3 Lxh3 29.g4 Kg6 30.Lf4 Te8 31.Kf2 Lxe3+ 32.Lxe3 Te6 33.Kg3 f5 34.Kxh3 f4 35.Pf5 fxe3 36.Kg3
+- (2.44 --) Diepte: 22 00:00:02 41770KN, tb=17
19.Te1 Pa6 20.Lxc6 Pc5 21.Dd4+ Dxd4 22.cxd4 Lf6 23.Lxa8 Le6 24.Lg2 Lxd4 25.Lxg5 Pd3 26.Te3 Pxf2 27.Tae1 Ph3+ 28.Lxh3 Lxh3 29.g4 Kg6 30.Lf4 Te8 31.Kf2 Lxe3+ 32.Lxe3 Te6 33.Kg3 f5 34.Kxh3 f4 35.Pf5 fxe3 36.Kg3
+- (2.33 --) Diepte: 22 00:00:03 46832KN, tb=18
19.Te1 Pa6 20.Lxc6 Pc5 21.Dd4+ Dxd4 22.cxd4 Lf6 23.Lxa8 Le6 24.Lg2 Lxd4 25.Lxg5 Pd3 26.Te3 Pxf2 27.Tae1 Ph3+ 28.Lxh3 Lxh3 29.g4 Kg6 30.Lf4 Te8 31.Kf2 Lxe3+ 32.Lxe3 Te6 33.Kg3 f5 34.Kxh3 f4 35.Pf5 fxe3 36.Kg3
+- (2.40 ++) Diepte: 22 00:00:03 48335KN, tb=18
19.Te1 Pa6 20.Lxc6 Pc5 21.Dd4+ Dxd4 22.cxd4 Lf6 23.Lxa8 Le6 24.Lg2 Lxd4 25.Lxg5 Pd3 26.Te3 Pxf2 27.Tae1 Ph3+ 28.Lxh3 Lxh3 29.g4 Kg6 30.Lf4 Te8 31.Kf2 Lxe3+ 32.Lxe3 Te6 33.Kg3 f5 34.Kxh3 f4 35.Pf5 fxe3 36.Kg3
+- (2.30 --) Diepte: 22 00:00:03 49452KN, tb=18
19.Te1 Pa6 20.Lxc6 Pc5 21.Dd4+ Dxd4 22.cxd4 Lf6 23.Lxa8 Le6 24.Lg2 Lxd4 25.Lxg5 Pd3 26.Te3 Pxf2 27.Tae1 Ph3+ 28.Lxh3 Lxh3 29.g4 Kg6 30.Lf4 Te8 31.Kf2 Lxe3+ 32.Lxe3 Te6 33.Kg3 f5 34.Kxh3 f4 35.Pf5 fxe3 36.Kg3
+- (2.44 ++) Diepte: 22 00:00:03 50173KN, tb=18
19.Te1 Pa6 20.Lxc6 Pc5 21.Dd4+ Dxd4 22.cxd4 Lf6 23.Lxa8 Le6 24.Lg2 Lxd4 25.Lxg5 Pd3 26.Te3 Pxf2 27.Tae1 Ph3+ 28.Lxh3 Lxh3 29.g4 Kg6 30.Lf4 Te8 31.Kf2 Lxe3+ 32.Lxe3 Te6 33.Kg3 f5 34.Kxh3 f4 35.Pf5 fxe3 36.Kg3
+- (2.24 --) Diepte: 22 00:00:03 51374KN, tb=18
19.Te1 Pa6 20.Lxc6 Pc5 21.Dd4+ Dxd4 22.cxd4 Lf6 23.Lxa8 Le6 24.Lg2 Lxd4 25.Lxg5 Pd3 26.Te3 Pxf2 27.Tae1 Ph3+ 28.Lxh3 Lxh3 29.g4 Kg6 30.Lf4 Te8 31.Kf2 Lxe3+ 32.Lxe3 Te6 33.Kg3 f5 34.Kxh3 f4 35.Pf5 fxe3 36.Kg3
+- (1.68 --) Diepte: 22 00:00:05 75832KN, tb=24

(12.12.2017)

JP.

User avatar
Laskos
Posts: 10949
Joined: Wed Jul 26, 2006 8:21 pm
Full name: Kai Laskos

Re: Any news of a Komodo update in sight?

Post by Laskos » Tue Dec 12, 2017 7:47 pm

abulmo2 wrote:
Laskos wrote:
abulmo2 wrote:
Leo wrote:
Werewolf wrote:Sorry that was a typo - I meant 1 GB of Hash.

And why did they use such a tiny amount?
To limit the strength of SF.
This has to be proved. I would like to see an SF 8 vs SF 8 (or Komodo/Houdini) match with different hash sizes, to see the impact of hash size on the Elo, on the machine used by the AlphaZero team. Unfortunately, I do not have a computer powerful enough to do any test, so I cannot do anything meaningful myself.
See here, for example:
http://www.talkchess.com/forum/viewtopi ... t&start=20

In the conditions of the match with A0, the necessary hash was probably 32 or 64 GB, and by looking at the link it seems SF8 had some 10% time-to-depth disadvantage. How that translates into Elo points I don't know; Elo becomes a bit obsolete when the score is 28-0 with 72 draws. Maybe with 64 GB hash SF8 would have gotten 1-2 more draws. This hash issue is a bit irrelevant when we are talking about such hard-to-compare hardware as used by A0 and SF8.
I am not convinced by the previous tests. For obvious reasons, they are very far from the conditions of the AlphaZero vs Stockfish match.
I don't think it's that relevant to know the precise effect. Let's say it's not 10% in time-to-depth but 30%, and, all combined, a factor-of-2 handicap for Stockfish 8 in the stated conditions. First, what do you think of a +28 -0 =72 score with the weaker side having a factor of 2 in speed? My guess is that it would still lose significantly. Second, when in real terms (or maybe price terms) the hardware difference between SF and A0 might be factors of hundreds, a total factor of 2 here from the various handicaps to SF is a bit of nitpicking.

Werewolf
Posts: 1349
Joined: Thu Sep 18, 2008 8:24 pm

Re: Any news of a Komodo update in sight?

Post by Werewolf » Tue Dec 12, 2017 10:59 pm

72 draws and 28 wins. What Elo difference is that? About 100 Elo?
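For what it's worth, the usual logistic Elo model gives roughly that number. A quick back-of-envelope check (mine, not from the thread):

```python
import math

# Elo difference implied by a score fraction p under the standard
# logistic model: D = -400 * log10(1/p - 1).
def elo_diff(p):
    return -400 * math.log10(1 / p - 1)

# AlphaZero scored 28 wins and 72 draws in 100 games:
p = (28 + 0.5 * 72) / 100   # 0.64
print(round(elo_diff(p)))    # prints 100
```

A 64% score maps to about 100 Elo, though with zero losses and so many draws the real uncertainty is large.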

Changing Stockfish to:
- A faster dual xeon with HT off
- A great opening book
- Bigger hash
- More flexible time control

How much Elo does that add? It must be quite a bit. I wonder if a really deep line in the opening could catch AlphaZero out; who knows?

The Elo difference between them would then be smaller, and AZ is using much more powerful hardware.

Rodolfo Leoni
Posts: 545
Joined: Tue Jun 06, 2017 2:49 pm
Location: Italy

Re: Any news of a Komodo update in sight?

Post by Rodolfo Leoni » Wed Dec 13, 2017 12:08 am

Michael Sherwin wrote:
mjlef wrote:
shrapnel wrote:
Jesse Gersenson wrote:If people want learning in Komodo, let your voice be heard; Mark and Larry are open to feature requests, especially those requested by a lot of people.
'.
So Reinforcement Learning will be introduced by Komodo team (if they are capable of it) only on REQUEST.
Looks like the thrashing stockfish received still hasn't convinced them.
*** Unbelievable.
Of course we listen to requests. But I do not think the Romi learning is anything like the learning Google/DeepMind did. They used 5000 special TensorFlow Processing Units (TPUs), each costing thousands of dollars. Right now, this is way beyond our resources. Romi learning is most likely simply saving important positions in a persistent hash. In future games, these are reloaded into the main hash table, so the new game, if it encounters one of these positions, already has deep search information for them. The thing is, this helps not at all if a different line is played. My old program NOW had this feature, but I did it not so much to make it stronger, but instead to avoid losing lines during tournaments. You can think of it as a self-correcting book, which benefits if the same line is tried on Komodo.

Larry and I often discuss Monte Carlo Tree Search, and are interested in trying this. We have also discussed uses for neural networks. Small nns could be useful in present PCs, but the massive nn used in AlphaGo Zero is currently beyond what we, and most chess engine users can afford.

We listen, and try to add what we think people want. But we do not have endless resources. We can afford to buy roughly 1 new server each year. It is not a matter of being "convinced". We just cannot afford it currently. Hopefully, GPUs in graphics cards will get faster and perhaps nn hardware will get added to future PCs or at least be much less expensive.
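The persistent-hash learning described above can be sketched in a few lines. This is a hypothetical illustration of the idea, not NOW's or Komodo's actual code; the key format, entry layout, and file name are all my assumptions:

```python
import pickle

# Toy persistent hash: map a position key (say, a Zobrist hash or a FEN
# string) to the deepest search result ever recorded for that position.
class PersistentHash:
    def __init__(self, path="learn.bin"):
        self.path = path
        try:
            with open(path, "rb") as f:
                self.table = pickle.load(f)
        except (OSError, EOFError):
            self.table = {}          # no learn file yet

    def store(self, key, depth, score, best_move):
        # Keep only the deepest result seen for each position.
        old = self.table.get(key)
        if old is None or depth > old[0]:
            self.table[key] = (depth, score, best_move)

    def probe(self, key):
        # At the start of a new game these entries would be preloaded
        # into the engine's main transposition table.
        return self.table.get(key)

    def save(self):
        with open(self.path, "wb") as f:
            pickle.dump(self.table, f)
```

As mjlef points out, this only pays off when a later game actually reaches one of the stored positions; on a different line the table is never hit.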
Mark Lefler was kind enough to ask me in a pm to error check the above! :D
But I do not think the Romi learning is anything like the learning Google/DeepMind did.
There is circumstantial evidence that it is similar in nature. Reinforcement learning involves accumulated rewards (and penalties) for every position in a database. It could be a persistent hash like suggested.
Romi learning is most likely simply saving important positions in a persistent hash.
Or the way Romi actually does it in a tree of all played games. This is superior to a persistent hash as only the subtree from the current position is loaded into the game hash. This has the advantage that only useful information ends up in the game hash.
The thing is, this helps not at all if a different line is played.
Not true. Rewards and penalties are greater near the leaves and over time are back propagated to the root. Since every node is a root to a subtree, every move benefits from backpropagation. This results in a meaningful differentiation resulting in a gradual determination of which move gives better results. So for example at the actual root Romi will settle on say 1.e4 or 1.d4 etc. as being best and will always play that opening move. Just like AlphaZ always played 1.d4.
You can think of it as a self-correcting book, which benefits if the same line is tried on Komodo.
Philosophically I would argue that what RomiChess does has nothing to do with a book. In a book an engine usually can choose randomly between acceptable moves. In RomiChess only the learned best move is played in its Monkey See Monkey Do "book". That is separate from Romi's reinforcement learning. It just so happens that the stored tree of all Romi's games can handle both a "book" and the rewards/penalties for reinforcement learning.
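The tree-based scheme Michael describes (one tree holding every played game, with rewards decaying as they propagate from the leaves back toward the root) can be sketched roughly like this. The reward and decay values are my own placeholders, not RomiChess's actual constants:

```python
# Toy Romi-style learning: every played game is threaded into a single
# tree of moves, and after each game the result is back-propagated from
# the final position toward the root, shrinking as it goes, so that
# moves near the leaves receive the largest credit.
class Node:
    def __init__(self):
        self.score = 0.0     # accumulated reward/penalty
        self.children = {}   # move (string) -> Node

def learn(root, moves, result, decay=0.9):
    """Thread one game into the tree; result is +1 win, -1 loss, 0 draw."""
    path = []
    node = root
    for move in moves:
        node = node.children.setdefault(move, Node())
        path.append(node)
    credit = float(result)
    for node in reversed(path):      # leaf first, root's child last
        node.score += credit
        credit *= decay              # smaller credit nearer the root

def best_move(node):
    """Monkey-see-monkey-do: play the learned move with the best score."""
    if not node.children:
        return None
    return max(node.children, key=lambda m: node.children[m].score)
```

Because every node is the root of its own subtree, the same back-propagation gradually differentiates moves at every depth; at the actual root the engine eventually settles on one opening move, as Michael describes.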

Thanks Mark for inviting me to share some details! :D
:idea: Idea :!: :idea:

If the K team decides to adopt Romi learning, it could train for the next WCCC with learning as an extension of Erdo's book! Then AlphaZ could be invited to participate...

The "invitation" could sound as "If you don't register the whole world will know you're afraid to challenge the Champion in charge"...........

I bet a pizza on Komodo. If Komodo wins, I'll offer a pizza to the whole team! :D

(Ehm... How many in K team? :? )
F.S.I. Chess Teacher

Michael Sherwin
Posts: 3196
Joined: Fri May 26, 2006 1:00 am
Location: WY, USA
Full name: Michael Sherwin

Re: Any news of a Komodo update in sight?

Post by Michael Sherwin » Wed Dec 13, 2017 12:22 am

Rodolfo Leoni wrote:
Michael Sherwin wrote:
mjlef wrote:
shrapnel wrote:
Jesse Gersenson wrote:If people want learning in Komodo, let your voice be heard; Mark and Larry are open to feature requests, especially those requested by a lot of people.
'.
So Reinforcement Learning will be introduced by Komodo team (if they are capable of it) only on REQUEST.
Looks like the thrashing stockfish received still hasn't convinced them.
*** Unbelievable.
Of course we listen to requests. But I do not think the Romi learning is anything like the learning Google/DeepMind did. They used 5000 special TensorFlow Processing Units (TPUs), each costing thousands of dollars. Right now, this is way beyond our resources. Romi learning is most likely simply saving important positions in a persistent hash. In future games, these are reloaded into the main hash table, so the new game, if it encounters one of these positions, already has deep search information for them. The thing is, this helps not at all if a different line is played. My old program NOW had this feature, but I did it not so much to make it stronger, but instead to avoid losing lines during tournaments. You can think of it as a self-correcting book, which benefits if the same line is tried on Komodo.

Larry and I often discuss Monte Carlo Tree Search, and are interested in trying this. We have also discussed uses for neural networks. Small nns could be useful in present PCs, but the massive nn used in AlphaGo Zero is currently beyond what we, and most chess engine users can afford.

We listen, and try to add what we think people want. But we do not have endless resources. We can afford to buy roughly 1 new server each year. It is not a matter of being "convinced". We just cannot afford it currently. Hopefully, GPUs in graphics cards will get faster and perhaps nn hardware will get added to future PCs or at least be much less expensive.
Mark Lefler was kind enough to ask me in a pm to error check the above! :D
But I do not think the Romi learning is anything like the learning Google/DeepMind did.
There is circumstantial evidence that it is similar in nature. Reinforcement learning involves accumulated rewards (and penalties) for every position in a database. It could be a persistent hash like suggested.
Romi learning is most likely simply saving important positions in a persistent hash.
Or the way Romi actually does it in a tree of all played games. This is superior to a persistent hash as only the subtree from the current position is loaded into the game hash. This has the advantage that only useful information ends up in the game hash.
The thing is, this helps not at all if a different line is played.
Not true. Rewards and penalties are greater near the leaves and over time are back propagated to the root. Since every node is a root to a subtree, every move benefits from backpropagation. This results in a meaningful differentiation resulting in a gradual determination of which move gives better results. So for example at the actual root Romi will settle on say 1.e4 or 1.d4 etc. as being best and will always play that opening move. Just like AlphaZ always played 1.d4.
You can think of it as a self-correcting book, which benefits if the same line is tried on Komodo.
Philosophically I would argue that what RomiChess does has nothing to do with a book. In a book an engine usually can choose randomly between acceptable moves. In RomiChess only the learned best move is played in its Monkey See Monkey Do "book". That is separate from Romi's reinforcement learning. It just so happens that the stored tree of all Romi's games can handle both a "book" and the rewards/penalties for reinforcement learning.

Thanks Mark for inviting me to share some details! :D
:idea: Idea :!: :idea:

If the K team decides to adopt Romi learning, it could train for the next WCCC with learning as an extension of Erdo's book! Then AlphaZ could be invited to participate...

The "invitation" could sound as "If you don't register the whole world will know you're afraid to challenge the Champion in charge"...........

I bet a pizza on Komodo. If Komodo wins, I'll offer a pizza to the whole team! :D

(Ehm... How many in K team? :? )
Do I get to go to the pizza party? :lol: Anyway, I think I read that AlphaZ trained on 44 million games. I'm not sure about that number, but the point is that K would have to train on at least a percentage of the games that A did, or K will still find itself 'unprepared'. That many training games would take an enormous amount of hardware and would require a huge public cooperative effort. Like in most world-changing endeavors, money paves the way. But we could still enjoy that pizza party, Rodolfo :D and I think the K team would not mind if you were there also. 8-)
If you are on a sidewalk and the covid goes beep beep
Just step aside or you might have a bit of heat
Covid covid runs through the town all day
Can the people ever change their ways
Sherwin the covid's after you
Sherwin if it catches you you're through

Rodolfo Leoni
Posts: 545
Joined: Tue Jun 06, 2017 2:49 pm
Location: Italy

Re: Any news of a Komodo update in sight?

Post by Rodolfo Leoni » Wed Dec 13, 2017 12:30 am

Rodolfo Leoni wrote:
Michael Sherwin wrote:
mjlef wrote:
shrapnel wrote:
Jesse Gersenson wrote:If people want learning in Komodo, let your voice be heard; Mark and Larry are open to feature requests, especially those requested by a lot of people.
'.
So Reinforcement Learning will be introduced by Komodo team (if they are capable of it) only on REQUEST.
Looks like the thrashing stockfish received still hasn't convinced them.
*** Unbelievable.
Of course we listen to requests. But I do not think the Romi learning is anything like the learning Google/DeepMind did. They used 5000 special TensorFlow Processing Units (TPUs), each costing thousands of dollars. Right now, this is way beyond our resources. Romi learning is most likely simply saving important positions in a persistent hash. In future games, these are reloaded into the main hash table, so the new game, if it encounters one of these positions, already has deep search information for them. The thing is, this helps not at all if a different line is played. My old program NOW had this feature, but I did it not so much to make it stronger, but instead to avoid losing lines during tournaments. You can think of it as a self-correcting book, which benefits if the same line is tried on Komodo.

Larry and I often discuss Monte Carlo Tree Search, and are interested in trying this. We have also discussed uses for neural networks. Small nns could be useful in present PCs, but the massive nn used in AlphaGo Zero is currently beyond what we, and most chess engine users can afford.

We listen, and try to add what we think people want. But we do not have endless resources. We can afford to buy roughly 1 new server each year. It is not a matter of being "convinced". We just cannot afford it currently. Hopefully, GPUs in graphics cards will get faster and perhaps nn hardware will get added to future PCs or at least be much less expensive.
Mark Lefler was kind enough to ask me in a pm to error check the above! :D
But I do not think the Romi learning is anything like the learning Google/DeepMind did.
There is circumstantial evidence that it is similar in nature. Reinforcement learning involves accumulated rewards (and penalties) for every position in a database. It could be a persistent hash like suggested.
Romi learning is most likely simply saving important positions in a persistent hash.
Or the way Romi actually does it in a tree of all played games. This is superior to a persistent hash as only the subtree from the current position is loaded into the game hash. This has the advantage that only useful information ends up in the game hash.
The thing is, this helps not at all if a different line is played.
Not true. Rewards and penalties are greater near the leaves and over time are back propagated to the root. Since every node is a root to a subtree, every move benefits from backpropagation. This results in a meaningful differentiation resulting in a gradual determination of which move gives better results. So for example at the actual root Romi will settle on say 1.e4 or 1.d4 etc. as being best and will always play that opening move. Just like AlphaZ always played 1.d4.
You can think of it as a self-correcting book, which benefits if the same line is tried on Komodo.
Philosophically I would argue that what RomiChess does has nothing to do with a book. In a book an engine usually can choose randomly between acceptable moves. In RomiChess only the learned best move is played in its Monkey See Monkey Do "book". That is separate from Romi's reinforcement learning. It just so happens that the stored tree of all Romi's games can handle both a "book" and the rewards/penalties for reinforcement learning.

Thanks Mark for inviting me to share some details! :D
:idea: Idea :!: :idea:

If the K team decides to adopt Romi learning, it could train for the next WCCC with learning as an extension of Erdo's book! Then AlphaZ could be invited to participate...

The "invitation" could sound as "If you don't register the whole world will know you're afraid to challenge the Champion in charge"...........

I bet a pizza on Komodo. If Komodo wins, I'll offer a pizza to the whole team! :D

(Ehm... How many in K team? :? )
Of course you'd be considered a K team member if they decided to use your system. :D And I think beating AlphaZ would give Komodo such advertising that it would greatly compensate for the expenses. The pizza would be an extra! :lol: As for me, I just earned my qualification as a Federal Chess Teacher and I have a lot of requests from schools and from kids' parents, so I can plan nothing for the immediate future. :?

But we'll have that opportunity, sooner or later. :D
F.S.I. Chess Teacher

Michael Sherwin
Posts: 3196
Joined: Fri May 26, 2006 1:00 am
Location: WY, USA
Full name: Michael Sherwin

Re: Any news of a Komodo update in sight?

Post by Michael Sherwin » Wed Dec 13, 2017 2:51 pm

Michael Sherwin wrote:
mjlef wrote:
shrapnel wrote:
Jesse Gersenson wrote:If people want learning in Komodo, let your voice be heard; Mark and Larry are open to feature requests, especially those requested by a lot of people.
'.
So Reinforcement Learning will be introduced by Komodo team (if they are capable of it) only on REQUEST.
Looks like the thrashing stockfish received still hasn't convinced them.
*** Unbelievable.
Of course we listen to requests. But I do not think the Romi learning is anything like the learning Google/DeepMind did. They used 5000 special TensorFlow Processing Units (TPUs), each costing thousands of dollars. Right now, this is way beyond our resources. Romi learning is most likely simply saving important positions in a persistent hash. In future games, these are reloaded into the main hash table, so the new game, if it encounters one of these positions, already has deep search information for them. The thing is, this helps not at all if a different line is played. My old program NOW had this feature, but I did it not so much to make it stronger, but instead to avoid losing lines during tournaments. You can think of it as a self-correcting book, which benefits if the same line is tried on Komodo.

Larry and I often discuss Monte Carlo Tree Search, and are interested in trying this. We have also discussed uses for neural networks. Small nns could be useful in present PCs, but the massive nn used in AlphaGo Zero is currently beyond what we, and most chess engine users can afford.

We listen, and try to add what we think people want. But we do not have endless resources. We can afford to buy roughly 1 new server each year. It is not a matter of being "convinced". We just cannot afford it currently. Hopefully, GPUs in graphics cards will get faster and perhaps nn hardware will get added to future PCs or at least be much less expensive.
Mark Lefler was kind enough to ask me in a pm to error check the above! :D
But I do not think the Romi learning is anything like the learning Google/DeepMind did.
There is circumstantial evidence that it is similar in nature. Reinforcement learning involves accumulated rewards (and penalties) for every position in a database. It could be a persistent hash like suggested.
Romi learning is most likely simply saving important positions in a persistent hash.
Or the way Romi actually does it in a tree of all played games. This is superior to a persistent hash as only the subtree from the current position is loaded into the game hash. This has the advantage that only useful information ends up in the game hash.
The thing is, this helps not at all if a different line is played.
Not true. Rewards and penalties are greater near the leaves and over time are back propagated to the root. Since every node is a root to a subtree, every move benefits from backpropagation. This results in a meaningful differentiation resulting in a gradual determination of which move gives better results. So for example at the actual root Romi will settle on say 1.e4 or 1.d4 etc. as being best and will always play that opening move. Just like AlphaZ always played 1.d4.
You can think of it as a self-correcting book, which benefits if the same line is tried on Komodo.
Philosophically I would argue that what RomiChess does has nothing to do with a book. In a book an engine usually can choose randomly between acceptable moves. In RomiChess only the learned best move is played in its Monkey See Monkey Do "book". That is separate from Romi's reinforcement learning. It just so happens that the stored tree of all Romi's games can handle both a "book" and the rewards/penalties for reinforcement learning.

Thanks Mark for inviting me to share some details! :D
There was a misunderstanding on my part. It was not Mark that sent me the pm to check the above post for errors. It was Ozymandias. I should have suspected something was wrong when it appeared to me that a fellow engine author would so magnanimously request that I error check what he wrote about my work, lol. :roll:
If you are on a sidewalk and the covid goes beep beep
Just step aside or you might have a bit of heat
Covid covid runs through the town all day
Can the people ever change their ways
Sherwin the covid's after you
Sherwin if it catches you you're through

Jesse Gersenson
Posts: 584
Joined: Sat Aug 20, 2011 7:43 am
Contact:

Re: Any news of a Komodo update in sight?

Post by Jesse Gersenson » Wed Dec 13, 2017 5:46 pm

Rodolfo Leoni wrote:(Ehm... How many in K team? :? )
Larry, Don, Mark and the legion of people who support the project in various ways:
  • financial support (buying the engine, and also those who make substantial donations),
  • engine testers,
  • website testers,
  • Mac and Android compilers.
It would certainly be a lot of pizza.

Rodolfo Leoni
Posts: 545
Joined: Tue Jun 06, 2017 2:49 pm
Location: Italy

Re: Any news of a Komodo update in sight?

Post by Rodolfo Leoni » Wed Dec 13, 2017 6:16 pm

Jesse Gersenson wrote:
Rodolfo Leoni wrote:(Ehm... How many in K team? :? )
Larry, Don, Mark and the legion of people who support the project in various ways:
  • financial support (buying the engine, and also those who make substantial donations),
  • engine testers,
  • website testers,
  • Mac and Android compilers.
It would certainly be a lot of pizza.
Painful! But even with customers and donations... not enough money.
I'd be happy to offer a pizza to Don, but he's dead. :(
F.S.I. Chess Teacher

Rodolfo Leoni
Posts: 545
Joined: Tue Jun 06, 2017 2:49 pm
Location: Italy

Re: Any news of a Komodo update in sight?

Post by Rodolfo Leoni » Wed Dec 13, 2017 7:54 pm

I've decided! If Komodo (with Mike's learning) beats AlphaZ at the WCCC, I'll offer a giant pizza to Mark, Larry, Erdo and Mike! There's a crisis on; I need to save money. :wink:
F.S.I. Chess Teacher

Post Reply