Sylwy wrote: ↑Thu May 20, 2021 6:21 pm
NO! I just tested the INT8 weights and... they work!
https://github.com/QueensGambit/CrazyAr ... in_MKL.zip
My new test (versus the same Ktulu 9 as a marker) will be with hash = 512 MB (for each engine) and INT8 weights enabled!

Silvian, I see you use Arena. Does it set threads to 1 correctly? Have you checked with Task Manager? The default Threads setting is 2, and it seems Ara does not obey the instruction to use only 1. In fact, I experience the same under the Cute Chess GUI.
ClassicAra Chess Engine..World Record Download!!
Moderator: Ras
-
- Posts: 1468
- Joined: Sat Jul 21, 2018 7:43 am
- Location: Budapest, Hungary
- Full name: Gabor Szots
Re: ClassicAra Chess Engine..World Record Download!!
Gabor Szots
CCRL testing group
-
- Posts: 5685
- Joined: Wed Sep 05, 2018 2:16 am
- Location: Moving
- Full name: Jorge Picado
Re: ClassicAra Chess Engine..World Record Download!!
Gabor Szots wrote: ↑Thu May 20, 2021 6:55 pm

Under Arena 3.5.1, select Engines ==> Manage ==> Details, select ClassicAra ==> UCI, and where it says Common Max CPU cores, set it to 1 and click Apply. There must be a difference of at least 150 rating points between the CPU version 0.9.0 and the GPU version 0.9.2 post1, according to TCEC:
https://tcec-chess.com/
-
- Posts: 4886
- Joined: Fri Apr 21, 2006 4:19 pm
- Location: IAȘI - the historical capital of MOLDOVA
- Full name: Silvian Rucsandescu
Re: ClassicAra Chess Engine..World Record Download!!
Gabor Szots wrote: ↑Thu May 20, 2021 6:55 pm

Hi, Gabor!
1. To build the MCGS, the engine by default uses 1 to 3 threads internally.

2. The RISEv3.3 net is an RL one plus a new architecture. Much better.

This engine is really worth studying. A very interesting architecture.
-
- Posts: 5685
- Joined: Wed Sep 05, 2018 2:16 am
- Location: Moving
- Full name: Jorge Picado
Re: ClassicAra Chess Engine..World Record Download!!
Sylwy wrote: ↑Thu May 20, 2021 9:00 pm

Here is the latest download for different platforms ==>
https://github.com/QueensGambit/CrazyAra#download
-
- Posts: 4886
- Joined: Fri Apr 21, 2006 4:19 pm
- Location: IAȘI - the historical capital of MOLDOVA
- Full name: Silvian Rucsandescu
Re: ClassicAra Chess Engine..World Record Download!!
Chessqueen wrote: ↑Thu May 20, 2021 7:37 pm

The UCI settings in the Arena GUI cannot affect the internal architecture of this engine. If necessary, it uses 1, 2, or 3 threads.

-
- Posts: 1468
- Joined: Sat Jul 21, 2018 7:43 am
- Location: Budapest, Hungary
- Full name: Gabor Szots
Re: ClassicAra Chess Engine..World Record Download!!
Too bad. Then it's not suitable for 1-CPU testing.
Gabor Szots
CCRL testing group
-
- Posts: 4886
- Joined: Fri Apr 21, 2006 4:19 pm
- Location: IAȘI - the historical capital of MOLDOVA
- Full name: Silvian Rucsandescu
Re: ClassicAra Chess Engine..World Record Download!!
Chessqueen wrote: ↑Thu May 20, 2021 9:16 pm Here is the latest download for different platforms ==>
https://github.com/QueensGambit/CrazyAra#download

ClassicAra 0.9.0 remains the only usable version...

-
- Posts: 4886
- Joined: Fri Apr 21, 2006 4:19 pm
- Location: IAȘI - the historical capital of MOLDOVA
- Full name: Silvian Rucsandescu
Re: ClassicAra Chess Engine..World Record Download!!
An incredible Caro-Kann game played by ClassicAra 0.9.0 using the RISEv3.3 model:
[pgn]
[Event "NN Test99"]
[Site "ISR 3"]
[Date "2021.05.20"]
[Round "3"]
[White "ClassicAra_0.9.0_x64"]
[Black "Ktulu 9 w32"]
[Result "1-0"]
[BlackElo "2200"]
[ECO "B19"]
[Opening "Caro-Kann"]
[Time "23:12:07"]
[Variation "Classical, Spassky, 10.Qxd3 e6"]
[WhiteElo "2200"]
[TimeControl "240+2"]
[Termination "adjudication"]
[PlyCount "52"]
[WhiteType "program"]
[BlackType "program"]
1. e4 c6 2. d4 d5 3. Nc3 dxe4 4. Nxe4 Bf5 5. Ng3 Bg6 6. h4 h6 7. Nf3 Nd7 8.
h5 Bh7 9. Bd3 Bxd3 10. Qxd3 e6 11. Bd2 {(Bc1-d2 Ng8-f6 O-O-O Bf8-e7 Kc1-b1
O-O Ng3-e4 c6-c5 Ne4xf6+ Nd7xf6 g2-g4 Nf6xg4 Rh1-g1 f7-f5 Qd3-e2 Qd8-d5
c2-c4) +0.45/17 10} Ngf6 12. O-O-O {(O-O-O Bf8-e7 Kc1-b1 O-O Ng3-e4 c6-c5
g2-g4 Nf6xg4 Qd3-e2 f7-f5 Ne4xc5 Nd7xc5 d4xc5 Qd8-d5 Bd2-c3 Qd5-e4 Rh1-e1)
+0.45/17 20} Be7 13. Kb1 {(Kc1-b1 O-O Ng3-e4 c6-c5 g2-g4 Nf6xg4 Qd3-e2
f7-f5 Ne4xc5 Nd7xc5 d4xc5 Qd8-d5 Bd2-c3 Qd5-e4 Rh1-e1 Qe4xe2) +0.52/16 10}
O-O 14. Ne4 {(Ng3-e4 Nf6xe4 Qd3xe4 Nd7-f6 Qe4-e2 Qd8-d5 Nf3-e5 Qd5-e4
Bd2-e3 Nf6-d5 Rh1-e1 Be7-b4 Be3-d2 Qe4xe2 Re1xe2 Bb4xd2 Re2xd2 Rf8-d8 g2-g3
Nd5-f6 g3-g4 Nf6-e4 Rd2-e2) +0.64/23 10} Nxe4 {(Nf6xe4 Qd3xe4 Nd7-f6 Qe4-d3
Qd8-d5 c2-c4 Qd5-e4 Bd2-e3 Rf8-d8 Nf3-e5 Qe4xg2 Rh1-g1 Qg2-h2 Be3xh6 Nf6xh5
Rg1-h1 Qh2xf2) -0.10/14 7} 15. Qxe4 {(Qd3xe4 Nd7-f6 Qe4-e2 Qd8-d5 Nf3-e5
Qd5-e4 Qe2xe4 Nf6xe4 Bd2-e3 Rf8-d8 g2-g4 Ra8-c8 f2-f3 Ne4-f6 c2-c4 Nf6-d7
Ne5-d3 b7-b5 c4-c5 Nd7-f6 Nd3-e5 Nf6-d5 Be3-c1) +0.41/23 30} Nf6 {(Nd7-f6
Qe4-d3 Qd8-d5 c2-c4 Qd5-e4 Bd2-e3 Rf8-d8 Nf3-e5 Be7-d6 Qd3xe4 Nf6xe4 Kb1-c2
c6-c5 Rh1-h4 c5xd4 Rh4xe4 d4xe3 Re4xe3 Bd6xe5 Re3xe5 Rd8xd1 Kc2xd1 Ra8-c8
Re5-a5 a7-a6 c4-c5 Kg8-f8 Kd1-e2 Kf8-e7 Ke2-e3 Rc8-d8 Ra5-a4) -0.10/16 16}
16. Qe2 {(Qe4-e2 Qd8-d5 Nf3-e5 Qd5-e4 Qe2xe4 Nf6xe4 Bd2-e3 Rf8-d8 g2-g4
Ra8-c8 f2-f3 Ne4-f6 c2-c4 Nf6-d7 Ne5-d3 b7-b5 c4-c5 Nd7-f6 Nd3-e5 Nf6-d5
Be3-c1) +0.35/21 18} Qb6 {(Qd8-b6 Nf3-e5 Rf8-d8 Bd2-e3 Nf6-d5 Rd1-d3 Be7-f6
Rd3-b3 Qb6-c7 Ne5-g4 Bf6-e7 Be3-d2 Nd5-f6 Rb3-g3 Nf6xg4 Rg3xg4 Be7-f6
Qe2-d3 Rd8-d5) -0.23/14 10} 17. g4 {(g2-g4 Nf6-h7 Rh1-g1 Ra8-d8 g4-g5 h6xg5
Nf3xg5 Be7xg5 Bd2xg5 Nh7xg5) +3.43/10 9} a5 {(a7-a5 g4-g5 h6xg5 Bd2xg5
Nf6-d5 Qe2-d3 Nd5-b4 Qd3-c4 Be7xg5 Nf3xg5 Qb6-d8 h5-h6 g7xh6 Ng5-e4 Kg8-h7
a2-a3 Nb4-d5 Rd1-g1 Qd8-b6) -0.52/12 9} 18. g5 {(g4-g5 h6xg5 Bd2xg5 a5-a4
h5-h6 g7-g6 h6-h7+ Kg8-h8 Qe2-e5 Qb6-d8 Bg5-h6 Ra8-a5 Bh6xf8 Be7xf8 d4-d5
Ra5xd5) +3.58/16 17} hxg5 {(h6xg5 Bd2xg5 Nf6-d5 Qe2-d3 Nd5-b4 Qd3-c4)
-1.27/12 9} 19. Bxg5 {(Bd2xg5 a5-a4 h5-h6 g7-g6 c2-c4 Qb6-d8 Nf3-e5 Nf6-h7
Bg5xe7 Qd8xe7 f2-f4 Qe7-f6 Qe2-e4) +4.16/13 8} Nd5 {(Nf6-d5 Qe2-d3 Nd5-b4
Qd3-c4 Be7xg5 Nf3xg5 Qb6-d8 h5-h6 b7-b5) -1.56/13 11} 20. Qd3 {(Qe2-d3
Nd5-b4 Qd3-d2 f7-f6 Bg5-e3 Qb6-b5 Nf3-h4 Kg8-f7 Nh4-g6 Rf8-d8 Ng6xe7 Kf7xe7
h5-h6) +4.89/13 8} Qa6 {(Qb6-a6 Qd3xa6 b7xa6 h5-h6 Be7xg5 h6-h7+ Kg8-h8
Nf3xg5 Ra8-b8 Rd1-d3 Rb8-b5 Ng5-e4 Nd5-f6 Ne4xf6 g7xf6 Rd3-c3 Rf8-b8 b2-b3
Rb8-c8) -1.76/13 18} 21. c4 {(c2-c4 Nd5-b4 Qd3-e2 Be7xg5 Nf3xg5 c6-c5 h5-h6
g7-g6 Qe2-e5 f7-f6 h6-h7+ Kg8-h8 Qe5xe6 Qa6xe6 Ng5xe6) +7.75/15 8} Nb4
{(Nd5-b4 Qd3-e2 Be7xg5 Nf3xg5 a5-a4 h5-h6 Qa6-a5 h6xg7 Kg8xg7 Ng5-e4 Rf8-d8
Rd1-g1+ Kg7-f8 a2-a3 Nb4-a6) -2.21/10 5} 22. Qe2 {(Qd3-e2 Be7xg5 Nf3xg5
c6-c5 h5-h6 g7-g6 Qe2-e5 f7-f6 h6-h7+ Kg8-h8 Qe5xe6 f6xg5 Qe6-e5+ Qa6-f6
Qe5xc5 Qf6-f5+ Qc5xf5) +10.16/17 2} Rae8 {(Ra8-e8 h5-h6 g7-g6 h6-h7+ Kg8-h8
Bg5-h6 Nb4-d5) -3.68/11 9} 23. Rdg1 {(Rd1-g1 c6-c5 Bg5-h6 Be7-f6 Bh6xg7
Bf6xg7 h5-h6 f7-f6 Rg1xg7+ Kg8-h8 Nf3-h4 Rf8-g8 Nh4-g6+) +10.72/13 17} Nd5
{(Nb4-d5 Bg5xe7 Nd5xe7 h5-h6 g7-g6 Nf3-e5 c6-c5 Ne5-d7 Qa6-d6 Nd7-f6+
Kg8-h8 Nf6xe8 Rf8xe8 Qe2-f3 f7-f5 d4xc5 Qd6xc5 Qf3xb7 Qc5xc4) -5.09/12 6}
24. Bh6 {(Bg5-h6 Be7-f6 Bh6xg7 Bf6xg7 h5-h6 Nd5-f4 Rg1xg7+ Kg8-h8 Qe2-e5
f7-f6 Qe5xf4 Qa6xc4 Qf4-e4 f6-f5 Qe4-e5 Qc4-d3+ Kb1-a1 Qd3xf3 Rh1-g1 Qf3-d5
Rg7xb7+) +12.01/21 16} c5 {(c6-c5 Bh6xg7 Nd5-c3+ b2xc3 Kg8-h7 Bg7xf8)
-9.17/12 18} 25. Bxg7 {(Bh6xg7 Nd5-f4 Qe2-e4 Nf4-g6 h5xg6 Kg8xg7 g6xf7+
Kg7xf7 Nf3-e5+ Kf7-f6 Rh1-h6+) +17.81/11 7} Kh7 {(Kg8-h7 Qe2-c2+ f7-f5
Bg7xf8 Re8xf8 c4xd5 Qa6-d6 d5xe6 Qd6xe6 h5-h6 Be7-f6 Qc2xc5 Rf8-c8 Qc5-b5
Qe6-e4+ Kb1-a1 Rc8-d8 Qb5-b6) -9.57/11 8} 26. Qc2+ {(Qe2-c2+ f7-f5 Bg7xf8
Re8xf8 c4xd5 e6xd5 Rg1-g6 Qa6-b5 Rh1-g1 Rf8-f7 Nf3-e5) +20.03/11 7} f5
{(f7-f5 Bg7xf8 Re8xf8 c4xd5 Qa6-d6 d5xe6 Qd6xe6 h5-h6 Rf8-g8 Rg1xg8 Qe6xg8
d4xc5 Qg8-f7 Qc2-d2 Be7-f6 Qd2xa5 Qf7-d7 Kb1-a1 Qd7-c6 Rh1-h3 Qc6-d7
Rh3-h1) -10.75/14 7 Arena Adjudication} 1-0
[/pgn]
-
- Posts: 25
- Joined: Wed Dec 05, 2018 8:51 pm
- Full name: Johannes Czech
Re: ClassicAra Chess Engine..World Record Download!!
Hello everyone,
I'm glad that some of you like the ClassicAra engine.
It seems there is some confusion about the engine, which I would like to clarify.
The option Threads currently describes the number of search threads which allocate the mini-batches.
For multi-GPU builds, this option is treated as threads per GPU, but because the option Threads has become a standard, I renamed it back to Threads. The current TCEC version uses the GPU build (TensorRT back-end) with 3 threads per GPU.
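For reference, this is essentially what any UCI GUI does under the hood when you change the Threads option; the command stream can also be typed or piped by hand. A minimal sketch (the engine binary path in the comment is a placeholder, not the actual file name of any ClassicAra release):

```shell
# Build the UCI command stream that pins the engine to one search thread.
# In practice it is piped into the engine binary, e.g.:
#   printf '%s\n' "$uci_cmds" | ./ClassicAra      # binary path is a placeholder
uci_cmds='uci
setoption name Threads value 1
isready
quit'
printf '%s\n' "$uci_cmds"
```

`setoption name <id> value <x>` is the standard UCI syntax, which is why GUIs like Arena and Cute Chess can set the option without knowing anything engine-specific.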
The script update.sh was used to build ClassicAra on the TCEC multi-GPU Linux server. The TCEC version does indeed use the new RISE 3.3 architecture. The RISE 3.3 model was trained on the same dataset (KingBase Lite 2019) and has not yet been further optimized with reinforcement learning.
There are also some threads running by default:
- A main thread which handles user input commands over stdin.
- A thread manager which logs the current best move to the console every second, stops the search threads when the stop command is given, and handles time management.
- A garbage collector thread which asynchronously frees the memory from the previous search during the current search.
It is possible to use only a single thread for CPU-based neural network inference. For this you need to define the environment variable OMP_NUM_THREADS and set it to 1. During neural network inference, the search thread will be idle and wait for the inference result.
Hopefully, there will be a more user-friendly way of defining this in the future.
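As a concrete sketch of that workaround: set the variable in the shell (or in a wrapper script that the GUI launches) before starting the engine. The binary name in the comment is a placeholder for whichever CPU build you downloaded:

```shell
# Limit OpenMP/MKL inference to a single thread, then launch the engine.
# "./ClassicAra" below is a placeholder for your actual CPU binary.
export OMP_NUM_THREADS=1
echo "OMP_NUM_THREADS is now $OMP_NUM_THREADS"   # verify before launching
# exec ./ClassicAra
```

Because the variable must be in the engine process's environment, setting it in the GUI's UCI dialog has no effect; it has to be exported before the engine process starts.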
When I tried using int8 precision for ClassicAra 0.9.0 on Windows and Mac for the CPU version, it crashed.
For the Linux version, however, I managed to build a newer MXNet CPU back-end and it was running.
The crash might also depend on the CPU or a system library.
So if the CPU-only binary for Windows using int8 precision does not crash on start-up and runs faster than float32 precision, then it seems to be working.
I had hoped to publish new binaries by now. However, the integration of a fully asynchronous garbage collection made the engine no longer 100% stable. I added a hotfix to make it 99.9% stable before the TCEC submission, but I'm not satisfied with the current solution yet.
After I have found a better solution for this problem, I will provide new binaries.
-
- Posts: 6363
- Joined: Mon Mar 13, 2006 2:34 pm
- Location: Acworth, GA
Re: ClassicAra Chess Engine..World Record Download!!
IQ_QI wrote: ↑Thu May 20, 2021 11:11 pm

Thank you for such an interesting engine!

"Good decisions come from experience, and experience comes from bad decisions."
__________________________________________________________________
Ted Summers