If you purchased Ethereal 14.00, or paid the upgrade price in advance from a previous version, then you should have received an email by now containing links to download the Ethereal 14.25 binaries and networks. If you have not, and cannot find anything from andrewgrantethereal@gmail.com, then please reach out to me. That is best done via my direct email (andrew@grantnet.us) or Discord (agethereal), not through talkchess.
As all of you surely know, Torch is the major engine I am working on now. I am exceptionally lucky that Chess.com decided they wanted to take a stab at things themselves, and even luckier to have this chance to lead a small but dedicated team of developers whom you all know and love. So of course my focus is on Torch and less so on Ethereal, but not entirely. Torch has plenty of secret sauce, but in pursuing it I have learned a great deal that can be applied to Ethereal, and has been. I also now have the financial means to invest in generating data for Ethereal's networks, and in tuning Ethereal's search with SPSA.
That being said... Ethereal has fallen slightly out of the game. For a lengthy period, Ethereal aggressively acted as a gatekeeper between the top engines (Stockfish, Leela, Dragon) and virtually everyone else. Recently Berserk has joined Ethereal in this position, and now possibly Rubi has too. I hope to get back into the swing of things and reassert that strength, if time and energy allow. If not, then a temporary... passing of the Torch... is nothing to be upset about.
No one ever leaves computer chess, which means that Ethereal will not be going anywhere anytime soon.
Changes since the 14.00 Release:
Code: Select all
14.01 Train with 8x32 instead of 16x16 for the final layers
14.02 Train with 2x1024 inputs instead of 2x512
14.03 Same as the above, but with combined 20b positions up from 10b
14.04 Split the accum accuracy tracking by colour
14.05 Prefetch even earlier (Thanks Jay)
14.06 Speedup recursive update checking for NNUE
14.07 Perform negative singular extensions when tt-value <= alpha
14.08 Remove the PONDERLOCK variable
14.09 Optimize by not copying the accum on Null moves
14.10 Introduce Cutnodes, and perform Ed's IIR
14.11 Implement Finny tables to optimize Accum updates
14.12 Reduce from 2x1024 to 2x768, with additional data
14.13 Update default fischer network with new architecture
14.14 Fix build issue for pure-HCE versions
14.15 Fix build issues with pthreads on Windows
14.16 Don't overwrite the tt-move on upper bounds
14.17 Introduce adversarial data against Stockfish
14.18 Add links to chess.grantnet.us/Ethereal to the README
14.19 Don't accept tt cutoffs in all nodes
14.20 Update default network with another 777m Ethereal vs Stockfish data
14.21 Update Bench commit message
14.22 Update default network with another 1b Ethereal vs Stockfish data
14.23 Update Bench commit message
14.24 New default network, with 100 epochs of reduced LR
14.25 SPSA Tuning (61k iterations) and Eval Normalization
14.25 Fix bug when loading different NNUE weights
14.25 Additional SPSA Tuning (121k iterations)
Code: Select all
Standard, using Stefan Pohl's UHO book:
Elo | 61.49 +- 3.74 (95%)
Conf | 60.0+0.60s Threads=1 Hash=64MB
Games | N: 16750 W: 5690 L: 2756 D: 8304
http://chess.grantnet.us/test/34078/
Code: Select all
Fischer Random Chess using a very shallow book:
Elo | 30.12 +- 2.60 (95%)
Conf | 60.0+0.60s Threads=1 Hash=64MB
Games | N: 15736 W: 2518 L: 1157 D: 12061
http://chess.grantnet.us/test/34077/