Here is a list of the engines I use most often -- in no particular order -- and whose style I like, with some comments. All these engines had to solve a few positional tests (the choice was very subjective, though) before I adopted them, and to make a good impression when analyzing.
A first group of mature engines:
SlowChess -- nice GUI, reliable evaluation, multiPV.
Wasp -- UCI_LimitStrength, multiPV... very comfortable, different, but I am still testing the latest version. v4.50 was good, while I was not satisfied with versions 2.01-4.00, and I feel that John can improve the NN. I prefer the name "Zarkov", though.
Dark Toga 1.1 -- In general, I find Dieter's nets very interesting. Harmon cannot be used for analysis, since it is able to blunder heavily, but Dark Horse and The White Rose are different and refreshing. I use them both with Fire 8.NN. I did not try the Frosty net, although I liked Ice.
Komodo 8 -- very positional, probably the best evaluation of the free Komodos. No UCI_LimitStrength, though, and tactically weaker than, e.g., K9.
Then, a few engines that unfortunately have almost no features or options, but which I like a lot:
Orion 0.8 -- a fine engine, tactically and positionally very strong. It seems the net is based on the SF 12 evaluation, something I don't really like; I would prefer a more original approach. I hope David is still working on it.
Seer 2.3 -- Unfortunately, Seer crashes when I try to analyze a FEN, and it has no options, but the evaluation is different, and very good in simple positions. I hope Connor will fix this bug.
Winter -- A nice, positional engine -- unfortunately, Jonathan seems to have almost stopped developing it.
I tried several other engines worth mentioning: Marvin, Zahak, Berserk, Koivisto... but I still need to test them. Sugar and ShashChess are worth mentioning too -- although I would prefer the learning function to be implemented in original engines. In general, they are more user-friendly than SF.
Which are your favorite engines?
Your favorite engines?
Moderator: Ras
-
- Posts: 367
- Joined: Mon May 14, 2007 8:20 pm
- Full name: Boban Stanojević
-
- Posts: 512
- Joined: Tue Sep 29, 2020 4:29 pm
- Location: Dublin, Ireland
- Full name: Madeleine Birchfield
Re: Your favorite engines?
Unfortunately, unless a feature can be shown to gain Elo on Fishtest, it is very unlikely to be implemented in Stockfish. It took them until late in the Stockfish 14 development cycle to stop setting default contempt to 24, which might be good for defeating weaker engines at TCEC and weaker versions of Stockfish on Fishtest, but is absolutely terrible for actual analysis by chess players (especially on Lichess, which is the most widely used instance of Stockfish). And it only happened because completely getting rid of contempt gained Elo for Stockfish.
-
- Posts: 1632
- Joined: Tue Aug 21, 2018 7:52 pm
- Full name: Dietrich Kappe
Re: Your favorite engines?
My grandfather never liked that my father had given me such an old-fashioned name as Dietrich. He tried to talk me into going by "Dieter." My dad didn't like that much, so it never stuck.

Can you give some examples of where Harmon blunders?
I myself really like Dragon Dev.

Fat Titz by Stockfish, the engine with the bodaciously big net. Remember: size matters. If you want to learn more about this engine just google for "Fat Titz".
-
- Posts: 512
- Joined: Tue Sep 29, 2020 4:29 pm
- Location: Dublin, Ireland
- Full name: Madeleine Birchfield
Re: Your favorite engines?
Course you would, you train their nets for them.
The biggest issue with Dragon is that it costs over a hundred dollars to get a copy. I personally do not like commercial engines very much (e.g. Fritz, Ethereal, Revenge, etc.). I will most likely wait until Dragon 3 is released and Dragon 1 is made free like older Komodo versions before getting a copy of Dragon.
-
- Posts: 367
- Joined: Mon May 14, 2007 8:20 pm
- Full name: Boban Stanojević
Re: Your favorite engines?
Sorry, Dietrich: trying to write decently in English and then making such an error! I hope you will accept my sincere apologies.
Yes, of course I can give an example for the Harmon net:
[fen]rn2k2r/2qpbppp/p3pP2/1p6/3N4/P1N3P1/1PP2PbP/R1BQ1RK1 b kq -[/fen]
In this position, Harmon gives Bf6 as best for black -- white's position is winning anyway, but Bf1 is chosen not only by humans, but also by all other engines.
-
- Posts: 1632
- Joined: Tue Aug 21, 2018 7:52 pm
- Full name: Dietrich Kappe
Re: Your favorite engines?
One side effect of the really big nets like Dragon 2.5 and SF 14.1 is that you now need at least 10b positions to properly train them. SF can rely on its Leela farm team for the bulk of its data, and Dragon has gone through at least 20b so far for its reinforcement learning (RL). But few of the other NNUE engines have made that jump. It's just too expensive. That's why you won't see any more free nets from me -- time and cost are prohibitive.
Madeleine Birchfield wrote: ↑Wed Nov 24, 2021 12:22 am
The biggest issue with Dragon is that it costs over a hundred dollars to get a copy. I personally do not like commercial engines very much (e.g. Fritz, Ethereal, Revenge, etc.). I will most likely wait until Dragon 3 is released and Dragon 1 is made free like older Komodo versions before getting a copy of Dragon.
I suspect you might see some more commercial engines emerge to underwrite net training. That, or data center operators with deep pockets will have to donate a lot more resources.

-
- Posts: 367
- Joined: Mon May 14, 2007 8:20 pm
- Full name: Boban Stanojević
Re: Your favorite engines?
Madeleine,
Minic has an unfortunate bug that makes it difficult to use on my system; otoh, my laptop is too weak for Leela CPU. I did not try Halogen; I did test Weiss, but I don't remember what I didn't like about it. In general, I don't like engines with a fast search -- just personal taste, probably irrational.
Madeleine Birchfield wrote: ↑Wed Nov 24, 2021 12:16 am
A few others I would add to that list: Leela CPU, Halogen, Arasan, Minic, and Igel.
-
- Posts: 1632
- Joined: Tue Aug 21, 2018 7:52 pm
- Full name: Dietrich Kappe
Re: Your favorite engines?
No problem.
matejst wrote: ↑Wed Nov 24, 2021 12:28 am
Sorry, Dietrich: trying to write decently in English and then making such an error! I hope you will accept my sincere apologies.
Yes, of course I can give an example for the Harmon net:
[fen]rn2k2r/2qpbppp/p3pP2/1p6/3N4/P1N3P1/1PP2PbP/R1BQ1RK1 b kq -[/fen]
In this position, Harmon gives Bf6 as best for black -- white's position is winning anyway, but Bf1 is chosen not only by humans, but also by all other engines.
The Harmon net was initially trained purely on game outcomes. While it played a very aggressive and entertaining style, it clearly didn’t understand the value of the pieces. In order to remedy that, but not pollute it with another engine’s eval, I used a naive material q-search to score the positions. That seemed to help.
Maybe it didn’t help enough in all places.
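A material-only scorer of this sort is easy to sketch. The snippet below is a hypothetical illustration only: the piece values, the FEN parsing, and the pluggable `gen_captures` callback are my own assumptions, not the actual code used to train Harmon.

```python
# Hypothetical sketch of labeling training positions with a naive material
# q-search. Piece values and FEN handling are illustrative assumptions.

PIECE_VALUES = {"p": 1, "n": 3, "b": 3, "r": 5, "q": 9, "k": 0}

def material_score(fen):
    """Material balance in pawns, from the side to move's perspective."""
    fields = fen.split()
    board, side = fields[0], fields[1]
    score = 0
    for ch in board:  # digits and '/' in the placement field are skipped
        if ch.lower() in PIECE_VALUES:
            value = PIECE_VALUES[ch.lower()]
            score += value if ch.isupper() else -value
    return score if side == "w" else -score

def material_qsearch(fen, alpha, beta, gen_captures, depth=8):
    """Negamax quiescence over captures only; the stand-pat score is pure
    material. gen_captures(fen) must yield the FEN after each capture."""
    stand_pat = material_score(fen)
    if depth == 0:
        return stand_pat
    if stand_pat >= beta:          # stand-pat cutoff
        return beta
    alpha = max(alpha, stand_pat)
    for child in gen_captures(fen):
        score = -material_qsearch(child, -beta, -alpha, gen_captures, depth - 1)
        if score >= beta:
            return beta
        alpha = max(alpha, score)
    return alpha
```

With no captures available, the search simply returns the static material count; in the position quoted above, material happens to be level, so such a scorer would label it 0 despite white's winning position.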
-
- Posts: 367
- Joined: Mon May 14, 2007 8:20 pm
- Full name: Boban Stanojević
Re: Your favorite engines?
Dietrich, I thought Leela's data were free, and I see no reason why other authors could not use them. You yourself made noticeably different nets using the same data -- I think someone could make even better use of these data and create an original, different engine.
dkappe wrote: ↑Wed Nov 24, 2021 12:35 am
One side effect of the really big nets like Dragon 2.5 and SF 14.1 is that you now need at least 10b positions to properly train them. SF can rely on its Leela farm team for the bulk of its data, and Dragon has gone through at least 20b so far for its reinforcement learning (RL). But few of the other NNUE engines have made that jump. It's just too expensive. That's why you won't see any more free nets from me -- time and cost are prohibitive.
I suspect you might see some more commercial engines emerge to underwrite net training. That, or data center operators with deep pockets will have to donate a lot more resources.