SPCC: New Super 3 Tournament started

Discussion of computer chess matches and engine tournaments.

Moderator: Ras

pohl4711
Posts: 2679
Joined: Sat Sep 03, 2011 7:25 am
Location: Berlin, Germany
Full name: Stefan Pohl

SPCC: New Super 3 Tournament started

Post by pohl4711 »

https://www.sp-cc.de/super3_tournament.htm

An endless round-robin tournament with 3 engines, which are at the same level of strength (around 3400 Elo) but are completely different in their inner structure and their way of thinking.
Why? The strongest engine for more than a decade (Stockfish) is open source, so many, many other engines are (at least) "strongly inspired" by Stockfish... And additionally, a lot of engines (including Stockfish!) use Lc0 training data for building their neural nets: high-end computer chess has become very incestuous...
So, IMHO, it is very interesting to run a tournament with engines that are completely different, not only in their playing style but also in their inner structure and way of thinking, while still being at a close level of playing strength.
The Super 3 tournament is not about the results, but about generating interesting engine games.

Code:

             | Search    | Eval         | nps (early middlegame)
-------------|-----------|--------------|--------------------------------------
Lc0 CPU      | MCTS      | float-neural |      1,100
Revenge 1.0  | AlphaBeta | int-neural   | 11,000,000 (10,000x faster than Lc0)
Komodo 14.1  | AlphaBeta | Handcrafted  | 19,000,000 (17,300x faster than Lc0)
Hardware: AMD Ryzen 7840HS 8-core (16 threads) notebook with 32GB RAM. Turbo Boost off.
Speed: See above. Each engine uses 14 threads when thinking (the Lc0 CPU dnnl build has the UCI option "Threads" like any normal CPU engine, so it uses the CPU like all other engines); the GPU stays (of course) unused.
Hash: 8 GB per engine (NNCacheSize of 20,000,000 for Lc0, enough to store all evaluated positions of a complete game)
GUI: Cutechess GUI (the GUI ends the game when a 6-piece endgame is on the board; all other games are played until mate or a draw by the rules of chess (threefold repetition, 50-move rule, stalemate))
Tablebases: none for the engines, 6-piece Syzygy for the Cutechess GUI
Openings: My UHO_2024_8mvs_+085_+094.pgn openings are used (randomly mixed; each opening is repeated with reversed colors, of course (= game pairs))
Ponder, Large Memory Pages & learning: Off
Thinking time: 10 min + 5 sec per game/engine (average game duration: 30 minutes), so only 50 games are played in 24 hours = high-quality engine chess
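
For reference, a rough cutechess-cli equivalent of this setup, sketched in Python. This is an assumption-laden approximation rather than the actual configuration used above: engine commands, the round count and the output file are placeholders, and the option names (Threads, Hash, NNCacheSize) and cutechess-cli flags should be checked against your own builds. The GUI's 6-piece adjudication is omitted.

Code:

# Sketch only: builds and runs a cutechess-cli command roughly matching
# the settings above. Paths and option names are placeholders.
import subprocess

ENGINES = [
    # (display name, command, extra UCI options)
    ("Lc0-CPU",     "./lc0",     {"Threads": 14, "NNCacheSize": 20000000}),
    ("Revenge-1.0", "./revenge", {"Threads": 14, "Hash": 8192}),
    ("Komodo-14.1", "./komodo",  {"Threads": 14, "Hash": 8192}),
]

cmd = ["cutechess-cli", "-tournament", "round-robin",
       "-each", "proto=uci", "tc=600+5",        # 10 min + 5 sec per game/engine
       "-openings", "file=UHO_2024_8mvs_+085_+094.pgn",
       "format=pgn", "order=random",
       "-repeat", "-games", "2",                # each opening with reversed colors
       "-rounds", "1000",                       # "endless" round-robin
       "-pgnout", "super3.pgn"]

for name, path, options in ENGINES:
    cmd += ["-engine", f"name={name}", f"cmd={path}"]
    cmd += [f"option.{k}={v}" for k, v in options.items()]

subprocess.run(cmd, check=True)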


(Perhaps you have to clear your browser cache with <Ctrl>+<Shift>+<Delete> to reload the graphics/diagrams on my website)
jorose
Posts: 373
Joined: Thu Jan 22, 2015 3:21 pm
Location: Zurich, Switzerland
Full name: Jonathan Rosenthal

Re: SPCC: New Super 3 Tournament started

Post by jorose »

I find it somewhat sad that you complain that engines at the top are incestuous and proceed to pick 2 engines that are not open source, so we don't even really know what is inside them.

Do we even know for sure that Revenge is not based on Leela data at all? I remember Fabio mentioning that he used Pedone evals to improve the labels, but that doesn't imply he did not use Leela data...

Your "Eval" column is also kind of funny to me; did you not want to specify the architectures used, instead of mentioning that revenge is using a quantized net? Also, did Fabio ever even confirm this for version 1.0? I think most devs started with floating point based nets initially, though it is possible Revenge was ahead of the curve in copying Stockfish as closely as possible. Again, his code has always been closed source, so I have no idea.

For future reference, you can write NNUE for Revenge, and Leela versions for quite a while now have been using Transformers.
-Jonathan
pohl4711
Posts: 2679
Joined: Sat Sep 03, 2011 7:25 am
Location: Berlin, Germany
Full name: Stefan Pohl

Re: SPCC: New Super 3 Tournament started

Post by pohl4711 »

jorose wrote: Wed Jul 03, 2024 9:03 am

Your "Eval" column is also kind of funny to me; did you not want to specify the architectures used, instead of mentioning that revenge is using a quantized net? Also, did Fabio ever even confirm this for version 1.0? I think most devs started with floating point based nets initially, though it is possible Revenge was ahead of the curve in copying Stockfish as closely as possible. Again, his code has always been closed source, so I have no idea.
All NNUE nets use integer calculations instead of floating point, otherwise they would run a lot, lot, lot slower than they do. This is the reason why NNUE nets can be used in a fast alpha-beta search. Otherwise Lc0 would still be the only neural-net engine.

And whether Revenge 1 uses Lc0 data for its net or not does not matter in this tournament: first of all, all 3 engines are extremely different in their way of calculating, and second, Revenge 1 plays a very unique, very aggressive style (Revenge 1 has one of the highest EAS scores of all strong engines).
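
To make the integer-vs-float point concrete, here is a minimal, generic quantization sketch in Python/numpy. It is not the code of Revenge, Stockfish or any other engine (their real networks are larger and tuned much more carefully); it only shows how float weights can be mapped to 8-bit integers and the dot products accumulated in 32-bit integers.

Code:

# Generic int8 quantization sketch -- illustrative only, not any
# particular engine's actual network code.
import numpy as np

rng = np.random.default_rng(0)
W = rng.normal(scale=0.1, size=(16, 64)).astype(np.float32)  # float weights
x = rng.normal(size=64).astype(np.float32)                   # float input

y_float = W @ x                      # float reference result

# Quantize weights and input to int8 with simple per-tensor scales.
w_scale = np.abs(W).max() / 127.0
x_scale = np.abs(x).max() / 127.0
W_q = np.round(W / w_scale).astype(np.int8)
x_q = np.round(x / x_scale).astype(np.int8)

# Integer matmul accumulated in int32, then rescaled back to float.
y_int = (W_q.astype(np.int32) @ x_q.astype(np.int32)) * (w_scale * x_scale)

print(np.max(np.abs(y_float - y_int)))  # small quantization error

numpy will not show the speed benefit; on real CPUs the win comes from SIMD instructions that process many small integers per cycle, plus the incremental first-layer update discussed further down in the thread.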
mar
Posts: 2641
Joined: Fri Nov 26, 2010 2:00 pm
Location: Czech Republic
Full name: Martin Sedlak

Re: SPCC: New Super 3 Tournament started

Post by mar »

pohl4711 wrote: Wed Jul 03, 2024 9:13 am
jorose wrote: Wed Jul 03, 2024 9:03 am

Your "Eval" column is also kind of funny to me; did you not want to specify the architectures used, instead of mentioning that revenge is using a quantized net? Also, did Fabio ever even confirm this for version 1.0? I think most devs started with floating point based nets initially, though it is possible Revenge was ahead of the curve in copying Stockfish as closely as possible. Again, his code has always been closed source, so I have no idea.
All NNUE nets use integer calculations instead of floating point, otherwise they would run a lot, lot, lot slower than they do. This is the reason why NNUE nets can be used in a fast alpha-beta search. Otherwise Lc0 would still be the only neural-net engine.
I started with floating point as well, it's just natural (if you go from scratch). lot, lot, lot - really? you can vectorize floats easily; only once you start using 16 bits (or fewer) do you start to see a nice speedup (not "lot, lot, lot" - whatever measure that's even supposed to be)
the primary reason to switch to integers for me was not performance (I did 16-bit quantization with saturated relu much later),
but rather output stability - with floats the order of ops does matter - and I don't like roundoff errors
the real reason why NNUE is viable is the layer-1 cache (aka the "UE", the efficiently updatable part) - and you can even do without it for much smaller nets, as my first implementations did.
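
For readers who have not seen the "UE" (efficiently updatable) idea referred to here, a minimal Python sketch with made-up feature indices and sizes, not any engine's real feature set: instead of recomputing the first layer after every move, the accumulator is patched by subtracting the weight columns of features that disappeared and adding those that appeared.

Code:

# Minimal sketch of an "efficiently updatable" first layer.
# Feature indices and sizes are made up; real engines use e.g.
# (king square, piece, square) features and carefully sized int16 accumulators.
import numpy as np

N_FEATURES, HIDDEN = 1000, 256
rng = np.random.default_rng(1)
W1 = rng.integers(-64, 64, size=(N_FEATURES, HIDDEN), dtype=np.int16)

def full_refresh(active_features):
    """Recompute the accumulator from scratch: sum of active feature columns."""
    acc = np.zeros(HIDDEN, dtype=np.int16)
    for f in active_features:
        acc += W1[f]
    return acc

def incremental_update(acc, removed, added):
    """Apply a move: subtract features that disappeared, add the new ones."""
    for f in removed:
        acc = acc - W1[f]
    for f in added:
        acc = acc + W1[f]
    return acc

active = {3, 42, 500, 731}            # features active before the move
acc = full_refresh(active)

# The move removes feature 42 and adds feature 77.
acc = incremental_update(acc, removed=[42], added=[77])
assert np.array_equal(acc, full_refresh((active - {42}) | {77}))

Because the arithmetic is integer, the incremental path and a full refresh stay bit-identical regardless of the order of operations - which is exactly the output-stability point made above.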

as for bashing open source engines - this kind of became a hobby in itself...
while I respect that some people want to keep their source code closed, how do you know what's under the hood?
you don't, so claiming originality just because some engine is closed source - only if you're clairvoyant
a typical counterexample would be houdini
jorose
Posts: 373
Joined: Thu Jan 22, 2015 3:21 pm
Location: Zurich, Switzerland
Full name: Jonathan Rosenthal

Re: SPCC: New Super 3 Tournament started

Post by jorose »

The three are quite different for sure, and I know you like Revenge, so that is fine. It's also your tournament, so it doesn't need to be fine from my perspective.

Not all CPU neural network based engines are integer based. Even if it might be better, it is simply factually incorrect to say they all are.

Again, it is a minor detail compared to the fact that one is using a Transformer and the other is using an NNUE based architecture, which is what your table should say.
-Jonathan
pohl4711
Posts: 2679
Joined: Sat Sep 03, 2011 7:25 am
Location: Berlin, Germany
Full name: Stefan Pohl

Re: SPCC: New Super 3 Tournament started

Post by pohl4711 »

mar wrote: Wed Jul 03, 2024 9:35 am as for bashing open source engines - this kind of became a hobby in itself...
while I respect that some people want to keep their source code closed, how do you know what's under the hood?
you don't, so claiming originality just because some engine is closed source - only if you're clairvoyant
a typical counterexample would be houdini
Are you kidding me? I am the biggest fan of open source and free software: all of my tools (EAS Tool, Interesting Wins Search Tool etc.) are free and open source. All of my openings (UHO!) are free and open.
But it is normal that everybody looks into the code of the strongest engine on the planet when it is open, and takes ideas from there (or from other, weaker open engines when they have nice, clean code (Fruit, for example)). I never said that this is a bad thing (or a good thing). It is just the reality we live in today in high-end computer chess. Describing reality is not bashing!

And, finally, I never "claimed originality because an engine is closed source" or something like that.
mar
Posts: 2641
Joined: Fri Nov 26, 2010 2:00 pm
Location: Czech Republic
Full name: Martin Sedlak

Re: SPCC: New Super 3 Tournament started

Post by mar »

well, you claim "but are completely different in their inner structure and their way of thinking" - two of them are closed source.
so how can you possibly know their inner structure? this seems like an originality claim to me as well.
as for "many others are strongly inspired by stockfish" - I assumed this second remark was targeted at open source engines,
because, again, you cannot possibly know how well this holds for closed source engines, except for proven clones.
jorose
Posts: 373
Joined: Thu Jan 22, 2015 3:21 pm
Location: Zurich, Switzerland
Full name: Jonathan Rosenthal

Re: SPCC: New Super 3 Tournament started

Post by jorose »

I am mostly fascinated that you start with this:
The strongest engine for more than a decade (Stockfish) is open source, so many, many other engines are (at least) "strongly inspired" by Stockfish... And additionally, a lot of engines (including Stockfish!) use Lc0 training data for building their neural nets: high-end computer chess has become very incestuous...
At least to me, when you make this statement you sound like you are implying that you are making an effort to support development of more original engines.

And then it turns out you are making a tournament where you could replace 2 of the engines with different Stockfish versions (1 pre and 1 post NNUE) and succeed equally in your proclaimed goal of having engines with different internal structures, while actually knowing for sure what went into them.

There are definitely engines that break the mold. E.g. KMCTS uses a hand-crafted eval combined with a hybrid of MCTS and AB search. Winter uses a completely different, GNN-based architecture, though I suppose you would categorize that as "float-neural". Drofa has a hybrid evaluation function, with a neural net for pawn structure evaluation integrated into a hand-crafted evaluation function. I am sure there are many more examples of recent engines that I am missing off the top of my head.
-Jonathan
pohl4711
Posts: 2679
Joined: Sat Sep 03, 2011 7:25 am
Location: Berlin, Germany
Full name: Stefan Pohl

Re: SPCC: New Super 3 Tournament started

Post by pohl4711 »

jorose wrote: Wed Jul 03, 2024 9:41 am The three are quite different for sure, and I know you like Revenge, so that is fine. It's also your tournament, so it doesn't need to be fine from my perspective.

Not all CPU neural network based engines are integer based. Even if it might be better, it is simply factually incorrect to say they all are.
This will be my final post to this topic:

I looked into Dominik Klein's excellent book "Neural Networks for Chess", which I own as a printed book.
Here the pdf:
https://github.com/asdfjkl/neural_netwo ... s/tag/v1.1

From there (page 206, about NNUE-nets):
"The first step in optimizing the network for fast computation with a CPU was thus to forget about floating point numbers. Each intermediate value as well as the weights are therefore expressed as 8 bit integer values."

Page 209:
"We can even make this faster by using SIMD instructions VPADDW and VPSUBW that bulk add or subtract values and are depicted in Figure 4.19. They come for different bit sizes; for simplicity assume they operate on 128 bit values. The bits of each 128 bit input are split into 16 bit subwords.
Each of these 16 bit subword is interpreted as an integer value in the range of 0 . . . 2 16 − 1 = 65535"
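
To illustrate the quoted SIMD passage, a tiny Python/numpy stand-in (illustrative only; real NNUE code uses the intrinsics directly, and as the page 206 quote says, the weights themselves are 8-bit with wider accumulators): a 128-bit register viewed as eight 16-bit lanes, added or subtracted in bulk.

Code:

# Illustrative only: numpy stands in for SIMD registers here.
# A 128-bit register holds eight 16-bit lanes; VPADDW/VPSUBW add or
# subtract all lanes with one instruction, which numpy mimics elementwise.
import numpy as np

acc   = np.array([100, 200, 300, 400, 500, 600, 700, 800], dtype=np.uint16)
delta = np.array([  1,   2,   3,   4,   5,   6,   7,   8], dtype=np.uint16)

print(acc + delta)   # one "VPADDW"-like bulk add of 8 lanes
print(acc - delta)   # one "VPSUBW"-like bulk subtract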
jorose
Posts: 373
Joined: Thu Jan 22, 2015 3:21 pm
Location: Zurich, Switzerland
Full name: Jonathan Rosenthal

Re: SPCC: New Super 3 Tournament started

Post by jorose »

This is like arguing that you have three vehicles. One of them is a ship and the other two are "non-ship, curved windshield" and "non-ship, flat windshield".

It turns out the "non-ships" are an airplane and a Tesla Cyber Truck respectively. Upon an automotive engineer explaining that the important part is whether it is an airplane or a truck and also pointing out that trucks can have curved windshields, you reference Elon Musk's twitter, declaring that the windshield being flat is of utmost importance.
-Jonathan