This classification is just another way for you to smear "option 2" engines, of course. Why do you have this urge to label others who don't do it "your way"? You are stuck on the illusion that your engine is the most unique in the world, when in reality it is not.

AndrewGrant wrote: ↑Sun Oct 11, 2020 7:40 am
> smatovic wrote: ↑Sun Oct 11, 2020 7:09 am
> > Hmm, how many Mega-Watt-hours went into the datasets of CCRL, chessdbcn, Lc0, etc.? How to catch up as an independent developer without a CPU/GPU cluster? Fair competition vs. originality? Only the big players are able to pay the price for originality? Opening/sharing the datasets for free use, and setting up ways for own tuning, learning, or implementation of NNs, seems a way to go for me, or alike. There are plenty of varieties possible even with the same datasets, or with NNUE as the architecture, imho.
>
> Yes -- exactly -- you've hit the nail on the head. I see two possible options, which are able to be consistent with their own rules.
>
> [Option 1] Hold a tournament where you force uniqueness and originality. Do this down to the bone. This means that engines cannot use the Leela backends, except for Leela. Engines cannot use NNUE as of now, except for Stockfish. Any networks, and even the main evaluation, need to be tuned on original data. This would make me happy, but it's not going to happen. However, it is consistent. TCEC applies one set of rules for alpha-beta engines and another set for the GPU engines, due to their implicit $$$ bias.
>
> [Option 2] Everything is up in the air. If you can provide an engine which has a large unique component, then it can compete. This means Allie can compete, as her search is different from Leela's, even if they use the same networks and backends. This means that all engines are free to use NNUE as a boost and share the same net. This means you can train on anything and everything. It's truly open, and allows unhinged innovation and building upon the work of others. Tournaments and rating lists would be responsible for figuring out what they believe is special enough to run.
I propose a third option, which I think results in the most original engines and is "down to the bone", unlike your option 1.
The criterion is that an engine should not be strongly influenced by another engine.
How many engines do all the pruning and reductions that Stockfish does, step by step?

Step 1: we do futility pruning, and gained this much Elo.
Step 2: we do probcut, and gained +10 Elo according to 10,000 games, etc.
Step 3: we do LMR using a table.
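To make concrete what copying those steps means, here is a minimal sketch of the three heuristics as they commonly appear in alpha-beta engines. Every number in it (the futility margin, the depth limit, the probcut margin, the LMR formula constants) is a made-up illustrative value, not Stockfish's actual parameters:

```cpp
#include <cmath>

// Step 1: futility pruning -- near the leaves, skip a quiet move when the
// static eval plus a depth-scaled margin still cannot reach alpha.
// Margin and depth limit are assumed example values.
bool futile(int staticEval, int alpha, int depth) {
    const int marginPerPly = 100;  // centipawns, illustrative
    return depth <= 3 && staticEval + marginPerPly * depth <= alpha;
}

// Step 2: probcut -- if a shallow search of a capture already beats beta
// by a margin, assume the full-depth search would fail high too.
// Only the raised threshold is sketched here; margin is illustrative.
int probcutBeta(int beta) {
    return beta + 150;
}

// Step 3: late-move reductions via a table indexed by depth and move
// number, precomputed once at startup from a log-based formula.
int lmrTable[64][64];

void initLMR() {
    for (int d = 1; d < 64; ++d)
        for (int m = 1; m < 64; ++m)
            lmrTable[d][m] =
                static_cast<int>(0.5 + std::log(d) * std::log(m) / 2.0);
}
```

An engine that reproduces all three of these, with the same shapes and tuned against the same reference, is doing exactly the step-by-step imitation described above.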
Taking ideas is one thing; implementing all of another engine's ideas step by step is another.
Note that I am not opposed to this either way, but I value those who strive to come up with original ideas, whether they are based upon existing work or not.
Have you ever heard of the motto "more science, less programming"?

AndrewGrant wrote:
> This lets people dabble in using all of the latest and greatest, but also sets aside a special place for authors who are reinventing the wheel on their own, producing far more diverse chess entities. If style is a real concept, style exists in Option 1 but not Option 2.
Reinventing the wheel does not make your engine original. You would simply be wasting your time redoing something instead of focusing on the most important aspects of your project. There is a lot of science to be done with neural networks, supervised and reinforcement learning, and yet you focus on the programming aspect so much. This kind of thinking is typical of a computer programmer, but chess programming is not only about programming.