Madeleine Birchfield wrote: ↑Wed Sep 30, 2020 2:56 am
There are actually four levels of originality. There are completely original engines, where the search and evaluation are built from the ground up, or have changed so much from an existing engine that they no longer resemble the original. There are neural network players, which are engines whose search is built from the ground up but whose evaluation is entirely a neural network architecture from an existing engine, which could be considered a library. There are derivatives, where both the search and the evaluation are derived from an existing engine, with enough changes not to be considered a clone but not enough to be considered original. And then there are clones, which are engines almost exactly the same as an existing engine.

Alayan wrote: ↑Wed Sep 30, 2020 6:45 am
Leela-type nets feature a "policy head" that gives exploration weights to each possible move in a position. It plays a major role in the strength of Leela, as it guides the PUCT search and its pruning.

Allie's net is trained on T60 games (among others), and before some recent updates Allie's eval almost always mimicked Leela's eval (just inflated massively).
A strong engine that is using Leela-type nets MUST make use of this policy information.
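Just to make that concrete, here is a minimal sketch of a PUCT-style selection step in Python (simplified from the published AlphaZero/Lc0 formula; the Node class, the cpuct value and the function names are illustrative, not taken from Allie's or Leela's actual code):

[code]
import math
from dataclasses import dataclass, field

@dataclass
class Node:
    prior: float                  # P(s, a) from the net's policy head
    visits: int = 0               # N(s, a)
    value_sum: float = 0.0        # sum of values backed up through this node
    children: dict = field(default_factory=dict)   # move -> Node

    def q(self) -> float:
        # Mean backed-up value; 0 for an unvisited child.
        return self.value_sum / self.visits if self.visits else 0.0

def select_child(node: Node, cpuct: float = 1.5):
    """Pick the child maximizing Q + U, where U scales with the policy prior.

    Assumes 'node' has already been expanded. Moves the policy head likes get
    a much larger exploration bonus, so they soak up most of the visits.
    """
    parent_visits = sum(c.visits for c in node.children.values())
    best_move, best_child, best_score = None, None, -math.inf
    for move, child in node.children.items():
        u = cpuct * child.prior * math.sqrt(parent_visits + 1) / (1 + child.visits)
        score = child.q() + u
        if score > best_score:
            best_move, best_child, best_score = move, child, score
    return best_move, best_child
[/code]

The policy prior sits directly inside the selection formula, so whatever net provides it ends up shaping which branches the search even looks at.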
The end result is that while Allie's search code was written by gonzo and has some unique properties, it behaves largely the same as Leela's, because at its core it is PUCT guided by the net's policy head, even if Allie favors the highest score over the most visits when choosing its move and does some light min-maxing.
It's an example of how using a similar architecture in one place (the eval) can force more similarity elsewhere (the search).
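To illustrate the "score over visits" distinction mentioned above, here is a hypothetical sketch of two ways to pick the move to play once the tree has been built (it builds on the Node class from the earlier snippet; again the helper names and the min_visits threshold are made up for illustration, not Allie's code):

[code]
def pick_by_visits(root):
    # The classic AlphaZero/Leela-style choice: the most-visited root child.
    return max(root.children.items(), key=lambda kv: kv[1].visits)[0]

def pick_by_score(root, min_visits: int = 10):
    # Prefer the child with the best backed-up score instead of the most
    # visits, with a small visit floor so a barely-searched move isn't trusted.
    candidates = {m: c for m, c in root.children.items() if c.visits >= min_visits}
    if not candidates:
        candidates = root.children
    return max(candidates.items(), key=lambda kv: kv[1].q())[0]
[/code]

Either way, the tree being chosen from was grown by PUCT under the policy head, which is exactly how eval similarity leaks into search behaviour.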
If you replace SF's multithreading with ABDADA (or YBWC) and train a net a little differently, you quickly have something that's just as *unique* as Allie.