Diversification and auto-diversification of NNs (evals)


Angle
Posts: 319
Joined: Sat Oct 31, 2020 1:04 am
Full name: Aleksey Glebov

Diversification and auto-diversification of NNs (evals)

Post by Angle »

I believe that the problem of diversifying NNs (evals) should become the key task in engine development for the next few months (years?). Not everyone understands this, but it is not only a question of an engine's originality and style but also a question of gaining additional strength. If all strong engines used the same (or almost the same) eval (say, the SF NNUE eval :lol:), then they would all be blind in the same way, and they would have no chance to reveal this blindness, and to punish it, by playing games against each other.

In order to expose this blindness and to heal it, one needs to use NNs with completely different eval properties and priorities. Thus, we need to create a great variety of fundamentally different NNs (evals) in order to get stronger engines. Ideally, we should arrive at algorithms/software for auto-diversifying (or human-managed diversifying of) the net training process, and for auto-selecting the best features of individual networks within a large collection of competing NNs. Here I anticipate a peculiar application of some kind of genetic algorithm that would generate/evolve/interbreed/select/improve large populations of neural networks with essentially different properties; a minimal illustration of such a loop is sketched below.
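To make the idea concrete, here is a minimal sketch (my own illustration, not anything from the post or from an existing trainer) of the kind of genetic loop described above: a population of eval nets is scored by mutual play, the better half is kept, and the rest is refilled by crossover ("interbreeding") and mutation. All names and numbers (play_round_robin, NET_SIZE, POP_SIZE, mutation scale) are assumptions; the nets are just stand-in weight vectors, and the fitness function is a placeholder that a real setup would replace with actual engine matches.

```python
import random
import numpy as np

NET_SIZE = 256        # stand-in for the flattened weight vector of one eval net
POP_SIZE = 16
GENERATIONS = 10
MUTATION_SCALE = 0.05

def play_round_robin(population):
    """Placeholder fitness: in a real setup each net would be plugged into the
    engine and scored by games against every other net. Here we return random
    scores so the loop runs end to end."""
    return [random.random() for _ in population]

def crossover(a, b):
    """Interbreed two weight vectors: each weight is taken from one parent."""
    mask = np.random.rand(NET_SIZE) < 0.5
    return np.where(mask, a, b)

def mutate(net):
    """Perturb the weights with small noise to keep the population diverse."""
    return net + np.random.randn(NET_SIZE) * MUTATION_SCALE

population = [np.random.randn(NET_SIZE) for _ in range(POP_SIZE)]

for gen in range(GENERATIONS):
    scores = play_round_robin(population)
    # keep the better-scoring half of the population as parents
    ranked = [net for _, net in sorted(zip(scores, population),
                                       key=lambda pair: pair[0], reverse=True)]
    parents = ranked[:POP_SIZE // 2]
    # refill the population with mutated offspring of random parent pairs
    children = [mutate(crossover(*random.sample(parents, 2)))
                for _ in range(POP_SIZE - len(parents))]
    population = parents + children
    print(f"generation {gen}: best score {max(scores):.3f}")
```

The interesting design question, which the sketch does not solve, is the fitness function: if it rewards only raw match score, the population will tend to converge on one style, so some explicit diversity term would be needed to preserve fundamentally different evals.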
Incredibly fast systems miscount incredibly fast.