Question about implementation of NNUE


LeviGibson
Posts: 11
Joined: Sat Aug 07, 2021 3:41 pm
Full name: Levi Gibson

Post by LeviGibson »

I've had NNUE running in eggnog-chess-engine for a while. The layer activations are sparse because of the clipped ReLU activation function (many neurons output exactly zero).
Because of this, the network does forward propagation differently from most implementations: for each active neuron in layer one, it takes all the weights connected to that neuron, multiplies them by that neuron's activation, and adds the results to the corresponding neurons in layer two.
Is it better to compute each neuron in layer two one at a time, or to fire each active neuron in layer one one at a time?
Thanks so much for reading through this! :D