Neural Networks weights type

Discussion of chess software programming and technical issues.

Moderators: hgm, Rebel, chrisw

Fabio Gobbato
Posts: 217
Joined: Fri Apr 11, 2014 10:45 am
Full name: Fabio Gobbato

Neural Networks weights type

Post by Fabio Gobbato »

I have seen in Stockfish NNUE that the network uses integer types for the weights instead of floating-point types. One advantage is surely speed, but there could also be some drawbacks. What are the differences between integer and floating-point networks? Is it possible to build a good net that runs on the CPU with floating-point weights, or is it better to use integer weights?
Rémi Coulom
Posts: 438
Joined: Mon Apr 24, 2006 8:06 pm

Re: Neural Networks weights type

Post by Rémi Coulom »

8-bit precision is often accurate enough, and faster than floating point.
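
For illustration, here is a minimal sketch of the idea (symmetric per-tensor quantization with an int32 accumulator). The scale handling is deliberately simplified and is not Stockfish NNUE's actual scheme.

```cpp
// Hedged sketch: quantize float weights to int8 with a single scale factor,
// then do the dot product in integer arithmetic and rescale once at the end.
#include <algorithm>
#include <cmath>
#include <cstdint>
#include <vector>

struct QuantizedLayer {
    std::vector<int8_t> weights;
    float scale;  // multiply the integer result by this to recover the float range
};

QuantizedLayer quantize(const std::vector<float>& w) {
    float max_abs = 0.0f;
    for (float x : w) max_abs = std::max(max_abs, std::fabs(x));
    QuantizedLayer q;
    q.scale = (max_abs > 0.0f) ? max_abs / 127.0f : 1.0f;  // guard against all-zero weights
    q.weights.reserve(w.size());
    for (float x : w)
        q.weights.push_back(static_cast<int8_t>(std::lround(x / q.scale)));
    return q;
}

// Accumulate in 32 bits so the 8-bit products cannot overflow; convert back
// to float only once per dot product.
float dot(const QuantizedLayer& q, const std::vector<int8_t>& activations, float act_scale) {
    int32_t acc = 0;
    for (std::size_t i = 0; i < q.weights.size(); ++i)
        acc += static_cast<int32_t>(q.weights[i]) * static_cast<int32_t>(activations[i]);
    return acc * q.scale * act_scale;
}
```

The integer version also maps well onto SIMD 8-bit multiply-add instructions, which is where most of the speed advantage over floating point comes from.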

The tensor cores of the most recent NVIDIA GPUs can do 4-bit calculations (in addition to 8-bit integer and 16-bit float). The next generation will also support sparsity, which offers another big potential performance improvement. Training sparse 4-bit neural networks is a bit tricky, though.

Some even do 1-bit neural networks:
https://jmlr.csail.mit.edu/papers/v18/16-456.html
Quantized Neural Networks: Training Neural Networks with Low Precision Weights and Activations

Itay Hubara, Matthieu Courbariaux, Daniel Soudry, Ran El-Yaniv, Yoshua Bengio; 18(187):1−30, 2018.

Abstract

We introduce a method to train Quantized Neural Networks (QNNs) --- neural networks with extremely low precision (e.g., 1-bit) weights and activations, at run-time. At train-time the quantized weights and activations are used for computing the parameter gradients. During the forward pass, QNNs drastically reduce memory size and accesses, and replace most arithmetic operations with bit-wise operations. As a result, power consumption is expected to be drastically reduced. We trained QNNs over the MNIST, CIFAR-10, SVHN and ImageNet datasets. The resulting QNNs achieve prediction accuracy comparable to their 32-bit counterparts. For example, our quantized version of AlexNet with 1-bit weights and 2-bit activations achieves 51% top-1 accuracy. Moreover, we quantize the parameter gradients to 6-bits as well which enables gradients computation using only bit-wise operation. Quantized recurrent neural networks were tested over the Penn Treebank dataset, and achieved comparable accuracy as their 32-bit counterparts using only 4-bits. Last but not least, we programmed a binary matrix multiplication GPU kernel with which it is possible to run our MNIST QNN 7 times faster than with an unoptimized GPU kernel, without suffering any loss in classification accuracy. The QNN code is available online.
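
The core trick in 1-bit networks is that, once weights and activations are constrained to {-1, +1} and packed one element per bit, a dot product reduces to XOR (or XNOR) plus a population count. A minimal CPU-side sketch of that idea (the paper's actual GPU kernel is more involved):

```cpp
// Hedged sketch: dot product of {-1,+1} vectors packed one element per bit
// (bit = 1 encodes +1, bit = 0 encodes -1). Requires C++20 for std::popcount.
#include <bit>
#include <cstdint>

// Dot product over n_words * 64 packed elements.
int binary_dot(const uint64_t* a, const uint64_t* b, int n_words) {
    int acc = 0;
    for (int i = 0; i < n_words; ++i) {
        int mismatches = std::popcount(a[i] ^ b[i]);  // positions where the signs differ
        acc += 64 - 2 * mismatches;                   // matches contribute +1, mismatches -1
    }
    return acc;
}
```
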
towforce
Posts: 11568
Joined: Thu Mar 09, 2006 12:57 am
Location: Birmingham UK

Re: Neural Networks weights type

Post by towforce »

It seems as though NNs don't require much precision. A couple of data types they tend to use:

* half precision (FP16)
* brain float (bfloat16)

TPUs are a bit like graphics cards, but they use low-precision arithmetic, which lets them do much more NN work with the same amount of hardware. Having them use integers seems like a natural next step.
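
Brain float (bfloat16) is attractive precisely because it is just the top 16 bits of an IEEE-754 float32: same exponent range, much shorter mantissa. A rough sketch of the conversion (truncating; real implementations usually round to nearest even):

```cpp
// Hedged sketch of float32 <-> bfloat16 conversion by bit manipulation.
#include <cstdint>
#include <cstring>

uint16_t float_to_bfloat16(float f) {
    uint32_t bits;
    std::memcpy(&bits, &f, sizeof(bits));      // read the raw float bits safely
    return static_cast<uint16_t>(bits >> 16);  // keep sign, exponent, top 7 mantissa bits
}

float bfloat16_to_float(uint16_t h) {
    uint32_t bits = static_cast<uint32_t>(h) << 16;  // zero-fill the dropped mantissa bits
    float f;
    std::memcpy(&f, &bits, sizeof(f));
    return f;
}
```
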
Writing is the antidote to confusion.
It's not "how smart you are", it's "how are you smart".
Your brain doesn't work the way you want, so train it!