I train with my own hand-written neural network implementation; I don't use a machine learning platform.
This is very slow and far from the best choice, which forced me to experiment with alternative solutions.
I wanted to get the best results as quickly as possible.
My rules:
1.) the smallest possible network -> for speed (and see * below)
2.) fewer training examples -> faster learning, lower memory requirements
3.) a shorter training period -> faster development
* more info on rule 1:
Using too many neurons in the hidden layers can cause several problems. First, it may lead to overfitting: the network has so much information processing capacity that the limited amount of information in the training set is not enough to train all of the hidden neurons. Second, even when the training data is sufficient, an inordinately large number of hidden neurons increases the time it takes to train the network.
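To put rough numbers on this (my own illustration, not taken from any engine's code): the parameter count of a fully connected 768xHx1 network grows almost linearly with the hidden size H, so keeping H small keeps both memory use and training time small.

```javascript
// Rough illustration (assumed fully connected layers, weights + biases):
// parameter count of a 768 x H x 1 network for different hidden sizes H.
function paramCount(inputs, hidden, outputs) {
  const inputToHidden = inputs * hidden + hidden;    // weights + biases
  const hiddenToOutput = hidden * outputs + outputs; // weights + biases
  return inputToHidden + hiddenToOutput;
}

console.log(paramCount(768, 32, 1));  // 24641  -> the 768x32x1 net used here
console.log(paramCount(768, 256, 1)); // 197121 -> ~8x more parameters to train
```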
Current results:
around +60 elo against tomitankChess 4.2 with a 768x32x1 network.
This network only augments the HCE, it does not replace it.
This network - with a little exaggeration - only corrects bad positions. Because of this, far fewer epochs are needed (~6 epochs).
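To make the "augment, don't replace" idea concrete, here is a minimal sketch in JavaScript of a 768x32x1 forward pass whose output is simply added to the HCE score. The ReLU activation, the centipawn scaling and all names are my own assumptions for illustration, not tomitankChess's actual code.

```javascript
// Minimal sketch (assumed details, NOT the engine's real code):
// forward pass of a fully connected 768x32x1 network whose output is
// added on top of the existing hand-crafted evaluation (HCE).

// Hypothetical weights; in practice these come from training.
const W1 = new Float32Array(32 * 768); // input -> hidden
const B1 = new Float32Array(32);
const W2 = new Float32Array(32);       // hidden -> output
const B2 = new Float32Array(1);

// input: 768 values, e.g. 12 piece types x 64 squares, 0/1 occupancy.
function forwardNet(input) {
  const hidden = new Float32Array(32);
  for (let h = 0; h < 32; h++) {
    let sum = B1[h];
    for (let i = 0; i < 768; i++) sum += W1[h * 768 + i] * input[i];
    hidden[h] = Math.max(0, sum); // ReLU (assumed activation)
  }
  let out = B2[0];
  for (let h = 0; h < 32; h++) out += W2[h] * hidden[h];
  return out; // small correction term (assumed centipawn scale)
}

// The network only corrects the HCE instead of replacing it:
//   finalEval = hceScore + forwardNet(inputFeatures);
```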
I trained with only 2.7M examples!!
I am convinced that with 10-30 million or even more examples, better results can be achieved without violating rules 1 and 3. Unfortunately, for me this is not an option. Not in NodeJs! It's pretty boring anyway...
I'll probably try to remove the HCE and train with more epochs, but I don't think I'll get better results with a small network and 2.7M examples.
The authors of Halogen have probably already tried this.
I think this approach can be successful even with stronger engines, even Stockfish. Network training, and thus development, could be accelerated.
For Andy:
And most importantly, the network could not be used in any other engine; it would depend on the HCE.
-Tamás