Tactics in training data

Discussion of chess software programming and technical issues.

Moderators: hgm, Rebel, chrisw

jonkr
Posts: 178
Joined: Wed Nov 13, 2019 1:36 am
Full name: Jonathan Kreuzer

Re: Tactics in training data

Post by jonkr »

niel5946 wrote: Sat Jun 19, 2021 11:56 am The small number of moves is because of the terribly slow implementation of my network. Are there any standard optimizations besides incremental updates that can help relieve this problem?
The simplest would be making the network layout smaller, as Connor suggested.
I also found a decent speed-up going from float to int16 (or even int8, though I never tried int8 myself). However, the limited value range means more things to consider and more possible bugs.
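The float-to-int16 idea can be sketched roughly like this (the scale factor and names here are my own illustration, not from any particular engine): weights are scaled by a fixed constant and rounded to int16, and the dot product accumulates into an int32 so the narrow int16 range doesn't overflow mid-sum, which is exactly the kind of bug the limited range invites.

```cpp
#include <cstdint>
#include <cmath>
#include <vector>

// Hypothetical fixed-point scale: float weight 1.0 becomes int16 value 64.
constexpr int SCALE = 64;

// Round each float weight to its scaled int16 representation.
std::vector<int16_t> quantize(const std::vector<float>& w) {
    std::vector<int16_t> q(w.size());
    for (size_t i = 0; i < w.size(); ++i)
        q[i] = static_cast<int16_t>(std::lround(w[i] * SCALE));
    return q;
}

// Integer dot product; accumulate in int32 because the product of two
// int16 values can far exceed the int16 range.
int32_t dot(const std::vector<int16_t>& a, const std::vector<int16_t>& b) {
    int32_t acc = 0;
    for (size_t i = 0; i < a.size(); ++i)
        acc += static_cast<int32_t>(a[i]) * b[i];
    return acc;
}
```

The result then has to be divided back by SCALE (or SCALE squared after two layers), and activations clamped so later layers stay in range.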

So the main thing I would suggest, if you're not already doing it, is this: assuming the input layer is only 1 or 0, just convert from board -> first-layer net values when you evaluate. You can memcpy the biases into the accumulator, then add in the weights corresponding to each "1" input of the board (as opposed to setting the inputs, looping over every single input, and doing a multiply-add). In 8x8 checkers this alone was pretty much on equal terms with incremental updates, although in chess incremental updates are a significant enough optimization that they should be on a ToDo list.
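A minimal sketch of that refresh, assuming a 768-input (12x64) first layer and a hypothetical hidden size of 128: start the accumulator from the biases, then add the weight row of each active input. No multiplies are needed, and the zero inputs cost nothing.

```cpp
#include <cstdint>
#include <cstring>
#include <vector>

constexpr int INPUTS = 768;  // 12 piece types * 64 squares
constexpr int HIDDEN = 128;  // hypothetical first-layer width

// weights[input][hidden] and biases[hidden], stored as int16.
int16_t weights[INPUTS][HIDDEN];
int16_t biases[HIDDEN];

// Rebuild the first-layer values from scratch: copy the biases in,
// then add the weight row of every input that is "1" on the board.
void refresh(int16_t accum[HIDDEN], const std::vector<int>& activeInputs) {
    std::memcpy(accum, biases, sizeof(biases));
    for (int idx : activeInputs)
        for (int h = 0; h < HIDDEN; ++h)
            accum[h] += weights[idx][h];
}
```

Incremental updates then follow the same pattern: on a move, subtract the weight row of the input that went from 1 to 0 and add the row of the one that went from 0 to 1, instead of rebuilding the whole accumulator.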
GregNeto
Posts: 35
Joined: Thu Sep 08, 2016 8:07 am

Re: Tactics in training data

Post by GregNeto »

Just wondering:

You described your input as "12 pieces * 64 squares. It goes: WP, WN, WB, WR, WQ, WK, BP, BN, BB, BR, BQ, BK, with a 1 for a piece present and 0 for no piece."

When I did a much smaller NN for Connect 4, I also had the empty square as an input: 1 for empty and 0 for not empty. So in your case you could try 13x64 inputs.
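For concreteness, the index math for both layouts might look like this (the enum and helper names are my own; the piece order follows the quoted 12-plane description, with an extra EMPTY plane for the 13x64 variant):

```cpp
// Piece planes in the order quoted above; EMPTY is the extra
// 13th plane from the Connect 4-style suggestion.
enum Piece { WP, WN, WB, WR, WQ, WK, BP, BN, BB, BR, BQ, BK, EMPTY };

// 12x64 layout: only occupied squares set a "1" input (768 inputs).
int inputIndex12(Piece p, int square) { return p * 64 + square; }

// 13x64 layout: empty squares get their own plane, so every square
// sets exactly one of the 832 inputs.
int inputIndex13(Piece p, int square) { return p * 64 + square; }
```

Whether the extra empty-square plane helps in chess is an open question; it adds no information the 12 planes don't already imply, but it can change how easily the first layer learns certain patterns.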