Thanks!

clumma wrote:
Very interesting project. I have two questions:
1. Why did you bootstrap from material-only eval? Doesn't the CCRL dump contain evals by the best engines? Why not train on those?
2. Are you familiar with the idea of model compression -- training a small network to mimic a larger one? E.g.
and do you think this could be used to speed up Giraffe's eval?
1. That was the original plan, and I had modified Stockfish to label positions for bootstrapping. I ended up not doing it for a reason more philosophical than technical - I want to see what the network can do with as little bootstrapping knowledge as possible. From my experiments, I have already found that even the material bootstrap weights are not really important, as long as the relative order of the values is correct.
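For concreteness, here is roughly what that kind of material-only labeling looks like. This is a minimal sketch, not Giraffe's actual code; the function name, piece values, and the FEN-style board encoding are my own illustration:

```python
# Minimal sketch of a material-only evaluation used to label positions
# for bootstrapping. The exact values are unimportant; only their
# relative order matters (see above).
PIECE_VALUES = {'P': 1, 'N': 3, 'B': 3, 'R': 5, 'Q': 9}  # assumed values

def material_eval(pieces):
    """Material balance from White's point of view.

    `pieces` is assumed to be an iterable of piece letters,
    uppercase for White, lowercase for Black (FEN-style).
    """
    score = 0
    for p in pieces:
        value = PIECE_VALUES.get(p.upper(), 0)  # kings contribute 0
        score += value if p.isupper() else -value
    return score

# A position where Black is missing a rook labels as +5 for White:
print(material_eval("RNBQKBNRPPPPPPPPppppppppnbqkbnr"))  # 5
```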
Eventually I want to switch to random initialization. It would probably take much longer to train, but I wouldn't be surprised if it actually works. I just don't have the spare CPU cycles right now: I have access to about 600 CPUs across roughly 150 quad-core machines, but parallel back-propagation needs a large shared-memory system, and I only have access to 2-3 20-core machines, which are all doing more useful things right now.
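To illustrate why shared memory matters here: in data-parallel back-propagation, every worker reads the same weights and computes a gradient on its own shard of the batch, so it is cheapest when all the workers sit on one machine and share the weight array directly. A toy sketch, assuming a linear model and squared error (none of this is Giraffe's code):

```python
# Toy data-parallel gradient computation on a shared-memory machine.
# All threads read the same weight vector `w` in place; only the
# per-shard gradients are produced and then averaged.
import numpy as np
from concurrent.futures import ThreadPoolExecutor

def shard_gradient(w, x, y):
    """Mean-squared-error gradient for a linear model on one data shard."""
    residual = x @ w - y
    return 2.0 * x.T @ residual / len(y)

def parallel_gradient(w, x, y, n_workers=4):
    x_shards = np.array_split(x, n_workers)
    y_shards = np.array_split(y, n_workers)
    # NumPy releases the GIL inside matrix products, so the threads run
    # in parallel while sharing `w` without any copying.
    with ThreadPoolExecutor(max_workers=n_workers) as pool:
        grads = list(pool.map(lambda shard: shard_gradient(w, *shard),
                              zip(x_shards, y_shards)))
    return np.mean(grads, axis=0)

w = np.zeros(8)
x = np.random.randn(1024, 8)
y = np.random.randn(1024)
print(parallel_gradient(w, x, y))
```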
2. I am aware of model compression. Really cool stuff, isn't it?
I was just talking to my supervisor about it a while ago. I probably won't have time to do it for the thesis, since there are still a million things I want to try, and not much time left.
It's definitely on my list of things to investigate later, though.
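In case anyone is curious, the basic recipe is simple: train a small "student" network to reproduce a big "teacher" network's outputs on a stream of positions, then call only the cheap student during search. A minimal sketch of the idea, with made-up layer sizes and random inputs standing in for real position features (PyTorch used purely for brevity):

```python
# Minimal model-compression sketch: a small "student" net learns to
# mimic a large "teacher" net's evaluations. Layer sizes, the input
# width, and the random inputs are all illustrative assumptions.
import torch
import torch.nn as nn

teacher = nn.Sequential(nn.Linear(363, 1024), nn.ReLU(),
                        nn.Linear(1024, 1024), nn.ReLU(),
                        nn.Linear(1024, 1))   # stands in for the big eval net
student = nn.Sequential(nn.Linear(363, 64), nn.ReLU(),
                        nn.Linear(64, 1))     # much cheaper at search time

opt = torch.optim.SGD(student.parameters(), lr=1e-3)
loss_fn = nn.MSELoss()

for step in range(1000):
    # Real training would feed feature vectors of actual positions;
    # random inputs keep the sketch self-contained.
    x = torch.randn(256, 363)
    with torch.no_grad():
        target = teacher(x)          # teacher's output is the label
    loss = loss_fn(student(x), target)
    opt.zero_grad()
    loss.backward()
    opt.step()
```

Note that the student never sees game outcomes at all; it learns purely from the teacher's outputs, which is what makes the technique attractive for speeding up an expensive eval.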