Texel tuning method question

Discussion of chess software programming and technical issues.

Moderators: hgm, Dann Corbit, Harvey Williamson

Posts: 4103
Joined: Fri Mar 10, 2006 4:23 am
Location: http://www.arasanchess.org

Re: Texel tuning method question

Post by jdart » Wed Aug 09, 2017 1:36 pm

I think you are on basically the right track.

http://www.derivative-calculator.net/ can calculate symbolic derivatives for you.

For Arasan, the error function is the mean-squared difference between the sigmoid function of the eval and the score.

That is a fairly complex function, but it is differentiable; computeTexelDeriv calculates that derivative.
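For what it's worth, here is a minimal sketch (plain Python, not Arasan's actual C++ computeTexelDeriv) of what the analytic derivative of that squared sigmoid error looks like for one position, assuming the usual Texel-style sigmoid 1/(1 + 10^(-K*q/400)); the names K, sigmoid, and error_deriv are illustrative:

```python
import math

K = 1.0  # scaling constant, normally fitted to the data set first

def sigmoid(q):
    # Texel-style sigmoid mapping an eval score q (centipawns) to [0, 1]
    return 1.0 / (1.0 + 10.0 ** (-K * q / 400.0))

def error_deriv(result, qscore, dq_dp):
    """d/dP of (result - sigmoid(q))^2 by the chain rule, where dq_dp
    is dq/dP for the parameter P being tuned."""
    s = sigmoid(qscore)
    # derivative of the base-10 sigmoid with respect to q:
    ds_dq = s * (1.0 - s) * (K * math.log(10.0) / 400.0)
    return -2.0 * (result - s) * ds_dq * dq_dp
```

The derivative of the full mean error is just the average of this over all positions.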


Posts: 433
Joined: Fri Jan 16, 2015 3:02 pm

Re: Texel tuning method question

Post by brtzsnr » Wed Aug 09, 2017 8:58 pm

I did this in 100 lines of TensorFlow. Since it runs on my GPU, I train a new model in 5 minutes, tops.

https://bitbucket.org/zurichess/tuner/s ... ew-default

Posts: 104
Joined: Thu Sep 27, 2012 12:24 am

Re: Texel tuning method question

Post by Cheney » Mon Aug 14, 2017 11:47 pm

Here's the latest after another week of research, studying, and testing :).

On calculating the derivative, I am not sure whether I should be differentiating (result - Sigmoid(qs))^2 for a single position or the full mean error 1/N * Sum(n=1..N, (result - Sigmoid(qs))^2). The various sites that calculate derivatives do not always give me the same results for either of these. So I decided to stick with the last plan, which is differentiation from first principles: f'(x) = lim(h->0) (f(x + h) - f(x)) / h, where h is the delta applied to the parameter.
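That first-principles (finite-difference) approach can be sketched like this; error_fn is a hypothetical stand-in for the full mean-squared error E over the test positions, and the parameter index is illustrative:

```python
def numeric_deriv(error_fn, params, i, h=1.0):
    """Approximate dE/dP_i by bumping parameter i by h (often 1 centipawn
    for integer eval parameters) and re-evaluating the error."""
    base = error_fn(params)
    bumped = list(params)
    bumped[i] += h
    return (error_fn(bumped) - base) / h
```

Note that with integer eval parameters h cannot actually go to zero, so this is a one-sided approximation whose quality depends on the chosen h.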

With this, I am able to calculate the derivative of E with respect to P (dE/dP), but as this is a tiny fraction, turning it into a sensible rate of change for the parameter P is not straightforward. After enough analysis, I tested scaling dE/dP by 10K and 100K, applied a learning rate, and limited the change per step for control.
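For concreteness, that scaled and clamped update looks something like this (the scale, learning rate, and step limit below are illustrative numbers, not necessarily the right ones):

```python
def update_param(p, dE_dP, scale=100_000, lr=0.1, max_step=5.0):
    """One gradient-descent step: scale up the tiny raw derivative,
    apply a learning rate, and clamp the change for control."""
    step = lr * scale * dE_dP
    step = max(-max_step, min(max_step, step))  # limit the change per step
    return p - step  # move against the gradient to reduce E
```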

The whole process seems to work, and I was very intrigued reviewing the output, as this tuning method appears to learn values even for a specific square in a PSQ :). Eventually I had a set of tuned parameters to use. Unfortunately, the base version still wins by about 58% or more. I am not sure what's going on. I have tested:
* Tuning just pieces
* Tuning pieces and PSQs (mg and eg, and they end up not symmetrical)
* My PSQs are all created by hand, so I created a set of PSQs that were calculated, so that only a few parameters were exposed to the tuner, and tested that.
* I noted what direction a parameter wanted to go and I manually adjusted my values by a small amount and tested.

All tuned versions lose the same.

Someone asked in an earlier post how I test. For hand tuning, I normally test base vs. new: 4000 games with 4-move openings and 4000 games with 10-move openings at a fixed depth of 6 (and, following Ed Schroder's ideas, the depth increases as pieces come off the board). I then test against other engines at a time control (like 0:02+1). This has been successful for me.

I have tried this same testing with the tuned vs. base engine matches. I have also tried removing the depth constraints in favor of a time control; still no positive winning rate for the tuned engine.

I have tested the tuning by, for example, setting the piece values 20 points lower than my base values and 20 points higher than the tuned values - the tuner keeps coming back to the same tuned values. That seems like a positive sign.

I also thought about the average error: since E is an average, lowering one position's error lowers E. But if a single position's error decreases by 10 while 8 others increase by 1 each, the average E still decreases (net change -10 + 8 = -2)... yet that is 8 more bad positions. Is this bad? Counting the positions affected positively versus negatively, there is only about a 500K difference in favor of the positive ones. That does not seem like enough of a margin to be a real improvement.

That's the latest; I am running some tests now as I write. I am not sure where to go next. I expect either my parameter values are already good or the math is off somewhere.

Posts: 104
Joined: Thu Sep 27, 2012 12:24 am

Re: Texel tuning method question

Post by Cheney » Sat Sep 02, 2017 2:03 pm

Thanks to all for your guidance and patience on this subject (Jon, Robert, Alvaro, etc. :) )

I think I finally got this working.

I first had some odd bugs which threw me off for weeks. All I wanted to test was the piece values. Even though the function that searches for better parameters appeared to be working, it was generating losing parameters. I thought the issue was with my learning rate or with how I calculated the derivative, but it was simply how I accessed the vector storing the test positions. Once I fixed this, the tuned values converged differently than before, but the engine still lost. So back to the drawing board I went.

After trying different test sets to compare results, I realized that when I changed test sets I was not determining a new K value. So I went back to my original test set, calculated a new K (which I had originally done but lost somewhere while debugging the vectors and derivatives), and jackpot! The piece values converged in different directions, and the game tests against the previous untuned engine came out at +20 ELO.
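Re-fitting K for a given test set can be sketched as a simple scan that minimizes the mean squared error over K; here positions is a hypothetical list of (qscore, result) pairs and the scan range is illustrative:

```python
def mean_error(positions, K):
    """Mean squared difference between the game result and the
    sigmoid of the quiescence score, for a given scaling constant K."""
    e = 0.0
    for q, result in positions:
        s = 1.0 / (1.0 + 10.0 ** (-K * q / 400.0))
        e += (result - s) ** 2
    return e / len(positions)

def fit_K(positions, lo=0.1, hi=3.0, steps=60):
    """Grid-scan K over [lo, hi] and return the value minimizing E."""
    return min((lo + i * (hi - lo) / steps for i in range(steps + 1)),
               key=lambda K: mean_error(positions, K))
```

A finer search (or a few rounds of golden-section refinement) can be layered on top, but the key point is that K must be re-fitted whenever the test set changes.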

I have exposed more of the eval's parameters to the tuner and am now up to +31 ELO.

Again, thank you :)
