Help with Texel's tuning

maksimKorzh
Posts: 771
Joined: Sat Sep 08, 2018 5:37 pm
Location: Ukraine
Full name: Maksim Korzh

Re: Help with Texel's tuning

Post by maksimKorzh »

jdart wrote: Fri Jan 08, 2021 5:52 pm Texel tuning is basically supervised learning using logistic regression. There is a very large literature on this, outside of the field of chess.

For just one example: https://web.stanford.edu/~jurafsky/slp3/5.pdf
For noobs like me, who have especially big trouble with math, all these general theoretical treatments don't make any sense at all,
but once I finally understood the chess-specific pipeline, both supervised learning and logistic regression suddenly became very clear!
I was reading general theory articles like the one you've posted, but they only made me feel stupid and lose hope of ever implementing Texel's tuning)
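
To make the connection concrete, here is a minimal sketch of that chess-related pipeline in Python. The evaluate() callback and the K value are illustrative assumptions, not anything from a specific engine: each position's static eval is squashed through a sigmoid into a predicted score, and the tuner minimizes the mean squared error against the actual game results (1.0 win, 0.5 draw, 0.0 loss).

Code: Select all

# Minimal sketch of the Texel objective; evaluate(pos) is a hypothetical
# callback returning the engine's static eval in centipawns.
K = 1.13  # scaling constant; normally fitted to the data first

def win_probability(eval_cp):
    # Logistic (sigmoid) squashing: centipawns -> expected score in [0, 1]
    return 1.0 / (1.0 + 10.0 ** (-K * eval_cp / 400.0))

def mean_squared_error(positions, results, evaluate):
    # results[i] is the game outcome: 1.0 win, 0.5 draw, 0.0 loss
    total = 0.0
    for pos, result in zip(positions, results):
        total += (result - win_probability(evaluate(pos))) ** 2
    return total / len(positions)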
maksimKorzh
Posts: 771
Joined: Sat Sep 08, 2018 5:37 pm
Location: Ukraine
Full name: Maksim Korzh

Re: Help with Texel's tuning

Post by maksimKorzh »

After my very first successful tuning results, I decided to share my experience with those who find the existing
resources on Texel's tuning too complicated:
https://github.com/maksimKorzh/wukongJS ... _TUNING.MD
BrianNeal
Posts: 8
Joined: Sat Dec 26, 2020 5:58 pm
Full name: Brian Neal

Re: Help with Texel's tuning

Post by BrianNeal »

Ferdy wrote: Thu Jan 07, 2021 5:37 pm
maksimKorzh wrote: Thu Jan 07, 2021 3:31 pm
BrianNeal wrote: Thu Jan 07, 2021 3:16 pm Would shuffling the parameters before each iteration make sense (besides shuffling the training positions)?
You should ask someone more competent than me)
In my noob's understanding, shuffling the training positions makes perfect sense because the mean squared error becomes more objective:
for instance, imagine you have only opening positions to tune endgame parameters - the mean squared error can still be minimized, but it
would result in bullshit values, well at least IMO

Shuffling the parameters doesn't make sense, because if we loop over all of the parameters anyway and adjust each of them to minimize the
mean squared error, then the order doesn't matter. Maybe the order matters if gradient descent is used instead of incrementing/decrementing
params by one, but I don't know, because I haven't yet looked at the idea behind gradient descent. I'm happy to stick with the
simplest implementation possible for now.
The "shuffling the parameters" BrianNeal mentioned is not actually like shuffling randomly. It is more like "ordering the parameters". As you have noticed, in Texel tuning the first parameter changes the whole evaluation which would affect later parameters. For example you have the parameters to optimize.
1. PawnValue
2. PawnPST
3. QueenAttacksKing

First you try PawnValue with +/-1; the error does not improve.
Second you try PawnPST; the error does not improve.
Third you try QueenAttacksKing with +1, and the error improves.

Now you can change the tuning order, because the error is more sensitive to QueenAttacksKing:

1. QueenAttacksKing
2. PawnValue
3. PawnPST

It might happen that after tuning QueenAttacksKing first, PawnValue or PawnPST then improves the error.

As you add more parameters, like the two_bishop advantage and double_pawn_penalty, and other important ones like passed_pawn, kingattack, threats and mobility, it would be interesting to order your parameters by how often each one improves the error. If kingattack improves the error in 4 out of 10 iterations and mobility in 2 out of 10, put kingattack ahead of mobility in the next iteration.

It is also possible not to order your parameters dynamically, but to order them by importance in the first place:
* passed_pawn is more important than PST
* material is more important than PST
* passed_pawn is more important than material

So that would be
1. passed_pawn
2. material
3. PST

Or, as another example, the knight PST is more important than the pawn PST, so tune the knight before the pawn.
Your answer is interesting, Ferdy, and I think it makes more sense than my idea. Actually, I really did mean randomly shuffling the parameters. My rationale was that since the params in our evaluation are most likely not perfectly orthogonal, if we always test them in the same order we could overfit some of them and leave others (which might be more important than the former) without a chance to improve.
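
For concreteness, here is a minimal sketch of the +/-1 loop maksimKorzh describes above, with the random parameter shuffling discussed here as an option. The mse() argument is assumed to be the error function sketched earlier in the thread.

Code: Select all

import random

def tune(params, mse, shuffle_params=True):
    # Coordinate-descent local search: try +1 then -1 on each parameter,
    # keeping any change that lowers the mean squared error.
    best_error = mse(params)
    improved = True
    while improved:
        improved = False
        order = list(range(len(params)))
        if shuffle_params:
            random.shuffle(order)  # new random parameter order each iteration
        for i in order:
            for delta in (1, -1):
                params[i] += delta
                error = mse(params)
                if error < best_error:
                    best_error = error  # keep the improving value
                    improved = True
                    break
                params[i] -= delta      # revert and try the other direction
    return params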
Ferdy
Posts: 4833
Joined: Sun Aug 10, 2008 3:15 pm
Location: Philippines

Re: Help with Texel's tuning

Post by Ferdy »

BrianNeal wrote: Mon Jan 11, 2021 7:29 pm
Your answer is interesting, Ferdy, and I think it makes more sense than my idea. Actually, I really did mean randomly shuffling the parameters. My rationale was that since the params in our evaluation are most likely not perfectly orthogonal, if we always test them in the same order we could overfit some of them and leave others (which might be more important than the former) without a chance to improve.
Random shuffling indeed makes sense, especially when one is being aggressive in tuning, e.g. using steps of more than 1 cp.
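
A hedged sketch of how Ferdy's dynamic ordering might look in the same framework; the 5 cp step and the iteration count are illustrative assumptions, not values from this thread.

Code: Select all

import random
from collections import Counter

def tune_ordered(params, mse, iterations=10, step=5):
    # step > 1 cp is the "aggressive" tuning mentioned above.
    best_error = mse(params)
    wins = Counter()  # how often each parameter improved the error
    order = list(range(len(params)))
    for _ in range(iterations):
        # Parameters that improved the error most often go first (Ferdy's
        # ordering); random.shuffle(order) here would give the shuffled
        # variant instead.
        order.sort(key=lambda i: -wins[i])
        for i in order:
            for delta in (step, -step):
                params[i] += delta
                error = mse(params)
                if error < best_error:
                    best_error = error
                    wins[i] += 1
                    break
                params[i] -= delta  # revert and try the other direction
    return params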