Why I'm starting to dislike automated tuning

Discussion of chess software programming and technical issues.

Moderator: Ras

jordanbray
Posts: 52
Joined: Mon Aug 11, 2014 3:01 am

Why I'm starting to dislike automated tuning

Post by jordanbray »

Start of rant.

Here are some of the results from the king piece square tables.

Code: Select all

    .king_square[50] = { 567, 1200 },
    .king_square[51] = { 568, 1201 },
    .king_square[52] = { 568, 1202 },
    .king_square[53] = { 568, 1202 },
    .king_square[54] = { 568, 1202 },
    .king_square[55] = { 568, 1202 },
    .king_square[56] = { 566, 1176 },
    .king_square[57] = { 567, 1177 },
    .king_square[58] = { 567, 1178 },
    .king_square[59] = { 568, 1179 },
    .king_square[60] = { 568, 1180 },
    .king_square[61] = { 568, 1180 },
    .king_square[62] = { 568, 1180 },
    .king_square[63] = { 568, 1180 },
They are all like that. On the one hand, there will always be exactly two kings on the board, so as long as every square in this table is offset by the same amount (which it is), the evaluation function still produces sane results. (In other words, only the *difference* between the two kings' square values actually matters.)

On the other hand, look at how ugly they are.

These crazy numbers appeared when I decided to have the piece square tables initialized by a function, where I only tune the parameters of that function, so that I wouldn't be overfitting the sample problem set.

Hopefully, after more tuning of the tuner, it will produce saner results.
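For illustration, the scheme looks something like this (a sketch only: the struct, the parameter names, and the linear form below are made up for the example, not my actual code):

```c
#include <stdlib.h>

/* Hypothetical sketch: build a 64-entry king PST from a handful of tuned
 * parameters instead of tuning all 64 squares independently. */
typedef struct {
    int base;           /* flat offset applied to every square */
    int rank_bonus;     /* reward for advancing up the board   */
    int center_penalty; /* penalty for central king files      */
} KingPstParams;

void king_pst_init(int pst[64], const KingPstParams *p)
{
    for (int sq = 0; sq < 64; sq++) {
        int rank = sq / 8, file = sq % 8;
        /* distance of the file from the board's center: 0 for files d/e,
         * 3 for files a/h */
        int file_center_dist = abs(2 * file - 7) / 2;
        pst[sq] = p->base
                + p->rank_bonus * rank
                - p->center_penalty * (3 - file_center_dist);
    }
}
```

The tuner then only sees the three parameters, not 64 free values.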

End of rant.
matthewlai
Posts: 793
Joined: Sun Aug 03, 2014 4:48 am
Location: London, UK

Re: Why I'm starting to dislike automated tuning

Post by matthewlai »

jordanbray wrote: They are all like that. On the one hand, there will always be two kings on the board, so as long as all the squares in this table are equally messed up (which they are), it'll produce sane results from the evaluation function. (AKA, it'll only *really* look at the diffs between the two king squares.)

[snip]
Why not just subtract the average value from everything each time?
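Something like this (a sketch, names hypothetical): after each tuning iteration, shift the whole table so its mean is zero.

```c
/* Sketch of the suggestion above: subtract the table's average from every
 * entry so the values stay centered around zero. Only the differences
 * between squares matter to the evaluation, so this changes nothing
 * except readability. */
void center_pst(int pst[64])
{
    long sum = 0;
    for (int i = 0; i < 64; i++)
        sum += pst[i];
    int avg = (int)(sum / 64); /* truncating average is fine here */
    for (int i = 0; i < 64; i++)
        pst[i] -= avg;
}
```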
Disclosure: I work for DeepMind on the AlphaZero project, but everything I say here is personal opinion and does not reflect the views of DeepMind / Alphabet.
jordanbray
Posts: 52
Joined: Mon Aug 11, 2014 3:01 am

Re: Why I'm starting to dislike automated tuning

Post by jordanbray »

matthewlai wrote: Why not just subtract the average value from everything each time?
That's a good idea. Even the minimum value would work.
matthewlai
Posts: 793
Joined: Sun Aug 03, 2014 4:48 am
Location: London, UK

Re: Why I'm starting to dislike automated tuning

Post by matthewlai »

jordanbray wrote:
matthewlai wrote: Why not just subtract the average value from everything each time?
That's a good idea. Even the minimum value would work.
Yeah, any constant value works, so it just depends on what you want the values to look like.
Disclosure: I work for DeepMind on the AlphaZero project, but everything I say here is personal opinion and does not reflect the views of DeepMind / Alphabet.
lucasart
Posts: 3243
Joined: Mon May 31, 2010 1:29 pm
Full name: lucasart

Re: Why I'm starting to dislike automated tuning

Post by lucasart »

jordanbray wrote: They are all like that. On the one hand, there will always be two kings on the board, so as long as all the squares in this table are equally messed up (which they are), it'll produce sane results from the evaluation function. (AKA, it'll only *really* look at the diffs between the two king squares.)

[snip]
You can't tune King PST. You can only tune King PST modulo a constant! You even explained why yourself:
there will always be two kings on the board
You can't blame the auto-tuner if the problem is ill-defined. If your model is over-parameterized (too many parameters, some being functions of others), then tuning can only give you a random walk through the space of optimal solutions (which is not a singleton).

If I were you, I would reduce the number of parameters and impose the constraint that the PST is centered (i.e. averages to zero).
Theory and practice sometimes clash. And when that happens, theory loses. Every single time.
mvk
Posts: 589
Joined: Tue Jun 04, 2013 10:15 pm

Re: Why I'm starting to dislike automated tuning

Post by mvk »

jordanbray wrote: They are all like that. On the one hand, there will always be two kings on the board, so as long as all the squares in this table are equally messed up (which they are), it'll produce sane results from the evaluation function. (AKA, it'll only *really* look at the diffs between the two king squares.)

On the other hand, look at how ugly they are.
What I normally do is keep one value fixed/constant, e.g. King[E1] = 0, Bishop[F1] = 0, Knight[G1] = 0, etc.

Alternatively, you can normalise. This is probably easier on the tuner.

Or just ignore it, of course. It is cosmetic, after all.
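Concretely, the fixed-anchor idea looks like this (a sketch; square indexing with A1 = 0 is assumed, so E1 is index 4):

```c
/* Sketch of the fixed-anchor convention: shift the whole table so a chosen
 * reference square (e.g. E1) is exactly zero. Like subtracting the mean,
 * this only adds a constant, which the evaluation cannot see. */
enum { E1 = 4 }; /* assumes A1 = 0, H1 = 7, ..., H8 = 63 */

void anchor_pst(int pst[64], int anchor_sq)
{
    int shift = pst[anchor_sq];
    for (int i = 0; i < 64; i++)
        pst[i] -= shift;
}
```

Call it as `anchor_pst(king_pst, E1)` after every tuning pass; the anchor square then reads 0 by construction.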
[Account deleted]