tuning for the uninformed

Discussion of chess software programming and technical issues.

flok

tuning for the uninformed

Post by flok » Thu Nov 23, 2017 10:37 am

Hi,

What about the following method for tuning evaluation:

- run through, say, 1000 positions
- generate a (semi-)random set of tuning parameters
- get their eval from your program
- get the eval from the must-be-good program (I'm using Stockfish)
- compare those 1000 pairs using https://en.wikipedia.org/wiki/Pearson_c ... oefficient
- coefficient > previous_coefficient? Then remember this set of tuning parameters
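
For concreteness, a minimal Python sketch of that loop; my_eval(), stockfish_eval() and random_params() are hypothetical stand-ins for the engine's own code:

Code: Select all

def pearson(xs, ys):
    # Pearson correlation coefficient of two equal-length lists.
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    vx = sum((x - mx) ** 2 for x in xs)
    vy = sum((y - my) ** 2 for y in ys)
    return cov / (vx * vy) ** 0.5

def tune(positions, iterations=1000):
    target = [stockfish_eval(p) for p in positions]   # fixed reference evals
    best_params, best_coeff = None, -1.0
    for _ in range(iterations):
        params = random_params()                      # (semi-)random parameter set
        mine = [my_eval(p, params) for p in positions]
        coeff = pearson(mine, target)
        if coeff > best_coeff:                        # remember the best set so far
            best_params, best_coeff = params, coeff
    return best_params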

What do you think?

Henk
Posts: 5103
Joined: Mon May 27, 2013 8:31 am

Re: tuning for the uninformed

Post by Henk » Thu Nov 23, 2017 11:49 am

flok wrote:Hi,

What about the following method for tuning evaluation:

- run through, say, 1000 positions
- generate a (semi-)random set of tuning parameters
- get their eval from your program
- get the eval from the must-be-good program (I'm using Stockfish)
- compare those 1000 pairs using https://en.wikipedia.org/wiki/Pearson_c ... oefficient
- coefficient > previous_coefficient? Then remember this set of tuning parameters

What do you think?
Something is better than nothing.

Are these 1000 positions representative of every position that may occur when playing games? Probably not.

Don't you want to create something different from Stockfish? I already have a copy of Stockfish running on my machine.

But something is better than nothing (Skipper is nothing). Or not: why waste time on something that won't make it? Maybe only for generating better ideas.
Last edited by Henk on Thu Nov 23, 2017 11:52 am, edited 2 times in total.

Look
Posts: 110
Joined: Thu Jun 05, 2014 12:14 pm
Location: Iran

Re: tuning for the uninformed

Post by Look » Thu Nov 23, 2017 11:49 am

flok wrote:Hi,

What about the following method for tuning evaluation:

- run through, say, 1000 positions
- generate a (semi-)random set of tuning parameters
- get their eval from your program
- get the eval from the must-be-good program (I'm using Stockfish)
- compare those 1000 pairs using https://en.wikipedia.org/wiki/Pearson_c ... oefficient
- coefficient > previous_coefficient? Then remember this set of tuning parameters

What do you think?
You can use a genetic algorithm too. See this link, for instance:

https://www.doc.ic.ac.uk/~nd/surprise_9 ... icle1.html
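
For illustration, a toy Python sketch of that idea; fitness() is a hypothetical score to maximize (for example the Pearson coefficient from the first post):

Code: Select all

import random

N_PARAMS = 10  # number of eval parameters being tuned

def evolve(pop_size=50, generations=100):
    # Random initial population of parameter vectors.
    pop = [[random.uniform(-100, 100) for _ in range(N_PARAMS)]
           for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)       # rank by fitness
        pop = pop[:pop_size // 2]                 # selection: keep the top half
        while len(pop) < pop_size:
            a, b = random.sample(pop[:pop_size // 2], 2)
            cut = random.randrange(N_PARAMS)
            child = a[:cut] + b[cut:]             # one-point crossover
            i = random.randrange(N_PARAMS)
            child[i] += random.gauss(0, 5)        # small Gaussian mutation
            pop.append(child)
    return max(pop, key=fitness)
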
Mehdi Amini
www.my-c-codes.com/

Farewell.

Henk
Posts: 5103
Joined: Mon May 27, 2013 8:31 am

Re: tuning for the uninformed

Post by Henk » Thu Nov 23, 2017 11:53 am

Yes, he is doing hill climbing now, or perhaps only sampling.

flok

Re: tuning for the uninformed

Post by flok » Thu Nov 23, 2017 12:26 pm

Only sampling.

Henk
Posts: 5103
Joined: Mon May 27, 2013 8:31 am

Re: tuning for the uninformed

Post by Henk » Thu Nov 23, 2017 12:44 pm

Sampling is OK for tuning two parameters or so. Otherwise it is terribly slow.
Oh wait, if you tune it badly it generalizes better, so it will do better at evaluating unseen positions. Somewhere there is an optimum between bad tuning and 'overtuning'.

Another constraint is that tuning should not cost too much time, so better to use hill climbing with restarts. Or a genetic algorithm (with restarts?)
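
A rough Python sketch of hill climbing with restarts; error() (the tuning error to minimize) and random_params() are hypothetical:

Code: Select all

def hill_climb(restarts=10, step=1):
    best, best_err = None, float('inf')
    for _ in range(restarts):
        params = random_params()              # restart from a fresh random point
        err = error(params)
        improved = True
        while improved:
            improved = False
            for i in range(len(params)):
                for delta in (step, -step):   # nudge each parameter both ways
                    params[i] += delta
                    e = error(params)
                    if e < err:
                        err, improved = e, True
                    else:
                        params[i] -= delta    # revert
        if err < best_err:
            best, best_err = list(params), err
    return best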

brtzsnr
Posts: 426
Joined: Fri Jan 16, 2015 3:02 pm

Re: tuning for the uninformed

Post by brtzsnr » Thu Nov 23, 2017 2:07 pm

I tried Pearson correlation in the past, but it's not a good measure. Right now my experiments with evolving the eval parameters use Texel tuning on a set of 100K positions (so I can train ~6000 models per day).

In terms of error rate, I got very close to the hand-tuned model (stable version). If you want to see the fully automatically trained eval with almost zero human intervention, check [1] or [2]. The evolved version is 100 Elo weaker than the stable version of Zurichess. I need to add back the pawn cache, though.


[1] https://bitbucket.org/brtzsnr/zurichess ... ew-default
[2] https://bitbucket.org/brtzsnr/zurichess ... ew-default

flok

Re: tuning for the uninformed

Post by flok » Thu Nov 23, 2017 2:16 pm

Yeah, I looked at Texel tuning but it looked rather complicated. This Pearson approach was implemented in an hour during the morning commute :D

flok

Re: tuning for the uninformed

Post by flok » Thu Nov 23, 2017 5:34 pm

If anyone is willing to explain the Texel tuning method that would be great!

So far I understand that I have to let it play (well, run QS + eval on FENs) millions of games and then do something with the evaluation value. But what? I don't understand the wiki explanation.

sandermvdb
Posts: 122
Joined: Sat Jan 28, 2017 12:29 pm
Location: The Netherlands

Re: tuning for the uninformed

Post by sandermvdb » Thu Nov 23, 2017 6:47 pm

flok wrote:If anyone is willing to explain the Texel tuning method that would be great!

So far I understand that I have to let it play (well, run QS + eval on FENs) millions of games and then do something with the evaluation value. But what? I don't understand the wiki explanation.
The basic idea is pretty simple: calculate the error of the evaluation compared to the actual outcomes of the positions. Lower a particular evaluation parameter and check whether the error has improved; if not, raise the parameter; if that doesn't improve it either, keep the original value. Do this for all parameters, repeating until you have reached the lowest error.
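
In the usual Texel formulation the quiescence score is first mapped to an expected game result with a sigmoid, and the error is the mean squared difference against the actual results. A minimal Python sketch, assuming a hypothetical qsearch_eval() and a list of (fen, result) pairs where result is 1.0 / 0.5 / 0.0 from White's point of view:

Code: Select all

K = 1.13  # scaling constant, fitted beforehand to minimize the error

def predicted_result(score_cp):
    # Map a centipawn score to an expected game result in [0, 1].
    return 1.0 / (1.0 + 10.0 ** (-K * score_cp / 400.0))

def avg_error(params, positions):
    return sum((result - predicted_result(qsearch_eval(fen, params))) ** 2
               for fen, result in positions) / len(positions)

def texel_tune(params, positions):
    best = avg_error(params, positions)
    improved = True
    while improved:
        improved = False
        for i in range(len(params)):
            for delta in (-1, 1):              # lower, then raise, by one unit
                params[i] += delta
                e = avg_error(params, positions)
                if e < best:
                    best, improved = e, True
                    break                      # keep the improving change
                params[i] -= delta             # revert
    return params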
