
Re: tuning for the uninformed

Posted: Thu Nov 23, 2017 8:21 pm
by Henk
I don't know how to get an estimate of the value of a position without using other engines.

Maybe use your own engine and re-search it at shallow depth. But who is to say that gives a correct estimate if your current evaluation is bad?

You cannot evaluate every test position manually, for that is too much work.

Re: tuning for the uninformed

Posted: Thu Nov 23, 2017 8:26 pm
by sandermvdb
Henk wrote:I don't know how to get an estimate of the value of a position without using other engines.

Maybe use your own engine and re-search it at shallow depth. But who is to say that gives a correct estimate if your current evaluation is bad?

You cannot evaluate every test position manually, for that is too much work.
quiet-labeled.epd contains the outcome of every position :)

Re: tuning for the uninformed

Posted: Thu Nov 23, 2017 8:36 pm
by Henk
sandermvdb wrote:
Henk wrote:I don't know how to get an estimate of the value of a position without using other engines.

Maybe use your own engine and re-search it at shallow depth. But who is to say that gives a correct estimate if your current evaluation is bad?

You cannot evaluate every test position manually, for that is too much work.
quiet-labeled.epd contains the outcome of every position :)
I don't understand. What is quiet-labeled.epd?

Re: tuning for the uninformed

Posted: Thu Nov 23, 2017 8:43 pm
by sandermvdb
Henk wrote:
sandermvdb wrote:
Henk wrote: Maybe use your own engine and re-search it at shallow depth. But who is to say that gives a correct estimate if your current evaluation is bad?

You cannot evaluate every test position manually, for that is too much work.
quiet-labeled.epd contains the outcome of every position :)
I don't understand. What is quiet-labeled.epd?
Sorry, that is one of the test sets by Alexandru Mosoi, the author of Zurichess. It contains quiet positions, including the outcome of the game each position came from.
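
For the curious, a minimal loader sketch (not from the thread). It assumes the common EPD convention where the game result is stored in a c9 opcode at the end of each line, e.g. <FEN fields> c9 "1-0"; the class and method names are made up for illustration.

Code:

import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Paths;
import java.util.HashMap;
import java.util.Map;

public class EpdLoader {
	// maps each position's FEN to the outcome of the game it was taken from:
	// 1.0 = white win, 0.5 = draw, 0.0 = black win
	public static Map<String, Double> load(String path) throws IOException {
		Map<String, Double> fens = new HashMap<>();
		for (String line : Files.readAllLines(Paths.get(path))) {
			int c9 = line.indexOf("c9 ");
			if (c9 < 0) continue; // skip lines without a result opcode
			String fen = line.substring(0, c9).trim();
			String result = line.substring(c9); // e.g. c9 "1/2-1/2";
			double outcome;
			if (result.contains("1/2")) outcome = 0.5;
			else if (result.contains("1-0")) outcome = 1.0;
			else outcome = 0.0;
			fens.put(fen, outcome);
		}
		return fens;
	}
}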

Re: tuning for the uninformed

Posted: Fri Nov 24, 2017 10:15 am
by flok
sandermvdb wrote:The basic idea is pretty simple: calculate the error of the evaluation when it is compared to the actual outcome of the positions.
What do you mean by that?
Do you mean the following:
- FEN as input
- calc move with an eval value
- calc eval of the move that should have been played
- compare these two (how? percentage difference? or what?)

Re: tuning for the uninformed

Posted: Fri Nov 24, 2017 10:30 am
by sandermvdb
flok wrote:
sandermvdb wrote:The basic idea is pretty simple: calculate the error of the evaluation when it is compared to the actual outcome of the positions.
What do you mean by that?
Do you mean the following:
- FEN as input
- calc move with an eval value
- calc eval of the move that should have been played
- compare these two (how? percentage difference? or what?)
No. I mean:
- FEN as input
- calculate the evaluation
- calculate the error: compare the evaluation score with the actual outcome. If the evaluation says that white has a big advantage but black wins -> big error. The exact formula is described on the CPW (Chess Programming Wiki). This is my (pseudo) implementation, where K = 1.3:

Code:

import java.util.Map;
import java.util.Map.Entry;

// fens contains all positions, mapping each FEN to the outcome of its game
// (1.0 = white win, 0.5 = draw, 0.0 = black win)
private Map<String, Double> fens;

public double calculateTotalError() {
	double totalError = 0;
	for (Entry<String, Double> entry : fens.entrySet()) {
		ChessBoard cb = new ChessBoard(entry.getKey());
		// squared difference between the actual outcome and the evaluation
		// score mapped to a 0..1 winning probability
		totalError += Math.pow(entry.getValue() - calculateSigmoid(Eval.calculateScore(cb)), 2);
	}
	totalError /= fens.size(); // mean squared error over all positions
	return totalError;
}

public double calculateSigmoid(int score) {
	// K = 1.3 scales a centipawn score to an expected game outcome
	return 1 / (1 + Math.pow(10, -1.3 * score / 400));
}
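
As a side note (not part of the post above): K is usually fixed before tuning starts, by picking the value that gives the lowest error for the current, untuned evaluation, and it is then left alone while the parameters are tuned. A rough sketch, assuming a variant of calculateTotalError that takes K as an argument instead of hard-coding 1.3:

Code:

// scan a range of K values and keep the one with the lowest error;
// the range and step size here are arbitrary illustrations
public double findBestK() {
	double bestK = 0.5;
	double bestError = Double.MAX_VALUE;
	for (double k = 0.5; k <= 2.0; k += 0.01) {
		double error = calculateTotalError(k); // assumed K-parameterized variant
		if (error < bestError) {
			bestError = error;
			bestK = k;
		}
	}
	return bestK;
}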

Re: tuning for the uninformed

Posted: Sat Nov 25, 2017 12:04 am
by CheckersGuy
sandermvdb wrote:
flok wrote:If anyone is willing to explain the Texel tuning method that would be great!

So far I understand that I have to let it play (well, run QS + eval on FENs) millions of games and then do something with the evaluation value. But what? I don't understand the wiki explanation.
The basic idea is pretty simple: calculate the error of the evaluation when it is compared to the actual outcome of the positions. Lower a particular evaluation parameter and check whether the error has improved; if not, raise the parameter, and if that doesn't improve it either, keep the original value. Do this for all parameters until you have reached the lowest error.
This is the local search algorithm, but I would assume that it is better to run some gradient-based algorithm first (maybe gradient descent or Gauss-Newton). Then, once the error no longer changes by much, I would switch to local search.

I am going to implement Texel tuning this week and see what I get :P
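
For reference, a minimal sketch of the local search described in the quote above, in the spirit of the pseudo-code on the CPW. The params array and the applyParams helper (which pushes the values into the evaluation) are made-up names, and calculateTotalError is the function shown earlier in the thread:

Code:

public int[] localOptimize(int[] initialGuess) {
	int[] bestParams = initialGuess.clone();
	applyParams(bestParams); // hypothetical: copy the values into the eval
	double bestError = calculateTotalError();
	boolean improved = true;
	while (improved) {
		improved = false;
		for (int i = 0; i < bestParams.length; i++) {
			// first try lowering this parameter by one
			bestParams[i] -= 1;
			applyParams(bestParams);
			double newError = calculateTotalError();
			if (newError < bestError) {
				bestError = newError;
				improved = true;
				continue;
			}
			// no improvement: try one above the original value
			bestParams[i] += 2;
			applyParams(bestParams);
			newError = calculateTotalError();
			if (newError < bestError) {
				bestError = newError;
				improved = true;
			} else {
				// neither direction helped, restore the original value
				bestParams[i] -= 1;
				applyParams(bestParams);
			}
		}
	}
	return bestParams;
}

Each pass re-evaluates every position once per parameter tried, which is why the gradient-based start mentioned above can save a lot of time.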

Re: tuning for the uninformed

Posted: Sat Nov 25, 2017 12:30 pm
by Henk
None of this works if the search space has a great many local optima and only very few global optima that you are interested in. But simulated annealing takes too long.
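
To make the reference concrete: a bare-bones simulated-annealing sketch over the same error function (not anyone's actual implementation; applyParams and calculateTotalError are the helpers assumed earlier in the thread, and the schedule constants are arbitrary):

Code:

import java.util.Random;

public int[] anneal(int[] params, double startTemp, double cooling, int steps) {
	Random rnd = new Random();
	applyParams(params);
	double error = calculateTotalError();
	double temp = startTemp;
	for (int step = 0; step < steps; step++) {
		// propose a random +/-1 change to a random parameter
		int i = rnd.nextInt(params.length);
		int delta = rnd.nextBoolean() ? 1 : -1;
		params[i] += delta;
		applyParams(params);
		double newError = calculateTotalError();
		// always accept improvements; accept regressions with probability
		// e^(-dE/T), which is what lets the search escape local optima
		if (newError < error || rnd.nextDouble() < Math.exp((error - newError) / temp)) {
			error = newError;
		} else {
			params[i] -= delta; // reject: undo the change
			applyParams(params);
		}
		temp *= cooling; // geometric cooling schedule
	}
	return params;
}

Every step, accepted or not, costs a full pass over the position set, which is presumably why it is too slow here.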

Re: tuning for the uninformed

Posted: Sat Nov 25, 2017 2:58 pm
by CheckersGuy
Henk wrote:None of this works if the search space has a great many local optima and only very few global optima that you are interested in. But simulated annealing takes too long.
Local search and any other practical algorithm to minimize the error will end up in a local optimum.

Re: tuning for the uninformed

Posted: Sat Nov 25, 2017 3:48 pm
by Henk
CheckersGuy wrote:
Henk wrote:None of this works if the search space has a great many local optima and only very few global optima that you are interested in. But simulated annealing takes too long.
Local search and any other practical algorithm to minimize the error will end up in a local optimum.
Wasn't it the case that if you optimize enough parameters, you won't get trapped in a local optimum? I can't remember.