tuning for the uninformed

Re: tuning for the uninformed
Post by Henk

Henk wrote: I don't know how to get an estimate of the value of a position without using other engines. Maybe use your own engine and re-search it at shallow depth. But who is to say that gives a correct estimate if your current evaluation is bad? You cannot evaluate every test position manually; that is too much work.

sandermvdb wrote: quiet-labeled.epd contains the outcome of every position.

Don't understand. What is quiet-labeled.epd?
Re: tuning for the uninformed
Post by sandermvdb
Henk wrote: Don't understand. What is quiet-labeled.epd?

Sorry, that is one of the testsets by Alexandru Mosoi, the author of Zurichess. It contains quiet positions including the outcome of the game each position came from.
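If I remember the format correctly, it is a plain EPD file with one position per line and the game result stored as an EPD operation, so the lines look something like this (the exact opcode may differ):

Code: Select all
r1bq1rk1/pppp1ppp/2n2n2/2b1p3/2B1P3/2NP1N2/PPP2PPP/R1BQ1RK1 w - - c9 "1-0";
2k1r3/ppp2ppp/2n5/8/8/2P2N2/PP3PPP/2KR4 w - - c9 "1/2-1/2";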
Re: tuning for the uninformed
Post by flok
sandermvdb wrote: The basic idea is pretty simple: calculate the error of the evaluation when it is compared to the actual outcome of the positions.

What do you mean by that? Do you mean the following:
- fen as input
- calculate a move with an eval value
- calculate the eval of the move that should have been played
- compare these two (how? a percentage difference? or what?)
Re: tuning for the uninformed
Post by sandermvdb
flok wrote: What do you mean by that? Do you mean the following:
- fen as input
- calculate a move with an eval value
- calculate the eval of the move that should have been played
- compare these two (how? a percentage difference? or what?)

No. I mean:
- fen as input
- calculate the evaluation
- calculate the error: compare the evaluation score with the actual outcome of the game. If the evaluation says that white has a big advantage but black wins -> big error. The exact formula is described on the CPW. This is my (pseudo) implementation, where K = 1.3:
Code: Select all
import java.util.Map;
import java.util.Map.Entry;

// fens maps every position (as a FEN string) to the game outcome:
// 1.0 = white win, 0.5 = draw, 0.0 = black win
private Map<String, Double> fens;
private static double K = 1.3; // scaling constant of the sigmoid

public double calculateTotalError() {
    double totalError = 0;
    for (Entry<String, Double> entry : fens.entrySet()) {
        ChessBoard cb = new ChessBoard(entry.getKey());
        // squared difference between actual outcome and predicted score
        totalError += Math.pow(entry.getValue() - calculateSigmoid(Eval.calculateScore(cb)), 2);
    }
    totalError /= fens.size(); // mean squared error over all positions
    return totalError;
}

public double calculateSigmoid(int score) {
    // maps a centipawn score to an expected game result in [0, 1]
    return 1 / (1 + Math.pow(10, -K * score / 400));
}
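By the way, as far as I remember the CPW also suggests first picking the K that gives the lowest error for your untuned evaluation, and only then tuning the parameters. Something like this (a rough sketch; the range and step size are just guesses):

Code: Select all
// Rough sketch: scan for the K that minimizes the error with the
// current (untuned) evaluation; range and step size are just guesses.
public double findBestK() {
    double bestK = K, bestError = Double.MAX_VALUE;
    for (double k = 0.5; k <= 2.0; k += 0.01) {
        K = k;
        double error = calculateTotalError();
        if (error < bestError) {
            bestError = error;
            bestK = k;
        }
    }
    K = bestK; // keep the best value
    return bestK;
}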
Re: tuning for the uninformed
Post by CheckersGuy
flok wrote: If anyone is willing to explain the Texel tuning method that would be great! So far I understand I have to let it play (well, run QS + eval on FENs) millions of games and then do something with the evaluation-value. But what? I don't understand the wiki explanation. I am going to implement texel tuning this week and see what I get.

sandermvdb wrote: The basic idea is pretty simple: calculate the error of the evaluation when it is compared to the actual outcome of the positions. Lower a particular evaluation parameter and check if the error has improved; if not, raise the parameter; if that does not improve it either, keep the original value. Do this for all parameters until you have reached the lowest error.

This is the local search algorithm, but I would assume it is better to run some gradient-based algorithm first (maybe gradient descent or Gauss-Newton). Then, once the error no longer changes much, I would switch to local search.
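For the uninformed reading along, sandermvdb's description translates into something like this (just a sketch; it assumes the evaluation reads its weights from a params array and reuses the calculateTotalError() from above):

Code: Select all
// Local search as described above: for each parameter try -1, then +1,
// keep whichever lowers the error, and repeat until a full pass over
// all parameters brings no improvement. Assumes Eval reads params.
public void localSearch(int[] params) {
    double bestError = calculateTotalError();
    boolean improved = true;
    while (improved) {
        improved = false;
        for (int i = 0; i < params.length; i++) {
            params[i]--;                    // first try lowering by one
            double newError = calculateTotalError();
            if (newError < bestError) {
                bestError = newError;
                improved = true;
                continue;
            }
            params[i] += 2;                 // then try raising by one
            newError = calculateTotalError();
            if (newError < bestError) {
                bestError = newError;
                improved = true;
            } else {
                params[i]--;                // restore the original value
            }
        }
    }
}

For the gradient-based variant: the derivative of the sigmoid above is ln(10) * K / 400 * sigmoid * (1 - sigmoid), so with a linear evaluation the gradient of the mean squared error can be computed cheaply alongside the error itself.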
Re: tuning for the uninformed
Post by Henk
None of this works if the search space has a great many local optima and only very few global optima that you are interested in. But simulated annealing takes too long.
Re: tuning for the uninformed
Post by CheckersGuy
Henk wrote: None of this works if the search space has a great many local optima and only very few global optima that you are interested in. But simulated annealing takes too long.

Local search, and any other practical algorithm to minimize the error, will end up in a local optimum.
Re: tuning for the uninformed
Post by Henk
CheckersGuy wrote: Local search, and any other practical algorithm to minimize the error, will end up in a local optimum.

Wasn't it the case that if you optimize enough parameters you won't get trapped in a local optimum? I can't remember.
Re: tuning for the uninformed
Post by Robert Pope
Henk wrote: Wasn't it the case that if you optimize enough parameters you won't get trapped in a local optimum? I can't remember.

I don't think there is ever a guarantee of that without additional information about the domain. There's always the chance of a global minimum that is far from the rest of the "good" solutions, one that you will never hit except by very good luck.

Consider trying to find the minimum of this function (without actually knowing the function ahead of time):

y = -x, for 4999 < x <= 5000
y = x^2, for all other x

There is basically no chance someone is going to find the global minimum at x = 5000. Then add another 100 dimensions for chess tuning.
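To make that concrete, here is the counterexample as code (nothing engine-specific):

Code: Select all
// Any descent method started outside (4999, 5000] slides down the
// parabola toward x = 0 and never sees the width-1 sliver that
// contains the global minimum f(5000) = -5000.
static double f(double x) {
    return (x > 4999 && x <= 5000) ? -x : x * x;
}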
Re: tuning for the uninformed
Post by Álvaro Begué (RuyDos)
Robert Pope wrote: I don't think there is ever a guarantee of that without additional information about the domain. There's always the chance of a global minimum that is far from the rest of the "good" solutions, one that you will never hit except by very good luck. Consider trying to find the minimum of this function (without actually knowing the function ahead of time): y = -x for 4999 < x <= 5000, y = x^2 for all other x. There is basically no chance someone is going to find the global minimum at x = 5000. Then add another 100 dimensions for chess tuning.

That function is not qualitatively similar to the loss function being minimized in chess tuning, and adding many more dimensions actually ameliorates the problem of getting stuck in local minima.

The fear of getting stuck in a local minimum is likely overblown. If your evaluation function is linear, the corresponding optimization problem is convex, which implies there is only one critical point, and it is the global minimum. If your evaluation function is something like a deep neural network with ReLU activations, the minimization problem is not convex and there are gazillions of critical points, but because of the high dimensionality most of them are saddle points and not minima (a critical point is only a minimum if the curvature is positive along every dimension, which becomes ever less likely as the dimension grows). There are results from solid-state physics (something about randomized polynomials) indicating that all the local minima have values contained in a narrow band above the true minimum, so it doesn't really matter which one you find.