
Re: Yet another parameter tuner using optuna framework

Posted: Wed Sep 16, 2020 7:05 pm
by chrisw
Joerg Oster wrote:
Wed Sep 16, 2020 6:33 pm
chrisw wrote:
Wed Sep 16, 2020 3:30 pm
Joerg Oster wrote:
Wed Sep 16, 2020 2:30 pm
Ferdy wrote:
Wed Sep 16, 2020 1:17 pm
3. I'm not sure if parameter changes to a 'quick match' change all five parameters at a time, or just one?
Sorry I don't understand the question.
I guess he wants to know if the tuner changes all parameters at once or one by one for a new trial.
In the document they refer to this as relational sampling and independent sampling.
Yup, pretty much what I meant
If I understand it correctly, Optuna will eventually do both and also a mixture of both, to find out about the correlation of the parameters.
The graphs have labels “importance of parameter”, so I intuited that to mean the process was zeroing in on where to focus, so to speak. Floundering a bit for words because it’s not clear how or if.

Re: Yet another parameter tuner using optuna framework

Posted: Wed Sep 16, 2020 10:05 pm
by Ferdy
Joerg Oster wrote:
Wed Sep 16, 2020 2:30 pm
Ferdy wrote:
Wed Sep 16, 2020 1:17 pm
3. I'm not sure if parameter changes to a 'quick match' change all five parameters at a time, or just one?
Sorry I don't understand the question.
I guess he wants to know if the tuner changes all parameters at once or one by one for a new trial.
In the document they refer to this as relational sampling and independent sampling.
We can ask the optimizer either one by one or all at once. I did all at once. https://github.com/fsmosca/Optuna-Game- ... ner.py#L65

Example.

Code: Select all

pawn_value = trial.suggest_int('pawn_value', 50, 150, 2)
The arguments are name='pawn_value', low=50, high=150, step=2. The step of 2 controls the spacing between candidate values, so only 50, 52, ..., 150 are tried.

If I want more parameters before making a trial run, I can ask for the rook value, for example.

Code: Select all

rook_value = trial.suggest_int('rook_value', 400, 600, 4)
Then in the next trial or game match, the test engine will take the suggested pawn_value and rook_value above, while the base engine will take the current best parameter values; in the case of trial 0, the base engine takes the default or initial parameter values.
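Put together, the two suggestions above form a single Optuna objective. A minimal sketch, where the toy score formula is only a stand-in for the result of the actual cutechess match against the base engine:

```python
# Sketch of an objective combining the two suggestions above.
# The score formula is a placeholder for the real match result.
def objective(trial):
    # Both parameters are suggested in the same trial,
    # i.e. a new trial changes them together.
    pawn_value = trial.suggest_int('pawn_value', 50, 150, 2)
    rook_value = trial.suggest_int('rook_value', 400, 600, 4)
    # Placeholder score in [0, 1], peaking at pawn=100, rook=500.
    return 1.0 - ((pawn_value - 100) ** 2 + (rook_value - 500) ** 2) / 1e6
```

Optuna would then maximize this with something like `study.optimize(objective, n_trials=...)`, sending the match score back to the sampler after every trial.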

Re: Yet another parameter tuner using optuna framework

Posted: Wed Sep 16, 2020 10:24 pm
by Ferdy
Joerg Oster wrote:
Wed Sep 16, 2020 2:53 pm
Ferdy wrote:
Wed Sep 16, 2020 1:17 pm
2. Uses cutechess tournament mode, with output the result of a quick match (25 rounds?)
Yes, the number of games is settable; the more games, the better.
The number of games per trial is probably dependent on the sensitivity of the parameters, no?
24 games seems very small in any case.
Some parameters may not need that many games per trial, so 24 can be high or low. Other conditions matter too, like the time control used, the number of parameters being tuned, and the threshold for accepting a new best value. To save optimization time, one can start with a lower number of games. What matters is that, after the trials, the parameter values suggested by the optimizer improve over the default values.
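The noise from a small match can be made concrete: treating each game as an independent win/loss, the standard error of the score after n games is roughly sqrt(p(1-p)/n) (draws, ignored in this rough model, shrink it somewhat):

```python
import math

def score_standard_error(p, n):
    """Approximate standard error of a match score after n games,
    modeling each game as an independent Bernoulli trial with
    success probability p (draws are ignored in this rough model)."""
    return math.sqrt(p * (1.0 - p) / n)

# After 24 games a true 55% scorer is hard to tell from 50%:
se_24 = score_standard_error(0.55, 24)      # about 0.10, i.e. +/- 10%
se_1000 = score_standard_error(0.55, 1000)  # about 0.016
```

This is why a 55% threshold over 24 games is still a fairly noisy acceptance test, and why more games per trial tighten it.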

Re: Yet another parameter tuner using optuna framework

Posted: Wed Sep 16, 2020 10:37 pm
by Ferdy
chrisw wrote:
Wed Sep 16, 2020 3:41 pm
Looks very good. I still need to wrap my head around the graphs, will try later.

One thing, from your Github (and I don't want to flood you with suggested mods, I know how irritating that can be):
Second, in order for the parameter values to be considered the best and replace the old best, they have to defeat the old best by more than 0.55, or a 55% score. Normally this threshold is only 0.5, or a 50% score.
I tried something like this in the past (except with random kicks to the parameters, not smart ones as you are doing with Optuna): where the parameters P1 gave a better result than P0, the idea was to move P fractionally towards P1, rather than adopting P1 in full.
This could work in the case of the close 50-55% range. Just an idea, but I expect you are full of ideas already!
That can be tried for sure. I have not yet looked at the code inside the optimizer; it might already compute a gradient and automatically adjust a learning rate depending on the result of the match that we send it after every trial.
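The fractional-move idea could be sketched like this (the function name, parameter names, and the alpha value are all illustrative, not part of the actual tuner):

```python
# Sketch of the fractional update: when candidate params beat the
# current best, move only a fraction alpha toward the candidate
# instead of adopting it outright.
def blend_params(best, candidate, alpha=0.3):
    """Return best moved a fraction alpha toward candidate,
    rounded back to integer parameter values."""
    return {name: round(best[name] + alpha * (candidate[name] - best[name]))
            for name in best}

best = {'pawn_value': 100, 'rook_value': 500}
candidate = {'pawn_value': 120, 'rook_value': 460}
new_best = blend_params(best, candidate)
# -> {'pawn_value': 106, 'rook_value': 488}
```

A smaller alpha damps the noise of a single close match result, at the cost of slower movement toward genuinely better values.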

Re: Yet another parameter tuner using optuna framework

Posted: Wed Sep 16, 2020 10:53 pm
by Ferdy
No4b wrote:
Wed Sep 16, 2020 6:58 pm
Very interesting tool!
I will definitely try to work with it.

Am I understanding correctly that the engine itself obtains parameters via the command line, e.g. as "QueenValueOp=975"?
Not at the moment (I will add it later); you need to modify the code around here https://github.com/fsmosca/Optuna-Game- ... er.py#L197
And I need only one copy of the engine in the folder; the tuner will just execute two instances of it with different parameter sets?
Correct.

The engine can be anywhere; you can also specify an absolute path.

Code: Select all

python tuner.py --engine c:/chess/engines/enginefoldername/engineexefilename ...
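Since the parameters are delivered through cutechess-cli, here is a sketch of how per-engine UCI options can be passed on its command line via option.<Name>=<value> tokens (the helper function and the example values are illustrative, not the tuner's actual code):

```python
# Build the -engine argument list for cutechess-cli; cutechess-cli
# accepts option.<Name>=<value> tokens after -engine to set UCI
# options per engine. Names and paths below are illustrative.
def cutechess_engine_args(exe_path, name, options):
    args = ['-engine', 'cmd=' + exe_path, 'name=' + name]
    args += ['option.{}={}'.format(key, value)
             for key, value in options.items()]
    return args

args = cutechess_engine_args(
    'c:/chess/engines/enginefoldername/engineexefilename',
    'test',
    {'QueenValueOp': 975},
)
# -> ['-engine', 'cmd=c:/chess/engines/enginefoldername/engineexefilename',
#     'name=test', 'option.QueenValueOp=975']
```

This way a single engine binary can be started twice with different parameter sets, which is what the tuner does for the test and base engines.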

Re: Yet another parameter tuner using optuna framework

Posted: Wed Sep 16, 2020 11:01 pm
by Kiudee
Thanks Ferdy for making your tool available! I like some of the plots you output for the results.
I wanted to point out that Optuna uses tree-structured Parzen estimators as their model, which does not model interactions between parameters. Tools based on Gaussian processes (like the chess-tuning-tools or the tool released by thomasahle) take all interactions into account and thus are able to interpolate/extrapolate much more accurately.

Re: Yet another parameter tuner using optuna framework

Posted: Wed Sep 16, 2020 11:07 pm
by Ferdy
Kiudee wrote:
Wed Sep 16, 2020 11:01 pm
Thanks Ferdy for making your tool available! I like some of the plots you output for the results.
I wanted to point out that Optuna uses tree-structured parzen estimators as their model, which does not model interactions between parameters. Tools based on Gaussian processes (like the chess-tuning-tools or the tool released by thomasahle) take all interactions into account and thus are able to interpolate/extrapolate much more accurately.
Thanks for the info. The one from thomasahle takes a lot of memory; I tried it before. I have not yet tried your chess-tuning-tools. I will try it someday and compare it with Optuna.

Re: Yet another parameter tuner using optuna framework

Posted: Thu Sep 17, 2020 7:31 am
by mvanthoor
At some point in the future, I'll have to look into this, or other similar tools. Thanks :)

Re: Yet another parameter tuner using optuna framework

Posted: Thu Sep 17, 2020 8:37 am
by Joerg Oster
Ferdy wrote:
Wed Sep 16, 2020 11:07 pm
Kiudee wrote:
Wed Sep 16, 2020 11:01 pm
Thanks Ferdy for making your tool available! I like some of the plots you output for the results.
I wanted to point out that Optuna uses tree-structured parzen estimators as their model, which does not model interactions between parameters. Tools based on Gaussian processes (like the chess-tuning-tools or the tool released by thomasahle) take all interactions into account and thus are able to interpolate/extrapolate much more accurately.
Thanks for the info. The one from thomasahle takes a lot of memory; I tried it before. I have not yet tried your chess-tuning-tools. I will try it someday and compare it with Optuna.
Nevergrad might also be an interesting alternative.
It offers a wide variety of optimization methods,
and has a nice ask and tell interface.

Re: Yet another parameter tuner using optuna framework

Posted: Thu Sep 17, 2020 2:47 pm
by jdart
Interesting. I also found this software:

https://github.com/automl/HpBandSter

which seems kind of similar. I have tried this sort of thing before, especially for search parameters, but the problem I've found is that the effect of varying them can be quite small. So you are trying to find the optimum point on what is basically a very "flat" surface, and furthermore with a method that produces noisy objective measures. It is therefore hard to get convergence.