mcostalba wrote:
Thanks for your quick answer to the previous question. Here is the next one.

Rémi Coulom wrote:
More generally, you can try to be creative to reduce the dimensionality of the optimization problem.
Why don't you reduce the dimensionality yourself?

I mean: the user asks to tune p1,...,p8 and also sets an extra parameter, "dimensionality". Given dimensionality = 2, you tune two derived values c1 and c2 that are obtained from p1,...,p8, for instance (but here you can be much more creative than me) by a linear combination of p1,...,p8.

If you remember the ampli+bias idea that Joona and I reported, this is a kind of generalization of that idea.

What do you think?

P.S.: After many months of tuning SF, I have made up my mind that the secret of good tuning is the choice of the starting variables to tune. So mapping P1,...,Pn to C1,...,Ck and tuning the Ck could, if done properly, yield a much faster and better tune.
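To make the proposed mapping concrete, here is a minimal Python sketch (the matrix, the parameter values, and the numbers are all made up for illustration, and this is not CLOP's actual interface): a fixed, user-chosen matrix expands the k tuned values c1,...,ck into the engine parameters p1,...,p8, so the optimizer only ever sees k dimensions.

```python
import numpy as np

# Current engine values for p1..p8 (made-up numbers).
P0 = np.array([100.0, 80.0, 60.0, 40.0, 40.0, 60.0, 80.0, 100.0])

# Fixed, user-chosen basis: each row says how one tuned value c_i
# moves all eight p's. Row 0 adds a uniform offset ("bias"); row 1
# scales each parameter in proportion to its size ("ampli"), so the
# k = 2 case recovers the ampli+bias idea.
BASIS = np.array([
    np.ones(8),
    P0 / P0.mean(),
])

def combos_to_params(c):
    """Expand the k tuned values c1..ck into the engine parameters."""
    return P0 + BASIS.T @ np.asarray(c, dtype=float)

# The tuner only ever proposes (c1, c2); every candidate is expanded
# to a full parameter vector before the engine is launched.
print(combos_to_params([5.0, -2.0]))
```

Any basis works as long as it stays fixed during tuning; the creativity goes into choosing its rows.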
There are ways to do it. I am thinking about using sparse regression techniques to handle the high-dimensional case.

But nothing will completely replace the intelligence of the user. It is like building an evaluation function. You can try to use a universal function approximator, with the chess board as input and the evaluation as output, and optimize that approximator somehow. But no generic solution will work better than manually building domain-specific features.
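As an illustration of the sparse-regression idea mentioned above, here is a minimal sketch using scikit-learn's Lasso (the data, model, and threshold are synthetic stand-ins, not how CLOP works): an L1-penalized fit of a quadratic model drives the coefficients of irrelevant parameters to zero, singling out the few parameters and interactions that actually move the score.

```python
import numpy as np
from sklearn.linear_model import Lasso
from sklearn.preprocessing import PolynomialFeatures

rng = np.random.default_rng(0)
n_games, n_params = 500, 8
X = rng.uniform(-1.0, 1.0, size=(n_games, n_params))  # sampled p1..p8

# Stand-in for match results: real y would be noisy 0/0.5/1 game
# outcomes; in this synthetic score only p1, p2 and their product matter.
y = (0.5 + 0.2 * X[:, 0] - 0.1 * X[:, 1] ** 2 + 0.1 * X[:, 0] * X[:, 1]
     + rng.normal(0.0, 0.05, n_games))

# Quadratic features (p_i, p_i*p_j, p_i^2) with an L1 penalty.
poly = PolynomialFeatures(degree=2, include_bias=False)
Z = poly.fit_transform(X)
model = Lasso(alpha=0.01).fit(Z, y)

# The surviving nonzero coefficients point at the dimensions worth tuning.
names = poly.get_feature_names_out([f"p{i+1}" for i in range(n_params)])
for name, coef in zip(names, model.coef_):
    if abs(coef) > 1e-3:
        print(name, round(coef, 3))
```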
Rémi