obvious/easy move

Discussion of chess software programming and technical issues.

Moderators: hgm, Rebel, chrisw

jd1
Posts: 269
Joined: Wed Oct 24, 2012 2:07 am

Re: obvious/easy move - final results

Post by jd1 »

Pio wrote:
If I had a working chess engine, I would confirm a change only after running lots of games at many different really fast time controls (or low node counts). To see whether the change might also be good at long time controls, you could for example extrapolate your findings from the many short-time-control tests by fitting a plausible function and doing a maximum likelihood estimation of its parameters. That would give you an indication of whether the change will scale and work at long time controls.
I wonder whether you can fit a function to predict long TC performance. What type of function would you use?

I ask because I do something similar. I generally run 2,000-4,000 games at 5'+0.1'; if the result is positive or close, or I think the change logically should be good, I run another 2,000-4,000 games at 10'+0.2'. If it is still good I test at 30'+0.5'. Then I sum the results up, giving more weight to the longer time controls. So each change I commit gets about 8,000 games at varying time controls.
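
Concretely, the weighted summing could look something like this Python sketch. The per-TC results and the weights below are made-up placeholders (the posts only say longer time controls get more weight), and the Elo conversion is the standard logistic formula:

import math

def elo_diff(wins, losses, draws):
    # Elo difference implied by a match score (standard logistic model).
    games = wins + losses + draws
    score = (wins + 0.5 * draws) / games
    score = min(max(score, 1e-6), 1.0 - 1e-6)  # keep the log finite
    return -400.0 * math.log10(1.0 / score - 1.0)

# (wins, losses, draws, weight) per time control -- hypothetical results
results = {
    "5'+0.1'":  (1050, 980, 1970, 1.0),
    "10'+0.2'": (1020, 990, 1990, 2.0),
    "30'+0.5'": ( 510, 490, 1000, 3.0),
}

total_weight = sum(w for (_, _, _, w) in results.values())
combined = sum(w * elo_diff(wn, ls, dr)
               for (wn, ls, dr, w) in results.values()) / total_weight
for tc, (wn, ls, dr, w) in results.items():
    print("%9s: %+6.1f Elo (weight %.0f)" % (tc, elo_diff(wn, ls, dr), w))
print(" combined: %+6.1f Elo" % combined)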

Jerry
bob
Posts: 20943
Joined: Mon Feb 27, 2006 7:30 pm
Location: Birmingham, AL

Re: obvious/easy move - final results

Post by bob »

JBNielsen wrote:
bob wrote:
JBNielsen wrote: I have made my own version of play-easy-moves-fast.

I simply calculate how much the moves' scores, on average, differ from the score of the best move.

To spice things up I have added these rules for each move:
1) if the difference is more than 75 I add 100 to the difference
2) if the difference is more than 25 I add 25 to the difference

If the average difference is more than 500, it is restricted to 500.

This will result in an average difference between 0 and 500.
And this allows my engine to end its calculation when between 90% and 10% of the time for the move is used, scaled gradually with the difference.

I do not look at the result of previous iterations, but I only phase in this play-easy-moves-fast gradually over the first iterations.

If I have a mate score I use a default average difference of 50.

I have not tried to measure any Elo effect of this.
But it looks fine when I watch Dabbaba play matches.

PS. Sorry for returning to the subject in this thread :wink:
How do you know "the average move scores" that you compare to the score of the first move??? Static eval? Root move ordering only?
As I wrote in an earlier post here ( http://www.talkchess.com/forum/viewtop ... 5&t=46605 )
My engine is probably not written in the standard way. I started it 18 years ago, based on what I had read and my own intuition and logic.
When a white move at the root is refuted by a given black move, I store the score of the move that gave the cut-off.
If you re-read your post, you will realize it really doesn't explain anything at all. Alpha/beta is, after all, built on the idea of finding a move to refute another move.

Hence my question.

If you do not use alpha/beta, it makes little sense to suggest ordering ideas to those who do, since the idea doesn't translate. In alpha/beta, we do not know the "score" of a move that refutes another, just that it is "good enough" (and it doesn't even need to be the best move, either).
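
For readers following along, that point can be seen in a bare-bones fail-soft alpha/beta. This is a generic textbook sketch over a toy game tree (nested lists; leaves are exact scores for the side to move), not code from any particular engine: when a move fails high we return immediately, so all we learn is a lower bound, never the refutation's exact score.

INF = 10 ** 9

def alphabeta(node, alpha, beta):
    # Negamax with alpha/beta. Leaves are exact static scores; interior
    # nodes are lists of child nodes.
    if isinstance(node, (int, float)):
        return node
    best = -INF
    for child in node:
        score = -alphabeta(child, -beta, -alpha)
        best = max(best, score)
        alpha = max(alpha, best)
        if alpha >= beta:
            # Fail high: "good enough" to refute -- we stop here, so the
            # remaining children are never searched and the true value of
            # this node is only bounded from below by 'best'.
            return best
    return best

# Three root moves. While searching the second, the reply with leaf
# score -9 produces a cutoff and the leaf -4 is never visited.
tree = [[-3, -7], [-1, -9, -4], [-6]]
print(alphabeta(tree, -INF, INF))   # prints -6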
JBNielsen
Posts: 267
Joined: Thu Jul 07, 2011 10:31 pm
Location: Denmark

Re: obvious/easy move - final results

Post by JBNielsen »

bob wrote:
If you re-read your post, you will realize it really doesn't explain anything at all. Alpha/beta is, after all, built on the idea of finding a move to refute another move.

Hence my question.

If you do not use alpha/beta, it makes little sense to suggest ordering ideas to those who do, since the idea doesn't translate. In alpha/beta, we do not know the "score" of a move that refutes another, just that it is "good enough" (and it doesn't even need to be the best move, either).
I use alpha/beta.

Take the position after 1.d4 Nf6 2.Bg5 Ne4.
Here white has a lot of moves that will lose the bishop. These moves will be refuted by a black move that returns a score of about -300.
In my calculation the average difference will then probably be around 350, and this will cause the engine to move faster. And that is right, because a lot of moves are easily refuted and we have probably reached the usual depth.

The scores are not exact, but exactness is not always the most important thing in a chess program (speed is often more important).

My engine has a low NPS, about 80,000 nodes/sec in the opening on my 5-year-old laptop. I spend a lot of time on move ordering - much of it on SEE - and in the position above this brings not only escaping moves for the Bg5 but also covering moves like Nf3, Nh3, Qc1, f4 and h4 high on the list.

Perhaps it is this good move ordering that produces usable scores for my make-easy-moves-fast (and for the move ordering at the root covered in the thread http://www.talkchess.com/forum/viewtopi ... 16&t=46605 ).
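
For anyone who wants to experiment with the idea, here is a minimal Python sketch of the scheme as described above. The bump thresholds (diff > 75 adds 100, diff > 25 adds 25), the cap at 500, the mate-score default of 50 and the 90%-10% range come from the posts; treating the two bump rules as mutually exclusive and interpolating the stop time linearly are assumptions, since the description only says "gradually".

def average_difference(root_scores, mate_score=False):
    # Average adjusted gap (in centipawns) between the best root move's
    # score and every move's score.
    if mate_score:
        return 50                      # default used on mate scores
    best = max(root_scores)
    total = 0.0
    for s in root_scores:
        diff = best - s
        if diff > 75:
            diff += 100                # rule 1
        elif diff > 25:
            diff += 25                 # rule 2
        total += diff
    return min(total / len(root_scores), 500.0)   # keep within 0..500

def stop_fraction(avg_diff):
    # Assumed linear mapping: gap 0 -> use 90% of the move time,
    # gap 500 -> stop after only 10% (an "easy" move).
    return 0.9 - 0.8 * (avg_diff / 500.0)

# The 1.d4 Nf6 2.Bg5 Ne4 example: best move about +20, ten alternatives
# that drop the bishop and come back around -280.
scores = [20] + [-280] * 10
avg = average_difference(scores)       # about 364, close to the 350 above
print(avg, stop_fraction(avg))         # stop near 32% of the move time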
Pio
Posts: 335
Joined: Sat Feb 25, 2012 10:42 pm
Location: Stockholm

Re: obvious/easy move - final results

Post by Pio »

Hi Jerry!

I think this thread, which Jesús started, is really good: http://www.talkchess.com/forum/viewtopi ... 58&t=44889

Your question was this:
Pio wrote:
If I had a working chess engine, I would confirm a change only after running lots of games at many different really fast time controls (or low node counts). To see whether the change might also be good at long time controls, you could for example extrapolate your findings from the many short-time-control tests by fitting a plausible function and doing a maximum likelihood estimation of its parameters. That would give you an indication of whether the change will scale and work at long time controls.

I wonder whether you can fit a function to predict long TC performance. What type of function would you use?

I ask because I do something similar. I generally run 2,000-4,000 games at 5'+0.1'; if the result is positive or close, or I think the change logically should be good, I run another 2,000-4,000 games at 10'+0.2'. If it is still good I test at 30'+0.5'. Then I sum the results up, giving more weight to the longer time controls. So each change I commit gets about 8,000 games at varying time controls.

Jerry
I would try out this formula:
Rating(depth) = C1 + C2 * log(depth + C3)

Now, why would I use C1, C2 and C3 as the undetermined constants, and why do I use the log function?

The C1 constant is there because, even when the log part of the equation is zero, you still want to have some rating.

The C2 constant determines how well your program scales; it has to do with the branching factor and also with what type of logarithmic function you have chosen.

The logarithmic function is there because you can see a tree of depth m*n either as a tree of depth n in which every node is represented by an m-ply search, or as a tree of depth m in which every node is represented by an n-ply search.

It does not matter which logarithmic function you use (you will probably use log2 or the natural logarithm ln), since the choice only contributes a constant factor that is absorbed into the C2 constant.

The C3 constant is there because you need some term to correct for the extreme case where depth goes to zero. C3 should be positive so that the logarithm does not go to -infinity. If you use some sort of pruning in the last pruningDepth plies, I think you should correct the formula to Rating(depth) = C1 + C2 * log((depth - pruningDepth) + C3), but then the formula is only valid for depth - pruningDepth > 0. The original formula will still be valid for depth > pruningDepth, since pruningDepth will simply be incorporated into the C3 term.

Of course you will get a better fit the more constants you use; the goal is to find a function with as few unknowns as possible that fits the real data as closely as possible. You could use least-squares fitting if that is easier, or you could use maximum likelihood estimation. I think you are absolutely right to weight the long games a little more, since if you do not evaluate or search anything at all the moves are just random, and the result of the games will be like flipping a coin.
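
As a concrete starting point, here is a small least-squares sketch of that fit using scipy.optimize.curve_fit. The (depth, Elo) points are invented placeholders; substitute measurements from your own fast self-play tests. C3 is bounded positive so the logarithm stays finite near depth zero, as argued above.

import numpy as np
from scipy.optimize import curve_fit

def rating(depth, c1, c2, c3):
    # Rating(depth) = C1 + C2 * log(depth + C3)
    return c1 + c2 * np.log(depth + c3)

# Invented placeholder measurements: Elo of fixed-depth self-play levels.
depths = np.array([2, 3, 4, 5, 6, 7, 8], dtype=float)
elo    = np.array([1200, 1480, 1680, 1830, 1950, 2050, 2140], dtype=float)

popt, _ = curve_fit(rating, depths, elo,
                    p0=(1000.0, 400.0, 1.0),
                    bounds=([-np.inf, 0.0, 1e-6], np.inf))
c1, c2, c3 = popt
print("C1=%.0f  C2=%.0f  C3=%.2f" % (c1, c2, c3))
print("extrapolated Elo at depth 20: %.0f" % rating(20.0, *popt))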

I do not have any data to support what I have written; everything is just a guess. You will see whether the guess fits the real data.

Good luck!