George Tsavdaris wrote:
Michael Sherwin wrote:
I am changing the formula to this for the next run.
Code: Select all
double delta;
double ddepth;
int r;                                    /* null-move depth reduction */
if (depth < 6) r = 3;
else {
    delta  = max(h->eval - beta, 1.0);    /* gap above beta, clamped to at least 1 */
    ddepth = (double)depth;
    r = (int)(0.18 * ddepth + 3.1 + log(delta) / 5.0);
}
Well, the above one was terrible.
The following one is doing fantastic!
Code: Select all
double delta = max(h->eval - beta, 1.0);  /* gap above beta, clamped to at least 1 */
double ddepth = (double)depth;
int r = (int)(0.25 * ddepth + 2.5 + log(delta) / 5.0);  /* null-move depth reduction */
RomiChess96 - Olithinkwin32 : 11.5/13 11-1-1 (101=111111111) 88% +346
Don't anyone dare say 'WOWIE!'!

OK, can I dare to ask for an explanation of all this?
I mean, what exactly do these two lines do?
double delta = max(h->eval - beta, 1.0);
int r = (int)(0.25 * ddepth + 2.5 + log(delta)/5.0);
And how did you find them?
Also, Dann mentioned that he derived his formula for Stockfish almost by hand. How did he do that, exactly?
And what should one do to arrive at it in a more scientific way (regression analysis, interpolation, etc., but how exactly)?
Also, what does Dann's method do, more or less? How does it work, I mean, and what does it do differently? (Note that I'm not a programmer, but I'm trying, at a turtle's pace, to get to that side.)
Dann should answer this; however, I will try.
double delta = max(h->eval - beta, 1.0);
delta measures the gap between the position's evaluation and beta. beta is the best score the side to move can already count on achieving. If h->eval - beta is 1.0 or more, delta is that gap; if the gap is smaller than that (zero, negative, or a fraction), the max() clamps delta to exactly 1.0, so log(delta) is never negative.
int r = (int)(0.25 * ddepth + 2.5 + log(delta)/5.0);
r is the amount by which the depth of the null-move search is reduced. ddepth is just the remaining depth type cast to a double. log is the natural logarithm, the one found in nature; IIRC it can be used to describe things like the spiral of a seashell. It grows very slowly: the larger delta is, the more it contributes to r, but each successive increase adds less. The log curve bends downward from a linear increase and effectively caps the contribution from delta.
So say that depth is 8:
0.25 * 8 + 2.5 + log(delta)/5.0 = 2 + 2.5 + <0.5 (for a close position) = <5.0, which truncates to r = 4, which is good.
2 + 2.5 + >0.5 (for a big advantage) = >5.0, which truncates to r = 5 (or more if delta is really big).
I assume that Dann had a target range for r in mind as a starting point. Maybe it was a base of 3 plus 1 for every 6 ply of depth (the 0.18 per ply, with the extra 0.1 in the 3.1 being a simple correction for the 0.18), plus more for a large delta. Since a linear increase in delta is simply too aggressive, the natural logarithm is the natural place to start.
Use logic to guess at the best proportions and then narrow it down from there. If you have a cluster to test on, do interpolation or regression until you find the best values for your program.