double r = 0.18 * ddepth + 3.1 + log(delta)/5.0;
Balancing this equation is perhaps the next challenge? I need to think about this...
I just drew a curve by hand, then set out a few points and formed the equation. I suspect that an iterative process to improve the coefficients, or an even better equation, will give better results.
I also drop off depth in integral plies (because I was not sure it was valid to drop off fractional plies). It might be a good idea to scale more smoothly still by multiplying by OnePly before rounding.
I doubt that the full 150 Elo will hold, but I expect at least 100 Elo will. Then again, my test was not very extensive.
Scales null move smoothly. I get about +150 Elo so far in my testing. How about yours?
I never got that idea to work, but I did test it quite a few years ago when depths were not as significant. In fact, Ernst and I developed the "adaptive null-move idea" independently. John Stanback had suggested the basics for the idea around 1995 or so. Initially it was a question of R=1 or R=2, but eventually became R=2 or R=3. I used to have fractional plies, and tried a gradual and continuous (smooth) reduction as it approached the frontier nodes, but never found something that worked any better than the simple R=3 closer to the root, R=2 closer to the leaves.
With Stockfish 1.6, I get a branching factor of well under 2. It clearly adds strength. I guess that when the idea is polished, it will be a lot better. I have other experiments that do something similar (well, actually the opposite as far as pruning goes -- so I guessed it would help) but I thought a quick demonstration of the idea might benefit the chess community.
Note that I do not have the 1.62 sources, which have additional corrections.
I expect the curve can be greatly improved; it was a first-hack eyeball guesstimate.
Ahaaaa, you changed the indentation of the functions!
In a project where more than one person works, changing the indentation of a whole file is never a good idea: it becomes difficult to spot the real differences, for no gain at all.
Anyhow, thanks! I will test it for sure.
Sorry, an evil habit so that *I* can understand the code better and won't be tricked by incorrect indentation.
I have a diff that ignores whitespace, so working that way never affects me.
double r = 0.18 * ddepth + 3.1 + log(delta)/5.0;
Balancing this equation is perhaps the next challenge? I need to think about this...
I think it may also be interesting to balance this equation by replacing approximateEval = quick_evaluate(pos) with something closer to the real evaluation, or with the real evaluation itself.
Even if evaluate(pos) is too expensive to calculate here, it is not expensive to calculate the average difference between quick_evaluate() and evaluate() for the cases where evaluate() is calculated anyway, and later to use that average for a better approximation.
Yes, there are many possible tweaks; calculating the real evaluation and using it could also be an option to test.
But, as usual, it is far quicker to get an idea than to test it. I think what is needed most here is testing... ideas are not a problem.
If the first cut, as posted by Dann, is shown to work, then, if we really want to converge quickly on an optimal solution for the benefit of the community, we should use Bob's cluster... if he agrees.
I haven't looked, but what is the "range"? I used to use 3 down to 2, but when I added the qsearch check and check-evasion code, I went with 3 everywhere. If they reduce by more than 3, I could see a possible benefit...
The formula is
double r = 0.18 * ddepth + 3.1 + log(delta)/5.0;
delta is at least 1, so the last component is not negative. If I understand correctly, it means that they reduce at least 2.1 plies plus 18% of the remaining depth (the number in the formula is 3.1 and not 2.1, but there is one ply that you always reduce, so I do not count it).
Uri