Smooth scaling Stockfish

Discussion of anything and everything relating to chess playing software and machines.

Moderator: Ras

DomLeste
Posts: 221
Joined: Thu Mar 09, 2006 4:53 pm

Re: Little test with both stockfish 1.6s versions!

Post by DomLeste »

Thanks for your contribution, Dann! Going out in style for this decade :)

Are you telling us that the first Stockfish 1.6s version is probably better than the 2nd?

For others here...

1st version (32-bit): "Scales null move smoothly", 328 KB (336,384 bytes)

2nd update (32-bit): "slightly smoother scaling", 326 KB (333,824 bytes)
Insanity: doing the same thing over and over again and expecting different results.
Albert Einstein
Dann Corbit
Posts: 12777
Joined: Wed Mar 08, 2006 8:57 pm
Location: Redmond, WA USA

Re: Little test with both stockfish 1.6s versions!

Post by Dann Corbit »

DomLeste wrote:Thanks for your contribution, Dann! Going out in style for this decade :)

Are you telling us that the first Stockfish 1.6s version is probably better than the 2nd?

For others here...

1st version (32-bit): "Scales null move smoothly", 328 KB (336,384 bytes)

2nd update (32-bit): "slightly smoother scaling", 326 KB (333,824 bytes)
I don't have enough games to prove that the first version is better, but that is the trend I see, and it appears that some others have seen the same thing.
Dann Corbit
Posts: 12777
Joined: Wed Mar 08, 2006 8:57 pm
Location: Redmond, WA USA

Re: Little test with both stockfish 1.6s versions!

Post by Dann Corbit »

Milos wrote:
Dann Corbit wrote:I suspect that it is too much. It also may be a good idea to adjust the constants. I actually just eyeballed the equation constants from a curve I drew by hand and calculated them by inverting a small matrix. I guess that a great improvement can be had by adjusting the formula in a rational and scientific manner.
log(delta)/5 is too strong. My feeling is that something like sqrt(delta)/10 works better (with delta limited to 100 max).
Delta is a huge value in Tord's program because it is in centipawns.

I do not know that the logarithm is the right squeezer.
I suggested cube root in my original post as an alternative.
This factor also is a rich area for possible experimentation.

I guess that when a final formula is found, it will be good to create a simple table that stores the constants, or to use a cubic spline fit.
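
(As a concrete sketch of the table idea, not actual Stockfish code: the same formula can be tabulated once at startup. The index bounds of 64 and 512 below are arbitrary assumptions.)

Code: Select all

#include <algorithm>
#include <cmath>

// Hypothetical table-based version of the smooth reduction formula.
// Indices use the same units as the engine (depth in OnePly units,
// delta in centipawns); the bounds 64 and 512 are just assumptions.
const int MaxDepthIndex = 64;
const int MaxDeltaIndex = 512;

static int ReductionTable[MaxDepthIndex][MaxDeltaIndex];

void init_reduction_table()
{
    for (int depth = 0; depth < MaxDepthIndex; depth++)
        for (int delta = 0; delta < MaxDeltaIndex; delta++)
        {
            double d = std::max(double(delta), 1.0);
            double r = 0.18 * depth + 3.1 + std::log(d) / 5.0;
            r = std::min(r, double(depth));            // cap the reduction at the remaining depth
            ReductionTable[depth][delta] = int(r);
        }
}

// In the search, a lookup then replaces the floating-point math:
//   int R = ReductionTable[std::min(depth, MaxDepthIndex - 1)]
//                         [std::max(0, std::min(approximateEval - beta, MaxDeltaIndex - 1))];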
zullil
Posts: 6442
Joined: Tue Jan 09, 2007 12:31 am
Location: PA USA
Full name: Louis Zulli

Re: Little test with both stockfish 1.6s versions!

Post by zullil »

Dann Corbit wrote:
First version (I think it turns out better):

Code: Select all

#ifdef SMOOTH_REDUCTION
		double delta = approximateEval - beta;
		delta = max(delta, 1.0);
		double ddepth = double(depth);
		double r = 0.18 * ddepth + 3.1 + log(delta)/5.0;
		r = r > ddepth ? ddepth : r;
		int R = int(r);
#else
        // Null move dynamic reduction based on depth
        int R = (depth >= 5 * OnePly ? 4 : 3);

        // Null move dynamic reduction based on value
        if (approximateEval - beta > PawnValueMidgame)
            R++;
#endif
		nullValue = -search(pos, ss, -(beta-1), depth-R*OnePly, ply+1, false, threadID);
Second version:

Code: Select all

#ifdef SMOOTH_REDUCTION
		double delta = approximateEval - beta;
		delta = max(delta, 1.0);
		double ddepth = double(depth);
		double r = 0.18 * ddepth + 3.1 + log(delta)/5.0;
		r = r > ddepth ? ddepth : r;
		int R = int(r * (int)OnePly);
#else
        // Null move dynamic reduction based on depth
        int R = (depth >= 5 * OnePly ? 4 : 3);

        // Null move dynamic reduction based on value
        if (approximateEval - beta > PawnValueMidgame)
            R++;
		R *= OnePly;
#endif
		nullValue = -search(pos, ss, -(beta-1), depth-R, ply+1, false, threadID);

The difference is that R will always be truncated to integral plies in the first instance, and to integral half-plies in the second instance. This is because OnePly is actually defined as 2.

The net result is that the second version has stair-steps of 1/2 ply at a time and the first version has stair-steps of one ply at a time. There is more area under the 1/2-ply stair-steps, and so more trimming occurs. I suspect that it is too much. It also may be a good idea to adjust the constants. I actually just eyeballed the equation constants from a curve I drew by hand and calculated them by inverting a small matrix. I guess that a great improvement can be had by adjusting the formula in a rational and scientific manner.
Thanks! Now I understand.
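
(For anyone else following along, here is a quick standalone sketch, not engine code, of the truncation difference Dann describes, assuming OnePly == 2 as in Stockfish 1.6:)

Code: Select all

#include <cstdio>

// Version 1 truncates r to whole plies and then multiplies by OnePly;
// version 2 multiplies first and truncates to half plies, so on
// average it reduces a little more.
int main()
{
    const int OnePly = 2;
    for (double r = 3.0; r <= 4.5; r += 0.25)
    {
        int R1 = int(r) * OnePly;   // first version:  depth - R*OnePly
        int R2 = int(r * OnePly);   // second version: depth - R
        std::printf("r = %.2f -> v1 trims %d half-plies, v2 trims %d half-plies\n",
                    r, R1, R2);
    }
    return 0;
}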
zullil
Posts: 6442
Joined: Tue Jan 09, 2007 12:31 am
Location: PA USA
Full name: Louis Zulli

Re: Little test with both stockfish 1.6s versions!

Post by zullil »

Dann Corbit wrote:
First version (I think it turns out better):

Code: Select all

#ifdef SMOOTH_REDUCTION
		double delta = approximateEval - beta;
		delta = max(delta, 1.0);
		double ddepth = double(depth);
		double r = 0.18 * ddepth + 3.1 + log(delta)/5.0;
		r = r > ddepth ? ddepth : r;
		int R = int(r);
#else
        // Null move dynamic reduction based on depth
        int R = (depth >= 5 * OnePly ? 4 : 3);

        // Null move dynamic reduction based on value
        if (approximateEval - beta > PawnValueMidgame)
            R++;
#endif
		nullValue = -search(pos, ss, -(beta-1), depth-R*OnePly, ply+1, false, threadID);
Second version:

Code: Select all

#ifdef SMOOTH_REDUCTION
		double delta = approximateEval - beta;
		delta = max(delta, 1.0);
		double ddepth = double(depth);
		double r = 0.18 * ddepth + 3.1 + log(delta)/5.0;
		r = r > ddepth ? ddepth : r;
		int R = int(r * (int)OnePly);
#else
        // Null move dynamic reduction based on depth
        int R = (depth >= 5 * OnePly ? 4 : 3);

        // Null move dynamic reduction based on value
        if (approximateEval - beta > PawnValueMidgame)
            R++;
		R *= OnePly;
#endif
		nullValue = -search(pos, ss, -(beta-1), depth-R, ply+1, false, threadID);


max should be Max, right?
Milos
Posts: 4190
Joined: Wed Nov 25, 2009 1:47 am

Re: Little test with both stockfish 1.6s versions!

Post by Milos »

Dann Corbit wrote:Delta is a huge value in Tord's program because it is in centipawns.

I do not know that the logarithm is the right squeezer.
I suggested cube root in my original post as an alternative.
This factor also is a rich area for possible experimentation.

I guess that when a final formula is found, it will be good to create a simple table that stores the constants, or to use a cubic spline fit.
You didn't understand me. When I said log is too strong, I meant in the region where delta is smaller than half a centipawn.
Just look at the graph:
[graph image]
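
(If you want to reproduce the comparison yourself, a tiny sketch that tabulates the two candidate squeezers, the current log(delta)/5 against the proposed sqrt(delta)/10 with delta capped at 100:)

Code: Select all

#include <algorithm>
#include <cmath>
#include <cstdio>

// Compare the two proposed "squeezers" over a range of delta values
// (in centipawns).  This is not the original graph; it is just a quick
// way to see how strongly each function compresses small deltas.
int main()
{
    const double deltas[] = { 1, 2, 5, 10, 25, 50, 100, 200, 400 };
    std::printf("%6s %12s %22s\n", "delta", "log(d)/5", "sqrt(min(d,100))/10");
    for (double d : deltas)
        std::printf("%6.0f %12.3f %22.3f\n",
                    d, std::log(d) / 5.0, std::sqrt(std::min(d, 100.0)) / 10.0);
    return 0;
}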
Dann Corbit
Posts: 12777
Joined: Wed Mar 08, 2006 8:57 pm
Location: Redmond, WA USA

Re: Little test with both stockfish 1.6s versions!

Post by Dann Corbit »

zullil wrote:
Dann Corbit wrote:
First version (I think it turns out better):

Code: Select all

#ifdef SMOOTH_REDUCTION
		double delta = approximateEval - beta;
		delta = max(delta, 1.0);
		double ddepth = double(depth);
		double r = 0.18 * ddepth + 3.1 + log(delta)/5.0;
		r = r > ddepth ? ddepth : r;
		int R = int(r);
#else
        // Null move dynamic reduction based on depth
        int R = (depth >= 5 * OnePly ? 4 : 3);

        // Null move dynamic reduction based on value
        if (approximateEval - beta > PawnValueMidgame)
            R++;
#endif
		nullValue = -search(pos, ss, -(beta-1), depth-R*OnePly, ply+1, false, threadID);
Second version:

Code: Select all

#ifdef SMOOTH_REDUCTION
		double delta = approximateEval - beta;
		delta = max(delta, 1.0);
		double ddepth = double(depth);
		double r = 0.18 * ddepth + 3.1 + log(delta)/5.0;
		r = r > ddepth ? ddepth : r;
		int R = int(r * (int)OnePly);
#else
        // Null move dynamic reduction based on depth
        int R = (depth >= 5 * OnePly ? 4 : 3);

        // Null move dynamic reduction based on value
        if (approximateEval - beta > PawnValueMidgame)
            R++;
		R *= OnePly;
#endif
		nullValue = -search(pos, ss, -(beta-1), depth-R, ply+1, false, threadID);


max should be Max, right?
Some C++ systems will not have a max(a,b) macro defined.

So there is another macro in misc.h which is intended to be portable:

misc.h ( 40): #define Max(x, y) (((x) < (y))? (y) : (x))

So, indeed, the choice of Max() over max() is better.

We could include the C++ header <algorithm>, which has the following:
template<class T> const T& max(const T& a, const T& b);
template<class T, class Compare>
const T& max(const T& a, const T& b, Compare comp);

However, older C++ compilers might want <algorithm.h>, and C++ compliance is a bit more spotty than C compliance in many cases. So it is safer to declare our own macro with a case difference.
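
(A standalone sketch, not from the engine source, showing that the misc.h-style macro and std::max from <algorithm> give the same clamp; the only real difference is portability on older compilers:)

Code: Select all

#include <algorithm>
#include <cstdio>

#define Max(x, y) (((x) < (y))? (y) : (x))    // same definition as in misc.h

int main()
{
    double delta = -37.0;                     // e.g. approximateEval - beta
    double a = Max(delta, 1.0);               // portable macro version
    double b = std::max(delta, 1.0);          // <algorithm> version
    std::printf("Max macro: %.1f   std::max: %.1f\n", a, b);
    return 0;
}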
User avatar
hgm
Posts: 28354
Joined: Fri Mar 10, 2006 10:06 am
Location: Amsterdam
Full name: H G Muller

Re: Little test with both stockfish 1.6s versions!

Post by hgm »

This does not feel right. You reduce more when you are further ahead. I would expect that to be a fundamentally flawed strategy. Allowing less depth is then almost a guarantee that you will never earn back the material. You might as well prune the branch, rather than reduce it.

The whole idea of recursive null-move pruning is already that you reduce more when you are further ahead, because you can afford to ignore more threats or captures by the opponent (each null move giving an extra reduction). So increasing the null-move reduction should have a similar effect as giving extra reduction based on the current eval. But with null move you actually test dynamically whether you can afford the reduction, and if the opponent (although heavily behind) can play a number of very dangerous threats (checks, or attacks on your Queen) that might conceivably allow him to get even, the reduction evaporates, and you will get the fail low. Reducing based on the static eval is much more dangerous.

If increasing the reduction when you are further ahead helps, it just means your base reduction was not large enough.
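
(A back-of-the-envelope illustration of how those per-null-move reductions stack up in a subtree, assuming the usual R of about 3 plies per null move; none of this is engine code:)

Code: Select all

#include <cstdio>

// Each null move deeper in the line adds roughly another R plies of
// reduction, which is why a side that is far above beta ends up
// searching the opponent's refutation attempts much more shallowly.
int main()
{
    const int R = 3;                          // typical per-null-move reduction (plies)
    for (int nulls = 1; nulls <= 3; nulls++)
        std::printf("%d null move(s) in the line -> roughly %d plies of reduction\n",
                    nulls, nulls * R);
    return 0;
}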
Dann Corbit
Posts: 12777
Joined: Wed Mar 08, 2006 8:57 pm
Location: Redmond, WA USA

Re: Little test with both stockfish 1.6s versions!

Post by Dann Corbit »

hgm wrote:This does not feel right. You reduce more when you are further ahead. I would expect that to be a fundamentally flawed strategy. Allowing less depth is then almost a guarantee that you will never earn back the material. You might as well prune the branch, rather than reduce it.

The whole idea of recursive null-move pruning is already that you reduce more when you are further ahead, because you can afford to ignore more threats or captures by the opponent (each null move giving an extra reduction). So increasing the null-move reduction should have a similar effect as giving extra reduction based on the current eval. But with null move you actually test dynamically whether you can afford the reduction, and if the opponent (although heavily behind) can play a number of very dangerous threats (checks, or attacks on your Queen) that might conceivably allow him to get even, the reduction evaporates, and you will get the fail low. Reducing based on the static eval is much more dangerous.

If increasing the reduction when you are further ahead helps, it just means your base reduction was not large enough.
I think it more fully mirrors how humans play.
When you see a huge advantage or a huge disadvantage, you do not have to study the move as carefully.

It is when moves are about even that you really have to ponder over them.
User avatar
hgm
Posts: 28354
Joined: Fri Mar 10, 2006 10:06 am
Location: Amsterdam
Full name: H G Muller

Re: Little test with both stockfish 1.6s versions!

Post by hgm »

But this is already what recursive null-move does. If you are at beta+7 you can ignore the capture of two of your minors (earning a 6-ply reduction). If you are at beta+0.5 you cannot even ignore the capture of (or a threat against) a Pawn.

The difference between humans and computers is that the latter are stupid. Humans do not evaluate a position by wood-counting, but will recognize (in O(1) time) the potential of a position. Can I conceivably lose a Rook (because there is an unbreakable pin against it)? Can I conceivably be checkmated? Computers (at least with the standard evaluations we use) are completely oblivious to this. They might only give a few dozen centipawns of mobility penalty for a Rook that is obviously doomed to a human. Therefore computers always have to verify the idea that they are strongly ahead, by proving that the opponent cannot get even, even when we do nothing (i.e. null-move).

Pruning just based on eval is a very inferior method compared to null-move pruning. Your formula makes it go back in that direction. I would even expect a dependence of the opposite sign to work better: reduce less (per null move) if you are strongly ahead, because in practice you will already reduce a lot in that case (since you do, on average, more null moves).
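
(Purely as an illustration of that last suggestion, not something hgm posted as code: an eval dependence of the opposite sign could look like the sketch below, where the 200-centipawn threshold and the size of the adjustment are arbitrary assumptions.)

Code: Select all

// Illustrative only: a null-move reduction that gets *smaller* when the
// side to move is far above beta, so each null move in a recursive
// chain trims a little less when strongly ahead.
int null_move_reduction(int depth, int approximateEval, int beta, int OnePly)
{
    int R = (depth >= 5 * OnePly ? 4 : 3);    // the usual depth-based reduction
    if (approximateEval - beta > 200)         // "strongly ahead" (assumed threshold)
        R--;                                  // reduce less, per the suggestion above
    return R * OnePly;                        // in the engine's half-ply depth units
}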