Smooth scaling Stockfish

Uri Blass
Posts: 10803
Joined: Thu Mar 09, 2006 12:37 am
Location: Tel-Aviv Israel

Re: Little test with both stockfish 1.6s versions!

Post by Uri Blass »

Dann Corbit wrote:
hgm wrote:This does not feel right. You reduce more when you are more ahead. I would expect that to be a fundamentally flawed strategy. Allowing less depth is then almost a guarantee that you will never earn back the material. You might as well prune the branch, rather than reduce it.

The whole idea of recursive null-move pruning is already that you reduce more when you are ahead more, because you can afford to ignore more threats or captures by the opponent (each null move giving an extra reduction). So increasing the null-move reduction should have a similar effect as giving extra reduction based on the current eval. But with null-move you actually test if you can afford the reduction dynamically, and if the opponent (although heavily behind) can play a number of very dangerous threats (checks, or attacks on your Queen) that might conceivably allow him to get even, the reduction evaporates, and you will get the fail low. Reducing based on the static eval is much more dangerous.

If increasing the reduction when you are more ahead helps, it just means your base reduction was not large enough.
I think it more fully mirrors how humans play.
When you see a huge advantage or huge disadvantage you do not have to study the move as carefully.

When moves are about even is when you have to really ponder over them.
I think that it is not the same as how humans think.


My opinion:

Analyzing for less time in most of the cases when you have a huge advantage or a huge disadvantage is logical, and it is a good reason for pruning when the remaining depth is small.

I see no reason for big reductions that are additional to pruning.

If the remaining depth is high, so that you do not want to prune without searching, then you can do an additional short search to get an exact score and decide based on that exact score whether to prune. The idea is that a big advantage that keeps going up for the side with the advantage is a good reason to prune, but if the big advantage goes down then it is more dangerous to prune, because it causes me to suspect that the big advantage is an illusion.

Uri
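
A minimal sketch of the scheme Uri describes, under invented assumptions: when the remaining depth is too high to prune on the static eval alone, run a short full-window search to get an exact score, and prune only if that score confirms a big advantage that is not shrinking. The Position type, shallowSearch routine, margin, and depth threshold are all hypothetical, not code from any engine:

```cpp
#include <algorithm>

// Hypothetical engine API, declared only so the sketch is self-contained.
struct Position;
const int INF = 30000;
int shallowSearch(Position& pos, int depth, int alpha, int beta);

// Prune only if a cheap confirmation search agrees we are well above beta
// AND the score has not shrunk since two plies ago; a shrinking advantage
// hints that the big score may be an illusion, as Uri argues.
bool mayPruneBigAdvantage(Position& pos, int depth, int beta,
                          int scoreTwoPliesAgo) {
    const int margin = 300;   // ~3 pawns, an invented threshold
    if (depth < 6)
        return false;         // shallow nodes: prune by other, cheaper means
    // Short full-window search at a fraction of the remaining depth,
    // giving an exact score rather than just a bound.
    int score = shallowSearch(pos, std::max(1, depth / 4), -INF, INF);
    return score >= beta + margin      // big advantage confirmed
        && score >= scoreTwoPliesAgo;  // and still going up
}
```
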
Uri Blass
Posts: 10803
Joined: Thu Mar 09, 2006 12:37 am
Location: Tel-Aviv Israel

Re: Little test with both stockfish 1.6s versions!

Post by Uri Blass »

hgm wrote:But this is already what recursive null-move does. If you are at beta+7 you ignore the capture of two of your minors (earning 6 ply reduction). If you are at beta+0.5 you cannot even ignore the capture of (or a threat against) a Pawn.

The difference between humans and computers is that the latter are stupid. Humans do not evaluate a position by wood-counting, but will recognize (in O(1) time) the potential of a position. Can I conceivably lose a Rook (because there is an unbreakable pin against it)? Can I conceivably be checkmated? Computers (at least with the standard evaluations we use) are completely oblivious to this. They might only give a few dozen centi-Pawn mobility penalty for a Rook that is obviously doomed to a human. Therefore computers always have to verify the idea that they are strongly ahead by proving that the opponent cannot get even, even when we do nothing (i.e. null-move).

Pruning just based on eval is a very inferior method compared to null-move pruning. Your formula makes it go back in that direction. I would even expect a dependence of the opposite sign to work better: reduce less (per null move) if you are strongly ahead. Because in practice you will reduce much in that case anyway (because you do on average more null moves).
I think that computers can do everything that humans can do with the right program.

If humans can understand that there is positional compensation and use this information not to prune then computers can also do it with the right program.

Uri
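
For concreteness, the recursive null-move pruning hgm is describing is conventionally implemented along these lines. This is a bare-bones negamax sketch with a hypothetical Position API; the fixed reduction R = 3 is just a common choice, not Stockfish's actual logic:

```cpp
// Hypothetical engine API, declared only so the sketch is self-contained.
struct Position {
    bool inCheck() const;
    void doNullMove();
    void undoNullMove();
};
int evaluate(const Position& pos);  // static eval, side to move's view

int search(Position& pos, int depth, int alpha, int beta, bool allowNull) {
    if (depth <= 0)
        return evaluate(pos);       // a real engine would enter quiescence

    // Null-move pruning: hand the opponent a free move. If we still fail
    // high at reduced depth, this node is almost certainly >= beta.
    const int R = 3;                // fixed, score-independent reduction
    if (allowNull && !pos.inCheck() && evaluate(pos) >= beta) {
        pos.doNullMove();
        int nullValue = -search(pos, depth - 1 - R, -beta, -beta + 1,
                                false);  // no two null moves in a row
        pos.undoNullMove();
        if (nullValue >= beta)
            return beta;            // prune: opponent could not get even
    }

    // ... normal move loop goes here; real moves re-enable allowNull, so a
    // side that is far ahead can keep passing deeper in the tree, and the
    // reductions compound exactly as hgm describes (beta+7 buys ~6 plies).
    return alpha;
}
```
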
Dann Corbit
Posts: 12777
Joined: Wed Mar 08, 2006 8:57 pm
Location: Redmond, WA USA

Re: Little test with both stockfish 1.6s versions!

Post by Dann Corbit »

Uri Blass wrote:
Dann Corbit wrote:
hgm wrote:This does not feel right. You reduce more when you are more ahead. I would expect that to be a fundamentally flawed strategy. Allowing less depth is then almost a guarantee that you will never earn back the material. You might as well prune the branch, rather than reduce it.

The whole idea of recursive null-move pruning is already that you reduce more when you are ahead more, because you can afford to ignore more threats or captures by the opponent (each null move giving an extra reduction). So increasing the null-move reduction should have a similar effect as giving extra reduction based on the current eval. But with null-move you actually test if you can afford the reduction dynamically, and if the opponent (although heavily behind) can play a number of very dangerous threats (checks, or attacks on your Queen) that might conceivably allow him to get even, the reduction evaporates, and you will get the fail low. Reducing based on the static eval is much more dangerous.

If increasing the reduction when you are more ahead helps, it just means your base reduction was not large enough.
I think it more fully mirrors how humans play.
When you see a huge advantage or huge disadvantage you do not have to study the move as carefully.

When moves are about even is when you have to really ponder over them.
I think that it is not the same as how humans think.


My opinion:

Analyzing for less time in most of the cases when you have a huge advantage or a huge disadvantage is logical, and it is a good reason for pruning when the remaining depth is small.

I see no reason for big reductions that are additional to pruning.

If the remaining depth is high, so that you do not want to prune without searching, then you can do an additional short search to get an exact score and decide based on that exact score whether to prune. The idea is that a big advantage that keeps going up for the side with the advantage is a good reason to prune, but if the big advantage goes down then it is more dangerous to prune, because it causes me to suspect that the big advantage is an illusion.

Uri
I think if I see a full queen advantage or full queen disadvantage I do not need to examine those lines as thoroughly as a one pawn advantage or disadvantage.

Recall that we are talking about differences from a move that we already have in the bank. And the reductions are not extreme.

Null move in itself is not theoretically sound the way alpha-beta is, but it seems to work. I think that heuristic examination will also prove or disprove my theory. It is just as easy to provide negative coefficients in my formulas!
;-)
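
Dann's formulas themselves are not quoted in this thread, so the sketch below is only an invented reconstruction of the general shape of a score-dependent null-move reduction; the base value, slope, and cap are illustrative, not his actual coefficients:

```cpp
#include <algorithm>

// Illustrative score-dependent null-move reduction: the reduction grows as
// the static eval climbs further above beta. All constants are invented.
int nullMoveReduction(int depth, int staticEval, int beta, int pawnValue) {
    int R = 3;                                       // conventional base
    if (staticEval > beta) {
        int extra = (staticEval - beta) / pawnValue; // +1 ply per pawn above beta
        R += std::min(extra, 3);                     // cap the extra reduction
    }
    // A negative coefficient, as Dann mentions, would simply subtract
    // `extra` instead, reducing less when far ahead (hgm's alternative).
    return std::min(R, depth - 1);                   // always leave >= 1 ply
}
```
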
hgm
Posts: 28354
Joined: Fri Mar 10, 2006 10:06 am
Location: Amsterdam
Full name: H G Muller

Re: Little test with both stockfish 1.6s versions!

Post by hgm »

Uri Blass wrote:I think that computers can do everything that humans can do with the right program.

If humans can understand that there is positional compensation and use this information not to prune then computers can also do it with the right program.
Of course. The point is that the evaluations we are used to (like the one Stockfish uses) are not "the right program" for this. They are too static and too instantaneous. You would need a much more future-directed "evaluation" to recognize the danger / potential of the position. And in practice, judging that statically has so far proved exceedingly difficult, and no one I know of has been able to make it competitive with a fully dynamic search.

So at the current state of the art the most efficient "evaluation" that is smart enough to reliably judge if it is safe to prune is a (reduced) null move search.
hgm
Posts: 28354
Joined: Fri Mar 10, 2006 10:06 am
Location: Amsterdam
Full name: H G Muller

Re: Little test with both stockfish 1.6s versions!

Post by hgm »

Dann Corbit wrote:I think if I see a full queen advantage or full queen disadvantage I do not need to examine those lines as thoroughly as a one pawn advantage or disadvantage.
But this is exactly what normal null-move pruning (= with score-independent reduction) already does. So it cannot be used as an argument to introduce score dependence in the reduction.
Dann Corbit
Posts: 12777
Joined: Wed Mar 08, 2006 8:57 pm
Location: Redmond, WA USA

Re: Little test with both stockfish 1.6s versions!

Post by Dann Corbit »

hgm wrote:
Dann Corbit wrote:I think if I see a full queen advantage or full queen disadvantage I do not need to examine those lines as thoroughly as a one pawn advantage or disadvantage.
But this is exactly what normal null-move pruning (= with score-independent reduction) already does. So it cannot be used as an argument to introduce score dependence in the reduction.
If I have an advantage of 2.5 pawns or if I have an advantage of 5000 pawns, the need to search is different. Hence, the reduction is different.

At least, that is how I see it. I can be wrong, as I often am, but so far the idea appears to offer some improvement. It may well be that I have things very backwards (as I am wont to do, being dyslexic and all). If so, then proper calculations using an appropriate curve should do better.
Dr.Ex
Posts: 196
Joined: Sun Jul 08, 2007 4:10 am

Re: Little test with both stockfish 1.6s versions!

Post by Dr.Ex »

Dann Corbit wrote:I think that he is right though. I am seeing the same thing. The newer version trims a bit more. I guess that it is too much.
I don't know. I have a Phenom II X4 940 processor. I like the slightly smoother version more, so I will stick with it for the time being. It achieves better results against Rybka with ponder off, and it is a tactical monster. It solved my personal test suite in 2 s on average, while the first version took 5.5 s on average.
Anyway, thanks for your efforts!
Dann Corbit
Posts: 12777
Joined: Wed Mar 08, 2006 8:57 pm
Location: Redmond, WA USA

Re: Little test with both stockfish 1.6s versions!

Post by Dann Corbit »

Dr.Ex wrote:
Dann Corbit wrote:I think that he is right though. I am seeing the same thing. The newer version trims a bit more. I guess that it is too much.
I don't know. I have a Phenom X II 940 Processor. I like slightly smoother version more, so I will stick with it for the time beeing. It achieves the better results against Rybka with ponder off and it is a tactical monster. It solved my personal test suite on average in 2 s while the first version took 5.5 s on average.
Anyway, thanks for your efforts!
The whole idea is just a springboard for better things to come. I expect that smart folks will come along and do some careful experiments and soon we will have pruning systems that give an average branching factor of 1.5 and with 8 CPU systems we will see depths of 50 plies in correspondence chess analysis.
bob
Posts: 20943
Joined: Mon Feb 27, 2006 7:30 pm
Location: Birmingham, AL

Re: Little test with both stockfish 1.6s versions!

Post by bob »

Dann Corbit wrote:
hgm wrote:This does not feel right. You reduce more when you are more ahead. I would expect that to be a fundamentally flawed strategy. Allowing less depth is then almost a guarantee that you will never earn back the material. You might as well prune the branch, rather than reduce it.

The whole idea of recursive null-move pruning is already that you reduce more when you are ahead more, because you can afford to ignore more threats or captures by the opponent (each null move giving an extra reduction). So increasing the null-move reduction should have a similar effect as giving extra reduction based on the current eval. But with null-move you actually test if you can afford the reduction dynamically, and if the opponent (although heavily behind) can play a number of very dangerous threats (checks, or attacks on your Queen) that might conceivably allow him to get even, the reduction evaporates, and you will get the fail low. Reducing based on the static eval is much more dangerous.

If increasing the reduction when you are more ahead helps, it just means your base reduction was not large enough.
I think it more fully mirrors how humans play.
When you see a huge advantage or huge disadvantage you do not have to study the move as carefully.

When moves are about even is when you have to really ponder over them.
I've been playing chess for over 50 years now and I _never_ do a "null-move" in any of my analysis. That's not a human concept. You have to be careful and not start imagining what it is doing. Null-move is based on Beal's "null-move observation" idea. If you give your opponent two moves in a row, and he can't damage your position, even though you give him fewer plies than normal to do this, then your position is winning and there is no need to search any further along this pathway.

Null-move is not about winning material, or losing material, it is simply asking the question "Do I need to search all moves to the normal depth at this position, or can I do a shallow null-move search and use that result to get me out of here much quicker?" The idea of tying it to material imbalance is flawed, because that is not how it works. It is relative to the material balance difference between the current ply and the point where alpha/beta was defined earlier in the tree.

As a human you might "ponder more" when material is even, but as soon as you find a crushing move in your analysis for one possible candidate, you don't continue to search that, you move on to something that is more likely to be relevant, and this is what null-move is about. It works whether you start off a queen up or a queen down...
bob
Posts: 20943
Joined: Mon Feb 27, 2006 7:30 pm
Location: Birmingham, AL

Re: Little test with both stockfish 1.6s versions!

Post by bob »

hgm wrote:
Dann Corbit wrote:I think if I see a full queen advantage or full queen disadvantage I do not need to examine those lines as thoroughly as a one pawn advantage or disadvantage.
But this is exactly what normal null-move pruning (= with score-independent reduction) already does. So it cannot be used as an argument to introduce score dependence in the reduction.
But what he needs to change is the "queen advantage or disadvantage". He is thinking of the initial position where you are a queen up. That is _not_ what null-move does for us. If we start off a queen down, it will help us prune branches where we end up two queens down. It is simply looking at the delta change from position P1 to position Pn, where n is the depth.