Optimizing pruning


Antonio Torrecillas
Posts: 92
Joined: Sun Nov 02, 2008 4:43 pm
Location: Madrid
Full name: Antonio Torrecillas

Re: Optimizing pruning

Post by Antonio Torrecillas »

IMHO PB* has more to do with directing the search toward threats than with the uncertainty in the evaluation (I may be wrong, of course).

The algorithm selects the nodes to expand as a function of an optimistic value, the real value
of the node, and the real value of the best root node so far.

The real value is calculated/estimated using an alpha-beta search of depth _probe_depth_ (3, for example).
The optimistic/pessimistic value is calculated the same way after a null move.
This introduces the following drawbacks:
1) Zugzwang (you know null moves... endgames...).
2) In-check estimates (you can't null move while in check).
3) Horizon effect. PB* solves the normal horizon effect of alpha-beta but introduces its own :-)
If a search of depth _probe_depth_ does not detect a (deeper) threat,
the optimistic value is not adequate and the node is burned; PB* will not recheck it.

PB* improves as the quality of the real/optimistic assessment improves,
for example by increasing _probe_depth_, so uncertainty also affects PB*.
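
To make the probing concrete, it could be sketched in C++ roughly like this. The names (alpha_beta, make_null_move, PROBE_DEPTH and so on) are just illustrative hooks, not Rocinante's actual code:

Code:
// Sketch only, not Rocinante's actual code. alpha_beta, make_null_move, etc.
// stand for whatever probing hooks the engine really provides.
struct Position;                                              // engine-specific
int  alpha_beta(Position& p, int depth, int alpha, int beta); // assumed probe search
void make_null_move(Position& p);                             // pass the move
void undo_null_move(Position& p);
bool in_check(const Position& p);

const int PROBE_DEPTH = 3;      // _probe_depth_ from the post
const int INF         = 32000;

struct ProbeValues {
    int real;   // estimate of the node's true value (side to move's view)
    int bound;  // null-move based "optimistic/pessimistic" value
};

ProbeValues probe_node(Position& pos)
{
    ProbeValues v;

    // Real value: a plain shallow alpha-beta probe.
    v.real = alpha_beta(pos, PROBE_DEPTH, -INF, +INF);

    if (in_check(pos)) {
        // Drawback 2: no null move allowed in check; fall back to the real value.
        v.bound = v.real;
        return v;
    }

    // Bound: the same probe after a null move. Letting one side pass gives a
    // pessimistic bound for that side and, seen from the other side, an
    // optimistic one. Zugzwang (drawback 1) breaks this assumption, and any
    // threat deeper than PROBE_DEPTH is simply missed (drawback 3).
    make_null_move(pos);
    v.bound = -alpha_beta(pos, PROBE_DEPTH, -INF, +INF);
    undo_null_move(pos);

    return v;
}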

If anyone has an idea of how to evaluate the upper and lower limits of the evaluation, or the uncertainty, or a cheaper way,
I am willing to perform the test in my B* engine Rocinante and report the results.

Regards, Antonio.
wgarvin
Posts: 838
Joined: Thu Jul 05, 2007 5:03 pm
Location: British Columbia, Canada

Re: Optimizing pruning

Post by wgarvin »

Michiel Koorn wrote:
Ralph Stoesser wrote: The question is also how to calculate a reasonably accurate uncertainty value in evaluate() for a given position.
The evaluation at depth D could be subtracted from the backed-up minimax value from depth D+1 as an estimate for uncertainty.

Which brings me to the issue I have with the whole concept of backing up uncertainty. It is too close to the concepts of search depth and quiescence. If a position is uncertain and important, why not extend?
Because extending is expensive, and it's too binary -- at each node either you go another ply deeper, or you don't.

OTOH, if calculating uncertainty were cheap, and if it could be backed up the tree easily along with the score, then it might be useful for decision-making.

The engine could try to avoid getting into positions where it knows that it doesn't know what is going on. Which might be better than not knowing that it doesn't know! :lol:
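
Just as an illustration of what that bookkeeping could look like (a sketch of my own, not taken from any engine): the "cheap" uncertainty at a node could be how much the static eval disagrees with the value that comes back from one ply deeper, and a parent simply inherits the uncertainty of the child it chose:

Code:
// Illustrative only: a score paired with an uncertainty estimate, where the
// uncertainty at a node is |static eval at depth D - backed-up value from D+1|
// and parents inherit (at least) the uncertainty of the child they chose.
#include <algorithm>
#include <cstdlib>
#include <vector>

struct Position;                    // engine-specific
int  evaluate(const Position& p);   // static evaluation, assumed to exist
std::vector<int> legal_moves(const Position& p);
void make_move(Position& p, int move);
void undo_move(Position& p, int move);

struct Scored {
    int score;        // negamax score, side to move's view
    int uncertainty;  // crude "how much did deeper search change my mind?"
};

Scored search_with_uncertainty(Position& pos, int depth)
{
    int static_eval = evaluate(pos);
    if (depth == 0)
        return { static_eval, 0 };

    Scored best = { -32000, 0 };              // mate/stalemate handling omitted
    for (int move : legal_moves(pos)) {
        make_move(pos, move);
        Scored child = search_with_uncertainty(pos, depth - 1);
        undo_move(pos, move);

        int score = -child.score;
        if (score > best.score)
            best = { score, child.uncertainty };  // inherit chosen child's uncertainty
    }

    // The D vs. D+1 disagreement at this node, merged with what came up from below.
    int disagreement = std::abs(best.score - static_eval);
    best.uncertainty = std::max(best.uncertainty, disagreement);
    return best;
}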
Don
Posts: 5106
Joined: Tue Apr 29, 2008 4:27 pm

Re: Optimizing pruning

Post by Don »

hgm wrote: I agree that what you propose is the fundamentally correct way to handle matters. But even after decades of pondering it, I have not found a satisfactory way to implement the use of such (average, uncertainty) pairs. You would have to completely abandon minimax (and thus alpha-beta), for starters.

How would you score a node from which two moves are possible, one that returned (+1, +/-0.5) and the other that returned (+2, +/-4) ? How would you maintain the window limits? In alpha-beta they are just scores of moves branching off the current line. So if scores become pairs, how would you decide whether to take a beta cutoff? If beta = (+1, +/-0.5), and you now find a move with score (+2, +/-4), would you take a cutoff?

The risk of such schemes is that you would not take a cutoff as often as with plain minimax, because some of the pairs do not have an obvious < or > relation. This could make it expensive, and as a result not competitive.

And even then: coupling the uncertainty to the search is a logical thing to do. If you find a move that is 'promising' (i.e. large score, but also large uncertainty), you can do two things when you decide the uncertainty is too big to warrant a cutoff: try to search an alternative move at the same depth, in the hope it will have smaller uncertainty, or search the promising move deeper, in the hope this will remove the uncertainty. I don't think there will be an obvious superiority of one method over the other. It will depend on the position. But perhaps in such a way that a comparatively cheap analysis of the position tells you what to prefer.
I'm jumping in without reading all the posts, but I think part of the solution is to just use uncertainty to modify the evaluation function so that you are working with just 1 value. I agree however that it might also be useful for various margins such as futility and such.
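
For instance, using hgm's numbers above (this is only a toy of my own, not something any engine actually does): collapse each pair into one risk-adjusted number before it meets alpha/beta and the ordinary cutoff test keeps working, while the interval comparison he describes is exactly what makes cutoffs rarer:

Code:
// Illustration only. Two ways to handle (score, uncertainty) at a cutoff test:
// (a) collapse the pair to one number, so plain alpha-beta keeps working;
// (b) compare the pairs as intervals, which refuses the cutoff when they overlap.
#include <iostream>

struct Scored {
    double score;        // e.g. +2.0
    double uncertainty;  // e.g. +/-4.0
};

// (a) One-value view: a risk-adjusted score. lambda = 0 trusts the score fully,
// larger lambda punishes uncertain scores. Purely a made-up knob.
double collapse(const Scored& s, double lambda = 0.5)
{
    return s.score - lambda * s.uncertainty;
}

// (b) Interval view: only cut when the move's pessimistic value already beats
// beta's optimistic value. Overlapping intervals mean "keep searching".
bool interval_cutoff(const Scored& move, const Scored& beta)
{
    return (move.score - move.uncertainty) >= (beta.score + beta.uncertainty);
}

int main()
{
    Scored beta = { +1.0, 0.5 };   // hgm's beta = (+1, +/-0.5)
    Scored move = { +2.0, 4.0 };   // hgm's move = (+2, +/-4)

    std::cout << "collapsed move = " << collapse(move) << "\n";   // 0.0
    std::cout << "collapsed beta = " << collapse(beta) << "\n";   // 0.75
    std::cout << "one-value cutoff: " << (collapse(move) >= collapse(beta)) << "\n"; // no
    std::cout << "interval cutoff:  " << interval_cutoff(move, beta) << "\n";        // no
}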

Uncertainty usually works one way. If you have a kingside attack, it's generally good for you, not bad. The uncertainty is basically trying to evaluate whether it's really good or just slightly good.

Passed pawns are another example. If you have a passed pawn, it's probably good for you, but it's not always easy to say how good it is.

Most programs sum these terms up in linear fashion, but that's probably wrong. An evaluation function is really a probability estimate, and uncertainty should affect this in a non-linear way. Even the best programs are not very good at measuring concepts such as king safety, so you can assume that your program's estimate of this could be subject to quite a bit of error. So if you are already 3 pawns up AND you have what your program thinks is a good kingside attack, you really have not pushed your chances of winning up very much, since you will probably win regardless. However, if you are 3 pawns up but your OPPONENT has a kingside attack, it may be impacting your winning chances significantly. The uncertainty in both of these cases should affect the score returned by the evaluation function differently.

Another way to put this is that if your evaluation function thinks you are winning, your program should assume that any counterplay on your opponent's part is a big deal.
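
A toy version of that idea (again just an illustration, not how any real program actually evaluates): map the reliable part of the score to a winning probability and let an uncertain term count only through how much it can still change that probability. Three pawns up, your own speculative attack barely moves the estimate, but the opponent's attack still can:

Code:
// Toy illustration of "evaluation is really a probability estimate".
// The reliable terms (material etc.) are mapped to a winning probability with a
// logistic curve; an uncertain term (say, a king attack worth anywhere from 0 to
// its nominal value) is averaged in probability space, so its effect shrinks when
// the game is already decided in your favour and stays large when it runs against
// you. All constants are made up.
#include <cmath>
#include <cstdio>

// Centipawns -> winning probability. The 400.0 scale is an arbitrary choice.
double win_prob(double cp)
{
    return 1.0 / (1.0 + std::pow(10.0, -cp / 400.0));
}

// Expected winning probability when an extra term is uncertain: here modelled
// crudely as "worth either 0 or its full nominal value with equal chance".
double expected_win_prob(double reliable_cp, double uncertain_cp)
{
    return 0.5 * win_prob(reliable_cp) + 0.5 * win_prob(reliable_cp + uncertain_cp);
}

int main()
{
    double material = 300.0;   // three pawns up

    // Your own (uncertain) kingside attack, nominally worth +150 cp:
    double with_own_attack = expected_win_prob(material, +150.0);

    // The opponent's (uncertain) kingside attack, nominally worth -300 cp to you:
    double with_opp_attack = expected_win_prob(material, -300.0);

    std::printf("3 pawns up, no attack         : %.3f\n", win_prob(material));
    std::printf("3 pawns up + own attack       : %.3f\n", with_own_attack);
    std::printf("3 pawns up + opponent's attack: %.3f\n", with_opp_attack);
    // The own attack changes the estimate far less than the opponent's counterplay.
}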