Optimizing pruning

Discussion of chess software programming and technical issues.

Moderator: Ras

Mangar
Posts: 65
Joined: Thu Jul 08, 2010 9:16 am

Re: Optimizing pruning

Post by Mangar »

Hi,

the idea of an uncertainty value is one I have already thought of. One idea to compute it is (Eval_of_Black - Material_of_Black + Eval_of_White - Material_of_White) / 2. This could be evaluated without many changes to eval. That is, the average of the positional evaluations for black and white. (This is only possible if you use evaluation at inner nodes.)
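In code it could look roughly like this (only a sketch; the names are placeholders, not actual engine code):

Code: Select all

// Sketch: an uncertainty value as the average of the positional
// (non-material) evaluation of both sides. The parameter names are
// placeholders for values the engine already computes at inner nodes.
int uncertainty_value(int eval_black, int material_black,
                      int eval_white, int material_white)
{
    int positional_black = eval_black - material_black;
    int positional_white = eval_white - material_white;
    return (positional_black + positional_white) / 2;
}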

As only simple ideas in search stay successful after a while, I tend to compute a margin parameter using only the aspects that have a very large influence on eval.

Greetings Volker
Mangar Spike Chess
mcostalba
Posts: 2684
Joined: Sat Jun 14, 2008 9:17 pm

Re: Optimizing pruning

Post by mcostalba »

Mangar wrote:Hi,

the idea of an uncertainty value is one I have already thought of. One idea to compute it is (Eval_of_Black - Material_of_Black + Eval_of_White - Material_of_White) / 2. This could be evaluated without many changes to eval. That is, the average of the positional evaluations for black and white. (This is only possible if you use evaluation at inner nodes.)

As only simple ideas in search stay successful after a while, I tend to compute a margin parameter using only the aspects that have a very large influence on eval.

Greetings Volker
Volker, this is a great idea !

I would change the formula in:

Code: Select all

K * (abs(Eval_of_Black - Material_of_Black) + abs(Eval_of_White - Material_of_White))
or even the Pythagorean sum:

Code: Select all

K * sqrt((Eval_of_Black - Material_of_Black)^2 + (Eval_of_White - Material_of_White)^2)
I like it, it is simple and general.
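As a margin it could be used roughly like this (only a sketch; K, the base margin and the function name are made up, not tuned values):

Code: Select all

#include <cmath>

// Sketch: scale a pruning margin by the positional "uncertainty" of both
// sides, using the Pythagorean sum. K and base_margin are made-up values.
int pruning_margin(int eval_black, int material_black,
                   int eval_white, int material_white, int depth)
{
    const double K = 0.5;                                   // illustrative constant
    double pb = eval_black - material_black;                // positional part, black
    double pw = eval_white - material_white;                // positional part, white
    double uncertainty = K * std::sqrt(pb * pb + pw * pw);  // Pythagorean sum
    int base_margin = 100 * depth;                          // e.g. 100 cp per ply, illustrative
    return base_margin + static_cast<int>(uncertainty);
}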

Joona, what do you think ?
bob
Posts: 20943
Joined: Mon Feb 27, 2006 7:30 pm
Location: Birmingham, AL

Re: Optimizing pruning

Post by bob »

hgm wrote:I agree that what you propose is the fundamentally correct way to handle matters. But even after decades of pondering it, I have not found a satisfactory way to implement the use of such (average, uncertainty) pairs. You would have to completely abandon minimax (and thus alpha-beta), for starters.
Wasn't this idea discussed by Berliner in his B* stuff 30 years ago or so?

How would you score a node from which two moves are possible, one that returned (+1, +/-0.5) and the other that returned (+2, +/-4) ? How would you maintain the window limits? In alpha-beta they are just scores of moves branching off the current line. So if scores become pairs, how would you decide whether to take a beta cutoff? If beta = (+1, +/-0.5), and you now find a move with score (+2, +/-4), would you take a cutoff?

The risk of such schemes is that you would not take a cutoff as often as with plain minimax, because some of the pairs do not have an obvious < or > relation. This could make it expensive, and as a result not competitive.

And even then: coupling the uncertainty to the search is a logical thing to do. If you find a move that is 'promising' (i.e. large score, but also large uncertainty) you can do two things when you decide the uncertainty is too big to warrant a cutoff: try to search an alternative move at the same depth, in the hope it will have smaller uncertainty, or search the promising move deeper, in the hope this will remove the uncertainty. I don't think there will be an obvious superiority of one method over the other. It will depend on the position. But perhaps in such a way that a comparatively cheap analysis of the position tells you what to prefer.
zamar
Posts: 613
Joined: Sun Jan 18, 2009 7:03 am

Re: Optimizing pruning

Post by zamar »

mcostalba wrote:
Mangar wrote:Hi,

the idea of an uncertainty value is one I have already thought of. One idea to compute it is (Eval_of_Black - Material_of_Black + Eval_of_White - Material_of_White) / 2. This could be evaluated without many changes to eval. That is, the average of the positional evaluations for black and white. (This is only possible if you use evaluation at inner nodes.)

As only simple ideas in search stay successful after a while, I tend to compute a margin parameter using only the aspects that have a very large influence on eval.

Greetings Volker
Volker, this is a great idea !

I would change the formula in:

Code: Select all

K * (abs(Eval_of_Black - Material_of_Black) + abs(Eval_of_White - Material_of_White))
or even the Pythagorean sum:

Code: Select all

K * sqrt((Eval_of_Black - Material_of_Black)^2 + (Eval_of_White - Material_of_White)^2)
I like it, it is simple and general.

Joona, what do you think ?
Well, if Black has a +100cp bonus for a passed pawn and a -100cp penalty for structural weaknesses, then (Eval_of_Black - Material_of_Black)^2 = 0, which obviously isn't the right result :-)

I have the feeling that potential sources for instability are:

- king attack (potential mate)
- passed pawns (potential new queen)
- material imbalances (difficult to evaluate even for a GM)
- static threats (something can get captured)
- mobility (increases chance for tactical tricks)
- structural pawn weaknesses (sometimes these are very bad, sometimes not)

I feel that a somewhat correct solution is a linear combination of those above, weighted with depth. Implementing and balancing it is the true challenge!
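As a rough sketch (all the terms and weights below are placeholders, nothing is tuned):

Code: Select all

// Sketch of a depth-weighted linear combination of "instability" sources.
// All inputs and weights are illustrative placeholders, not tuned values.
struct Instability {
    int king_attack;      // king attack pressure (potential mate)
    int passed_pawns;     // passed-pawn potential
    int imbalance;        // material imbalance term
    int threats;          // static threats
    int mobility;         // mobility
    int pawn_weakness;    // structural pawn weaknesses
};

int uncertainty_margin(const Instability& i, int depth)
{
    int sum = 4 * i.king_attack
            + 3 * i.passed_pawns
            + 2 * i.imbalance
            + 2 * i.threats
            + 1 * i.mobility
            + 1 * i.pawn_weakness;    // made-up weights
    return sum * depth / 8;           // weight the whole thing with depth
}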
Joona Kiiski
Michiel Koorn

Re: Optimizing pruning

Post by Michiel Koorn »

Ralph Stoesser wrote: The question is also how to calculate a reasonably accurate uncertainty value in evaluate() for a given position.
The evaluation at depth D could be subtracted from the backed-up minimax value from depth D+1 as an estimate for the uncertainty.
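Roughly (only a sketch; Position, evaluate() and search() stand in for whatever the engine already has):

Code: Select all

#include <cmath>

struct Position;                                            // placeholder for the engine's position type
int evaluate(Position& pos);                                // the engine's static evaluation
int search(Position& pos, int depth, int alpha, int beta);  // the engine's search

// Sketch: estimate the uncertainty of a node as the gap between the static
// eval at depth D and the minimax value backed up from a depth D+1 search.
int estimate_uncertainty(Position& pos, int depth, int alpha, int beta)
{
    int static_eval  = evaluate(pos);
    int search_value = search(pos, depth + 1, alpha, beta);
    return std::abs(search_value - static_eval);
}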

Which brings me to the issue I have with the whole concept of backing up uncertainty. It is too close to the concepts of search depth and quiescence. If a position is uncertain and important, why not extend?
bob
Posts: 20943
Joined: Mon Feb 27, 2006 7:30 pm
Location: Birmingham, AL

Re: Optimizing pruning

Post by bob »

Michiel Koorn wrote:
Ralph Stoesser wrote: The question is also how to calculate a reasonably accurate uncertainty value in evaluate() for a given position.
The evaluation at depth D could be subtracted from the backed-up minimax value from depth D+1 as an estimate for the uncertainty.

Which brings me to the issue I have with the whole concept of backing up uncertainty. It is too close to the concepts of search depth and quiescence. If a position is uncertain and important, why not extend?
It would seem to me that uncertainty is inversely related to depth. That's why we have extensions, in fact: to search the positions we are less certain about to a deeper depth to remove some of the uncertainty.
michiguel
Posts: 6401
Joined: Thu Mar 09, 2006 8:30 pm
Location: Chicago, Illinois, USA

Re: Optimizing pruning

Post by michiguel »

Mangar wrote:Hi,

the idea of an uncertainty value is one I have already thought of. One idea to compute it is (Eval_of_Black - Material_of_Black + Eval_of_White - Material_of_White) / 2. This could be evaluated without many changes to eval. That is, the average of the positional evaluations for black and white. (This is only possible if you use evaluation at inner nodes.)

As only simple ideas in search stay successful after a while, I tend to compute a margin parameter using only the aspects that have a very large influence on eval.

Greetings Volker
I have been experimenting with something similar. I cannot use Eval_of_black or Mat_of_black, because most things I have are the "delta". For that reason, I use the difference between material and eval() as a parameter to take into account. I have only experimented with it at the leaves and it seems to work. I say "seems" because I have not worked on it enough to properly compare it with other options.
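Roughly like this (a sketch only; the names are placeholders for values the engine already has):

Code: Select all

#include <cmath>

// Sketch: at a leaf, use |eval - material| (the "delta") to widen a margin.
// evaluation, material and base_margin are illustrative placeholders.
int leaf_margin(int evaluation, int material, int base_margin)
{
    int delta = std::abs(evaluation - material);   // positional component only
    return base_margin + delta / 2;                // made-up scaling
}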

Miguel
michiguel
Posts: 6401
Joined: Thu Mar 09, 2006 8:30 pm
Location: Chicago, Illinois, USA

Re: Optimizing pruning

Post by michiguel »

bob wrote:
hgm wrote:I agree that what you propose is the fundamentally correct way to handle matters. But even after decades of pondering it, I have not found a satisfactory way to implement the use of such (average, uncertainty) pairs. You would have to completely abandon minimax (and thus alpha-beta), for starters.
Wasn't this idea discussed by Berliner in his B* stuff 30 years ago or so?
Didn't I say this three messages ago? :-) :-)

Miguel

How would you score a node from which two moves are possible, one that returned (+1, +/-0.5) and the other that returned (+2, +/-4) ? How would you maintain the window limits? In alpha-beta they are just scores of moves branching off the current line. So if scores become pairs, how would you decide whether to take a beta cutoff? If beta = (+1, +/-0.5), and you now find a move with score (+2, +/-4), would you take a cutoff?

The risk of such schemes is that you would not take a cutoff as often as with plain minimax, because some of the pairs do not have an obvious < or > relation. This could make it expensive, and as a result not competitive.

And even then: coupling the uncertainty to the search is a logical thing to do. If you find a move that is 'promising' (i.e. large score, but also large uncertainty) you can do two things when you decide the uncertainty is too big to warrant a cutoff: try to search an alternative move at the same depth, in the hope it will have smaller uncertainty, or search the promising move deeper, in the hope this will remove the uncertainty. I don't think there will be an obvious superiority of one method over the other. It will depend on the position. But perhaps in such a way that a comparatively cheap analysis of the position tells you what to prefer.
Michiel Koorn

Re: Optimizing pruning

Post by Michiel Koorn »

bob wrote: It would seem to me that uncertainty is inversely related to depth. That's why we have extensions, in fact: to search the positions we are less certain about to a deeper depth to remove some of the uncertainty.
Of course. The difference between us is that I attempted to quantify the uncertainty of the evaluation of a position and you are quantifying the uncertainty of the search.
bob
Posts: 20943
Joined: Mon Feb 27, 2006 7:30 pm
Location: Birmingham, AL

Re: Optimizing pruning

Post by bob »

Michiel Koorn wrote:
bob wrote: It would seem to me that uncertainty is inversely related to depth. That's why we have extensions, in fact: to search the positions we are less certain about to a deeper depth to remove some of the uncertainty.
Of course. The difference between us is that I attempted to quantify the uncertainty of the evaluation of a position and you are quantifying the uncertainty of the search.
I don't think there is any measurable "uncertainty" in the search. The search moves pieces around, but eventually it is forced to stop moving them and produce a number that represents the goodness or badness of the position. It is this number that has the uncertainty, more so than the search... Ideally, with a really good q-search, the evaluation uncertainty would be zero, because you would not give the evaluation code a position that had difficult-to-evaluate dynamic characteristics. In reality, that hardly ever happens.