Razoring...

Discussion of chess software programming and technical issues.

Moderators: hgm, Rebel, chrisw

bob
Posts: 20943
Joined: Mon Feb 27, 2006 7:30 pm
Location: Birmingham, AL

Re: Razoring...

Post by bob »

mjlef wrote:
bob wrote:
Gerd Isenberg wrote:
bob wrote:I had heard Tord mention that he was using razoring. I had tried this a few years back (along with lots of other things) and I try to save old code. I decided yesterday to see how it would look on a cluster run since I had never been able to test that thoroughly in the past.
Bob, can you please elaborate on how this razoring works? I guess it is at depth == 2 (pre-frontier nodes)? Is it based on the original idea from Birmingham and Kent? A little pseudo code would be nice.

Thanks,
Gerd
Simple idea, really:

Code: Select all

/*
 ************************************************************
 *                                                          *
 *   now we try a quick Razoring test.  If we are within 3  *
 *   plies of a tip, and the current eval is 3 pawns (or    *
 *   more) below beta, then we just drop into a q-search    *
 *   to try to get a quick cutoff without searching more in *
 *   a position where we are way down in material.          *
 *                                                          *
 ************************************************************
 */
  if (razoring_allowed && depth <= razor_depth) {
    if (alpha == beta - 1) {
      if (Evaluate(tree, ply, wtm, alpha, beta) + razor_margin < beta) {
        value = QuiesceChecks(tree, alpha, beta, wtm, ply);
        if (value < beta)
          return (value);
      }
    }
  }
In Crafty, the above is done after the null-move has been tried and failed low, or if it wasn't tried at all. I am playing with "razoring allowed". The original discussion I had made notes from suggested the usual conditions such as "no search extensions" (this is in Heinz's "Scalable Search ..."). I have been experimenting with various reasons not to do this. The current razor_depth is 3 plies, which is equivalent to his "pre-pre-frontier node" concept. I was experimenting with the margin and don't have any accurate results yet, thanks to our IBRIX filesystem once again crashing during the night. I am using 3 pawns, but have seen good results with 2 pawns as well. More once I can get the calibration tests done on the cluster.

The idea is that if you are well behind, with just a couple of plies left, there is not much that will get that material back besides a capture, so just collapse into a q-search and see if you can get the material back with a shallow search. If not, bail out, otherwise continue searching normally.

This was a quick-and-dirty approach. I am going to rewrite search so that the razoring/futility-pruning (and maybe even extended futility pruning) is all done at one place inside the main make/search/unmake loop where I have more information available about a particular move, which will probably make this work better...
Bob,

How did the run for varying the razoring margin turn out?

Mark
Current value in 23.0 was the best. Others dropped off in terms of Elo, going in either direction... Note that razoring is not a huge gain if you already have null-move and LMR. It won't add 10 Elo.
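
To make the per-move idea above concrete, here is a minimal, hypothetical sketch (not Crafty's actual code; the names, signatures, and margins are invented for illustration): the futility/razoring decision moves into the make/search/unmake loop, where information about each individual move is available.

/* Hypothetical sketch of the per-move idea described above.  The
   razor/futility decision is made inside the make/search/unmake loop,
   where per-move information (capture, check, etc.) is available.    */

#define INFINITE        32767
#define FUTILITY_DEPTH  2       /* frontier and pre-frontier nodes    */
#define FUTILITY_MARGIN 300     /* roughly 3 pawns, in centipawns     */

typedef struct Position Position;
typedef unsigned int Move;

extern int  Evaluate(Position *pos);
extern int  Quiesce(Position *pos, int alpha, int beta);
extern int  Search(Position *pos, int alpha, int beta, int depth);
extern int  MakeMove(Position *pos, Move m);     /* 0 if illegal      */
extern void UnmakeMove(Position *pos, Move m);
extern int  IsCapture(const Position *pos, Move m);
extern int  GivesCheck(const Position *pos, Move m);

int SearchMoves(Position *pos, Move *moves, int n,
                int alpha, int beta, int depth, int in_check) {
  int best = -INFINITE;
  int static_eval = Evaluate(pos);

  for (int i = 0; i < n; i++) {
    /* prune quiet moves that cannot plausibly raise alpha, but only
       after at least one move has been searched at this node         */
    if (depth <= FUTILITY_DEPTH && !in_check && best > -INFINITE &&
        !IsCapture(pos, moves[i]) && !GivesCheck(pos, moves[i]) &&
        static_eval + FUTILITY_MARGIN <= alpha)
      continue;

    if (!MakeMove(pos, moves[i]))
      continue;                                  /* illegal move      */
    int value = (depth > 1) ? -Search(pos, -beta, -alpha, depth - 1)
                            : -Quiesce(pos, -beta, -alpha);
    UnmakeMove(pos, moves[i]);

    if (value > best)  best = value;
    if (value > alpha) alpha = value;
    if (alpha >= beta) break;                    /* fail high         */
  }
  return best;
}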
bob
Posts: 20943
Joined: Mon Feb 27, 2006 7:30 pm
Location: Birmingham, AL

Re: Razoring... (data)

Post by bob »

bob wrote:
jarkkop wrote:Have you tried in Crafty using 'see' as in Fruit for giving a bonus if square in front of passer pawn has see>=0.
That is on the to-do list; it is a way of measuring the mobility of a passed pawn...
It has now been tested. It plays worse: the cost of the test is higher than the benefit it produces, for a net loss of Elo.
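
For reference, a minimal sketch of the idea being tested, with hypothetical helper names (this is neither Crafty's nor Fruit's actual code): score a passed pawn higher when the square in front of it can be occupied without losing material, i.e. when the SEE of advancing to that square is non-negative.

/* Hypothetical sketch: bonus for a passed pawn whose stop square (the
   square directly in front of it) can be occupied safely according to
   a static exchange evaluation.                                       */

typedef unsigned long long Bitboard;
typedef struct Position Position;

extern int      SEE(const Position *pos, int from, int to);  /* >= 0: safe */
extern Bitboard PassedPawns(const Position *pos, int side);

#define WHITE               0
#define FREE_ADVANCE_BONUS  12   /* centipawns; illustrative value only    */

int PasserAdvanceBonus(const Position *pos, int side) {
  int bonus = 0;
  Bitboard passers = PassedPawns(pos, side);
  while (passers) {
    int sq = __builtin_ctzll(passers);             /* lowest set bit        */
    passers &= passers - 1;
    int front = (side == WHITE) ? sq + 8 : sq - 8; /* the stop square       */
    if (SEE(pos, sq, front) >= 0)                  /* advance is not a loss */
      bonus += FREE_ADVANCE_BONUS;
  }
  return bonus;
}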
MattieShoes
Posts: 718
Joined: Fri Mar 20, 2009 8:59 pm

Re: Razoring...

Post by MattieShoes »

I tested razoring instead of futility pruning and it was quite a bit worse for me. That could have been due to the way I did razoring, I don't know. I haven't tried them in conjunction, mostly because razoring feels kind of wrong to me. My engine still lacks a reasonable king safety eval so I suspect that might have something to do with it.
kranium
Posts: 2129
Joined: Thu May 29, 2008 10:43 am

Re: Razoring...

Post by kranium »

bob wrote:
mjlef wrote:
bob wrote:
Gerd Isenberg wrote:
bob wrote:I had heard Tord mention that he was using razoring. I had tried this a few years back (along with lots of other things) and I try to save old code. I decided yesterday to see how it would look on a cluster run since I had never been able to test that thoroughly in the past.
Bob, can you please elaborate on how this razoring works? I guess it is at depth == 2 (pre-frontier nodes)? Is it based on the original idea from Birmingham and Kent? A little pseudo code would be nice.

Thanks,
Gerd
Simple idea, really:

Code: Select all

/*
 ************************************************************
 *                                                          *
 *   now we try a quick Razoring test.  If we are within 3  *
 *   plies of a tip, and the current eval is 3 pawns (or    *
 *   more) below beta, then we just drop into a q-search    *
 *   to try to get a quick cutoff without searching more in *
 *   a position where we are way down in material.          *
 *                                                          *
 ************************************************************
 */
  if (razoring_allowed && depth <= razor_depth) {
    if (alpha == beta - 1) {
      if (Evaluate(tree, ply, wtm, alpha, beta) + razor_margin < beta) {
        value = QuiesceChecks(tree, alpha, beta, wtm, ply);
        if (value < beta)
          return (value);
      }
    }
  }
In Crafty, the above is done after the null-move has been tried and failed low, or if it wasn't tried at all. I am playing with "razoring allowed". The original discussion I had made notes from suggested the usual conditions such as "no search extensions" (this is in Heinz's "Scalable Search ..."). I have been experimenting with various reasons not to do this. The current razor_depth is 3 plies, which is equivalent to his "pre-pre-frontier node" concept. I was experimenting with the margin and don't have any accurate results yet, thanks to our IBRIX filesystem once again crashing during the night. I am using 3 pawns, but have seen good results with 2 pawns as well. More once I can get the calibration tests done on the cluster.

The idea is that if you are well behind, with just a couple of plies left, there is not much that will get that material back besides a capture, so just collapse into a q-search and see if you can get the material back with a shallow search. If not, bail out, otherwise continue searching normally.

This was a quick-and-dirty approach. I am going to rewrite search so that the razoring/futility-pruning (and maybe even extended futility pruning) is all done at one place inside the main make/search/unmake loop where I have more information available about a particular move, which will probably make this work better...
Bob,

How did the run for varying the razoring margin turn out?

Mark
Current value in 23.0 was the best. Others dropped off in terms of Elo, going in either direction... Note that razoring is not a huge gain if you already have null-move and LMR. It won't add 10 Elo.
hi Bob-

This topic is very timely for me, as I just finished testing Demon/Crimson - 2048 games with/without razoring. I had already tested razoring extensively with Cyclone, many thousands of games, and it produced no help at all. It was eventually removed in all later (stronger) versions. Of course, being based on Fruit, Cyclone was already using (what I consider to be) a very effective history pruning heuristic, and I believe this is why razoring was of no help. Unfortunately, I never had time to test Cyclone with razoring in lieu of history pruning.

Demon/Crimson does not use (and has not yet implemented or tested) history pruning, but I have seen a substantial improvement with razoring when used in conjunction with null-move as shown below: approximately a 3-5% increase in winning percentage. (Testing is usually 1024 or 2048 fast games against 8 standard opponents.)

The search is a standard PVS, uses fractional plies (one ply = 40), and razors if the null-move conditions are not met. R is adaptive and varies between 3 and 4 depending on depth:

What I've come to understand is this: razoring can be very effective, but if used in conjunction with history pruning or any other form of LMR, my belief is that it is of no help whatsoever. Please correct me if this is not right.

I'm not sure if the implementation below is unusual or not, but at this point, razoring in lieu of null-move is providing the best result for me (of course I have a lot more testing and development to do)...

pseudo code:

#define NULL_REDUCTION(d) ((d) > 6 * ONE_PLY ? 4 * ONE_PLY : 3 * ONE_PLY)

int value;

// null move conditions - if the previous ply was not a null move and
// the hash table indicates it likely won't fail and we're not in check

if (null_move && !null_fails && !ply_info[ply].in_check)
{
    int zugzwang = 1;

    if (WHITE(side))
    {
        if (board->white_queens || pop_count(board->white_rooks | board->white_bishops | board->white_knights) > 1)
            zugzwang = 0;
    }
    else
    {
        if (board->black_queens || pop_count(board->black_rooks | board->black_bishops | board->black_knights) > 1)
            zugzwang = 0;
    }

    if (!zugzwang)
    {
        int r = NULL_REDUCTION(depth);

        if (depth - r >= ONE_PLY)
            value = -search(ply + 1, depth - r, SWITCH (side), -beta, 1 - beta, 0);
        else
            value = -quiescient(ply + 1, SWITCH (side), -beta, 1 - beta);

        if (value >= beta)
            return value;
    }
}

// razoring
else if (depth <= 3 && eval(side, alpha, beta) < beta - 300)
{
    int value = -quiescient(ply + 1, SWITCH_SIDE(side), -beta, -alpha);
    if (value < beta)
        return value;
}


PS - how did you get your code example to display so nicely formatted?

Regards-
Norm
MattieShoes
Posts: 718
Joined: Fri Mar 20, 2009 8:59 pm

Re: Razoring...

Post by MattieShoes »

// razoring
else if (depth <= 3 && eval(side, alpha, beta) < beta - 300)
{
    int value = -quiescient(ply + 1, SWITCH_SIDE(side), -beta, -alpha);
    if (value < beta)
        return value;
}
I'm trying to understand what the difference between SWITCH(side) and SWITCH_SIDE(side) is. Also, wouldn't it razor PV nodes, where beta might be INF? It looks like it would also razor positions where the side to move is in check, since that would make the null if-statement fail and it's not explicitly checked in the razor portion... unless I'm missing something.

enclosing your code inside code tags will leave pre-formatted text nice and pretty. [c0de]blahblah[/c0de], only replace 0 with o :-)
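
For comparison, here is a sketch of the razoring block with the two guards being discussed added, reusing the pseudocode's own names (hypothetical, and it assumes depth is measured in fractional plies): only razor at zero-window (non-PV) nodes, where beta can never be +INF, and never while in check.

// razoring, with non-PV and not-in-check guards added (sketch only)
else if (depth <= 3 * ONE_PLY            // near the tips (fractional plies)
         && alpha == beta - 1            // zero-window node, so beta != +INF
         && !ply_info[ply].in_check      // never razor while in check
         && eval(side, alpha, beta) < beta - 300)
{
    int value = -quiescient(ply + 1, SWITCH_SIDE(side), -beta, -alpha);
    if (value < beta)
        return value;
}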
kranium
Posts: 2129
Joined: Thu May 29, 2008 10:43 am

Re: Razoring...

Post by kranium »

MattieShoes wrote:
// razoring
else if (depth <= 3 && eval(side, alpha, beta) < beta - 300)
{
    int value = -quiescient(ply + 1, SWITCH_SIDE(side), -beta, -alpha);
    if (value < beta)
        return value;
}
I'm trying to understand what the difference between SWITCH(side) and SWITCH_SIDE(side) is. Also, wouldn't it razor PV nodes, where beta might be INF? It looks like it would also razor positions where the side to move is in check, since that would make the null if-statement fail and it's not explicitly checked in the razor portion... unless I'm missing something.

enclosing your code inside code tags will leave pre-formatted text nice and pretty. [c0de]blahblah[/c0de], only replace 0 with o :-)

Thanks for the tip, Matt!

To answer your questions:

SWITCH and SWITCH_SIDE: that's a typo...

I use a LIMIT constant (32000) to contain value,
something like this:

if (value == LIMIT)
{
    board->turn = SWITCH_TURN(board->turn);
    /* blah, blah */
    return -LIMIT;
}

There's probably a better way?

Yes, positions with check will also be razored...
but I believe this is also the case with Crafty / Glaurung / Toga and others...?

Anyway, my point here was to illustrate that razoring provides a big improvement in lieu of history/LMR... considering that I am only using null-move and razoring, the simple addition of razoring added at least 3-5%...

Norm :)
bob
Posts: 20943
Joined: Mon Feb 27, 2006 7:30 pm
Location: Birmingham, AL

Re: Razoring...

Post by bob »

kranium wrote:
bob wrote:
mjlef wrote:
bob wrote:
Gerd Isenberg wrote:
bob wrote:I had heard Tord mention that he was using razoring. I had tried this a few years back (along with lots of other things) and I try to save old code. I decided yesterday to see how it would look on a cluster run since I had never been able to test that thoroughly in the past.
Bob, can you please elaborate on how this razoring works? I guess it is at depth == 2 (pre-frontier nodes)? Is it based on the original idea from Birmingham and Kent? A little pseudo code would be nice.

Thanks,
Gerd
Simple idea, really:

Code: Select all

/*
 ************************************************************
 *                                                          *
 *   now we try a quick Razoring test.  If we are within 3  *
 *   plies of a tip, and the current eval is 3 pawns (or    *
 *   more) below beta, then we just drop into a q-search    *
 *   to try to get a quick cutoff without searching more in *
 *   a position where we are way down in material.          *
 *                                                          *
 ************************************************************
 */
  if (razoring_allowed && depth <= razor_depth) {
    if (alpha == beta - 1) {
      if (Evaluate(tree, ply, wtm, alpha, beta) + razor_margin < beta) {
        value = QuiesceChecks(tree, alpha, beta, wtm, ply);
        if (value < beta)
          return (value);
      }
    }
  }
In Crafty, the above is done after the null-move has been tried and failed low, or if it wasn't tried at all. I am playing with "razoring allowed". The original discussion I had made notes from suggested the usual conditions such as "no search extensions" (this is in Heinz's "Scalable Search ..."). I have been experimenting with various reasons not to do this. The current razor_depth is 3 plies, which is equivalent to his "pre-pre-frontier node" concept. I was experimenting with the margin and don't have any accurate results yet, thanks to our IBRIX filesystem once again crashing during the night. I am using 3 pawns, but have seen good results with 2 pawns as well. More once I can get the calibration tests done on the cluster.

The idea is that if you are well behind, with just a couple of plies left, there is not much that will get that material back besides a capture, so just collapse into a q-search and see if you can get the material back with a shallow search. If not, bail out, otherwise continue searching normally.

This was a quick-and-dirty approach. I am going to rewrite search so that the razoring/futility-pruning (and maybe even extended futility pruning) is all done at one place inside the main make/search/unmake loop where I have more information available about a particular move, which will probably make this work better...
Bob,

How did the run for varying the razoring margin turn out?

Mark
Current value in 23.0 was the best. Others dropped off in terms of Elo, going in either direction... Note that razoring is not a huge gain if you already have null-move and LMR. It won't add 10 Elo.
hi Bob-

This topic is very timely for me, as I just finished testing Demon/Crimson - 2048 games with/without razoring. I had already tested razoring extensively with Cyclone, many thousands of games, and it produced no help at all. It was eventually removed in all later (stronger) versions. Of course, being based on Fruit, Cyclone was already using (what I consider to be) a very effective history pruning heuristic, and I believe this is why razoring was of no help. Unfortunately, I never had time to test Cyclone with razoring in lieu of history pruning.

Demon/Crimson does not use (and has not yet implemented or tested) history pruning, but I have seen a substantial improvement with razoring when used in conjunction with null-move as shown below: approximately a 3-5% increase in winning percentage. (Testing is usually 1024 or 2048 fast games against 8 standard opponents.)

The search is a standard PVS, uses fractional plies (one ply = 40), and razors if the null-move conditions are not met. R is adaptive and varies between 3 and 4 depending on depth:

What I've come to understand is this: razoring can be very effective, but if used in conjunction with history pruning or any other form of LMR, my belief is that it is of no help whatsoever. Please correct me if this is not right.
Did not test in that way, so I can't answer accurately, but intuition says that is correct. Razoring is effectively a reduction near the tips. Moves that don't get razored may well get reduced by LMR anyway.

I'm not sure if the implementation below is unusual or not, but at this point, razoring in lieu of null-move is providing the best result for me (of course I have a lot more testing and development to do)...

pseudo code:

#define NULL_REDUCTION(d) ((d) > 6 * ONE_PLY ? 4 * ONE_PLY : 3 * ONE_PLY)

int value;

// null move conditions - if the previous ply was not a null move and
// the hash table indicates it likely won't fail and we're not in check

if (null_move && !null_fails && !ply_info[ply].in_check)
{
    int zugzwang = 1;

    if (WHITE(side))
    {
        if (board->white_queens || pop_count(board->white_rooks | board->white_bishops | board->white_knights) > 1)
            zugzwang = 0;
    }
    else
    {
        if (board->black_queens || pop_count(board->black_rooks | board->black_bishops | board->black_knights) > 1)
            zugzwang = 0;
    }

    if (!zugzwang)
    {
        int r = NULL_REDUCTION(depth);

        if (depth - r >= ONE_PLY)
            value = -search(ply + 1, depth - r, SWITCH (side), -beta, 1 - beta, 0);
        else
            value = -quiescient(ply + 1, SWITCH (side), -beta, 1 - beta);

        if (value >= beta)
            return value;
    }
}

// razoring
else if (depth <= 3 && eval(side, alpha, beta) < beta - 300)
{
    int value = -quiescient(ply + 1, SWITCH_SIDE(side), -beta, -alpha);
    if (value < beta)
        return value;
}


PS - how did you get your code example to display so nicely formatted?

Regards-
Norm
Precede with bracket-code-bracket, and follow with bracket-/code-bracket.

One note is that I use R=3 everywhere for the null-move search. I added checks to the q-search (first level only) and saw no benefit. But once I tried null-move R=3, rather than the adaptive R=3~2, I saw a slight improvement.
bob
Posts: 20943
Joined: Mon Feb 27, 2006 7:30 pm
Location: Birmingham, AL

Re: Razoring...

Post by bob »

CRoberson wrote:I remember from several years ago that somebody stated checks in the qsearch are a big help if you don't have much of a king safety eval. Otherwise, it is worth little.
I think checks only help to deal with the most common horizon effect, which is a position like a white pawn at f6 and a white queen at h6, with white threatening Qg7#. If you reduce that path so that Qg7# never actually gets played in the search, you overlook something critical because the threat is still there. Q-search checks pick that up, so you don't reduce the path and leave a simple mate threat alive to wreck a statically computed eval that is way wrong.
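
As a rough illustration of the "first level only" checks mentioned earlier, here is a minimal sketch with hypothetical names and signatures (not Crafty's actual QuiesceChecks): at the entry ply of the quiescence search, checking moves are generated in addition to captures, so a threat like Qg7# is seen rather than hidden behind the stand-pat score.

/* Hypothetical sketch.  At the first ply of quiescence we also try
   checking moves; deeper plies fall back to the usual captures-only
   search (Quiesce).                                                  */

#define MAX_MOVES 256

typedef struct Position Position;
typedef unsigned int Move;

extern int  Evaluate(Position *pos);
extern int  Quiesce(Position *pos, int alpha, int beta);   /* captures only */
extern int  GenerateCaptures(Position *pos, Move *moves);
extern int  GenerateChecks(Position *pos, Move *moves);    /* quiet checks  */
extern int  MakeMove(Position *pos, Move m);               /* 0 if illegal  */
extern void UnmakeMove(Position *pos, Move m);

int QuiesceChecks(Position *pos, int alpha, int beta) {
  int stand_pat = Evaluate(pos);
  if (stand_pat >= beta)
    return stand_pat;                     /* fail high on the static eval   */
  if (stand_pat > alpha)
    alpha = stand_pat;

  Move moves[MAX_MOVES];
  int n = GenerateCaptures(pos, moves);
  n += GenerateChecks(pos, moves + n);    /* first level only: add checks   */

  for (int i = 0; i < n; i++) {
    if (!MakeMove(pos, moves[i]))
      continue;
    int value = -Quiesce(pos, -beta, -alpha);   /* captures-only below      */
    UnmakeMove(pos, moves[i]);
    if (value > alpha) {
      alpha = value;
      if (value >= beta)
        break;
    }
  }
  return alpha;
}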
User avatar
pedrox
Posts: 1056
Joined: Fri Mar 10, 2006 6:07 am
Location: Basque Country (Spain)

Re: Razoring...

Post by pedrox »

Two tests can be done:

1. Try razoring before null move.

2. Extend razoring to depths greater than 3. I've tried depths such as 5 and 8 by increasing the margin by 100 cp per unit of depth.

Pedro
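
A minimal sketch of the second idea, with hypothetical names and values (the margin grows by roughly 100 cp per unit of remaining depth, as Pedro describes; the surrounding search variables are assumed):

/* Sketch only: razor deeper than 3 plies, but require a larger deficit
   the more depth remains (about 100 cp per ply, per the experiment).   */

#define RAZOR_DEPTH 5                      /* he also tried 8              */
#define RAZOR_BASE  300                    /* base margin in centipawns    */

static int razor_margin(int depth) {
  return RAZOR_BASE + 100 * depth;
}

/* ... at a non-PV node, not in check, inside the search ... */
if (depth <= RAZOR_DEPTH &&
    Evaluate(pos) + razor_margin(depth) < beta) {
  int value = Quiesce(pos, alpha, beta);
  if (value < beta)
    return value;
}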
bob
Posts: 20943
Joined: Mon Feb 27, 2006 7:30 pm
Location: Birmingham, AL

Re: Razoring...

Post by bob »

pedrox wrote:Two tests can be done:

1. Try razoring before null move.

2. Extend razoring to depths greater than 3. I've tried depths such as 5 and 8 by increasing the margin by 100 cp per unit of depth.

Pedro
How can you try razoring _before_ null-move? You do the null-move search first, before you generate moves or anything. Razoring is applied to each individual move near the tips, but that can only happen after the null search has completed and you start searching real moves to normal depth.