relative history heuristic

Discussion of chess software programming and technical issues.

Moderator: Ras

Gerd Isenberg
Posts: 2251
Joined: Wed Mar 08, 2006 8:47 pm
Location: Hattingen, Germany

Re: relative history heuristic

Post by Gerd Isenberg »

Gerd Isenberg wrote: Thanks for the correction. I referred to your LMR post from March 2006 because you explained something similar to what is described in the mentioned Relative History Heuristic paper (PDF) by Winands et al. from 2006. I was aware that your focus was on LMR, but I guess your variant of RHH was used in move ordering as well.
The paper was published in January 2006, but the CG conference talk by Mark Winands was already given in July 2004:
10:45 – 11:15
The Relative History Heuristic

Mark H.M. Winands, Erik C.D. van der Werf, H. Jaap van den Herik, and Jos W.H.M. Uiterwijk

Universiteit Maastricht, The Netherlands
Mark talking about RHH, Ramat-Gan July 5, 2004
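For reference, a minimal sketch of the relative history heuristic as the Winands et al. paper describes it: quiet moves are ordered by the ratio of the usual history counter (hhScore) to a "butterfly" counter (bfScore) that is bumped for every move searched, not only on cutoffs. The piece/to-square indexing and the depth*depth weight here are illustrative assumptions, not taken from any particular engine.

```c
#include <stdint.h>

static uint64_t hhScore[12][64];   /* bumped only when the move causes a cutoff   */
static uint64_t bfScore[12][64];   /* "butterfly": bumped for every move searched */

/* Update both tables after a quiet move has been searched at the given depth. */
void history_update(int piece, int to, int depth, int caused_cutoff)
{
    uint64_t bonus = (uint64_t)(depth * depth);   /* or 1 << depth, or plain depth */
    if (caused_cutoff)
        hhScore[piece][to] += bonus;
    bfScore[piece][to] += bonus;
}

/* Ordering key: scale the numerator so small counters still differentiate. */
int relative_history(int piece, int to)
{
    return (int)((hhScore[piece][to] * 256) / (bfScore[piece][to] + 1));
}
```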
bob
Posts: 20943
Joined: Mon Feb 27, 2006 7:30 pm
Location: Birmingham, AL

Re: relative history heuristic

Post by bob »

Uri Blass wrote:
bob wrote:
Gerd Isenberg wrote:
Uri Blass wrote:
The idea is clearly more than 3 years old and the article is nothing new.

Many engines use history tables for ordering moves and I believe that many people use depth*depth because I remember it from old code that is more than 3 years old.

Uri
Hi Uri,

the issue is the relative history heuristic, not how to increment history by square(depth).

Gerd
I am not certain, but I think I was the first to use depth*depth. The original paper by Schaeffer suggested 1<<depth, but that was not usable at the search depths I was reaching. One branch to 31 plies would blow the counter up. The squared-depth approach was an obvious try to still strongly reflect that moves at shallower plies were more important than moves deep in the tree, without seriously overflowing a 32-bit counter...
If the problem is overflowing a 32 bit counter then you can use 64 bit counters.

Uri
Not in 1994. "long long" was a gcc option, but not available with other compilers, such as the Sun C compiler on the Sun workstations of the day. 64-bit counters are no good for 1<<depth since you still overflow _very_ quickly.
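A hedged sketch of the kind of update being discussed (not Crafty's actual code): the counter grows by depth*depth, and the whole table is halved whenever an entry approaches the top of the 32-bit range, which preserves the relative ordering. With 1<<depth instead, a single branch extended to around 31 plies would already saturate such a counter.

```c
#include <stdint.h>

#define HIST_LIMIT (1u << 30)      /* illustrative threshold, well below UINT32_MAX */

static uint32_t history[64][64];   /* from/to indexing, illustrative */

void history_good(int from, int to, int depth)
{
    history[from][to] += (uint32_t)(depth * depth);
    if (history[from][to] > HIST_LIMIT) {
        /* Halve everything: relative order is preserved, overflow is avoided. */
        for (int f = 0; f < 64; f++)
            for (int t = 0; t < 64; t++)
                history[f][t] >>= 1;
    }
}
```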
mcostalba
Posts: 2684
Joined: Sat Jun 14, 2008 9:17 pm

Re: relative history heuristic

Post by mcostalba »

Gerd Isenberg wrote:
mcostalba wrote: In Stockfish I think I have improved over it using piece/square table values, but only as a second-order choice when history is not enough (for example when it is zero) or two moves have the same history value.

Marco
Hi Marco,

that is interesting, and I would like to mention that fact in CPW. Isn't that also used in Fruit or Toga? Like Bob, I abandoned HH. I rely (besides Hash Move, Winning Captures, Killers[ply] and Killers[ply-2], and "interesting" moves determined by evaluation) on a "plausible" stateful move-by-move generator, considering attacked pieces, attacks on valuable or otherwise likely en prise pieces, and center and king tropism.

Cheers,
Gerd
As far as I know, neither Toga nor Fruit uses psq tables for ordering.

When you write of center tropism, I think psq tables are aimed at the same goal; just have a look at the table values, they are bigger for center squares.

Marco
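A hedged sketch of the tie-breaking Marco describes (not Stockfish's actual source): the history counter stays the primary ordering key, and a piece-square gain only decides between quiet moves whose history values are equal, for example when both are zero. The struct and field names are made up for illustration.

```c
typedef struct {
    int history;    /* history counter for this quiet move       */
    int psq_gain;   /* e.g. psqt[piece][to] - psqt[piece][from]  */
} MoveScore;

/* qsort-style comparator: the best move sorts first. */
int compare_moves(const void *pa, const void *pb)
{
    const MoveScore *a = pa, *b = pb;
    if (a->history != b->history)
        return b->history - a->history;   /* higher history first        */
    return b->psq_gain - a->psq_gain;     /* only then the psq tie-break */
}
```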
Uri Blass
Posts: 10820
Joined: Thu Mar 09, 2006 12:37 am
Location: Tel-Aviv Israel

Re: relative history heuristic

Post by Uri Blass »

bob wrote:
Uri Blass wrote:
bob wrote:
Gerd Isenberg wrote:
Uri Blass wrote:
The idea is clearly more than 3 years old and the article is nothing new.

Many engines use history tables for ordering moves and I believe that many people use depth*depth because I remember it from old code that is more than 3 years old.

Uri
Hi Uri,

the issue is the relative history heuristic, not how to increment history by square(depth).

Gerd
I am not certain, but I think I was the first to use depth*depth. The original paper by Schaeffer suggested 1<<depth, but that was not usable at the search depths I was reaching. One branch to 31 plies would blow the counter up. The squared-depth approach was an obvious try to still strongly reflect that moves at shallower plies were more important than moves deep in the tree, without seriously overflowing a 32-bit counter...
If the problem is overflowing a 32 bit counter then you can use 64 bit counters.

Uri
Not in 1994. "long long" was a gcc option, but not available with other compilers, such as the Sun C compiler on the Sun workstations of the day. 64-bit counters are no good for 1<<depth since you still overflow _very_ quickly.
The discussion is about what we can try today, and today 64-bit counters are not a problem.

Usually programs do not reach depth 32, and 64 bits are clearly enough for the sum of many millions of 32-bit numbers.

The only problem can be if partial extensions are counted in the depth, so that in practice you get depth 64 even in iteration 8 or 16; but in that case you can use (1<<(depth/PLY)), where PLY is the depth of one full ply.

Uri
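A small sketch of what Uri suggests, assuming depth is kept in fractional units (here a made-up PLY of 4 units per full ply) and the counter is 64 bits wide:

```c
#include <stdint.h>

#define PLY 4   /* assumed resolution: 4 depth units per full ply */

static uint64_t history64[64][64];

void history_update64(int from, int to, int depth)
{
    /* Shift by whole plies only, so fractional extensions cannot inflate the
       increment; 64 bits leave plenty of headroom at normal search depths. */
    history64[from][to] += 1ULL << (depth / PLY);
}
```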
bob
Posts: 20943
Joined: Mon Feb 27, 2006 7:30 pm
Location: Birmingham, AL

Re: relative history heuristic

Post by bob »

Uri Blass wrote:
bob wrote:
Uri Blass wrote:
bob wrote:
Gerd Isenberg wrote:
Uri Blass wrote:
The idea is clearly more than 3 years old and the article is nothing new.

Many engines use history tables for ordering moves and I believe that many people use depth*depth because I remember it from old code that is more than 3 years old.

Uri
Hi Uri,

the issue is the relative history heuristic, not how to increment history by square(depth).

Gerd
I am not certain, but I think I was the first to use depth*depth. The original paper by Schaeffer suggested 1<<depth, but that was not usable at the search depths I was reaching. One branch to 31 plies would blow the counter up. The squared-depth approach was an obvious try to still strongly reflect that moves at shallower plies were more important than moves deep in the tree, without seriously overflowing a 32-bit counter...
If the problem is overflowing a 32 bit counter then you can use 64 bit counters.

Uri
Not in 1994. "long long" was a gcc option, but not available with other compilers, such as the Sun C compiler on the Sun workstations of the day. 64-bit counters are no good for 1<<depth since you still overflow _very_ quickly.
The discussion is about what we can try today, and today 64-bit counters are not a problem.

Usually programs do not reach depth 32, and 64 bits are clearly enough for the sum of many millions of 32-bit numbers.

The only problem can be if partial extensions are counted in the depth, so that in practice you get depth 64 even in iteration 8 or 16; but in that case you can use (1<<(depth/PLY)), where PLY is the depth of one full ply.

Uri
No, the discussion was why I chose to use depth*depth rather than 1<<depth as given in the Schaeffer paper. The reason was that 32-bit counters overflow with 1994 search depths, whereas when Schaeffer wrote the paper, depth=5 was the test he used, I believe.

As far as the rest of your comment, you are not thinking carefully. I reach depth 20 in most games, for the nominal depth. It is _common_ to reach depths of 45 and beyond on lines that are extended. You can't add up "millions" of such numbers (1<<45 or bigger). So overflow is _still_ an issue with that big a number.
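(Rough numbers to illustrate the scale: 1<<45 is about 3.5*10^13, while a signed 64-bit counter tops out near 9.2*10^18, so on the order of 2^18, roughly 260,000, such increments are already enough to overflow it.)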
Harald
Posts: 318
Joined: Thu Mar 09, 2006 1:07 am

Re: relative history heuristic

Post by Harald »

bob wrote:
Uri Blass wrote:
bob wrote:
Uri Blass wrote:
bob wrote:
Gerd Isenberg wrote:
Uri Blass wrote:
The idea is clearly more than 3 years old and the article is nothing new.

Many engines use history tables for ordering moves and I believe that many people use depth*depth because I remember it from old code that is more than 3 years old.

Uri
Hi Uri,

the issue is the relative history heuristic, not how to increment history by square(depth).

Gerd
I am not certain, but I think I was the first to use depth*depth. The original paper by Schaeffer suggested 1<<depth, but that was not usable at the search depths I was reaching. One branch to 31 plies would blow the counter up. The squared-depth approach was an obvious try to still strongly reflect that moves at shallower plies were more important than moves deep in the tree, without seriously overflowing a 32-bit counter...
If the problem is overflowing a 32 bit counter then you can use 64 bit counters.

Uri
Not in 1994. "long long" was a gcc option, but not available with other compilers, such as the Sun C compiler on the Sun workstations of the day. 64-bit counters are no good for 1<<depth since you still overflow _very_ quickly.
The discussion is about what we can try today, and today 64-bit counters are not a problem.

Usually programs do not reach depth 32, and 64 bits are clearly enough for the sum of many millions of 32-bit numbers.

The only problem can be if partial extensions are counted in the depth, so that in practice you get depth 64 even in iteration 8 or 16; but in that case you can use (1<<(depth/PLY)), where PLY is the depth of one full ply.

Uri
No, the discussion was why I chose to use depth*depth rather than 1<<depth as given in the Schaeffer paper. The reason was that 32-bit counters overflow with 1994 search depths, whereas when Schaeffer wrote the paper, depth=5 was the test he used, I believe.

As far as the rest of your comment, you are not thinking carefully. I reach depth 20 in most games, for the nominal depth. It is _common_ to reach depths of 45 and beyond on lines that are extended. You can't add up "millions" of such numbers (1<<45 or bigger). So overflow is _still_ an issue with that big a number.
Maybe you are right about the overflow, but the argument is flawed. Even with a lot of extensions the depth does not grow bigger: if you are somewhere in the tree with depth = 10 and you extend, then on the next ply you are at depth 10 + 1 - 1 = 10. This way you may reach a big ply number, but that does not go into the history. There are some positions (e.g. position number 70 with 2 kings and 6 or 7 blocked pawns) where the root iteration goes to depth > 30 very fast. That is where the big numbers come from.

Harald
bob
Posts: 20943
Joined: Mon Feb 27, 2006 7:30 pm
Location: Birmingham, AL

Re: relative history heuristic

Post by bob »

Harald wrote:
bob wrote:
Uri Blass wrote:
bob wrote:
Uri Blass wrote:
bob wrote:
Gerd Isenberg wrote:
Uri Blass wrote:
The idea is clearly more than 3 years old and the article is nothing new.

Many engines use history tables for ordering moves and I believe that many people use depth*depth because I remember it from old code that is more than 3 years old.

Uri
Hi Uri,

the issue is the relative history heuristic, not how to increment history by square(depth).

Gerd
I am not certain, but I think I was the first to use depth*depth. The original paper by Schaeffer suggested 1<<depth, but that was not usable at the search depths I was reaching. One branch to 31 plies would blow the counter up. The squared-depth approach was an obvious try to still strongly reflect that moves at shallower plies were more important than moves deep in the tree, without seriously overflowing a 32-bit counter...
If the problem is overflowing a 32 bit counter then you can use 64 bit counters.

Uri
Not in 1994. "long long" was a gcc option, but not available with other compilers, such as the Sun C compiler on the Sun workstations of the day. 64-bit counters are no good for 1<<depth since you still overflow _very_ quickly.
The discussion is about what we can try today, and today 64-bit counters are not a problem.

Usually programs do not reach depth 32, and 64 bits are clearly enough for the sum of many millions of 32-bit numbers.

The only problem can be if partial extensions are counted in the depth, so that in practice you get depth 64 even in iteration 8 or 16; but in that case you can use (1<<(depth/PLY)), where PLY is the depth of one full ply.

Uri
No, the discussion was why I chose to use depth*depth rather than 1<<depth as given in the Schaeffer paper. The reason was that 32-bit counters overflow with 1994 search depths, whereas when Schaeffer wrote the paper, depth=5 was the test he used, I believe.

As far as the rest of your comment, you are not thinking carefully. I reach depth 20 in most games, for the nominal depth. It is _common_ to reach depths of 45 and beyond on lines that are extended. You can't add up "millions" of such numbers (1<<45 or bigger). So overflow is _still_ an issue with that big a number.
Maybe you are right about the overflow, but the argument is flawed. Even with a lot of extensions the depth does not grow bigger: if you are somewhere in the tree with depth = 10 and you extend, then on the next ply you are at depth 10 + 1 - 1 = 10. This way you may reach a big ply number, but that does not go into the history. There are some positions (e.g. position number 70 with 2 kings and 6 or 7 blocked pawns) where the root iteration goes to depth > 30 very fast. That is where the big numbers come from.

Harald
Quite right, in fact. I was thinking in terms of "ply" and not "depth" (which is plies remaining).
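To make the ply/depth bookkeeping concrete, here is a tiny self-contained toy (not from any engine) that tracks only the two counters: ply is the distance from the root and always grows by one, while depth is the remaining depth and shrinks by one, except that an extension adds one back, exactly Harald's 10 + 1 - 1 = 10.

```c
#include <stdio.h>

/* Toy recursion: depth shrinks unless extended, ply always grows. */
void walk(int depth, int ply, int extensions_left)
{
    printf("ply %2d  depth %2d\n", ply, depth);
    if (depth <= 0)
        return;
    int ext = extensions_left > 0 ? 1 : 0;
    walk(depth - 1 + ext, ply + 1, extensions_left - ext);
}

int main(void)
{
    /* Nominal depth 5 with two extensions: the line ends at ply 7,
       but the depth value itself never exceeds the nominal 5. */
    walk(5, 0, 2);
    return 0;
}
```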