> lkaufman wrote: Some top programs round off the score to achieve greater speed at some cost in quality, namely Rybka, SF, and Komodo. Others do not, namely (I believe) Ivanhoe, Critter, and (probably) Houdini.
>
> The question is, once you have decided on the scoring resolution you actually want the search to use, is it better to have the eval also use that resolution, or should it use finer resolution and round off? In other words, which is more logical:
>
> 1. Rybka - eval terms in millipawns, final score for search rounded to centipawns.
>
> or
>
> 2. Ivanhoe - eval terms in centipawns, no rounding.
>
> Any opinions?

I think it depends on your tuning capabilities. If you are able to tune with accuracy finer than 1 cp, you should use the finer resolution and round it. If your tuning accuracy cannot reach 1 cp, it probably doesn't make any difference.
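To make the two options concrete, here is a small sketch (mine, not taken from any engine; the term values are made up) showing how method 2 picks up a rounding error per term, while method 1 rounds only once at the end:

```python
# Round-half-up from millipawns to centipawns.
# Simplification: assumes non-negative scores; a real engine would
# handle negative values symmetrically.
def round_mp_to_cp(mp):
    return (mp + 5) // 10

def eval_method1(terms_mp):
    """Rybka-style: sum millipawn terms, round once at the end."""
    return round_mp_to_cp(sum(terms_mp))

def eval_method2(terms_mp):
    """Ivanhoe-style: every term lives at centipawn resolution,
    so each term is rounded before summing."""
    return sum(round_mp_to_cp(t) for t in terms_mp)

# Four hypothetical 14-millipawn bonuses: 56 mp in total.
terms = [14, 14, 14, 14]
print(eval_method1(terms))  # 6 cp: a single rounding error at the end
print(eval_method2(terms))  # 4 cp: the per-term errors accumulate
```

The gap between the two results is exactly the accumulated per-term error Don describes further down the thread.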

## Rounding


### Re: Rounding

Joona Kiiski

hgm (H G Muller)

### Re: Rounding

> marcelk wrote: That said, my opinion is that engines do it because other engines do it.

I did use 4 cp granularity, to be able to fit the PST values, which include the piece base values, into a single byte (so they can range from 0 to about 10 pawns). With (2 x) 14 piece types and an 81-square board, that gives a very significant relief of L1 cache pressure.
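As a sketch of the packing hgm describes (my reading of it; the knight value is invented), a 4 cp grain lets one unsigned byte cover 0..255 steps, i.e. 0..1020 cp, roughly ten pawns:

```python
GRAIN_CP = 4  # hgm's 4 cp granularity

def pack(value_cp):
    """Quantize a non-negative centipawn value to the 4 cp grid so it
    fits in one unsigned byte (0..255 steps = 0..1020 cp)."""
    q = (value_cp + GRAIN_CP // 2) // GRAIN_CP  # round to nearest step
    assert 0 <= q <= 255
    return q

def unpack(byte):
    return byte * GRAIN_CP

# Hypothetical PST entry: knight base value plus a centralization bonus.
knight_e4 = pack(337)       # one byte instead of a 2- or 4-byte integer
print(unpack(knight_e4))    # 336: at most GRAIN_CP/2 of rounding error
```

A table of such bytes is a quarter the size of a table of 32-bit ints, which is where the cache-pressure saving comes from.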

### Re: Rounding

If you look at the Crafty change log, you can see that it went from millipawn to centipawn resolution at some point, presumably because Bob could not show that millipawn was better than centipawn.

### Re: Rounding

> marcelk wrote: This is simple to test, so opinions are not needed.

This is not simple at all. Komodo uses millipawns, where 1 pawn = 1000. We cannot just go to centipawns in a straightforward way without introducing a lot of error. Method 1 is straightforward because there can only be a single error, from the rounding at the end. But method 2 would introduce an error with every term that isn't already divisible by 10.

That said, my opinion is that engines do it because other engines do it.

We have some terms that are multiplied by a weight that cannot be broken down any further, where 1 or 2 millipawns make a big difference. For example, our king safety weight is set to 5 or 6, I think. We cannot divide that by 10 and round, because it would almost turn king safety off, so we would have to rework other calculations, which would become too grainy.

I think it's difficult to change the scale once you have committed to one. One thing we COULD do is use millipawn resolution in the search - but even that would require a very careful code review to be sure everything in the search was using the new resolution, for example multiplying futility margins by 10 and so on.

In past studies I have seen that more resolution is always better in the quality sense, but it can slow down the search a bit, which offsets this. So you would get a slightly slower program that was slightly stronger (at the same depth) if you went from 100 to 1000 for a pawn.

Marcel

PS: There is a 3rd alternative that does something similar: no rounding but offsetting the scout value by some amount.

### Re: Rounding

> lkaufman wrote:
> > rvida wrote:
> > > lkaufman wrote: My opinion is that it does, but since both Critter and Ivanhoe don't use more refined eval than the search uses, their authors presumably hold a different opinion. If so, I would like to know why they do it the way they do. I guess there is no one here who can answer for Ivanhoe, though.
> >
> > I think the 1/256 granularity used in Critter is good enough. The 1/100 used in most engines might be a bit coarse for some terms (e.g. mobility, where you add N * number of squares, and N=1 is too small, N=2 too big).
>
> Exactly! So would you care to hazard a guess as to why the Ippo author(s) used such coarse eval resolution? Also, you say "1/100 used in most engines", but as far as I know only the Ippos use such coarse resolution for eval. Who else did you have in mind?

You are making too much of this. It's an arbitrary decision that is not likely to affect the strength of the program in a significant way.

We chose 1000 simply because it was a lot easier to work with, and we were raised on the decimal system, probably because we have 10 fingers. We round to centipawns after all calculations are done. There are probably values that are just too low for a strong program, however, such as pawn = 20. Didn't the old Richard Lang programs use pawn = 32? And yet they were unbeatable at the time.

### Re: Rounding

> zamar wrote: I think it depends on your tuning capabilities. If you are able to tune with accuracy finer than 1 cp, you should use finer resolution and round it. If your tuning accuracy cannot reach 1 cp, it probably doesn't make any difference.

Agreed.

If you read the book "Point Count Chess", you can get away with pawn = 3. That old book suggested a system where everything boiled down to units of 1/3 of a pawn.

For example: doubled pawn, 1/3; one tempo, 1/3; etc.

### Re: Rounding

> Don wrote: In past studies I have seen that more resolution is always better in the quality sense but it can slow down the search a bit which offsets this. So you would get a slightly slower program that was slightly stronger (given the same depth) if you went from 100 to 1000 for a pawn.

Hence the concept of offsetting the scout value instead of rounding the eval: you get both advantages.

With rounded evals you will still get an unwanted re-search whenever round(score) > round(best_score), e.g. round(0.125) > round(0.124), i.e. 0.13 > 0.12, where the real difference is only 0.001 and not worth a re-search. With a scout offset this case doesn't happen. Finer control is possible because less information is thrown away.
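A sketch of the difference, as I read Marcel's posts (the margin and score values are invented): with rounded scores, a 1-millipawn difference can straddle a centipawn boundary and trigger a re-search, while an explicit scout offset suppresses it:

```python
def round_mp_to_cp(mp):
    # round-half-up; simplification: non-negative scores only
    return (mp + 5) // 10

def research_if_rounded(score_mp, best_mp):
    """Rounded eval: 125 mp vs 124 mp round to 13 cp vs 12 cp,
    so this tiny difference looks like an improvement."""
    return round_mp_to_cp(score_mp) > round_mp_to_cp(best_mp)

OFFSET_MP = 10  # hypothetical margin: one centipawn, in millipawn units

def research_if_offset(score_mp, best_mp):
    """Scout offset: keep full resolution, but only treat the move as
    an improvement when it beats the best score by more than the margin."""
    return score_mp > best_mp + OFFSET_MP

print(research_if_rounded(125, 124))  # True: wasteful re-search
print(research_if_offset(125, 124))   # False: suppressed by the offset
```

With the offset, scores a fraction of a centipawn apart never trigger a re-search, while genuinely larger improvements still do.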

hgm (H G Muller)

### Re: Rounding

Well, 33 cP is obviously far too coarse when you want the engine to realize that a Knight on e4 is better than one on g2. And I think good centralization is actually worth quite some Elo.

But to me, even centipawn tuning of eval terms looks like overdoing it. Precision in applying the bonus / penalty will be far more rewarding than tuning the magnitude of the term. I always like the square / circle metaphor:

Suppose you have to determine if points (x,y) fall within the unit circle, and for some reason do not know how to multiply. So you use

Inside(x, y) = abs(x) < c && abs(y) < c

Now you can spend a lifetime determining the best value of c, to a precision of 0.01, 0.001, etc. And of course some values of c are better than others. But even for the best possible c, you will classify a fair number of points that are outside as being inside, and vice versa.

While even roughly guessed parameters a and b would perform enormously better than the most carefully optimized c when you would switch to

Inside(x,y) = abs(x) < a && abs(y) < a && abs(x+y) < b && abs(x-y) < b

E.g. a=0.9 and b=1.3. Because almost any octagon is more like a circle than the best square.
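hgm's metaphor is easy to check numerically. The sketch below (mine; the near-optimal c and the sampling box are my choices) estimates the misclassification rate of each test by Monte Carlo over points in [-1.5, 1.5]^2:

```python
import random

def inside_square(x, y, c=0.894):
    # the "carefully tuned" square test; c = sqrt(4/5) is close to
    # optimal for minimizing the square/circle mismatch
    return abs(x) < c and abs(y) < c

def inside_octagon(x, y, a=0.9, b=1.3):
    # hgm's roughly guessed octagon
    return abs(x) < a and abs(y) < a and abs(x + y) < b and abs(x - y) < b

def error_rate(classify, n=100_000, seed=1):
    """Fraction of sampled points where the test disagrees with the
    true unit-circle membership."""
    rng = random.Random(seed)
    wrong = 0
    for _ in range(n):
        x, y = rng.uniform(-1.5, 1.5), rng.uniform(-1.5, 1.5)
        wrong += classify(x, y) != (x * x + y * y < 1.0)
    return wrong / n

# The roughly guessed octagon beats the carefully tuned square.
print(error_rate(inside_square) > error_rate(inside_octagon))  # True
```

The roughly guessed octagon misclassifies noticeably fewer points than even the best square, which is exactly hgm's point: shape beats tuning precision.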


### Re: Rounding

> hgm wrote: Well, 33 cP is obviously far too coarse when you want the engine to realize that a Knight on e4 is better than one on g2. And I think good centralization is actually worth quite some Elo. [...]

The ideal resolution is 12 for a pawn. It's a perfect number and has many divisors.

Zach Wegner

### Re: Rounding

> Don wrote: I think it's difficult to change the scale once you have committed to one.

I imagine this is true for most people, but IMO that is because most people use inferior programming languages / development strategies.

I prefer to have all tuned values in floating point, and to have the actual fixed-point integer arithmetic generated for some arbitrary precision as a compilation step. In my code this can be adjusted per evaluation module, so e.g. you can switch to higher precision for evaluating king safety, and it rounds at the end.
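A minimal sketch of the workflow Zach describes, as I read it (the weight names, values, and output format are all invented for illustration): tuned weights stay in floating point, and a build step emits fixed-point constants at whatever resolution each module wants:

```python
# Tuned weights in floating point, in units of one pawn (invented values).
TUNED_WEIGHTS = {
    "doubled_pawn": -0.173,
    "rook_open_file": 0.254,
    "king_shelter": 0.061,
}

def emit_fixed_point(weights, resolution):
    """Generate C-style integer constants at `resolution` units per pawn.
    Different evaluation modules could call this with different values."""
    lines = [f"#define {name.upper()} {round(w * resolution)}"
             for name, w in sorted(weights.items())]
    return "\n".join(lines)

# e.g. the king-safety module could be generated at millipawn precision
# while the rest of the eval is generated at centipawns:
print(emit_fixed_point(TUNED_WEIGHTS, resolution=1000))
```

Changing the engine's scale then means changing one number and rebuilding, rather than auditing every hand-written integer constant.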