Rounding

hgm
Posts: 27788
Joined: Fri Mar 10, 2006 10:06 am
Location: Amsterdam
Full name: H G Muller

Re: Rounding

Post by hgm »

In Joker I used 1/256 Pawn. In HaQiKi D and Shokidoki (which have common ancestry) I use 1/25, to get single-byte PST.
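For illustration only (this is not code from Joker or HaQiKi D, and all the names here are made up), a minimal sketch of what a 1/25-pawn grain buys you: a signed byte spans -128..127 grains, i.e. roughly -5.1 to +5.1 pawns, which is ample for piece-square terms.

Code:

#include <stdint.h>

#define GRAIN_PER_PAWN 25           /* 1 grain = 1/25 pawn */

typedef int8_t PstEntry;            /* one byte per square: -128..127 grains */

static PstEntry knight_pst[64];     /* example table, values stored in grains */

/* convert a value in pawns to grains, rounding to nearest */
static PstEntry to_grains(double pawns)
{
    return (PstEntry)(pawns * GRAIN_PER_PAWN + (pawns >= 0 ? 0.5 : -0.5));
}
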
kbhearn
Posts: 411
Joined: Thu Dec 30, 2010 4:48 am

Re: Rounding

Post by kbhearn »

If the goal was to squeeze the most worthwhile resolution into the least bits, you'd transform it through a logarithmic or sigmoidal scale, as mentioned earlier - you don't really need to tell the difference between 10.00 and 10.01; decipawns might be sufficient by then.
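As an illustration only (not code from any engine in this thread; the 400 scale is an arbitrary choice), a sigmoidal remapping that compresses a centipawn score into a single signed byte, so resolution is fine near equality and coarse when you are many pawns up:

Code:

#include <math.h>
#include <stdint.h>

#define SCALE_CP 400.0   /* controls where the curve starts to flatten */

/* map a centipawn score onto -127..127 through a logistic curve */
static int8_t compress_score(int cp)
{
    double p = 1.0 / (1.0 + exp(-cp / SCALE_CP));    /* expected score, 0..1 */
    return (int8_t)lround((p - 0.5) * 254.0);        /* -127..127 */
}

With these numbers, one step of the compressed scale is worth a few centipawns near equality but a decipawn or more once the score gets near 10.00, which is exactly the point above.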
hgm
Posts: 27788
Joined: Fri Mar 10, 2006 10:06 am
Location: Amsterdam
Full name: H G Muller

Re: Rounding

Post by hgm »

True, but that applies only to the total evaluation, and not to the individual piece values.

It would actually be better to use such a sigmoid correction from a theoretical point of view as well, in combination with a delayed-loss bonus: if sacrificing a minor immediately makes the difference between being a Queen down or Queen+Rook down after 20 ply, it would probably be a bad decision to sac the minor. Repairing that can only be achieved if the bonus for delaying the sacrifice can outweigh the additional Rook in the leaves, which is only realistic if you are in a very flat part of the sigmoid there.
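
To put numbers on that "flat part" (an illustration with made-up constants, not anything from an actual engine): with an Elo-like logistic mapping of centipawns to expected score, the leaves at Queen down and at Queen+Rook down are almost indistinguishable, so a modest per-ply delayed-loss bonus can dominate the comparison.

Code:

#include <math.h>
#include <stdio.h>

/* expected game result for a centipawn score; the 400 is arbitrary */
static double expected(double cp)
{
    return 1.0 / (1.0 + pow(10.0, -cp / 400.0));
}

int main(void)
{
    printf("Q down (-900 cp):    %.4f\n", expected(-900.0));
    printf("Q+R down (-1400 cp): %.4f\n", expected(-1400.0));
    printf("difference:          %.4f\n", expected(-900.0) - expected(-1400.0));
    return 0;
}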
FlavusSnow
Posts: 89
Joined: Thu Apr 01, 2010 5:28 am
Location: Omaha, NE

Re: Rounding

Post by FlavusSnow »

Could these different precision values be set somehow and optimized with the same methods used to optimize other terms (namely CLOP)?
Zach Wegner
Posts: 1922
Joined: Thu Mar 09, 2006 12:51 am
Location: Earth

Re: Rounding

Post by Zach Wegner »

Don wrote:
Zach Wegner wrote:
Don wrote: I think it's difficult to change the scale once you have committed to one.
I imagine this is true for most people, but IMO this is because most people use inferior programming languages/development strategies.

I prefer to have all tuned values in floating point, and have the actual fixed-point integer arithmetic generated for some arbitrary precision as a compilation step. In my code this can be adjusted per evaluation module, so e.g. you can switch to higher precision for evaluating king safety, and it rounds at the end.
I did once try using floating point for the primary calculation, and it was not that long ago; it was on a modern dual-core chip. I had heard that on modern chips this might even benefit speed, since we have a separate floating-point execution unit that can do this in parallel. However, that was not the case; I saw a noticeable slowdown.

I think I made the right decision - to use 1000, as that allows me to use any lower resolution. If I experiment with the "final" resolution that the search uses, I will lay it out in such a way that I can easily experiment with other resolutions with no further pain. It's probably only a two-hour job even if I do it the hard way, and we will probably see at some point whether 200 for a pawn is an improvement, but it's certainly not very high on our list of priorities.
I'm not talking about using the floating point hardware of the CPU. I mean using floating point in the source and converting to fixed point integer math at compile time, with the fixed-point precision easily changed. You can do this in C with the preprocessor:

Code:

#define RES 100.0 // centipawns
#define VALUE(x) ((int)((x) * RES))
int some_bonus = VALUE(0.34249); // 34 cp
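
As a continuation of the idea (my own sketch rather than Zach's code): switching the whole program to a finer grain is then just a change of RES; the tuned values in the source stay the same.

Code:

#define RES 1000.0 // millipawns
#define VALUE(x) ((int)((x) * RES))
int some_bonus = VALUE(0.34249); // 342, same source value as before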