Losing on time

Discussion of chess software programming and technical issues.

Moderator: Ras

User avatar
Evert
Posts: 2929
Joined: Sat Jan 22, 2011 12:42 am
Location: NL

Re: Losing on time

Post by Evert »

bob wrote:
Evert wrote:In Sjef (my match-playing program for chess variants based on Sjaak) I have an option to ignore the flag and always give a certain minimum time to the engine. The idea here was to transition from a normal time control to a fixed-time-per-move time control once time is close to running out. The reason I did this was to improve the quality of play at ultra short time controls, so I could play more games (for better statistics) without the results being dominated by horrible moves played at move 40.

I don't think it actually worked all that well, and I don't use the option anymore (in fact, I don't use Sjef all that often either; for some reason I have yet to figure out, the draw rate in self-play matches is much higher than when I run the match under XBoard, which is weird). Using Fischer time controls is probably a better idea for a similar overall effect.
I am reminded of the old "dance with the one that brung ya." That is, whatever you do, do it the way you will do it in the important games. CCT events won't have such an option; if a program depends on it to avoid running out of time, it will lose over and over, unnecessarily.
Well yes. Clearly it's not an idea that's suitable for testing how the program deals with short time controls.

The main idea was to maintain some minimal search depth in situations where I specifically want to test an evaluation or search feature and don't want any noise from the time control. But as I said, using Fischer time controls does that already, and is overall a better idea.
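To make the comparison concrete, here is a minimal sketch (not Sjef's actual code; all names and constants are illustrative assumptions) of per-move time allocation, showing why a Fischer increment already gives the per-move floor that a hand-rolled minimum-time option tries to provide:

```python
# Minimal sketch of per-move time budgeting (hypothetical, not Sjef's code).

def allocate_time(time_left_ms, increment_ms, moves_to_go=30):
    """Return a soft time budget for the next move, in milliseconds.

    With a classical control (increment_ms == 0) the budget shrinks
    toward zero as the clock runs down; with a Fischer increment the
    budget never drops much below the increment itself, which acts as
    a built-in minimum time per move.
    """
    # Keep a small safety reserve so the engine never flags on lag.
    reserve_ms = 50
    usable = max(time_left_ms - reserve_ms, 0)
    budget = usable // moves_to_go + increment_ms
    # Never plan to spend more than the time actually left.
    return min(budget, usable)

# Near the flag with no increment, the budget collapses;
# a 100 ms increment keeps a floor under every move.
print(allocate_time(300, 0))
print(allocate_time(300, 100))
```

With 300 ms on the clock, the no-increment budget is only a few milliseconds, while the incremented budget stays around the increment, so search depth stops degrading move by move.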
I think crazy-fast time controls are a good "stress test" of the time allocation code. I like game in 1 second. They fly by. Overstepping the time limit happens very frequently, and you lose on time.
Fun anecdote: I noticed I was getting a fair number of time losses in a recent set of tests, so I rewrote the (convoluted and overly complicated) time allocation code to overcome that. It turned out I actually had a bug that made the transposition table twice as large as requested, which meant that all matches combined were requesting more memory than the testing machine physically had. Oops.
Once I fixed that the time losses went away. The new time allocation code turned out to perform worse than the old code...
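The post doesn't say what the actual bug was, but as a hypothetical illustration, one common way to end up with a table exactly twice the requested size is to compute a count in entries and then treat it as a count of multi-entry buckets:

```python
# Hypothetical sketch of a factor-of-two transposition table sizing bug.
# Entry size and bucket layout are illustrative assumptions.

ENTRY_BYTES = 16           # assumed size of one hash entry
ENTRIES_PER_BUCKET = 2     # assumed two-way bucket

def table_bytes_buggy(requested_mb):
    # Bug: 'count' is already the number of entries that fits in the
    # requested space, but it is then used as a number of buckets,
    # doubling the actual allocation.
    count = requested_mb * 1024 * 1024 // ENTRY_BYTES
    return count * ENTRIES_PER_BUCKET * ENTRY_BYTES

def table_bytes_fixed(requested_mb):
    # Fix: account for the bucket size when computing the bucket count.
    buckets = requested_mb * 1024 * 1024 // (ENTRY_BYTES * ENTRIES_PER_BUCKET)
    return buckets * ENTRIES_PER_BUCKET * ENTRY_BYTES

print(table_bytes_buggy(64) // (1024 * 1024))   # twice what was asked for
print(table_bytes_fixed(64) // (1024 * 1024))   # matches the request
```

A request for 64 MB yields a 128 MB allocation in the buggy version, which is exactly the kind of silent doubling that only shows up once enough matches run in parallel to exhaust RAM.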