bob wrote: We bought a group of Sun workstations in late 1985 or so. The first problem many of us encountered was using NFS to access files and make to compile. If the workstations' clocks drift and you try to distribute your compiles over the group of workstations, chaos ensues: the timestamps cause some machines to recompile something while others don't, due to incorrect dates/times.
It was back in 1986 that I got my first Apple computer, a Mac Plus, which I used to write my chess program Spector. The Mac Plus had a battery-backed RTC (real-time clock), an uncommon feature in those days. And the chip used for the RTC was not all that great; in fact, mine failed soon after I bought the machine -- the RTC would slow, then freeze, once the computer had been running for much more than an hour. I had to pay for an AppleCare insurance package to have the main board replaced because I didn't notice the failure until after the stingy 90-day warranty had expired. Moral of the story: be cautious in trusting your computer's timekeeping ability.
Anyway, in the days before the Web, it was possible to use a dial-up modem to connect to an Internet timeserver. And that's what I did with my Mac Plus and a 2400 bps Hayes SmartModem. This kept the time on my Plus accurate to within a second or so of the real time at the cost of a weekly long distance phone call.
Accessing the RTC via a library routine was not very fast, and I wanted Spector to be able to check the elapsed time for a search at every node. So I used the Macintosh-specific Ticks() routine. This routine, if I recall correctly, was actually a macro which read a fixed-location global 32-bit integer that counted the number of 60 Hz video refreshes since booting. The rollover period was about 829 days, a bit longer than the average life expectancy of a Mac Plus power supply and many times the mean period between New England power outages.
Today, Symbolic uses a descendant of the Ticks() routine; the program calls the C library routine setitimer() to generate a 10 millisecond periodic signal. This signal is caught by a handler which increments a global tick counter, and that's the counter read at every node, allowing the search to stop on a dime with very little overhead. This technique works quite well, and I recommend it to my fellow authors whose programs are to run on a Unix system.
Programming note: When deactivating an interval timer, on some systems it may be necessary to wait three or more interval periods for the deactivation to go into effect, so don't disable the signal handler until it's safe to do so.
Code: Select all
#define TickFreqLen 100                       // Tick frequency in Hertz
#define TUSecLen    1000000ull                // Microseconds per second
#define TickTimeLen (TUSecLen / TickFreqLen)  // Tick period in microseconds

void Driver::IntervalTimerInit(void)
{
    Log("Interval timer initializing");
    TickCount = 0;
    UsecCount = 0;
    struct itimerval itval0, itval1;
    TimeValFromTU(TickTimeLen, itval0.it_interval);  // Period between ticks
    TimeValFromTU(TickTimeLen, itval0.it_value);     // Delay before first tick
    if (setitimer(ITIMER_REAL, &itval0, &itval1) != 0)
        Die("IntervalTimerInit", "Bad setitimer");
}

void Driver::IntervalTimerTerm(void)
{
    Log("Interval timer terminating");
    struct itimerval itval0, itval1;
    ZeroToTimeVal(itval0.it_interval);  // Zero interval and value disarm the timer
    ZeroToTimeVal(itval0.it_value);
    if (setitimer(ITIMER_REAL, &itval0, &itval1) != 0)
        Die("IntervalTimerTerm", "Bad setitimer");
    usleep(TickTimeLen * 3);  // Wait out ticks already in flight (see note above)
}