I wrote a little C++ snippet that loops a thousand times, printing the current time with microsecond resolution. The wall clock value is taken from gettimeofday(). In each case, the accuracy is better than one millisecond. Sample output:
On a dual 1.133 GHz Pentium 3 running RedHat 9 Linux:
Of course the I/O (here, it's just the O) takes time. The point of the demo is to show the timing resolution available. The output is line-buffered, so there is an additional blocking system call for each record.
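Something along these lines (a minimal sketch, not the original source, assuming gettimeofday() and plain printf() output) reproduces the idea:

Code:
// Sketch: print the wall clock a thousand times with
// microsecond resolution via gettimeofday().
#include <cstdio>
#include <sys/time.h>

int main() {
    for (int i = 0; i < 1000; ++i) {
        timeval tv;
        gettimeofday(&tv, nullptr);   // wall clock, microsecond fields
        std::printf("%ld.%06ld\n",
                    static_cast<long>(tv.tv_sec),
                    static_cast<long>(tv.tv_usec));
    }
    return 0;
}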
All modern CPUs and chipsets support high-resolution timing to one microsecond or better, even if the OS in use doesn't. On a modern Mac, there's the nanosleep() call, whose interface is specified in nanoseconds. Regular gettimeofday() has to settle for microsecond resolution, since its interface (struct timeval) only carries microseconds.
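To make the interface limits concrete: gettimeofday() fills a struct timeval whose finest field is microseconds, while nanosleep() takes a struct timespec specified in nanoseconds, whatever the actual hardware granularity turns out to be (a small sketch, with an arbitrary 0.5 ms sleep just for illustration):

Code:
#include <cstdio>
#include <time.h>
#include <sys/time.h>

int main() {
    // gettimeofday(): seconds + microseconds -- can't express anything finer
    timeval tv;
    gettimeofday(&tv, nullptr);
    std::printf("timeval: %ld s + %ld us\n",
                static_cast<long>(tv.tv_sec),
                static_cast<long>(tv.tv_usec));

    // nanosleep(): the request is seconds + nanoseconds, though the
    // granularity actually delivered depends on the OS and hardware
    timespec req = {0, 500000};   // ask for a 0.5 ms sleep
    nanosleep(&req, nullptr);
    return 0;
}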
I could be wrong, but I thought I read somewhere that high-resolution timing is not reliable on anything except real-time OSes (and things like CPU frequency scaling make it even worse).
What I meant in the last post is that it may be more precise than what you have shown, were it not for the blocking calls. That is why I suggested calling gettimeofday() in succession (with no output between the calls).
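Something like the following sketch (the 1000-iteration count and the variable names are just illustrative) records the smallest non-zero gap between two successive reads, with no output in between:

Code:
#include <cstdio>
#include <sys/time.h>

int main() {
    long min_delta = 1000000;             // smallest non-zero gap seen, in usec
    timeval prev, cur;
    gettimeofday(&prev, nullptr);
    for (int i = 0; i < 1000; ++i) {
        gettimeofday(&cur, nullptr);      // no output between calls
        long delta = static_cast<long>(cur.tv_sec - prev.tv_sec) * 1000000L
                   + (cur.tv_usec - prev.tv_usec);
        if (delta > 0 && delta < min_delta)
            min_delta = delta;
        prev = cur;
    }
    std::printf("smallest non-zero delta: %ld usec\n", min_delta);
    return 0;
}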
sje wrote: Well, I already know that it's good for 1 usec on my machines. Perhaps you could write your own version for your machines.
(I discarded the source for my tests a while back.)
Note that what you are verifying is simply that the timer is a monotonically increasing value that ticks in microsecond increments; it doesn't say a thing about how accurate those ticks actually are. The operating system frequently disables interrupts for longer than a microsecond...
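One way to see that in the numbers is to count suspiciously large gaps between back-to-back reads, which is roughly what an interrupt or a preemption looks like from user space (again only a sketch, with an arbitrary 100-microsecond threshold):

Code:
#include <cstdio>
#include <sys/time.h>

int main() {
    timeval prev, cur;
    gettimeofday(&prev, nullptr);
    int stalls = 0;
    for (int i = 0; i < 100000; ++i) {
        gettimeofday(&cur, nullptr);
        long delta = static_cast<long>(cur.tv_sec - prev.tv_sec) * 1000000L
                   + (cur.tv_usec - prev.tv_usec);
        if (delta > 100)                  // arbitrary threshold: 100 usec
            ++stalls;                     // likely an interrupt or preemption
        prev = cur;
    }
    std::printf("gaps over 100 usec: %d\n", stalls);
    return 0;
}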
sje wrote: Well, actually I do know that the microsecond clock is good to one microsecond, although that is not necessarily proven from the sample output.
How do you solve the interrupt issue? When any interrupt occurs, further interrupts are disabled until they are explicitly re-enabled, and it doesn't take much to miss clock ticks. That's why things like NTP were developed: to correct the time slip that occurs naturally because of this...
I have _never_ seen a computer that could maintain a clock to even 1 second of drift per day, which is roughly 12 parts per million, or a millisecond of error every minute and a half...