Rodent 0.12

Discussion of chess software programming and technical issues.

Moderators: hgm, Harvey Williamson, bob

syzygy
Posts: 4161
Joined: Tue Feb 28, 2012 10:56 pm

Re: Rodent 0.12

Post by syzygy » Mon Mar 26, 2012 10:01 pm

diep wrote:While GetTickCount does give you milliseconds precision, it's very accurate milliseconds precision, which is effectively more accurate than Linux's microseconds precision in gettimeofday(); for the simple reason that the Windows kernel is in assembler and this is just moving a register to the user function, whereas the Linux kernel is in C, so it's less accurate than GetTickCount, apart from the millisecond resolution you get back versus microsecond resolution.
That "simple reason" doesn't make much sense, does it?

Maybe you oversimplified it?

diep
Posts: 1780
Joined: Thu Mar 09, 2006 10:54 pm
Location: The Netherlands
Contact:

Re: Rodent 0.12

Post by diep » Mon Mar 26, 2012 10:49 pm

syzygy wrote:
diep wrote:While GetTickCount does give you milliseconds precision, it's very accurate milliseconds precision, which is effectively more accurate than Linux's microseconds precision in gettimeofday(); for the simple reason that the Windows kernel is in assembler and this is just moving a register to the user function, whereas the Linux kernel is in C, so it's less accurate than GetTickCount, apart from the millisecond resolution you get back versus microsecond resolution.
That "simple reason" doesn't make much sense, does it?

Maybe you oversimplified it?
Nah, I'll take the word of the Windows kernel team there, not whatever MSDN wrote 20 years ago :)

Haven't you read the recent AMD GPU documentation, by the way?
Basically the current helpdesks that write it lack so much technical knowledge, yet make 100 rupees a month as a salary, that they even cut away diagrams and rewrite things into incomprehensible statements that basically no longer give clear information on what the GPU looks like.

I wonder how someone with just that recent documentation is going to try to understand how the internals of a GPU work :)

It's the same thing at so many helpdesks that now write the technical documentation. Cheap labour replacing the insiders. We shouldn't pretend that doesn't happen :)

It's simply difficult to compete with $1.11-an-hour labour. Yet even for them that's little, so the real experts can't be hired for that over there either. So they simply aren't there.

Add to that, usually they give support for hardware and software that they simply do not have themselves. Try giving support for a hardware component you have never seen in your life :)

I can give examples there that you wouldn't believe...

As for GetTickCount, I can confirm with Diep that it measures very accurately. I have a debug option that turns on all sorts of statistics, among which is measuring how long each core has effectively been searching.

One would GUESS that doing this with GetTickCount() doesn't give an accurate picture. Yet it does.

If you realize that Diep also has an idle loop per CPU, and that it isn't counting while a CPU is in there, you slowly start to get an impression of how accurately this GetTickCount() works.

So I have two confirmations of it working very accurately, to the millisecond :)

syzygy
Posts: 4161
Joined: Tue Feb 28, 2012 10:56 pm

Re: Rodent 0.12

Post by syzygy » Tue Mar 27, 2012 12:05 am

diep wrote:Add to that, usually they give support for hardware and software that they simply do not have themselves. Try giving support for a hardware component you have never seen in your life :)
Reminds me of the North Korean software developers writing iPhone apps ;)
So I have two confirmations of it working very accurately, to the millisecond :)
Ok, but that doesn't yet imply that gettimeofday() in Linux is less accurate "because the Linux kernel is written in C". Anyway, I just googled a bit and it seems that clock_gettime() should be very accurate on kernel 2.6.18 and later. And since that kernel, gettimeofday() also uses clock_gettime(). On recent kernels the overhead seems to be low as well:
The only benefits to gettimeofday() is that on powerpc, ia64 and x86_64, it is implemented with a userspace-only vsyscall/vdso, which avoids the syscall overhead. However, recent x86_64 kernels have added support for vsyscall clock_gettime() as well.
Link (almost 4 years old already)

It seems the timing source used might depend on intel/amd and power saving settings... The technical issues mentioned should apply to both Windows and Linux, though.

diep
Posts: 1780
Joined: Thu Mar 09, 2006 10:54 pm
Location: The Netherlands
Contact:

Re: Rodent 0.12

Post by diep » Wed Mar 28, 2012 10:52 am

syzygy wrote:
diep wrote:Add to that, usually they give support for hardware and software that they simply do not have themselves. Try giving support for a hardware component you have never seen in your life :)
Reminds me of the North Korean software developers writing iPhone apps ;)
So I have two confirmations of it working very accurately, to the millisecond :)
Ok, but that doesn't yet imply that gettimeofday() in Linux is less accurate "because the Linux kernel is written in C". Anyway, I just googled a bit and it seems that clock_gettime() should be very accurate on kernel 2.6.18 and later. And since that kernel, gettimeofday() also uses clock_gettime(). On recent kernels the overhead seems to be low as well:
The only benefits to gettimeofday() is that on powerpc, ia64 and x86_64, it is implemented with a userspace-only vsyscall/vdso, which avoids the syscall overhead. However, recent x86_64 kernels have added support for vsyscall clock_gettime() as well.
Link (almost 4 years old already)

It seems the timing source used might depend on intel/amd and power saving settings... The technical issues mentioned should apply to both Windows and Linux, though.
Look, in both Windows and Linux you have more accurate ways of timing, yet GetTickCount(), provided it doesn't overflow, costs you less kernel system time as it's assembler optimized.

The whole problem nowadays is not so much the measurement; it's the fact that these two operating systems are both totally outdated in the same way.

Both are single-processor OSes at their core, and both do all sorts of stupid locking to get even the simplest thing done. In Linux you can see this easily in the source code: every packet to and from the machine gets locked centrally, even the udp/raw data you send and receive.

This is so totally outdated - I have no words for it.

From recent postings of Linus I understand he wants to keep it like that...
(but never say never)

As a result of this being totally outdated, all the manufacturers have their own solution: hack the kernel and integrate their drivers into it; especially in Linux of course - forget HPC with Windows - I bet some marketeer once wrote a great story about Windows and HPC (high-performance computing). Of course nearly all supercomputers are Linux nowadays as a result.

This huge kernel bottleneck, which is there even in the realtime kernel, makes it tougher to turn Linux into an HPC environment. Not impossible of course - all the sysadmins will manage to do exactly that in the end - yet look around: clustering hasn't made it into the mainstream.

Post Reply