hi,
thanks for Your input.
As for sleeping, it is used for weakening the engine. This feature is not activated yet, because it needs much more tuning than I can afford right now.
Do You have access to code that waits a specific number of nanoseconds under Linux?
Rodent 0.12
-
- Posts: 893
- Joined: Mon Jan 15, 2007 11:23 am
- Location: Warsza
-
- Posts: 3232
- Joined: Mon May 31, 2010 1:29 pm
- Full name: lucasart
Re: Rodent 0.12
PK wrote: Do You have access to code that waits a specific number of nanoseconds under Linux?
Yes, the link I sent you. There is also usleep (if you want microseconds).
In fact it's probably easier to use usleep, unless you really need nanosecond precision and to not be polluted by signals.
Using usleep couldn't be easier:
Code: Select all
#include <unistd.h>
int usleep(useconds_t usec);
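A minimal sketch of how usleep might be wrapped for millisecond sleeps (sleep_ms and wall_ms are illustrative helper names, not part of Rodent):

```c
#include <unistd.h>      /* usleep() */
#include <sys/time.h>    /* gettimeofday() */

/* Sleep for roughly 'ms' milliseconds; usleep() takes microseconds.
   Returns 0 on success, or -1 (errno == EINTR) if a signal woke us early. */
static int sleep_ms(unsigned ms)
{
    return usleep(ms * 1000u);
}

/* Wall-clock milliseconds, handy for checking how long we actually slept. */
static long wall_ms(void)
{
    struct timeval tv;
    gettimeofday(&tv, NULL);
    return tv.tv_sec * 1000L + tv.tv_usec / 1000;
}
```

Note that usleep only promises to sleep *at least* the requested time; the scheduler may add a few milliseconds on top.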
-
- Posts: 3232
- Joined: Mon May 31, 2010 1:29 pm
- Full name: lucasart
Re: Rodent 0.12
Also, for the sake of portability (as well as simplicity), you could use the ISO C function clock() instead of all this.
While Linux gives you nanosecond precision, I don't think Windows' GetTickCount has a better resolution than a millisecond (or maybe even 10 ms...), so there's no point in all this complication.
Anyway, I fixed your code and compiled it. I'll test it in my Open Source Bullet rating list. As usual, results tomorrow in the tournament forum
Code: Select all
#if defined(_WIN32) || defined(_WIN64)
#include <windows.h>     // GetTickCount()
#else
#include <sys/time.h>    // gettimeofday()
#endif

int sTimer::GetMS(void)
{
#if defined(_WIN32) || defined(_WIN64)
    return GetTickCount();
#else
    struct timeval tv;
    gettimeofday(&tv, NULL);
    return tv.tv_sec * 1000 + tv.tv_usec / 1000;
#endif
}
Much easier and fully portable:
Code: Select all
#include <time.h>
clock_t start = clock();
...
clock_t stop = clock();
unsigned duration_milliseconds = (stop - start) * 1000 / CLOCKS_PER_SEC;
-
- Posts: 2272
- Joined: Mon Sep 29, 2008 1:50 am
Re: Rodent 0.12
lucasart wrote: In fact it's probably easier to use usleep, unless you really need nanosecond precision and to not be polluted by signals.
Well, usleep is deprecated...
Code: Select all
POSIX.1-2001 declares this function obsolete; use nanosleep(2) instead.
POSIX.1-2008 removes the specification of usleep().
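A sketch of the nanosleep replacement the man page suggests (sleep_us is a hypothetical helper name; unlike usleep, it resumes after signal interruptions using the remainder nanosleep reports):

```c
#include <time.h>   /* nanosleep(), struct timespec */

/* Replacement for the obsolete usleep(): sleep for 'us' microseconds
   via nanosleep(). If a signal interrupts the sleep, nanosleep()
   writes the unslept time into 'rem', so we loop on the remainder
   until the full interval has elapsed. */
static int sleep_us(long us)
{
    struct timespec req = { .tv_sec  = us / 1000000L,
                            .tv_nsec = (us % 1000000L) * 1000L };
    struct timespec rem;
    while (nanosleep(&req, &rem) != 0)
        req = rem;   /* interrupted: sleep what is left */
    return 0;
}
```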
-
- Posts: 893
- Joined: Mon Jan 15, 2007 11:23 am
- Location: Warsza
Re: Rodent 0.12
64-bit compiles by Denis Mendoza and Dann Corbit are uploaded.
A development snapshot with compatibility fixes will come after the weekend.
busy regards,
pawel
Pawel Koziol
http://www.pkoziol.cal24.pl/rodent/rodent.htm
-
- Posts: 1383
- Joined: Fri Jul 14, 2006 7:56 am
- Location: London, England
- Full name: Jim Ablett
Re: Rodent 0.12
Here's my Rodent 0.12 GCC builds (Win32 & Linux 32/64) plus src+makefile, which compiles cleanly.
I didn't include a Mingw64 compile because it runs slower than the Msvc/Intel ones from Dann & Dennis, which surprised me as the Mingw32 compile is faster.
http://dl.dropbox.com/u/5047625/rodent-012-gcc-ja.zip
Jim.
-
- Posts: 3232
- Joined: Mon May 31, 2010 1:29 pm
- Full name: lucasart
Re: Rodent 0.12
lucasart wrote: while linux gives you nanosecond precision, I don't think windows' GetTickCount has a better resolution than a millisecond (or maybe even 10 ms...)
Actually, according to Micro$oft, GetTickCount and GetTickCount64 have a resolution between 10 and 16 ms.
I couldn't find a proper timer that uses the CPU clock in the Windows API. And it seems that MSVC implements clock() by calling GetTickCount anyway.
As always, Windows sucks
-
- Posts: 2555
- Joined: Fri Nov 26, 2010 2:00 pm
- Location: Czech Republic
- Full name: Martin Sedlak
Re: Rodent 0.12
lucasart wrote: Actually, according to Micro$oft, GetTickCount and GetTickCount64 have a resolution between 10 and 16 ms. I couldn't find a proper timer that uses the CPU clock in the Windows API. And it seems that MSVC implements clock() by calling GetTickCount anyway. As always, Windows sucks
You can try QueryPerformanceCounter/QueryPerformanceFrequency instead if you need really precise timing.
-
- Posts: 1334
- Joined: Sun Jul 17, 2011 11:14 am
Re: Rodent 0.12
Jim,
Your source is non-compileable (if there is such a word).
Yes, I know - my GCC is _ancient_ (4.0.1) - and I'll try to build myself a copy of clang(++).
Code: Select all
g++ -c -g attacks.c -Wall -O3 -Wno-write-strings
attacks.c:33: error: integer constant is too large for 'long' type
attacks.c:33: error: integer constant is too large for 'long' type
attacks.c:35: error: integer constant is too large for 'long' type
attacks.c:35: error: integer constant is too large for 'long' type
attacks.c:37: error: integer constant is too large for 'long' type
attacks.c:37: error: integer constant is too large for 'long' type
attacks.c:37: error: integer constant is too large for 'long' type
attacks.c:37: error: integer constant is too large for 'long' type
attacks.c:49: error: integer constant is too large for 'long' type
attacks.c:49: error: integer constant is too large for 'long' type
attacks.c:50: error: integer constant is too large for 'long' type
attacks.c:50: error: integer constant is too large for 'long' type
attacks.c:58: error: integer constant is too large for 'long' type
attacks.c:58: error: integer constant is too large for 'long' type
attacks.c:59: error: integer constant is too large for 'long' type
attacks.c:59: error: integer constant is too large for 'long' type
make: *** [attacks.o] Error 1
Matthew:out
Some believe in the almighty dollar.
I believe in the almighty printf statement.
-
- Posts: 481
- Joined: Thu Apr 16, 2009 12:00 pm
- Location: Slovakia, EU
Re: Rodent 0.12
lucasart wrote: Actually, according to Micro$oft, GetTickCount and GetTickCount64 have a resolution between 10 and 16 ms. I couldn't find a proper timer that uses the CPU clock in the Windows API. And it seems that MSVC implements clock() by calling GetTickCount anyway. As always, Windows sucks
You can get 1 ms precision using the timeBeginPeriod, timeEndPeriod and timeGetTime functions:
http://msdn.microsoft.com/en-us/library ... 85%29.aspx
The default precision of the timeGetTime function can be five milliseconds or more, depending on the machine. You can use the timeBeginPeriod and timeEndPeriod functions to increase the precision of timeGetTime.
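A sketch of how those calls might bracket a timing-sensitive region (timed_region is a hypothetical helper; the non-Windows branch is only a stand-in so the sketch stays self-contained):

```c
#if defined(_WIN32)
#include <windows.h>
#include <mmsystem.h>   /* timeBeginPeriod & co.; link with winmm.lib */

/* Raise the timer resolution to 1 ms only around the region that
   needs it, then restore it, as the MSDN page recommends. */
static unsigned long timed_region(void)
{
    timeBeginPeriod(1);              /* request 1 ms resolution */
    unsigned long t = timeGetTime(); /* ... timing-sensitive work ... */
    timeEndPeriod(1);                /* undo the matching Begin call */
    return t;
}
#else
#include <time.h>

/* Non-Windows stand-in for this sketch: plain clock() milliseconds. */
static unsigned long timed_region(void)
{
    return (unsigned long)(clock() * 1000 / CLOCKS_PER_SEC);
}
#endif
```

Every timeBeginPeriod call must be paired with a matching timeEndPeriod, since the raised resolution affects the whole system while it is active.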