Here are my Rodent 0.12 GCC builds (Win32 & Linux 32/64), plus src + makefile which compiles cleanly.
I didn't include a MinGW64 compile because it runs slower than the MSVC/Intel ones from Dann & Dennis,
which surprised me, as the MinGW32 compile is faster.
http://dl.dropbox.com/u/5047625/rodent-012-gcc-ja.zip
Jim.
Rodent 0.12
Moderators: hgm, Rebel, chrisw
-
- Posts: 1384
- Joined: Fri Jul 14, 2006 7:56 am
- Location: London, England
- Full name: Jim Ablett
-
- Posts: 3232
- Joined: Mon May 31, 2010 1:29 pm
- Full name: lucasart
Re: Rodent 0.12
lucasart wrote: while linux gives you nanosecond precision, I don't think windows' GetTickCount has a better resolution than a millisecond (or maybe even 10 ms...)
Actually, according to Micro$oft, GetTickCount and GetTickCount64 have a resolution between 10 and 16 ms.
I couldn't find a proper timer that uses the CPU clock in the Windows API. And it seems that MSVC implements clock() by calling GetTickCount anyway.
As always, Windows sucks
-
- Posts: 2559
- Joined: Fri Nov 26, 2010 2:00 pm
- Location: Czech Republic
- Full name: Martin Sedlak
Re: Rodent 0.12
lucasart wrote: Actually, according to Micro$oft, GetTickCount and GetTickCount64 have a resolution between 10 and 16 ms
You can try QueryPerformanceCounter/QueryPerformanceFrequency instead if you need really precise timing.
I couldn't find a proper timer that uses the CPU clock in the Windows API. And it seems that MSVC implements clock() by calling GetTickCount anyway.
As always, Windows sucks
-
- Posts: 1334
- Joined: Sun Jul 17, 2011 11:14 am
Re: Rodent 0.12
Jim,
Your source is non-compileable (if there is such a word).
Yes, I know - my GCC is _ancient_ (4.0.1) - and I'll try to build myself a copy of clang(++).
Code: Select all
g++ -c -g attacks.c -Wall -O3 -Wno-write-strings
attacks.c:33: error: integer constant is too large for 'long' type
attacks.c:33: error: integer constant is too large for 'long' type
attacks.c:35: error: integer constant is too large for 'long' type
attacks.c:35: error: integer constant is too large for 'long' type
attacks.c:37: error: integer constant is too large for 'long' type
attacks.c:37: error: integer constant is too large for 'long' type
attacks.c:37: error: integer constant is too large for 'long' type
attacks.c:37: error: integer constant is too large for 'long' type
attacks.c:49: error: integer constant is too large for 'long' type
attacks.c:49: error: integer constant is too large for 'long' type
attacks.c:50: error: integer constant is too large for 'long' type
attacks.c:50: error: integer constant is too large for 'long' type
attacks.c:58: error: integer constant is too large for 'long' type
attacks.c:58: error: integer constant is too large for 'long' type
attacks.c:59: error: integer constant is too large for 'long' type
attacks.c:59: error: integer constant is too large for 'long' type
make: *** [attacks.o] Error 1
Matthew:out
Some believe in the almighty dollar.
I believe in the almighty printf statement.
-
- Posts: 481
- Joined: Thu Apr 16, 2009 12:00 pm
- Location: Slovakia, EU
Re: Rodent 0.12
lucasart wrote: while linux gives you nanosecond precision, I don't think windows' GetTickCount has a better resolution than a millisecond (or maybe even 10 ms...)
lucasart wrote: Actually, according to Micro$oft, GetTickCount and GetTickCount64 have a resolution between 10 and 16 ms
You can get 1 ms precision using the timeBeginPeriod, timeEndPeriod and timeGetTime functions.
I couldn't find a proper timer that uses the CPU clock in the Windows API. And it seems that MSVC implements clock() by calling GetTickCount anyway.
As always, Windows sucks
http://msdn.microsoft.com/en-us/library ... 85%29.aspx
The default precision of the timeGetTime function can be five milliseconds or more, depending on the machine. You can use the timeBeginPeriod and timeEndPeriod functions to increase the precision of timeGetTime.
-
- Posts: 1384
- Joined: Fri Jul 14, 2006 7:56 am
- Location: London, England
- Full name: Jim Ablett
Re: Rodent 0.12
Hi Matthew,
ZirconiumX wrote: Jim,
Your source is non-compileable (if there is such a word).
Yes, I know - my GCC is _ancient_ (4.0.1) - and I'll try to build myself a copy of clang(++).
Matthew:out
If you are using an old version of GCC (and you are), add this flag to your compiler options:
Code: Select all
-fpermissive
You will get lots of warnings when compiling, but it should build and run ok.
Jim.
-
- Posts: 41461
- Joined: Sun Feb 26, 2006 10:52 am
- Location: Auckland, NZ
Re: Rodent 0.12
Hi Pawel,
PK wrote: available on its usual web page, www.koziol.home.pl/rodent
This version will probably stay for a while, as I will have insane amount of work in March. It's not much stronger than 0.11, but should be a bit more careful tactically.
Changes:
- some restructuring
- smarter time management (probably the most helpful modification)
- LMR code tweaked and simplified, history restriction enabled
- first draft of weakening code (may be enabled by #defines)
- loose pieces of endgame knowledge
Currently only the 32-bit compile is available, so I hope for some help
For the coming month I expect only to add more endgame stuff and release it as development snapshots.
I started testing Rodent 0.12, but had to give up because I was getting time losses on the final move before either the first or second time control.
Never had that issue with Rodent 0.10, which was the last version I tested.
Graham.
gbanksnz at gmail.com
-
- Posts: 893
- Joined: Mon Jan 15, 2007 11:23 am
- Location: Warsza
Re: Rodent 0.12
The development version has this bug fixed, but unfortunately the king safety code is under a major rewrite, so I cannot release right now.
Pawel Koziol
http://www.pkoziol.cal24.pl/rodent/rodent.htm
-
- Posts: 41461
- Joined: Sun Feb 26, 2006 10:52 am
- Location: Auckland, NZ
Re: Rodent 0.12
PK wrote: development version has this bug fixed, but unfortunately king safety code is under a major rewrite, so I cannot release right now.
No hurry. Thanks.
gbanksnz at gmail.com
-
- Posts: 1822
- Joined: Thu Mar 09, 2006 11:54 pm
- Location: The Netherlands
Re: Rodent 0.12
rvida wrote: You can get 1ms precision using timeBeginPeriod, timeEndPeriod and timeGetTime functions.
lucasart wrote: Actually, according to Micro$oft, GetTickCount and GetTickCount64 have a resolution between 10 and 16 ms
lucasart wrote: while linux gives you nanosecond precision, I don't think windows' GetTickCount has a better resolution than a millisecond (or maybe even 10 ms...)
I couldn't find a proper timer that uses the CPU clock in the Windows API. And it seems that MSVC implements clock() by calling GetTickCount anyway.
As always, Windows sucks
rvida wrote: http://msdn.microsoft.com/en-us/library ... 85%29.aspx
The default precision of the timeGetTime function can be five milliseconds or more, depending on the machine. You can use the timeBeginPeriod and timeEndPeriod functions to increase the precision of timeGetTime.
While GetTickCount does give you millisecond precision, those are very accurate milliseconds - effectively more accurate than Linux's microsecond precision in gettimeofday. The simple reason is that the Windows kernel is in assembler and just moves the register to the user function, whereas the Linux kernel is in C, so it's less accurate than GetTickCount, apart from the millisecond resolution you get back versus microsecond resolution.
The MSDN writing on this is like 20 years old by now or so - ignore it.