I do not know, but it is quite picky. I compile with that as a routine. Still, once in a while I use valgrind too. For some reason, if I compile files individually and link them, the warning does not show up. I need to compile with gcc <warning switches> file1.c file2.c etc. -o programname to get the result.
Miguel
Valgrind (which is basically a virtual CPU) works by tagging memory with extra bits to distinguish initialized data from uninitialized data.
I once encountered something similar while comparing gcc and icc executables. The icc executable behaved the same as the gcc one as long as no optimization was turned on.
It was a bit difficult to debug, as the debug versions behaved the same.
The problem in the end was relying on undefined behavior of a C function (without knowing that it was undefined).
In my implementation of insertion sort I used memcpy to make room for an element. The behavior of memcpy is undefined when the source and target memory overlap. gcc kept the desired behavior even with optimization turned on; icc did not. Replacing memcpy with memmove solved the issue in my case.
Especially the rule: "Quit thinking and look." This means: do not make wild guesses when you have no idea which of thousands of lines of code causes the difference. Rather, add a dump and see where the difference occurs: http://talkchess.com/forum/viewtopic.php?t=39390
Desperado wrote:Ok, now that we know the problem is hidden in the transposition table code,
and _if_ you can exclude a different hash size for 32/64 bit, here is my next
bet:
* mixing up lo/hi - index/signature code.
I mean, you are using loBits for indexing, but are you really using _hiBits_
for the signature? That would explain the 32/64-bit differences
immediately and also the different debug/release behaviour.
(edit: or is dwSignature the _complete_ 32-bit hashkey for the position?)
Just one more question.
And are you using 1 slot, or many slots where the position can be put?
Michael
As mentioned in my reply to Ricardo, the size of the hash entry is 16 bytes in debug and release, for both 32-bit and 64-bit.
And, yes, dwSignature is the complete 32-bit signature for the position as stored in the board structure. I am going to check the code thoroughly to make sure that it is always being passed properly.
SaveHash is only called in three places, before returning Beta, before returning Alpha, and before a null move cutoff.
jm
I assume you already checked that the implicit casts work as expected (value, sign),
and further that the data to store is always in the range you
think it is (like passing depth<0?! in qs?, or reducedDepth instead of depth...).
Also that you do not mix things up (like ply/depth) by accident when data gets stored.
So currently, with the given code snippets and information, I don't have more _first_ guesses
that would produce different behaviour for the 32/64-bit builds.
If you do not have something like an EventLogger (from Onno's suggestion),
you may record the following format to isolate the problem.
For all versions (debug32, debug64, release32, release64), write a
logFile of (nodecount, dwSignature).
You can log only every 100K nodes in the first run, every 10K in the second run...
until you get the exact node ID where the first difference occurs.
When you have found the node, you can retrack the problem in debug mode,
or at least get an idea of what may be different in release mode by
retracking manually.
This sounds like an uninitialized variable. Have you tried GCC, letting it build the dependency graph (it will do this with the -O option)?
Much faster, in my opinion, is to use valgrind. If there is an uninitialized variable, it will immediately tell you.
It doesn't seem to catch 'em all. BTW, gcc is far faster, as we are just talking about a simple compile step. It will produce a warning for any variable that is used before it is assigned, although aliasing through pointers and such can break this completely...
But any time you can change the node count by changing the compile options, the 32- vs 64-bit execution target, or by adding or deleting code that does nothing, it always leads to a suspicion of uninitialized stack data first and foremost.
While valgrind and some other tools like bounds checkers can help, there is a learning curve to using them. GCC's dependency-graph analysis is automatic as long as you tell it to compile and optimize.