hgm wrote: I don't have Clang, but this issue is triggered by a bug report on XBoard, where someone tries to compile it for Mac with Clang. The build log contains the following warning:
moves.c:1337:26: warning: comparison of constant -4 with expression of type
'ChessSquare' is always false
[-Wtautological-constant-out-of-range-compare]
if (board[EP_STATUS] == EP_IRON_LION && (board[rt][ft] == WhiteLion ...
~~~~~~~~~~~~~~~~ ^ ~~~~~~~~~~~~
Now, based on what people remarked here, there could be a genuine problem: board[] is declared as ChessSquare, which is an enum type. In the gcc that I am using this is apparently always implemented as (unsigned int). But the range of the enum does not extend beyond 256, so it could very well be that Clang implements it as (unsigned char). In that case there would be a problem, because board[EP_STATUS] = -4; would assign 0xFC to it, which would not be sign-extended on conversion to (unsigned int).
I guess I will have to be more careful here, and cast the ChessSquare to (signed char) before doing the comparison.
It may have nothing to do with the sign: if board[] is an enum type and EP_IRON_LION is not part of the enum, you can also get this (or a similar) warning, the idea being that an enum type should only ever hold values that are part of the enum. You certainly can have negative constants in an enum list.
In this case, the correct fix is to define EP_IRON_LION in the enum list.
EDIT: I don't have a link handy, but I think the C standard states that an enum element is of type "int", not "unsigned int".
It appears the answer is that the implementation gets to choose the underlying integer type of your enum (the type must be able to represent every value in the enum, but if those values all fit in an unsigned char the compiler is allowed to choose that, for instance).
So I guess you should explicitly cast any enums to the type you want them to represent if you're about to do something that relies on what integer type they are.
kbhearn wrote: It appears the answer is that the implementation gets to choose the underlying integer type of your enum (the type must be able to represent every value in the enum, but if those values all fit in an unsigned char the compiler is allowed to choose that, for instance).
Enum constants are of type int; which integer type the compiler uses to represent the enum type itself is compiler dependent (and possibly context dependent as well).
So I guess you should explicitly cast any enums to the type you want them to represent if you're about to do something that relies on what integer type they are.
If you want to mix and match enums and plain integers, then I guess so. Otherwise I would suggest that you shouldn't do that and use variables of the typedef'ed enum instead...
Unsigned integers shall obey the laws of arithmetic modulo 2^n where n is the number of bits in the value representation of that particular size of integer.
This means that the largest number UINT_MAX satisfies UINT_MAX + 1 == 0, or -1 == UINT_MAX, regardless of the underlying representation.
Unsigned integers shall obey the laws of arithmetic modulo 2^n where n is the number of bits in the value representation of that particular size of integer.
This means that the largest number UINT_MAX satisfies UINT_MAX + 1 == 0, or -1 == UINT_MAX, regardless of the underlying representation.
First of all, that's a draft of a C++ standard, but I think we are talking about C here.
The fact that unsigned integers obey the laws of arithmetic modulo 2^n tells me that (0u-1u) is indeed what we want. But it tells me nothing about what happens when I take -1 (a signed integer constant) and convert it to unsigned int.
AlvaroBegue wrote:
First of all, that's a draft of a C++ standard, but I think we are talking about C here.
The fact that unsigned integers obey the laws of arithmetic modulo 2^n tells me that (0u-1u) is indeed what we want. But it tells me nothing about what happens when I take -1 (a signed integer constant) and convert it to
unsigned int.
OK, first, the draft C Standard N1539 contains pretty much the same wording in 6.2.5/9. Second, 6.3.1.3/3 of the same C Standard states:
Otherwise, if the new type is unsigned, the value is converted by repeatedly adding or subtracting one more than the maximum value that can be represented in the new type until the value is in the range of the new type.
AlvaroBegue wrote:
First of all, that's a draft of a C++ standard, but I think we are talking about C here.
The fact that unsigned integers obey the laws of arithmetic modulo 2^n tells me that (0u-1u) is indeed what we want. But it tells me nothing about what happens when I take -1 (a signed integer constant) and convert it to
unsigned int.
OK, first, the draft C Standard N1539 contains pretty much the same wording in 6.2.5/9. Second, 6.3.1.3/3 of the same C Standard states:
Otherwise, if the new type is unsigned, the value is converted by repeatedly adding or subtracting one more than the maximum value that can be represented in the new type until the value is in the range of the new type.
Ah, that's interesting. C++98 left the result implementation defined. It's a good thing that C and later C++ standards don't have this issue.