Actually, very little is "undefined" with respect to hardware. It always has some action that is predictable 100% of the time. Unfortunately, vendors can't agree on some things (shift is a good example): some machines allow negative shift amounts, some do not. So the language standard was left vague for those cases.

Uri Blass wrote:
I do not understand much about hardware, but I think it would be better to have a definition for every possibility instead of having undefined cases.

wgarvin wrote:
In order to define it, they would have to state what the relationship is between the bits being shifted, the number that the left operand represented before the shift operation, and the number represented by the result. In other words, what number is meant by a particular pattern of bits? Nowadays pretty much all machines use a 2's-complement representation for their integer numbers, but when C was invented there were other machines around that used 1's complement or other representations. So the C standard couldn't really define it without making C impossible to implement sanely on some of the machines it was being used on. Instead, programmers were expected to know how their machine worked and use the operators accordingly. Widely-portable low-level software was not as common back then, I guess.

Uri Blass wrote:
Yes, this is clear, but I wonder why not define it when the right operand is negative.

wgarvin wrote:
That seems pretty clear.

Dann Corbit wrote:
Semantics
3 The integer promotions are performed on each of the operands. The type of the result is that of the promoted left operand. If the value of the right operand is negative or is greater than or equal to the width of the promoted left operand, the behavior is undefined.
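To make that paragraph concrete, here is a minimal C sketch (assuming a 32-bit unsigned int); the well-defined case is executed, and the undefined cases are left as comments:

```c
#include <stdio.h>

int main(void)
{
    unsigned int x = 1;

    printf("%u\n", x << 3);   /* well-defined: prints 8 */

    /* Both of the following violate the quoted paragraph and are
       therefore undefined behavior -- the compiler may do anything: */
    /* x << -1    right operand is negative                 */
    /* x << 32    >= width of x (assuming 32-bit unsigned)  */

    return 0;
}
```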
Here's another example: what is a NULL pointer? If you were to answer "a pointer value that consists of all zero bits", you would be wrong. The NULL pointer is a unique value per pointer type that means "points to nothing", but the implementation can choose how to represent it. The fact that NULL is a macro that expands to a (possibly type-cast) literal 0 token says nothing about the representation; that 0 token just tells the compiler to produce a NULL pointer.
Nearly all C implementations for the last 15+ years use an all-zero-bits value for the NULL pointer of every type. Because of that, there is a lot of code out there that does things like memset a structure to zero bytes and assume that this initializes any pointers in the structure to NULL. Nowadays, no one would make a C or C++ implementation that behaved differently, because there is so much code out there which would not work on it. I'm not sure whether newer versions of the C or C++ standards require an all-zero-bits representation for NULL pointers, but I doubt it. In practice, go right ahead and assume that it's true, because it is true on any implementation of C or C++ you'll ever use. But the C and C++ specs are filled with things like this: behaviour which is undefined, or implementation-defined, even though it works in a consistent and sane way on 99% of the machines in the world, because somewhere there is an oddball machine that C runs on where that particular thing works totally differently.
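Here is a short sketch of the memset idiom being described. Strictly speaking, the standard does not guarantee that the final comparison holds; it just happens to hold on essentially every real implementation:

```c
#include <stdio.h>
#include <string.h>

struct node {
    int          value;
    struct node *next;
};

int main(void)
{
    struct node n;

    /* Zero every byte of the structure.  The standard does not promise
       that an all-zero-bits pattern is a null pointer, but on virtually
       every real implementation it is, so this idiom is everywhere. */
    memset(&n, 0, sizeof n);

    if (n.next == NULL)
        printf("all-zero bits compared equal to NULL here\n");

    return 0;
}
```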
If the fastest implementation of << that works correctly only for positive shift amounts is compiler-dependent, then it would also be possible to have another symbol for the faster operation that does not work for negative amounts.
The definition I had in mind has nothing to do with hardware; it is simply a trivial mathematical generalization.
It is trivial for any hardware to support << with a negative right operand, because the compiler can simply translate it into >>, as in the sketch below.
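A minimal sketch of that translation. The wrapper name shift_left is made up for illustration, and it assumes the magnitude of the shift amount is smaller than the width of the operand:

```c
#include <stdio.h>

/* Generalized left shift: a negative amount shifts right instead.
   Assumes -32 < amount < 32 for a 32-bit unsigned int. */
static unsigned int shift_left(unsigned int x, int amount)
{
    if (amount >= 0)
        return x << amount;    /* ordinary left shift   */
    return x >> -amount;       /* negative: shift right */
}

int main(void)
{
    printf("%u\n", shift_left(8u, 2));   /* prints 32 */
    printf("%u\n", shift_left(8u, -2));  /* prints 2  */
    return 0;
}
```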
Some parts of the standard were stupidly left to the vendor, such as whether a char is signed or unsigned, but that is another issue...
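For example, this tiny program (assuming an 8-bit char) prints different output depending on that implementation-defined choice:

```c
#include <stdio.h>

int main(void)
{
    char c = (char)0xFF;   /* all bits set, assuming an 8-bit char */

    /* Implementation-defined: prints -1 where plain char is signed
       (typical x86 ABIs) and 255 where it is unsigned (e.g. many
       ARM ABIs). */
    printf("%d\n", c);
    return 0;
}
```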