bob wrote:
The optimization about a + 1 > a being optimized into "1" is an unsafe one.

Rein Halbersma wrote:
No. The use of a + 1 > a for signed int a is unsafe. The optimization for sure will not help this single line very much, but the optimizer rules in a compiler are written for an entire universe of correct code, not for the hopefully soon extinct subset of non-conforming and unmaintained code.

The whole "the compiler is malicious" line of reasoning is based on a fundamental misunderstanding. It seems that you and HG regard C as a mapping of the hardware to C programming constructs, instead of the other way around. The reason that some program constructs in C lead to undefined behavior is that the mapping of C to hardware is not one-to-one. Some things cannot be defined in a portable way because of the enormous diversity in hardware (memory models, bus-width, cache coherence etc.).

Let me cross-examine the witness. And I am going to "lead" the witness just a bit by asking very specific questions.
"are we compiling for a computer, yes or no?"
<I assume you answer yes>
"does the computer have a finite word length, which means that at some point, when you add 1 to a value, the value wraps around to a negative value for signed ints, just like it wraps around to zero for unsigned?"
<I assume you answer yes>
"why do we treat signed and unsigned overflow differently, since the hardware of today, any platform, treats them exactly the same?"
<you do get to answer this one>
"What is the difference between x++ producing a zero on unsigned ints, and x++ producing a large - number on signed ints?"
<your turn again>
"can you justify treating them differently?"
"does formal mathematics make such a distinction between the two cases?"
Then you will see where I am coming from here.
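For concreteness, here is a minimal C sketch (my own example, not from the thread) of the two wrap-arounds those questions refer to. The unsigned result is guaranteed by the standard; the signed result is formally undefined behavior, and the printed value is only what common two's-complement hardware happens to produce:

Code:

#include <limits.h>
#include <stdio.h>

int main(void)
{
    unsigned int u = UINT_MAX;  /* largest unsigned int */
    int s = INT_MAX;            /* largest signed int   */

    u++;  /* defined by the standard: wraps around to 0 */
    s++;  /* undefined behavior per the standard; the add instruction on
             two's-complement hardware nevertheless wraps to INT_MIN */

    printf("unsigned wrap: %u\n", u);  /* prints 0 */
    printf("signed wrap:   %d\n", s);  /* typically prints -2147483648 */
    return 0;
}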
This difference in how signed and unsigned overflow are treated is arbitrary. It serves no good purpose. It ADDS confusion to the programming process. And it is diametrically opposed to what the hardware does in both cases, which is to handle them exactly the same.
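And for reference, this is how the a + 1 > a comparison from the quote above plays out in the two cases. The sketch is mine; the constant-folding of the signed version is what optimizers such as gcc and clang commonly do at -O2, because the standard lets them assume signed overflow never happens, not something they are obliged to do:

Code:

#include <limits.h>
#include <stdio.h>

/* Signed: the optimizer may assume a + 1 never overflows,
   so it is free to fold this whole function to "return 1". */
static int signed_cmp(int a)
{
    return a + 1 > a;
}

/* Unsigned: wrap-around is defined, so the test must stay;
   this returns 0 when a == UINT_MAX (a + 1 wraps to 0).    */
static int unsigned_cmp(unsigned int a)
{
    return a + 1 > a;
}

int main(void)
{
    printf("%d\n", signed_cmp(INT_MAX));     /* commonly 1 after folding, formally undefined */
    printf("%d\n", unsigned_cmp(UINT_MAX));  /* always 0 */
    return 0;
}

Comparing the generated assembly at -O0 and -O2 (e.g. with gcc -S) typically shows the signed function collapsing to a constant while the unsigned one keeps the compare.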