
On Sat, Aug 16, 2008 at 11:50:05PM +0300, dizzy wrote:
> I do not agree. They generally do their job fine (which is to provide portable support for working with native, unchecked platform integer types). For any other needs you should probably use another tool.
Well, they do *not* do their job fine: (-1U < 2) == false, which is mathematical nonsense (more on that below).

Signed arithmetic overflow is undefined behavior, and some CPUs actually raise an exception on overflow (e.g. MIPS), so every 'a+b' expression with signed a and b is potential UB. Other machines (e.g. x86) do not raise an exception but set an overflow flag that can be tested with a single instruction; yet I don't know of a compiler which offers run-time overflow checking as a code-generation option. Portable checks for overflow (relying on bit operations) incur immense overhead, certainly much greater than a single instruction. [mail rearranged a bit]
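For reference, one portable formulation of such a check looks like this (a sketch only; checked_add is a hypothetical helper, written with numeric_limits rather than bit tricks, but paying the same kind of per-operation cost, and a real version would have to cover the other operators too):

    #include <limits>
    #include <stdexcept>

    // Pre-check the operands so the addition itself can never
    // overflow (and thus never triggers UB).
    int checked_add(int a, int b)
    {
        if ((b > 0 && a > std::numeric_limits<int>::max() - b) ||
            (b < 0 && a < std::numeric_limits<int>::min() - b))
            throw std::overflow_error("signed addition would overflow");
        return a + b;  // provably in range here
    }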
> You can look up how to turn off that warning in your compiler for specific parts of the code only. Obviously the easiest way is an explicit conversion.
> > Or should I just listen to the devil on my shoulder and turn off the appropriate warnings?
> Obviously so, if you are writing correct code. By your reasoning, one should find a solution for the common if (a = b) warning other than:
> - turning the warning off, or
> - wrapping it in another set of parentheses, if ((a = b)) (the equivalent of the explicit conversion in your case).
Writing an extra set of parentheses is not visually intrusive or cluttering. Writing static_cast<int>(expr) or (int)(expr), or surrounding the offending code with #pragma, _is_ cluttering and intrusive. Yes, I want my code to be short, concise, and easily readable, in addition to being correct. So shoot me :-)

I have researched the comp.lang.c++.moderated archives and other sources on this topic, and found two pieces of advice. Peter van der Linden writes in "Expert C Programming: Deep C Secrets": "Avoid unnecessary complexity by minimizing your use of unsigned types. Specifically, don't use an unsigned type to represent a quantity just because it will never be negative (e.g., "age" or "national_debt")." And a quote of Bjarne Stroustrup: "The unsigned integer types are ideal for uses that treat storage as a bit array. Using an unsigned instead of an int to gain one more bit to represent positive integers is almost never a good idea. Attempts to ensure that some values are positive by declaring variables unsigned will typically be defeated by the implicit conversion rules."

Yet all of the STL uses an unsigned type for size_type and the return value of size(). As much as I'd like to use only signed ints, this becomes prohibitive (due to warnings) when mixing them with the STL. And yes, I've been bitten several times in the past by implicit signed -> unsigned conversions in relational comparisons. The sanest thing would be to throw an exception at run time when a negative integer is compared with an unsigned quantity, instead of getting false for '-1 < 2U', which is mathematical nonsense. The signedness of a type *should not* affect its numerical value in mathematical operations.
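The whole trap fits in a few lines (this is exactly the kind of code the signedness warning flags):

    #include <iostream>

    int main()
    {
        // The usual arithmetic conversions turn -1 into UINT_MAX here,
        // so a mathematically true comparison evaluates to false.
        std::cout << std::boolalpha << (-1 < 2U) << '\n';  // prints "false"
    }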
> > Another example: an external library defines its interfaces with signed integer types, while I work with unsigned types (why? to avoid even more warnings in comparisons of task IDs with the result of vector::size(), such as assert(task->id < tasks.size()), which are abundant in my code).
> You realise you can fix just that with a helpful function wrapper, and that you don't need to change whole interfaces and redesign your code because your compiler's warning is too picky about perfectly fine code, right?
I *do* realize that. However, simple wrappers won't fix operators. I like having as many low-overhead sanity checks in my code as possible, but I don't want to litter it with assert()s in front of every arithmetic expression. Should I replace every 'if (a < b)' with 'if (less(a, b))' and provide a bunch of overloaded less() functions?
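For what it's worth, the mixed-sign overloads are short to write. A sketch (covering int/unsigned only; a real set would repeat this for the other integer widths):

    // Sign-correct "less than" for mixed operands: handle the sign
    // explicitly, then compare within a single type.
    inline bool less(int a, unsigned b)
    {
        return a < 0 || static_cast<unsigned>(a) < b;
    }

    inline bool less(unsigned a, int b)
    {
        return b >= 0 && a < static_cast<unsigned>(b);
    }

    // Same-type cases just forward to the built-in operator.
    inline bool less(int a, int b)           { return a < b; }
    inline bool less(unsigned a, unsigned b) { return a < b; }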
> The warnings were meant to be helpful. If they are not, turn them off.
If I'm going to turn off that particular warning, I want to compensate with extensive run-time checking, at least in the "debug" build. There are _good_ reasons why warnings about 64 -> 32 bit truncation or comparisons of different signedness exist.
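One cheap form of such checking is a range-verified cast used in place of a bare static_cast (a hand-rolled sketch; boost::numeric_cast provides the same service):

    #include <stdexcept>

    // checked_cast is a hypothetical helper: convert, then verify that
    // the value survives the round trip with its sign intact.
    // (Converting an out-of-range value to a signed type is merely
    // implementation-defined, which is what this test catches.)
    template <typename To, typename From>
    To checked_cast(From v)
    {
        To r = static_cast<To>(v);
        if (static_cast<From>(r) != v || (r < To()) != (v < From()))
            throw std::range_error("value does not fit in target type");
        return r;
    }

    // unsigned u = checked_cast<unsigned>(-1);  // throws at run time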
> Suppose std::vector<T>::size_type is such a checked integral. Since it's supposed to represent the maximum size possible in memory (actually it's just the size_type of the Allocator argument, but I'm talking about the default one), it would be something like integer<32, 0, 4294967295>. Then your thread ID is something like integer<16, 0, 65535>. Then the compile-time checks you talked about would not allow you to assign values returned by vector.size() to your thread IDs. So what you actually do is get back to the same old explicit conversion.
Oh, the library would allow conversion from integer<32, 0, 4294967295> to integer<16, 0, 65535>, and insert an appropriate run-time range check. So the two integer<> types would still interoperate; the narrowing just gets verified at run time instead of silently truncating.
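A toy version of that conversion behavior (a sketch only; ranged_int stands in for the hypothetical integer<> and ignores the bit-width parameter):

    #include <stdexcept>

    template <long long Min, long long Max>
    class ranged_int
    {
    public:
        ranged_int(long long v) : v_(v) { check(v_); }

        // Converting between differently-ranged types re-runs the check,
        // so a narrowing conversion fails loudly at run time instead of
        // silently truncating.
        template <long long M2, long long X2>
        ranged_int(const ranged_int<M2, X2>& other) : v_(other.value())
        {
            check(v_);
        }

        long long value() const { return v_; }

    private:
        static void check(long long v)
        {
            if (v < Min || v > Max)
                throw std::range_error("value outside allowed range");
        }
        long long v_;
    };

    typedef ranged_int<0, 4294967295LL> size_like;  // ~ integer<32, 0, 4294967295>
    typedef ranged_int<0, 65535>        task_id;    // ~ integer<16, 0, 65535>
    // task_id id = size_like(70000);  // throws: 70000 > 65535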
> There is a lot of potentially dangerous code that can be written in C or C++, and some compilers tell you about it, but that does not make it wrong code. Warnings are meant to help, not to define correctness.
The problem is that every 'a+b', with a and b signed, is dangerous (potential UB), unless you either 1) formally *prove* that the expression won't overflow, or 2) insert extra run-time checks (which clutter the code). :/ Something as common as simple addition should at least have an _option_ of *always*, under *all* circumstances, having defined behavior. The integer<> class proposed above is just one possibility, but something like it should have been included in the standard. :/
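To illustrate that "always defined" need not even be expensive to express: do the arithmetic in a wider type and range-check the result (again a hypothetical helper, assuming long long is wider than int, which holds on common platforms):

    #include <limits>
    #include <stdexcept>

    inline int defined_add(int a, int b)
    {
        long long wide = static_cast<long long>(a) + b;  // cannot overflow
        if (wide < std::numeric_limits<int>::min() ||
            wide > std::numeric_limits<int>::max())
            throw std::overflow_error("int addition out of range");
        return static_cast<int>(wide);
    }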