
On Sun, Mar 27, 2011 at 9:41 AM, Brent Spillner <spillner@acm.org> wrote:
"What I consider especially strange from an algebraic point of view, is to automatically promote a signed integer type to its corresponding unsigned integer type, when >an arithmetic expression contains both signed and unsigned integer types. Shouldn't it be the other way round? Or wouldn't it be even better, if there would be no >automatic conversion at all? At least from an algebraic point of view, any natural number is also an integer, not vice versa like in C."
To be precise, this happens only when the rank of the unsigned type is greater than or equal to that of the signed type. In the case where both operands have non-negative values, this minimizes the risk of overflow during addition or multiplication. When performing an operation on an unsigned value and a negative signed value (or subtracting an unsigned value from a signed positive value of lesser magnitude), it is incumbent upon the programmer to have some notion of what he or she is doing. I believe the rationale may be that signed values are the default, so if you went out of your way to declare something as unsigned, you're accepting some responsibility for using it appropriately.

Note also that (at least in ANSI C) when performing arithmetic between an unsigned value and a signed value of higher rank, the promotion is to the signed type (provided it can represent every value of the unsigned type; otherwise both operands convert to the unsigned counterpart of the signed type). So the "promote to the 'larger' type" principle is applied consistently across the board, particularly if you consider unsigned types "larger" than their signed equivalents, which seems reasonable to me given how uncommon negative values are in most integer code.
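For concreteness, a small C sketch of both halves of that rule (the expected output assumes a typical platform where int is 32 bits and long long is 64 bits):

#include <stdio.h>

int main(void)
{
    /* Equal rank: the int operand converts to unsigned int, so -1
       wraps to UINT_MAX and the comparison happens on unsigned values. */
    int s = -1;
    unsigned int u = 1;
    printf("%d\n", s < u);     /* prints 0, not 1 */

    /* Signed operand of higher rank: because long long can represent
       every unsigned int value here, the unsigned operand converts to
       the signed type and the comparison behaves mathematically. */
    long long wide = -1;
    printf("%d\n", wide < u);  /* prints 1 */

    return 0;
}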
Even if that's consistent, it's mathematically wrong to convert a signed type to unsigned unless you know the value is non-negative.

--
Dave Abrahams
BoostPro Computing
http://www.boostpro.com
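To illustrate that point, a hypothetical sketch (the variable names are invented, and the printed value assumes a 32-bit unsigned int):

#include <stdio.h>

int main(void)
{
    int balance = -5;          /* negative signed value */
    unsigned int fee = 3;

    /* balance converts to unsigned int: -5 wraps to UINT_MAX - 4,
       so the "sum" is UINT_MAX - 1 rather than the intended -2. */
    printf("%u\n", balance + fee);  /* prints 4294967294 */
    return 0;
}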