
Chad Nelson wrote:
On 04/02/2010 09:56 PM, Jeffrey Lee Hellrung, Jr. wrote:
You seem to really want some kind of one-to-one identification between integers and xint::integer objects,
Well, yes -- the xint::integer objects represent integers, after all. ;-)
Yes, and IEEE floats represent (limited-precision, limited-range) dyadic rationals. My point is, it can be practical and efficient to have two different underlying representations for the same abstract value. You're already carrying around a sign bit + mantissa to represent an integer, so it seems to me unnatural to bend over backwards (exaggeration) to keep that sign bit unset when the mantissa drops to 0, if you really don't have to.
I have yet to see where this makes a difference in the core interface or complicates the internal implementation.
It would complicate things a bit, because there may be code in there that assumes that if the negative Boolean is true, the value is less than zero.
Isn't the sign just (mantissa == 0 ? 0 : sign_bit ? -1 : 1)? Perhaps I should have been clearer: I have yet to see where this makes the internal implementation more complicated than assuming a unique zero representation, and it looks as though it would actually simplify the implementation, since you avoid "cleaning up" a -0. However, the one thing I haven't done is actually *look* at the current implementation, and I was hoping your familiarity with it could pinpoint a precise issue.
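To make the idea concrete, here is a minimal sketch of the representation being discussed -- a sign bit plus a magnitude. All names here (big_int, mag, etc.) are mine for illustration, not xint's actual internals. The point is that sign() and operator== collapse -0 and +0 into the same abstract value, so nothing ever has to "clean up" the bit:

```cpp
#include <cassert>
#include <cstdint>
#include <vector>

// Hypothetical sketch: a sign bit plus a magnitude vector.  Zero may
// carry either sign internally; the accessors below collapse the two
// representations into one abstract value.
struct big_int {
    bool negative = false;           // may stay set when the value drops to 0
    std::vector<std::uint32_t> mag;  // magnitude; empty means zero

    bool is_zero() const { return mag.empty(); }

    // The expression from the discussion: 0 regardless of the sign bit.
    int sign() const { return is_zero() ? 0 : (negative ? -1 : 1); }
};

// -0 and +0 compare equal because sign() ignores the bit at zero.
bool operator==(const big_int& a, const big_int& b) {
    return a.sign() == b.sign() && a.mag == b.mag;
}

// Unary minus just flips the bit, so a -0 is producible on request.
big_int operator-(big_int x) { x.negative = !x.negative; return x; }
```

With this layout, -big_int{} has its sign bit set yet still compares equal to +0 and reports sign() == 0.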
I don't know what you mean here. Isn't the arithmetic involving zeros fairly straightforward, signed or not?
If the zeros aren't signed, yes. If they are, and the library is supposed to honor the signs, things could get hairy. But it appears that I misunderstood your intentions... if a negative zero is still treated as zero, then all the math works out.
For reference, Wikipedia outlines the arithmetic rules specific to signed zeros in its "signed zeros" article: http://en.wikipedia.org/wiki/Signed_zeros What I would like is for the "natural" implementation of, say, addition -- one that doesn't specifically consider the rules for adding signed zeros -- to automatically produce the right results for signed zeros anyway. I might have to delve into the code to see what would actually happen...
The aforementioned arbitrary-precision floating point package will just have to handle that itself, maybe by making an extremely small but non-zero value and calling it negative zero. *That* seems "messy and arbitrary" ;)
It might be, it would depend on the design goals of the library.
I think the utility of signed zeros in floating point has been proven, so "it would be nice" if that was implemented as naturally as possible. I don't necessarily consider this a requirement of the core library, but I think it should be seriously considered.
You've convinced me not to dismiss the idea out of hand. I'm still not sure that it's worthwhile, but I'll probably add it, with the understanding that any calculations done with it (except a unary minus) will come out with a positive-zero result. If someone needs a negative zero, they'll have to explicitly request it via the unary minus.
Well, hopefully whatever zero (whether it's +0 or -0) "falls out" of an operation will suffice. I don't think you want to guarantee that every result equal to +0 or -0 (save that of a unary minus) will be normalized to +0 -- that defeats the whole point of my proposal. - Jeff