
On 04.04.2010 at 5:27, Jeffrey Lee Hellrung, Jr. wrote:
Chad gave the practical reason. But perhaps more fundamentally, the information (the sign bit) is *there*, and I'd like it to be usable, even if the mantissa is zero. If you don't want to distinguish between -0 and +0 (and I don't see why you would when strictly in the domain of integers), then you won't ever notice there are 2 different representations. [...] And it *is* natural for sign + magnitude (which I had been slightly mistakenly calling "sign + mantissa") representations. If builtin integers were based on a sign + magnitude or 1's complement representation, we'd already be used to dealing with signed zeros: [...]
I'm guessing it's simpler in hardware to use 2's complement, but we're pretty much stuck with sign + magnitude AFAICT for an unbounded integer library.
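For concreteness, here is a minimal sketch of what such a sign + magnitude layout could look like. It is purely illustrative: the struct name, the choice of 32-bit limbs, and the equal() helper are assumptions made for this sketch, not code from any actual library. A zero magnitude combined with either value of the sign flag gives two representations of the same integer value, which is the signed-zero situation being described.

    #include <cstdint>
    #include <vector>

    // Sign + magnitude: the sign lives apart from the magnitude, so the
    // information is present even when the magnitude is zero.
    struct big_int
    {
        bool negative;                          // the "sign bit"
        std::vector<std::uint32_t> magnitude;   // limbs, least significant first; empty == zero
    };

    // +0 and -0 are distinct representations of the same integer value, so
    // comparison ignores the sign when both operands are zero.
    inline bool equal(big_int const& a, big_int const& b)
    {
        if (a.magnitude.empty() && b.magnitude.empty())
            return true;                        // -0 == +0
        return a.negative == b.negative && a.magnitude == b.magnitude;
    }

Whether comparison ignores the sign of zero, as above, or the library normalizes -0 away on construction, is exactly the kind of design question this thread is about.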
I think that designing the interface around an assumed implementation is a very bad idea, while designing the interface with the implementation merely kept in mind is a whole lot better. So assuming that "the sign bit is there" (it would rather be a sign bool) is a bad idea, because it restricts the implementation, and such restrictions prevent you from easily switching to an arbitrary implementation. My point is that the interface must be designed independently of the implementation (but keeping it in mind).

-- Pavel

P.S. If you notice a grammar mistake or weird phrasing in my message, please point it out.
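For contrast with the layout sketched above, here is a hypothetical sketch of the kind of representation-neutral interface Pavel is arguing for. The class and member names are illustrative only, not taken from any proposed library; the point is that the sign is exposed as a query rather than as a stored bit, so callers never depend on how (or whether) a sign flag exists internally.

    #include <cstdint>
    #include <vector>

    class integer
    {
    public:
        // Observable behaviour only: zero has exactly one answer here,
        // no matter how the value is represented internally.
        int sign() const
        {
            if (limbs_.empty()) return 0;       // zero is just zero
            return negative_ ? -1 : +1;
        }

    private:
        // One possible backing store (sign + magnitude); it could be swapped
        // for another representation without changing the interface above.
        bool negative_ = false;
        std::vector<std::uint32_t> limbs_;
    };

With an interface like this, the backing representation could change without any caller noticing, which is the flexibility Pavel wants to preserve.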