
On Mon, Aug 18, 2008 at 12:24:46PM +0300, dizzy wrote:
Writing an extra set of parentheses is not visually intrusive or cluttering.
Says who?
I say so. Extra parentheses = 2 chars. Old-style cast = two sets of parentheses + type name. New-style cast = even more clutter.
Well, that's you. C++ was also made to:
- not pay for anything you don't need (I think RTTI is an exception there)
ok, agreed.
- allow writing code that is very close to the native platform
Whatever that means. I disagree here, because 'a+b' is undefined behavior even on 2's complement machines that do not trap on overflow (even though the result is well defined there). Even when I *do* want the undetected overflow behavior of signed numbers for some reason, with whatever semantics the underlying CPU provides, I *cannot* get it, because it's UB. (In practice I *do* get it, as long as the code generator is not too clever with optimizations.) Anyway, we're not going to agree here, so we might just as well close this part of the discussion; it's OT anyway.
- it can promote both arguments to the signed integer type that supports all values of both (say, long) and compare that; this may seem fine to you, but it has the following problems:
  - it assumes there is a type that can represent both those values; we can clearly imagine that a and b might have been long and unsigned long in the beginning, so that won't work; not to mention there is no guarantee that even
It can work perfectly; there are only two cases:
- a < 0 : true, done, no need for conversion
- a >= 0 : ordinary unsigned comparison
- promoting to a bigger type than int and comparing that may prove more expensive than comparing ints (which the standard defines as being the natural
You don't need to promote.
Your solution to this problem?
See above.
program that something may be negative or not? That's actually _why_ I use unsigned types, to signal that it can't be negative from that moment on (ie
It can't be negative, but it can assume a meaningless, huge positive value.
Or otherwise, how do you know a pointer can't be invalid at every point of use in the program? You can't, you can have invariants based on the code flow up
I can't, but I rarely do anything but dereferencing on pointers.
Actually, I think I could simplify my requirements a bit:
- define template class integer<T> with T == char, short, int, long, long long (no unsigned types allowed!)
So you can't know at some point in the code whether some value is positive without an assumption (may be error-prone) or a runtime check (too costly). I think you should allow signalling syntactically that some type takes only positive values; it is something often needed in code, to know this for sure about some
It can be positive, yet still meaningless. Thus, if being "positive" is relevant at some point in the code, then "being in range with some tighter upper bound than MAX_UINT" is also relevant. So the check is needed in any case. As things stand now in C++, "being positive" is a worthless constraint, because "a-b" always results in a positive value, regardless of whether a < b or a > b. This syntactic constraint just gives a false sense of security, imho, at least when it is used the way you have just described (to guarantee that something is positive).
Initialization with implicit conversion too? If so, that might become tricky with the below requirement about conversion to integer types (in general it's tricky to get implicit conversions right).
Probably not. I have to think about this.
- allow conversion to _any_ underlying integer type, signed or unsigned, provided the value fits; otherwise throw an exception
- allow mixed arithmetic between different integer<T> classes (e.g. integer<int> + integer<char> would be defined) as well as between integer<T> and any primitive type, subject to the conversion rules above
- arithmetic would be checked for overflow
This, as you said, can be overly expensive on some platforms, can't it? But if you need it, sure.
With platform-specific inline ASM, it's hardly expensive on any platform (an extra instruction or so). Writing it in portable C++ is expensive, yes.
- comparisons between integer<T> and unsigned types are allowed as long as integer<T> is positive or 0; otherwise an exception is thrown
Mathematically speaking "0" is a positive value ;) On the serious side, why
No, it's not :-) 0 is nonnegative, but not positive. See here: http://mathworld.wolfram.com/Positive.html http://mathworld.wolfram.com/Nonnegative.html
throw? Wouldn't it be better to still try to perform them somehow (and throw only if there is no underlying common type that can represent all their values and compare them)? This of course comes with associated costs, and if, as you say below, you want a flag that disables all associated runtime costs, then I understand why you need to throw here in the debug version.
Yes, this occurred to me too. Should be encapsulated in some policy, so you can choose the behaviour you want.
- bit manipulation operators would behave as if T were unsigned; an additional method or function signed_shift(integer<T>) would be provided
But then you should make some stricter requirements. You can't assume a negative-value encoding (such as 2's complement), thus you should not allow bitwise operations affecting more than the bits forming the value, i.e. without the sign bit. Otherwise you get (as the C++ standard puts it) an "unspecified value".
OK, more food for thought. Though, I primarily target 2's complement machines. If somebody needs support for other representations, they're welcome to contribute :)
- arrays of integer<T> must have the same low-level layout as arrays of T
I'm not sure how this can be accomplished. Sure you can have only the integer
struct Integer { int x; };
Integer a;

The standard guarantees that (void*)&a == (void*)&a.x. The compiler will hopefully not insert any additional padding, so in practice the result is achieved.
features, so then to make it generic you will need to do some sort of compile time configuration, like for example with policies. You say you would be using
Yes, I want at least a checking/unsafe policy.
arithmetic syntax without assert()s polluting the code everywhere. But you would use it as native types when in release mode, so the C++ integer types are not that flawed, given that you would use them in release mode? Your only complaint is that they are flawed for not having expensive runtime checks when used in debug mode? :)
I complained about two things:
- that the arithmetic, as it is defined now, can be mathematically nonsensical
- that the language definition does not give an option for well-defined arithmetic, either implemented in the compiler or as part of the standard library

So yes, the integer types suddenly become good enough after the program has been tested extensively enough.
Something as common as simple addition should at least have an _option_ of *always*, under *all* circumstances having defined behavior.
Portable and without runtime costs, no. It has no such option. That's the
Precisely because it can be costly, it should be an option. But a standardized one.
You mean in the standard library? Well I work often with open source projects,
For example.
something not already available in software: "if it's not available it means it's not needed". To a certain degree this is valid for the current problem
Or it might just be that the majority of OSS developers do not realize that such a library is needed, or why it would be useful.