
Thanks Dave! I concur; it does seem a better approach because it provides full severability and orthogonality with respect to the other concepts in this corner of the type traits. The approach is still the metafunction paradigm, but I see your point that having separate metafunctions for signed and unsigned would scale better, keeping each aspect of a type, signed or unsigned, independent. Thus, I have no problem with something like:

template <class X> struct unsigned_type { };
template <class X> struct signed_type { };

// Specializations
template <> struct unsigned_type<char> { typedef unsigned char type; };
template <> struct signed_type<char> { typedef signed char type; };
// ...
template <> struct unsigned_type<int> { typedef unsigned int type; };
template <> struct signed_type<int> { typedef signed int type; };
template <> struct unsigned_type<unsigned int> { typedef unsigned int type; };
template <> struct signed_type<unsigned int> { typedef signed int type; };
// ...

So if it took that form, say in its own file, signed.hpp or some such name, would that be more amenable, Dave? (A usage sketch is in the P.S. below.)

Further, it seems that given cstdint.hpp and integer.hpp, AFAICT I would need specializations for all the types defined therein. A problem I ignored before then becomes dependencies. Since integer.hpp is defined purely in terms of the built-in integral types, I don't see any problem with one header depending on the other: the names in integer.hpp are simply aliases (typedefs) for the intrinsic types.

It is cstdint.hpp, and the underlying stdint.h defined by C99, that worries me. Although most implementations use 8, 16, 32, 32 and 64 bits for char, short, int, long and long long respectively, all the standard guarantees for these basic integral types is at least 8, 16, 16, 32 and 64 bits. An implementation may therefore choose to make them 8, 16, 64, 64 and 128 bits because it wishes to emphasise 64-bit integers as its intrinsic integral type. (It should instead use stdint.h, providing int_fast32_t as a typedef for long and keeping int a 32-bit type, to achieve the desired effect.) Such an implementation may then provide int32_t -- although it does not need to -- as a typedef for some compiler-specific type, __int32 for example. In that case [un]signed_type will not be specialized for __int32, because it is compiler-specific, and unsigned_type<int32_t>::type will fail to compile because the unspecialized primary template has no nested type member. OTOH, if such a compiler -- still ISO C99 conformant -- did NOT provide an __int32 and therefore no int32_t, which is its prerogative under 7.18.1.1/3, cstdint.hpp itself would fail to compile because int32_t (and uint32_t) would not exist.

Therefore, it could be said that signed.hpp takes the same minimal approach toward cstdint.hpp that C99 7.18.1.1/3 takes: assume that if intN_t is not a native size, it simply does not exist. In a sense, for now I leave that as cstdint.hpp's problem, not signed.hpp's.

Jeffrey.
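P.S. For concreteness, here is a rough, untested usage sketch of the metafunctions as specialized above. It assumes boost/static_assert.hpp and boost/type_traits/is_same.hpp are available; the __int32 name is purely hypothetical, standing in for whatever compiler-specific type int32_t might alias.

#include <boost/static_assert.hpp>
#include <boost/type_traits/is_same.hpp>

// These hold given the specializations shown above.
BOOST_STATIC_ASSERT((boost::is_same<unsigned_type<char>::type, unsigned char>::value));
BOOST_STATIC_ASSERT((boost::is_same<signed_type<unsigned int>::type, signed int>::value));

// By contrast, if int32_t were a typedef for a compiler-specific __int32,
// the lookup below would hit the unspecialized primary template, which has
// no nested "type", and would fail to compile:
//
//   typedef unsigned_type<int32_t>::type oops;  // error: no member "type"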
"Jeffrey C. Jacobs" <darklord@timehorse.com> writes:
> A while ago I proposed adding a signed_traits class and then after
> receiving some input decided it better this be a property of some
> existing traits object, ideally "integer_traits" since only integer
> can be signed.
Please, no more degenerate traits classes! We need metafunctions instead:
unsigned_type<x>::type
signed_type<x>::type
etc.
--
Dave Abrahams
Boost Consulting
http://www.boost-consulting.com