
Hi all,

There are some integer libraries out there that improve on the fundamental integers: Boost.Integer, ACCU BoundedInt, part of the Boost Constrained Value library, and probably many others. They add the ability to specify an integer based on a range, they add bounds checking at runtime, or a combination of both. What I have not found yet, though, is a library that checks bounds at compile time. Personally, I would like to have the compiler check as much as possible. For instance, the compiler would complain about an assignment if the target's range is not a superset of the source's, unless an explicit conversion is made. Does anyone know of a library supporting this? If there is no such library, I hope there is interest in one and that this post will start a discussion about how it can best be done.

The motivation for this kind of library is that its usage helps show the intent of the developer, and it can point out faults that otherwise even an extensive test suite might miss. That it forces the developer to think thoroughly about value ranges in computations I also regard as a point in its favour.

What about the type-safe part in the subject of this post? Even though most mistakes will be caught by the bounds checking, I believe there are situations where one wants to clearly state that an integer represents one specific concept and nothing else, for instance a numeric identifier. So I would like this kind of library to also support a kind of type-safety. I am not sure that is the correct term for it, but what I mean is that when one declares a bounded integer it should be possible to state that it cannot mix with other bounded integers, regardless of whether their bounds are compatible.

As I have been thinking about this for years now, I of course have a lot of ideas on how to implement it. As a starting point for a discussion I would like to present some basic parts of what I have come up with.
The integer type is declared as follows:

    template< typename Range, typename Tune = fast >
    struct integer;

It takes a range type instead of taking the bounds directly. It also has a tuning parameter which is used when selecting the fundamental integer type to use for storage. The idea of having a separate range type is that it can be reused by other types; see the array type below.

The range type, then:

    template< intmax_t Lower, intmax_t Upper, typename TypesafetyTag = defaultTypesafety >
    struct range_c;

The bounds are no surprise, but here I also add the type-safety part. I am not quite sure about this approach, because the bounds and the type-safety mechanisms are orthogonal, but I believe it works quite well together in practice. Maybe it should not be named "range" but something else.

The type-safety mechanism is fairly simple: two integers must have the same tag to be part of the same expression (unless a type-safety conversion is used). This means that they cannot be mixed with fundamental integers either. The default type-safety tag is special in that it allows mixing with fundamental integers, except that there is no implicit conversion or assignment to fundamental integers (that would otherwise provide a loophole). In practice fundamental integers do not work that well with the static bounded integers, though, because most often their range is too big.

Another type that seems fairly obvious is a static array (similar to Boost.Array):

    template< typename Range >
    struct array;

The array is exactly as large as needed for the range. As subscript parameter it only accepts bounded integers with compatible ranges, i.e. with subset bounds and an equal type-safety tag. Contrary to what is normal in C++, the array is not necessarily zero-based; indexing starts at the lower bound of the range.

There is much more of course, for instance that the range type should take types instead of integers as bounds, but this post is getting too long already.
As said before, I hope this will be the start of a fruitful discussion.

Sincerely,
Leif Linderstam