
From: "Robert Kawulak" <kawulak@student.agh.edu.pl>
From: Rob Stewart
Once you go with a function local static, you have to worry about thread safety.
Even though the local statics are constant? Then the only problem would be initialisation, but doesn't the compiler take proper care of this in a multi-threaded implementation?
No.
In the simpler range bounded approach, the type includes the boundaries, so there's no need for statics. It would only be the more complicated bounding policies that might have need for this behavior. Thus, don't force the creation of statics in your design. Ensure that if they are used, it is the choice of the policy writer.
Well, it's not forced - one may easily supply his own bounds-specifying policy that passes bounds by value.
I don't understand. So long as the comparison against the boundaries is a comparison against a constant, then the code will be efficient and there's no need for statics.
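To make that point concrete, here is a minimal sketch (the names and interface are illustrative, not the library's actual code) of bounds carried as non-type template parameters, so every check is a comparison against a compile-time constant and no statics are involved:

    #include <stdexcept>

    // Illustrative only: the bounds are part of the type itself.
    template <class T, T Min, T Max>
    class bounded_value
    {
    public:
        explicit bounded_value(T v = Min) { assign(v); }

        void assign(T v)
        {
            // Plain comparisons against constants - no function-local statics.
            if (v < Min || Max < v)
                throw std::out_of_range("value outside bounds");
            value_ = v;
        }

        T value() const { return value_; }

    private:
        T value_;
    };

    typedef bounded_value<int, 0, 10> small_int;   // hypothetical use

Of course this only works when the bound can be a template parameter (an integral or enum type), which is exactly why the non-integral case discussed next reaches for a generator and a static.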
Actually, in my implementation it depends on the underlying type. If it's integral, then bounds are returned by value. If it's not, the min_value() function looks something like this:
    static const value_type & min_value()
    {
        static const value_type v = MinValueGenerator()();
        return v;
    }
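Filled out as a self-contained sketch - the std::string value_type and the generator's contents are assumptions for illustration, not the actual library code - the pattern reads roughly as:

    #include <string>

    // Hypothetical generator policy: knows how to build the lower bound.
    struct MinValueGenerator
    {
        std::string operator()() const { return "aaa"; }
    };

    // Illustrative bounds-specifying policy for a non-integral type.
    struct string_bounds
    {
        typedef std::string value_type;

        // The bound is constructed once, on first use, and a reference
        // to it is handed out on every subsequent call.
        static const value_type & min_value()
        {
            static const value_type v = MinValueGenerator()();
            return v;
        }
    };

After the first call, each use costs only the hidden initialisation-flag check plus returning a reference, which is the efficiency argument made below.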
Is there really something that's wrong with this design? Is there any danger out there because of using a function-scope const static object?

I see what you're doing. Why not: void validate(value_type & value_io); That function can do the validation any way it sees fit and enables the discontinuous ranges you spoke of previously.

It just seemed much better to me - the value is constructed only once, and every subsequent call to min_value() costs only one comparison (the static initialisation check) and the return of a reference.

Don't forget that using the function local static, aside from the MT problem, has overhead. When you write this:

    example() { static int result(initial-value); }

the implementation does, in effect:

    example()
    {
        int (in-platform-specific-location) result;
        if (!compiler-specific-flag)
        {
            result = initial-value;
            compiler-specific-flag = true;
        }
    }

In the return-by-value case, every call results in:
1) creation of the generator object (in most cases optimised away)
2) a call of the generator's call operator, which constructs a new object
3) copying the object on return and destructing it (may be optimised away by RVO, but not always)
4) eventually copying and destructing the object again and again as it's passed by value
*) using the object, for example when comparing it with the value you try to assign to check whether it lies within the bounds - every assign makes two such checks, so multiply all the operations by 2
5) destructing the object

I can see no con in using function-scope const statics here and a huge pro - efficiency. And I don't consider this premature optimisation - it rather seems to me to be avoiding premature pessimisation.
All of those points are based upon your design. If you use the validate() approach, you avoid all of that and can check for both the minimum and maximum range at once, if that's appropriate. (I haven't looked at your code, I haven't thought through the design issues, etc., so what I propose may not be possible. I leave it to you to consider the idea.)
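A minimal sketch of the validate() idea, assuming the constrained type simply calls the policy's validate() on every assignment (the policy name and the particular range are mine, purely for illustration):

    #include <stdexcept>

    // Illustrative policy: a single hook checks both bounds at once and can
    // even express a discontinuous range, here [0, 10] with 5 excluded.
    struct zero_to_ten_without_five
    {
        typedef int value_type;

        static void validate(value_type & value_io)
        {
            if (value_io < 0 || value_io > 10 || value_io == 5)
                throw std::out_of_range("value outside the allowed range");
            // A clamping policy could adjust value_io here instead of throwing.
        }
    };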
Well, this wouldn't be constrained<int> but constrained< bounded_policies::error< bounds_specifiers::static_bounds<int, 0, 10> > >
That's nasty. Do you need things separated like that? It would be easier to use (though maybe less flexible in some important way) with this: constrained<bounded<int, 0, 10> >

--
Rob Stewart                           stewart@sig.com
Software Engineer                     http://www.sig.com
Susquehanna International Group, LLP  using std::disclaimer;
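For completeness, one way to get close to the shorter spelling without giving up the separated policies would be a thin convenience metafunction along these lines - a sketch only, reusing the names quoted above and assuming nothing about the library beyond them:

    // Hypothetical shorthand; the full nested policy spelling stays
    // available for anyone who needs the extra flexibility.
    template <class T, T Min, T Max>
    struct bounded
    {
        typedef constrained<
            bounded_policies::error<
                bounds_specifiers::static_bounds<T, Min, Max> > > type;
    };

    typedef bounded<int, 0, 10>::type small_int;   // vs. spelling it all out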