
Jeff Garland wrote:
Yuval Ronen wrote:
Now I also understand better the sentence that appears in N2320 for all the timed functions: "ElapsedTime shall be explicitly convertible to nanoseconds". It's because nanoseconds are the basis for everything. Now let's
I'll check, but that's incorrect and has been removed. They don't need to be convertible to nanoseconds; instead, they need to provide get_count and a ticks_per_second trait to allow conversion to the system time resolution.
say you want to allow a demanding user to write his own picoseconds class. This class needs to be "explicitly convertible to nanoseconds". What does that mean? The only answer I can think of is "it has an explicit conversion operator to nanoseconds" (assuming C++0x has an explicit conversion operator feature). That makes the picoseconds class different from all the other time_duration types, because microseconds isn't explicitly convertible to milliseconds, for example. And there's probably a good reason it isn't convertible: it loses resolution. So why should picoseconds be convertible to nanoseconds?
It won't be -- as I said, that part of N2320 wasn't quite right. Of course, the conversion problem still exists if you use a higher resolution duration than the system supports. In that case, the system will round up to the nearest supported resolution on the system (eg: nanoseconds --> microseconds).
Okay, that makes a lot of difference. N2320 needs some fixing. However...
After reading that there is no universal time_duration type, I asked myself "then what is the return type of time_point subtraction?" So I looked in N2411, and saw that the answer is "nanoseconds". That also makes sense given that utc_time is defined to have nanoseconds resolution. So my conclusion is that nanoseconds /is/ that universal time_duration type. All the rest are (or can be) simple logic-less wrappers around nanoseconds. Had there been no universal time-point type (utc_time), but rather seconds_utc_time, nanoseconds_utc_time, etc., then you could say that there isn't a universal time, but as it stands now, I think there is.
No, there is still no universal time duration. You can send any type that meets the qualifications into a sleep or wait. As for utc_time (now renamed to system_time in the latest drafts) using nanoseconds, that is just an indication that the system_time class has a maximum resolution of nanoseconds.
... in general (ignoring threading stuff for now), I see no point in providing genericity in time-duration types without parallel genericity in time-point types. The two are too interconnected. The fact that the time-point type is defined to have a fixed (nanoseconds) resolution sterilizes almost all the advantages of time-duration resolution genericity. If sub-nanosecond resolution is necessary for time-durations, then I guess it's also needed for time-points. And another small question: why make those traits a function, and not compile-time constants?