Re: [Boost-users] date_time microsec_clock::universal_time() implementation
Jeff Garland wrote:
Brian Neal wrote: We have an application that was using our own custom functions for getting the OS time. We were basically representing time as the number of milliseconds since the UNIX epoch in an unsigned long, or as the number of nanoseconds since the epoch in a 64-bit type.
We later discovered boost, and cut everything over to boost::date_time. This has worked out very, very well and has uncovered several time related bugs in our old code.
Glad it was helpful :-)
However, we do have a small percentage of code that is time critical, and we have noticed that calling microsec_clock::universal_time() is 2.6 times slower than our old function, and for this one bit of code that is significant.
Ok.
Looking at the implementation for UNIX like systems, I see that create_time() in microsec_time_clock.hpp is calling gettimeofday() followed by a call to gmtime() or localtime() as appropriate. I think it's this second call that is the difference between our old code and boost.
Ok.
In our time critical chunk of code we are just time-tagging events, and we aren't concerned about timezone. I was thinking about adding a function to our code that calls clock_gettime() (which is what our old code did) and then constructs a time_type out of that, using the UNIX epoch for the date part and the results of clock_gettime() for the time_duration part, i.e., in pseudo code:
    time_type our_get_time()
    {
        clock_gettime(CLOCK_REALTIME, &timespec);
        time_duration td = massage(timespec);
        return time_type(unix_epoch, td);
    }
Looks like a solid approach. Of course you don't have to modify Boost.date_time, you can simply extend it with your own clock implementation.
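For concreteness, here is one way such an extension might look: a user-defined clock class (fast_clock is an illustrative name, not part of Boost) that mirrors microsec_clock's static interface, so call sites can swap clocks without other changes. Truncating tv_nsec to microseconds is an assumption, matching the library's default time resolution:

    #include <ctime>
    #include <boost/date_time/gregorian/gregorian_types.hpp>
    #include <boost/date_time/posix_time/posix_time_types.hpp>

    // Hypothetical drop-in clock built on the approach sketched above.
    struct fast_clock
    {
        static boost::posix_time::ptime universal_time()
        {
            ::timespec ts;
            ::clock_gettime(CLOCK_REALTIME, &ts);  // error checking omitted
            static const boost::gregorian::date unix_epoch(1970, 1, 1);
            return boost::posix_time::ptime(
                unix_epoch,
                boost::posix_time::seconds(ts.tv_sec) +
                    boost::posix_time::microseconds(ts.tv_nsec / 1000));
        }
    };

(As it turns out below, constructing a ptime this way is not a performance win, but it shows the extension point.)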
I replied to myself about this to the list, but my messages seem to have gotten out of order, so I'll repeat my findings and amplify a bit below.

It turned out, much to my surprise, that the above approach was much, much slower than the current implementation of universal_time(). I suppose that is because it has an enormous number of seconds to add onto a date 36 years in the past, which triggers all the leap year calculations and such. I didn't pursue it further, but the performance was much worse. I was doing something like this:

    const date unixEpoch(1970, Jan, 1);

    ptime getTime()
    {
        struct ::timespec ts;
        ::clock_gettime(CLOCK_REALTIME, &ts);  // error checking omitted
        const time_duration td(seconds(ts.tv_sec) + nanoseconds(ts.tv_nsec));
        return ptime(unixEpoch, td);
    }

So I went back and rethought how I was using boost::date_time. In our pre-boost code, we were just getting these 32-bit numbers, essentially tick counts, from the OS. We were using these tick counts for comparisons, deltas, timestamping and the like. We certainly didn't need full dates, as our events last at most 10 seconds. If you think about it, these tick counts are just a duration from some OS-specific epoch. Aha, time_duration!

So in our time critical code, we replaced all of our ptimes with time_durations, and replaced all calls to universal_time() with a function of our own making (getTimeStamp()), which calls clock_gettime() and forms a time_duration from the results. This worked perfectly and brought our performance back to pre-boost levels. For our other, non-critical time code, we just left the ptimes in.
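For reference, a minimal sketch of what that getTimeStamp() might look like (the name comes from the description above; truncating tv_nsec to microseconds is an assumption, matching the library's default resolution):

    #include <ctime>
    #include <boost/date_time/posix_time/posix_time_types.hpp>

    using namespace boost::posix_time;

    // Keep the raw clock_gettime() result as a time_duration since the
    // UNIX epoch; no calendar (date) arithmetic is ever performed.
    time_duration getTimeStamp()
    {
        ::timespec ts;
        ::clock_gettime(CLOCK_REALTIME, &ts);  // error checking omitted
        return seconds(ts.tv_sec) + microseconds(ts.tv_nsec / 1000);
    }

Deltas and comparisons then stay entirely within time_duration, e.g. time_duration dt = getTimeStamp() - start;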
I'm also curious why gettimeofday() was chosen over clock_gettime(). Is one more widely available than the other?
Honestly, I don't remember why at this point. There is the issue of timezone adjustment for the other cases. My guess is that portability was a factor, since clock_gettime() is only required by the POSIX real-time spec, although it's probably pretty widespread now. That said, I'd be willing to improve the implementation of microsec_clock with this change where it is possible -- looks like we can tell by checking whether CLOCK_REALTIME is defined.
I would guess, also, that portability might have been a driver. I do not believe clock_gettime() is available on Cygwin, for example, but gettimeofday() is.
If you send me a working implementation I can drop it in and try it out ;-)
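Not a working patch, but a hedged sketch of the selection logic using that #ifdef CLOCK_REALTIME test (the helper name is illustrative, not the actual Boost internals):

    #include <ctime>
    #include <sys/time.h>

    // Prefer clock_gettime() where the POSIX real-time clock is
    // advertised; otherwise fall back to gettimeofday().
    static void get_seconds_and_micros(long& sec, long& usec)
    {
    #ifdef CLOCK_REALTIME
        ::timespec ts;
        ::clock_gettime(CLOCK_REALTIME, &ts);  // error checking omitted
        sec  = static_cast<long>(ts.tv_sec);
        usec = static_cast<long>(ts.tv_nsec / 1000);
    #else
        ::timeval tv;
        ::gettimeofday(&tv, 0);
        sec  = static_cast<long>(tv.tv_sec);
        usec = static_cast<long>(tv.tv_usec);
    #endif
    }

Either branch produces the same second/microsecond pair, leaving the timezone-adjusted cases to the existing gmtime()/localtime() logic.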
Well, as I said, the current universal_time() is much faster than treating the results of clock_gettime() as a time_duration from the UNIX epoch. But again, after rethinking our port to boost, we brought the overkill on ourselves by replacing our tick counts with ptimes. time_durations were a better fit for what we were doing in that one specific case. Thanks. Boost::date_time is a very useful and powerful library!
Jeff Garland wrote:
Brian Neal wrote:
So in our time critical code, we replaced all of our ptimes with time_durations, and replaced all calls to universal_time() with a function of our own making (getTimeStamp()) [...] This worked perfectly and brought our performance back to pre-boost levels.
Sounds like a good solution.
Well, as I said, the current universal_time() is much faster than treating the results of clock_gettime() as a time_duration from the UNIX epoch. [...] time_durations were a better fit for what we were doing in that one specific case.
Thanks. Boost::date_time is a very useful and powerful library!
You're welcome. Jeff
participants (2)
- Brian Neal
- Jeff Garland