
Hello. I'm having some trouble with boost::thread, and I think the issues are important:

1. condition::timed_wait is dangerous to use, because the waiting delay is specified as the absolute time at which we will stop waiting. This is very surprising when the system time is changed! I believe waiting for a specified delay is much safer than waiting until some absolute time arrives. For example, try launching the program below and setting the system time backward while it is running. I have tested this under win32 + msvc7.

// -------------------------------
#include <iostream>
#include <boost/thread/thread.hpp>
#include <boost/thread/mutex.hpp>
#include <boost/thread/condition.hpp>
#include <boost/thread/xtime.hpp>

boost::condition cvar;
boost::mutex monitor;

void thread1()
{
    for (;;)
    {
        boost::mutex::scoped_lock lock(monitor);
        boost::xtime xt;
        boost::xtime_get(&xt, boost::TIME_UTC);
        std::cout << "now: " << xt.sec << std::endl;
        xt.sec += 5;
        std::cout << "will wait until: " << xt.sec << std::endl;
        if (!cvar.timed_wait(lock, xt))
            std::cout << "timed out!" << std::endl;
        std::cout << "next step" << std::endl;
    }
}

int main(int argc, char* argv[])
{
    boost::thread thrd1(&thread1);
    thrd1.join();
    return 0;
}
// -------------------------------

2. The second problem is that the thread function has a catch(...) at its root, so when an unhandled exception is thrown in a thread, it is silently eaten and the thread stops. The process stays alive and knows nothing about it. Worse, under VC6 the catch(...) will even catch OS exceptions.

p.s. sorry for my bad English

"MagomedAbdurakhmanov" <maq@mail.ru> wrote in message news:200404160031.i3G0VZO1023813@milliways.osl.iu.edu...
Hello. I'm having some trouble with boost::thread, and I think the issues are important:
1. condition::timed_wait is dangerous to use, because the waiting delay is specified as the absolute time at which we will stop waiting. This is very surprising when the system time is changed! I believe waiting for a specified delay is much safer than waiting until some absolute time arrives.
I'll have to think about this. It would break a lot of code to change this now, and there are some good reasons for specifying an absolute time instead of a relative time. For example, if you use the recommended loop to wait on the condition variable, using an absolute time is at the very least a lot more convenient:

boost::mutex::scoped_lock lock(monitor);
boost::condition cv;
bool test = false;
boost::xtime xt;
boost::xtime_get(&xt, boost::TIME_UTC);
// Wait until test is true or a timeout occurs.
while (!test && cv.timed_wait(lock, xt));
// This would be a lot harder to get right if a relative time were used.

2. The second problem is that the thread function has a catch(...) at the root, so when an unhandled exception is thrown in a thread, it is silently eaten and the thread stops. The process stays alive and knows nothing about it. Worse, under VC6 the catch(...) will even catch OS exceptions.
On the other hand, letting exceptions escape from the thread function is also bad. For what it's worth, the thread function in the thread_dev branch (which I'm working on merging into the main branch as time allows) has this:

catch (...)
{
    using namespace std;
    terminate();
}

Mike

Hello Michael, Saturday, April 17, 2004, 1:17:05 AM, you wrote:
MG> 1. condition::timed_wait is dangerous to use, because the waiting delay
MG> is specified as the absolute time at which we will stop waiting. This is
MG> very surprising when the system time is changed! I believe waiting for a
MG> specified delay is much safer than waiting until some absolute time arrives.
MG> I'll have to think about this. It would break a lot of code to change
MG> this now, and there are some good reasons for specifying an absolute
MG> time instead of a relative time. For example, if you use the
MG> recommended loop to wait on the condition variable, using an absolute
MG> time is at the very least a lot more convenient:

OK. But we don't need to change the default behavior. It would be much better to allow both methods of waiting, so we can choose whichever is better in a given case. For example, timed_wait could be overloaded (or another method provided) to take a period argument. Something like this:

boost::xtime_period xtp(1000);
if (cv.timed_wait(lock, xtp))
{
    // ...
}
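Maq's proposed period overload could be layered on the existing absolute-time interface by converting the relative delay into an absolute deadline at call time. A minimal sketch, assuming a plain seconds/nanoseconds pair; the names deadline and deadline_in are illustrative, not part of Boost.Thread:

```cpp
#include <ctime>

// Illustrative sketch only: convert a relative delay in milliseconds into
// an absolute deadline (seconds + nanoseconds since the epoch), the way a
// duration-based timed_wait overload could wrap the existing one.
struct deadline
{
    long sec;
    long nsec;
};

deadline deadline_in(long delay_ms)
{
    deadline d;
    d.sec  = static_cast<long>(std::time(0)) + delay_ms / 1000;
    d.nsec = (delay_ms % 1000) * 1000000L;
    return d;
}
```

Note that this alone does not resolve the original complaint: the computed deadline is still pinned to the wall clock, so it only helps if the underlying wait is driven by a clock that does not jump when the system time is changed.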
MG> 2. The second problem is that the thread function has a catch(...) at
MG> the root, so when an unhandled exception is thrown in a thread, it is
MG> silently eaten and the thread stops. The process stays alive and knows
MG> nothing about it. Worse, under VC6 the catch(...) will even catch OS
MG> exceptions.
MG> On the other hand, letting exceptions escape from the thread function
MG> is also bad. For what it's worth, the thread_function in the
MG> thread_dev branch (which I'm working on merging into the main branch
MG> as time allows) has this:
MG>
MG> catch (...)
MG> {
MG>     using namespace std;
MG>     terminate();
MG> }
MG>
MG> Mike

This is better than silently catching them. But there is an advantage to letting exceptions escape from the thread. Speaking about win32: when an application throws an exception somewhere and terminates, the OS can report the code address where the exception occurred (Dr. Watson). That helps a lot in finding the trouble spot. And in that case we have a choice: I can write catch(...) inside my thread function and catch them all, or let some of them through.

--
Best regards,
maq
mailto:maq@mail.ru

Michael Glassford wrote:
"MagomedAbdurakhmanov" <maq@mail.ru> wrote in message news:200404160031.i3G0VZO1023813@milliways.osl.iu.edu...
Hello. I'm having some trouble with boost::thread, and I think the issues are important:
1. condition::timed_wait is dangerous to use, because the waiting delay is specified as the absolute time at which we will stop waiting. This is very surprising when the system time is changed! I believe waiting for a specified delay is much safer than waiting until some absolute time arrives.
I'll have to think about this. ...
http://groups.google.com/groups?selm=355740B6.927856C5%40zko.dec.com http://groups.google.com/groups?threadm=3C6384A9.9A440C23%40web.de [...]
catch (...)
{
    using namespace std;
    terminate();
}
Never do this. regards, alexander.

Michael Glassford <glassfordm <at> hotmail.com> writes:
"MagomedAbdurakhmanov" <maq <at> mail.ru> wrote in message news:200404160031.i3G0VZO1023813 <at> milliways.osl.iu.edu...
Hello. I'm having some trouble with boost::thread, and I think the issues are important:
1. condition::timed_wait is dangerous to use, because the waiting delay is specified as the absolute time at which we will stop waiting. This is very surprising when the system time is changed! I believe waiting for a specified delay is much safer than waiting until some absolute time arrives.
I'll have to think about this. It would break a lot of code to change this now, and there are some good reasons for specifying an absolute time instead of a relative time. For example, if you use the recommended loop to wait on the condition variable, using an absolute time is at the very least a lot more convenient.
The real problem isn't relative vs absolute (It is clear that absolute is the only choice for accurate timing) but what the clock is. It would be better to expend effort on supporting alternate clocks, in particular something similar to posix CLOCK_MONOTONIC. A monotonic clock would address the original issue (system time changing).
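On POSIX systems with the clock-selection option, the monotonic clock Darryl mentions can be read with clock_gettime. A minimal sketch of reading it (Linux-oriented; error handling elided):

```cpp
#include <time.h>

// Read the POSIX monotonic clock: it advances steadily and is not affected
// by someone setting the system (wall-clock) time backward or forward.
long monotonic_seconds()
{
    timespec ts;
    ts.tv_sec = 0;
    ts.tv_nsec = 0;
    clock_gettime(CLOCK_MONOTONIC, &ts);
    return static_cast<long>(ts.tv_sec);
}
```

A condition variable can then be bound to this clock with pthread_condattr_setclock(&attr, CLOCK_MONOTONIC) before pthread_cond_init, so that pthread_cond_timedwait interprets its absolute deadline against the monotonic clock rather than CLOCK_REALTIME.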
2. The second problem is that the thread function has a catch(...) at the root, so when an unhandled exception is thrown in a thread, it is silently eaten and the thread stops. The process stays alive and knows nothing about it.
It does know something about it (that the thread has terminated) if it joins the thread. It won't need to know anything about it if the thread doesn't throw unhandled exceptions. I'm not at all sure what Mag actually wants the library to do in this case.
Even worse, under VC6 catch(...) will catch OS exceptions.
So don't use VC6...
On the other hand, letting exceptions escape from the thread function is also bad.
Bad = undefined or some other reason?
For what it's worth, the thread_function in the thread_dev branch (which I'm working on merging into the main branch as time allows) has this:
catch (...)
{
    using namespace std;
    terminate();
}
How is this better? It seems that it punishes other threads (preventing them from doing any sort of orderly termination) because this one has broken exception handling. Along the way, it forces an unwind, then terminate, instead of the "normal" behaviour for an unhandled exception (admittedly what that is is implementation defined), which might have given a hint about what was wrong. The old approach (which isn't perfect either; I'm just not seeing that this is clearly better) gives some chance of recovery by a joining thread. As we are apparently talking about obsolete compiler support above, one might as well note that calling terminate in an exception handler is documented to cause a deadlock on some versions of Kai C++.

Regards
Darryl.

"Darryl Green" <darryl.green@unitab.com.au> wrote in message news:loom.20040419T065628-962@post.gmane.org...
Michael Glassford <glassfordm <at> hotmail.com> writes:
"MagomedAbdurakhmanov" <maq <at> mail.ru> wrote in message news:200404160031.i3G0VZO1023813 <at> milliways.osl.iu.edu...
Hello. I'm having some trouble with boost::thread, and I think the issues are important:
1. condition::timed_wait is dangerous to use, because the waiting delay is specified as the absolute time at which we will stop waiting. This is very surprising when the system time is changed! I believe waiting for a specified delay is much safer than waiting until some absolute time arrives.
I'll have to think about this. It would break a lot of code to change this now, and there are some good reasons for specifying an absolute time instead of a relative time. For example, if you use the recommended loop to wait on the condition variable, using an absolute time is at the very least a lot more convenient.
The real problem isn't relative vs absolute (It is clear that absolute is the only choice for accurate timing) but what the clock is. It would be better to expend effort on supporting alternate clocks, in particular something similar to posix CLOCK_MONOTONIC. A monotonic clock would address the original issue (system time changing).
The links in Alexander Terekhov's post also mention this. Do you have any suggestions how to implement something along these lines?
2. The second problem is that the thread function has a catch(...) at the root, so when an unhandled exception is thrown in a thread, it is silently eaten and the thread stops. The process stays alive and knows nothing about it.
It does know something (that the thread has terminated) about it if it joins the thread. It won't need to know anything about it if the thread doesn't throw unhandled exceptions. I'm not at all sure what Mag actually wants the library to do in this case?
Even worse, under VC6 catch(...) will catch OS exceptions.
So don't use VC6...
Unfortunately, though I'm open to correction, isn't this also true under later versions of VC? At least as a compiler option?
On the other hand, letting exceptions escape from the thread function is also bad.

Bad = undefined or some other reason?
Bad = undefined, or at best implementation defined.
For what it's worth, the thread_function in the thread_dev branch (which I'm working on merging into the main branch as time allows) has this:
catch (...)
{
    using namespace std;
    terminate();
}
How is this better?
I wasn't necessarily implying that it was better. My main reason for mentioning it (though I failed to say so) was to see what people thought about it.
It seems that it punishes other threads (preventing them from doing any sort of orderly termination) because this one has broken exception handling. Along the way, it forces an unwind, then terminate, instead of the "normal" behaviour for an unhandled exception (admittedly what that is is implementation defined), which might have given a hint about what was wrong. The old approach (which isn't perfect either; I'm just not seeing that this is clearly better) gives some chance of recovery by a joining thread. As we are apparently talking about obsolete compiler support above, one might as well note that calling terminate in an exception handler is documented to cause a deadlock on some versions of Kai C++.
Thanks for your comment.
Regards Darryl.
_______________________________________________ Unsubscribe & other changes: http://lists.boost.org/mailman/listinfo.cgi/boost

"Michael Glassford" <glassfordm@hotmail.com> writes:
"Darryl Green" <darryl.green@unitab.com.au> wrote in message news:loom.20040419T065628-962@post.gmane.org...
Michael Glassford <glassfordm <at> hotmail.com> writes:
"MagomedAbdurakhmanov" <maq <at> mail.ru> wrote in message news:200404160031.i3G0VZO1023813 <at> milliways.osl.iu.edu...
Hello. I'm having some trouble with boost::thread, and I think the issues are important:
1. condition::timed_wait is dangerous to use, because the waiting delay is specified as the absolute time at which we will stop waiting. This is very surprising when the system time is changed! I believe waiting for a specified delay is much safer than waiting until some absolute time arrives.
I'll have to think about this. It would break a lot of code to change this now, and there are some good reasons for specifying an absolute time instead of a relative time. For example, if you use the recommended loop to wait on the condition variable, using an absolute time is at the very least a lot more convenient.
The real problem isn't relative vs absolute (It is clear that absolute is the only choice for accurate timing) but what the clock is. It would be better to expend effort on supporting alternate clocks, in particular something similar to posix CLOCK_MONOTONIC. A monotonic clock would address the original issue (system time changing).
The links in Alexander Terekhov's post also mention this. Do you have any suggestions how to implement something along these lines?
On Windows, you could use GetTickCount to do the timing, with GetSystemTimeAsFileTime and SystemTimeToFileTime to get the limits --- e.g.

const SYSTEMTIME suppliedEndTime=...;
FILETIME endTime={0};
SystemTimeToFileTime(&suppliedEndTime,&endTime);
FILETIME startTime={0};
GetSystemTimeAsFileTime(&startTime);
DWORD const initialTick=GetTickCount();
ULONGLONG const diff=
    reinterpret_cast<ULARGE_INTEGER const&>(endTime).QuadPart-
    reinterpret_cast<ULARGE_INTEGER const&>(startTime).QuadPart;
ULONGLONG const elapsedTicks=diff/10000; // FILETIME is in 100ns units; convert to ms
ASSERT(elapsedTicks<=ULONG_MAX);
while((GetTickCount()-initialTick)<elapsedTicks)
{
    doStuff();
}
2. The second problem is that the thread function has a catch(...) at the root, so when an unhandled exception is thrown in a thread, it is silently eaten and the thread stops. The process stays alive and knows nothing about it.
It does know something (that the thread has terminated) about it if it joins the thread. It won't need to know anything about it if the thread doesn't throw unhandled exceptions. I'm not at all sure what Mag actually wants the library to do in this case?
Even worse, under VC6 catch(...) will catch OS exceptions.
So don't use VC6...
Unfortunately, though I'm open to correction, isn't this also true under later versions of VC? At least as a compiler option?
I think this is always true under VC7.1, too. What you can do is call set_unexpected in the thread startup code, but I am not sure that this is any better than catch(...).

As for what one should do with an unhandled exception in a thread... I don't know. Terminate sounds good, but you could do with a means of forcing other threads to unwind.

Anthony
--
Anthony Williams
Senior Software Engineer, Beran Instruments Ltd.
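The alternative Anthony gestures at (let the joiner see the failure rather than eating it or terminating the whole process) can be made concrete with a sketch using C++11 facilities, which postdate this discussion; run_and_report, boom, and fine are illustrative names, not any library's API:

```cpp
#include <exception>
#include <stdexcept>
#include <string>
#include <thread>

// The thread wrapper catches everything, but instead of discarding the
// exception it captures it, and the joining thread rethrows and handles it.
std::string run_and_report(void (*fn)())
{
    std::exception_ptr err;
    std::thread t([&err, fn]() {
        try { fn(); }
        catch (...) { err = std::current_exception(); }
    });
    t.join();                       // join() orders the write to err before the reads below
    if (!err) return "ok";
    try { std::rethrow_exception(err); }
    catch (const std::exception& e) { return e.what(); }
    catch (...) { return "unknown exception"; }
}

void boom() { throw std::runtime_error("boom"); }
void fine() {}
```

With this design the choice maq asks for is preserved: a thread function can still install its own catch(...) and handle things locally, while anything that escapes is reported to whoever joins instead of vanishing.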

Anthony Williams wrote:
On Windows, you could use GetTickCount to do the timing, with GetSystemTimeAsFileTime and SystemTimeToFileTime to get the limits --- e.g.
const SYSTEMTIME suppliedEndTime=...;
FILETIME endTime={0};
SystemTimeToFileTime(&suppliedEndTime,&endTime);
FILETIME startTime={0};
GetSystemTimeAsFileTime(&startTime);
DWORD const initialTick=GetTickCount();
ULONGLONG const diff=
    reinterpret_cast<ULARGE_INTEGER const&>(endTime).QuadPart-
    reinterpret_cast<ULARGE_INTEGER const&>(startTime).QuadPart;
ULONGLONG const elapsedTicks=diff/10000;
ASSERT(elapsedTicks<=ULONG_MAX);
while((GetTickCount()-initialTick)<elapsedTicks)
{
    doStuff();
}
GetTickCount can wrap around, though (after about 49.7 days, since it's a 32-bit millisecond counter). There is also QueryPerformanceCounter for a high-resolution tick count; QueryPerformanceFrequency will give you the frequency of the counter, as it varies from system to system.

Russell
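The wrap-around itself is benign for elapsed-time measurement: as long as both samples are 32-bit and the subtraction is done in unsigned arithmetic (as in the while condition in the code quoted above), the difference is correct modulo 2^32. A small sketch:

```cpp
#include <stdint.h>

// Elapsed milliseconds between two GetTickCount-style samples. Unsigned
// subtraction wraps modulo 2^32, so the result is correct even when the
// counter rolled over between the two reads (provided the true elapsed
// time is under ~49.7 days).
uint32_t elapsed_ms(uint32_t start, uint32_t now)
{
    return now - start;
}
```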

Russell Hind <rhind@mac.com> writes:
Anthony Williams wrote:
On Windows, you could use GetTickCount to do the timing, with GetSystemTimeAsFileTime and SystemTimeToFileTime to get the limits --- e.g.

const SYSTEMTIME suppliedEndTime=...;
FILETIME endTime={0};
SystemTimeToFileTime(&suppliedEndTime,&endTime);
FILETIME startTime={0};
GetSystemTimeAsFileTime(&startTime);
DWORD const initialTick=GetTickCount();
ULONGLONG const diff=
    reinterpret_cast<ULARGE_INTEGER const&>(endTime).QuadPart-
    reinterpret_cast<ULARGE_INTEGER const&>(startTime).QuadPart;
ULONGLONG const elapsedTicks=diff/10000;
ASSERT(elapsedTicks<=ULONG_MAX);
while((GetTickCount()-initialTick)<elapsedTicks)
{
    doStuff();
}
GetTickCount can wrap around, though (after about 49.7 days, since it's a 32-bit millisecond counter).
Yes, that's what the ASSERT checks. How often do you schedule a software event for 40+ days into the future?
There is also QueryPerformanceCounter for high resolution tick count, QueryPerformanceFrequency will give you the frequency of the counter as it will vary from system to system.
Yes. I expect that the wrap-around for that is longer, since it is 64-bit. Even if it were set to the processor speed, on a 4GHz CPU you would still have 2^32 seconds, which is a good few years. I can't imagine there are any systems where the resolution is better than the CPU frequency (what good would it do?). There doesn't seem to be any comment about the minimum resolution, but I guess it should be at least as good as GetTickCount, otherwise it wouldn't qualify as High Performance. It's slightly more work to use due to the system variability, but not a lot.

Anthony
--
Anthony Williams
Senior Software Engineer, Beran Instruments Ltd.

Anthony Williams wrote:
Yes, that's what the ASSERT checks. How often do you schedule a software event for 40+ days into the future?
Sorry, missed that line.
Yes. I expect that the wrap-around for that is longer, since it is 64-bit. Even if it were set to the processor speed, on a 4GHz CPU you would still have 2^32 seconds, which is a good few years. I can't imagine there are any systems where the resolution is better than the CPU frequency (what good would it do?). There doesn't seem to be any comment about the minimum resolution, but I guess it should be at least as good as GetTickCount, otherwise it wouldn't qualify as High Performance.
It's slightly more work to use due to the system variability, but not a lot.
I didn't know if the performance counters would give any advantage; I just thought I'd point them out as an alternative that could be used, and see if anyone had any opinions either way. IMHO, GetTickCount is fine for what we use it for.

Thanks
Russell

Michael Glassford <glassfordm <at> hotmail.com> writes:
"Darryl Green" <darryl.green <at> unitab.com.au> wrote in message:
It would be better to expend effort on supporting alternate clocks, in particular something similar to posix CLOCK_MONOTONIC. A monotonic clock would address the original issue (system time changing).
The links in Alexander Terekhov's post also mention this. Do you have any suggestions how to implement something along these lines?
Well, xtime currently has the interface for it, but no implementation for anything other than TIME_UTC. I assume TIME_MONOTONIC etc. are there for future extension. The posix times() function's return value represents a form of monotonic time since a fixed point no later than the process start time. This should provide a widely available form of monotonic time information (with a limited range, though). On posix pthreads systems with the relevant options supported, there shouldn't be a problem making xtime support whatever subset of clocks the system supports.

That said, is this the right thing to do? xtime is a very "C" sort of creature and seems to lack basic type safety. Wouldn't it be better to have a different time type for each clock type? A common duration type could be used.

One way or another, changing the time type is easy, but the actual timed wait implementation is harder, as evidenced by maq's test on windows. The root of this lies in the fact that xtime's TIME_UTC is "wallclock" time, while the relative time used by the windows Wait.. functions is a "tick count" (afaik) and isn't affected by changing the system time. Unless windows supports some form of timer event that uses "wallclock" time, there doesn't seem to be a way to fix this. Posix systems should act consistently (CLOCK_REALTIME used throughout) afaik.

The existing windows code would work fine (except for the race/lack-of-precision issue that already exists in converting from an absolute time to a duration) using a monotonic timer based on the same tick count as the built-in timeout functions. The high resolution timer doesn't seem in any way useful here, as there is no way of scheduling a timeout to a higher precision than the tick period anyway.

For posix compatibility, the type of clock used needs to be an attribute of the condition variable. It isn't possible to have each timed_wait use a different clock. Once again, it seems that this should be a template parameter, not a runtime option.
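The clock-as-template-parameter idea could look roughly like this; utc_clock and timer are illustrative names under assumed semantics, not an actual Boost interface:

```cpp
#include <ctime>

// A clock policy supplies now(); a timer templated on the policy computes
// and checks an absolute deadline against that one clock only.
struct utc_clock
{
    static long now() { return static_cast<long>(std::time(0)); }
};

template <class Clock>
class timer
{
public:
    explicit timer(long delay_sec) : deadline_(Clock::now() + delay_sec) {}
    bool expired() const { return Clock::now() >= deadline_; }
private:
    long deadline_;
};
```

A condition type templated on the timer (e.g. condition<timer<monotonic_clock> >) would then fix the clock choice at compile time, matching the observation that posix needs the clock to be an attribute of the condition variable rather than of each individual wait.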
I think nptl + glibc 2.3.3 on linux supports the _POSIX_CLOCK_SELECTION option needed for the above to work, but I haven't upgraded yet. I'd be happy to have a crack at implementing something and testing on that platform once I do. Does the general direction sound ok? Basically: make a timer that takes the clock type as a policy, and make the classes that support timeouts (condvars and mutexes so far) templated on the timer type. Provide partial specializations for things like full pthreads-based clock selection, but allow (as per the current windows implementation) a fallback to a duration-based timeout running on whatever clock the OS uses for timeouts.
2. The second problem is that the thread function has a catch(...) at the root, so when an unhandled exception is thrown in a thread, it is silently eaten and the thread stops. The process stays alive and knows nothing about it.
It does know something (that the thread has terminated) about it if it joins the thread. It won't need to know anything about it if the thread doesn't throw unhandled exceptions. I'm not at all sure what Mag actually wants the library to do in this case?
Even worse, under VC6 catch(...) will catch OS exceptions.
So don't use VC6...
Unfortunately, though I'm open to correction, isn't this also true under later versions of VC? At least as a compiler option?
Sorry, I don't know. Sounds horrible.
On the other hand, letting exceptions escape from the thread function is also bad.

Bad = undefined or some other reason?
Bad = undefined, or at best implementation defined.
Ok. But then that gets into the whole "isn't everything?" when it comes to C++ and threads.
For what it's worth, the thread_function in the thread_dev branch (which I'm working on merging into the main branch as time allows) has this:
[snip]
How is this better?
I wasn't necessarily implying that it was better. My main reason for mentioning it (though I failed to say so) was to see what people thought about it.
[snip my long comment] Ah. I couldn't see how it fixed anything for maq. I probably didn't really need to make quite so much noise; just "not any better" would have done, I guess.
Thanks for your comment.
No problem. Regards Darryl.

On Tue, 20 Apr 2004 08:16:06 +0000 (UTC), Darryl Green wrote
Michael Glassford <glassfordm <at> hotmail.com> writes:
"Darryl Green" <darryl.green <at> unitab.com.au> wrote in message
That said, is this the right thing to do? xtime is a very "c" sort of a creature and seems to lack basic type safety. Wouldn't it be better to have a different time type for each clock type? A common duration type could be used.
I might suggest boost::posix_time::time_duration and its brethren. Then you could write expressions like:

time_duration td = milliseconds(100) + microseconds(20);

Bill and I had a discussion about moving toward this in the future, but never had a chance to do it.
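A stripped-down illustration of that expression style (the real boost::posix_time::time_duration is far richer; this toy single-type sketch just counts microseconds and borrows the factory names):

```cpp
// A toy duration counting microseconds, enough to support
// milliseconds(100) + microseconds(20) style expressions.
struct time_duration
{
    long long usec;
    time_duration operator+(const time_duration& o) const
    {
        time_duration r = { usec + o.usec };
        return r;
    }
};

time_duration milliseconds(long long n) { time_duration d = { n * 1000 }; return d; }
time_duration microseconds(long long n) { time_duration d = { n }; return d; }
```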
One way or another, changing the time type is easy, but the actual timed wait implementation is harder, as evidenced by maq's test on windows. The root of this lies in the fact that xtime's TIME_UTC is "wallclock" time, while the relative time used by the windows Wait.. functions is a "tick count" (afaik) and isn't affected by changing the system time. Unless windows supports some form of timer event that uses "wallclock" time there doesn't seem to be a way to fix this. Posix systems should act consistently (CLOCK_REALTIME used throughout) afaik.
As far as I know, all timers are relative...
... Does the general direction sound ok? Basically make a timer that takes the clock type as a policy and make classes that support timeouts (condvars, mutexes so far) templated on the timer type? Provide partial specializations for things like full pthreads-based clock selection, but allow (as per the current windows impl) a fallback to using a duration-based timeout running on whatever clock the OS uses for timeouts?
In general, separating the clock from the representation of time is the correct approach. date_time does this because there is a recognition that different platforms (and systems) have different clock sources that can be adapted to produce a particular time representation with different resolution and accuracy attributes. From what I'm reading here, this is exactly the issue you are facing...

Jeff

Jeff Garland <jeff <at> crystalclearsoftware.com> writes:
I've been offline a while. Still interested in improving time support in boost threads though.
On Tue, 20 Apr 2004 08:16:06 +0000 (UTC), Darryl Green wrote
xtime is a very "c" sort of a creature and seems to lack basic type safety. Wouldn't it be better to have a different time type for each clock type? A common duration type could be used.
I might suggest boost::posix_time::time_duration and its brethren. Then you could write expressions like:
time_duration td = milliseconds(100)+microseconds(20);
I agree boost::posix_time::time_duration would be fine to use as a duration representation. Broadly speaking, time_duration would be used where posix uses the timespec struct.
Bill and I had a discussion about moving toward this in the future, but never had a chance to do this.
Ok - but what is the time part of this then? xtime is a duration since some epoch. Do you mean to directly replace xtime with a duration type? I think xtime needs replacing with a number of distinct clock types. If we simply use a duration type everywhere, there is a possibility that someone passes a duration intended to represent a time relative to epoch A to a function that is using clock/epoch B. This strikes me as something that should be detected at compile time. If all times use the same duration type, it is obviously simple enough to perform sensible conversions where needed.
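The compile-time check asked for here falls out naturally if each clock gets its own tag type while durations stay untagged; a sketch with illustrative names:

```cpp
// Each clock is a tag type; a time_point is bound to one clock. Durations
// are plain numbers (seconds here), shared by all clocks.
struct utc_time {};  // epoch: the UTC epoch
struct up_time {};   // epoch: system start

template <class Clock>
struct time_point
{
    long sec;
    explicit time_point(long s) : sec(s) {}
};

// Same-clock subtraction yields a clock-free duration. There is no
// overload taking two different clock tags, so mixing epochs, e.g.
// time_point<utc_time>(10) - time_point<up_time>(5), fails to compile.
template <class Clock>
long operator-(const time_point<Clock>& a, const time_point<Clock>& b)
{
    return a.sec - b.sec;
}
```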
The root of this lies in the fact that xtime's TIME_UTC is "wallclock" time, while the relative time used by the windows Wait.. functions is a "tick count" (afaik) and isn't affected by changing
As far as I know, all timers are relative...
Sorry: The root of this lies in the fact that xtime's TIME_UTC is "wallclock" time, while the duration passed to the windows Wait.. functions is used by a "tick count" clock with an undefined epoch (afaik).
In general the separation of the clock from the representation of time is the correct approach. date_time does this because there is a recognition that different platforms (and systems) have different clock sources that can be adapted to produce a particular time representation with different resolution and accuracy attributes. From what I'm reading here this is exactly the issue you are facing...
It seems ptime is "clock independent" in that it can be constructed from (potentially at least) a number of clock types. However, this seems not to address all the issues that arise when trying to deal with various "timers" or "clocks" for realtime systems. The clock and timer concepts are not altogether orthogonal, in that time representations imply/know the epoch, which is also a property of the clock. The ptime type is fine for "wallclock" time, but the other times I am considering have an epoch rather arbitrarily based on some previous event on the system on which they are running (e.g. system start or process start). "Converting" between these times should be possible by doing something like:

time<up_time> start_time();                    // construct time at start of epoch
time<up_time> now_time(up_time_clock::now());
time<utc_time> time_of_day(utc_clock::now());  // time now
cout << "system start time was "
     << time_of_day - (now_time - start_time) << endl;

The duration (now_time - start_time) is clock independent.

cout << now_time << endl;

should probably print something like P1234H56M78,123456789S, or perhaps just P4445838.123456789S, or 4445838.123456789 if the system has been up for about 51 days. This represents the time as the period since the start of its epoch, which would seem to be the only "universal" representation possible. As an example using boost thread:

extern bool something_done;
void start_something();

condition<up_time> cond;
mutex m;
lock lk<m>;

bool done_by(time<up_time> tm)
{
    while (!something_done)
    {
        if (!cond.timed_wait(lk, tm))
            return false;
    }
    return true;
}

int main(int, char*[])
{
    time<up_time> a(minutes(5));                   // construct from duration since epoch
    time<utc_time> b(date(min_date), minutes(5));  // using posix_time-like ctor
    start_something();
    if (done_by(a))
        cout << "Did something within 5 minutes of system start." << endl;
    if (done_by(b)) // won't compile
        cout << "This is a very early computer!" << endl;
    return 0;
}

For a more realistic use case, consider a remote telemetry unit (RTU) that has a reasonably precise but inaccurate clock (it is inaccurate because it has sat in a low power state for months or years before being powered up and being expected to start running timing-critical software). Every now and again this device communicates with a host system that has an accurate clock. The RTU will immediately "jump" its clock to match the host. It transfers recorded data to the host only after setting its own clock. The historical data has all been stored with an "uptime" timestamp, which is converted to utc as part of marshalling. The uptime clock is fine for this, as the system doesn't have any persistent data storage. The system doesn't really need/use an RTC that runs while it is powered off, though it may have one so it can report on how long it has been turned off etc. Most timers on the system are not concerned with time of day and want to time out in 10s from now (for example), regardless of whether the system time is adjusted (in either direction) by e.g. 3 days during that interval.

Have I somehow drawn the clock/time/duration distinctions in the wrong places, or is the above a reasonable approach? I've had a look at the implementation of date_time and I think it might fit ok with the model I was thinking of, where any time type is basically identical except for some sort of clock dependent trait/policy. Using date_time this would be time_system. This looks like a reasonable place to start, but base_time seems to require that: a time has a date component; a time has a zone. I guess one could write a time_system that had only a vestigial concept of date and zone? Is there a way of making a pure day-based date (no fancy julian date calcs)?

Regards
Darryl Green.

"Darryl Green" <darryl.green@unitab.com.au> wrote in message news:loom.20040422T041504-129@post.gmane.org...
Jeff Garland <jeff <at> crystalclearsoftware.com> writes:
I've been offline a while. Still interested in improving time support in boost threads though.
On Tue, 20 Apr 2004 08:16:06 +0000 (UTC), Darryl Green wrote
xtime is a very "c" sort of a creature and seems to lack basic type safety. Wouldn't it be better to have a different time type for each clock type? A common duration type could be used.
I might suggest boost::posix_time::time_duration and its brethren. Then you could write expressions like:
time_duration td = milliseconds(100)+microseconds(20);
I agree boost::posix_time::time_duration would be fine to use as a duration representation. Broadly speaking, time_duration would be used where
I'm still interested too, though I haven't been commenting much on this thread (not having much of importance to contribute yet).

posix uses the timespec struct. Bill and I had a discussion about moving toward this in the future, but never had a chance to do this.
Ok - but what is the time part of this then? xtime is a duration since some epoch. Do you mean to directly replace xtime with a duration type?
If this was a public discussion, do you have a link to it? I'd be interested in reading through it.

I think xtime needs replacing with a number of distinct clock types. If we simply use a duration type everywhere, there is a possibility that someone passes a duration intended to represent a time when used with epoch A to a function that is using clock/epoch B. This strikes me as something that should be detected at compile time. If all times use the same duration type it is obviously simple enough to perform sensible conversions where needed.
The root of this lies in the fact that xtime's TIME_UTC is "wallclock" time, while the relative time used by the windows Wait.. functions is a "tick count" (afaik) and isn't affected by changing the system time.
As far as I know, all timers are relative...
Sorry: The root of this lies in the fact that xtime's TIME_UTC is "wallclock" time, while the duration passed to the windows Wait.. functions is used by a "tick count" clock with an undefined epoch (afaik).
I agree that this is the root of the OP's problem. For what it's worth, in my response to the OP I made a distinction between "absolute" and "relative" times, indicating there was good reason for Boost.Thread to use an absolute time with the timed_wait function. Using stricter terminology, what I meant by absolute time was time relative to a fixed epoch; what I meant by relative time is time relative to a non-fixed epoch (i.e., relative to the time when the timed_wait function was called). I still think it's a good idea in general to pass what I was calling an absolute time to timed_wait. What do you think? [snipped discussion of types of time]
As an example using boost thread:
extern bool something_done; void start_something();
condition<up_time> cond;
mutex m; lock lk(m);
bool done_by(time<up_time> tm) { while (!something_done) { if (!cond.timed_wait(lk, tm)) return false; } return true; }
int main(int, char*[]) { time<up_time> a(minutes(5)); // construct from duration since epoch time<utc_time> b(date(min_date), minutes(5)); // using posix_time-like ctor

I didn't comment on this when you mentioned it the first time because I wanted to think about it first, but I'd be pretty hesitant to turn the condition class into a template class merely for the sake of the timed_wait functions. I could see templating the timed_wait functions themselves if necessary, but templating the whole condition class seems to me like trying to solve the problem at the wrong level.

start_something(); if (done_by(a)) cout << "Did something within 5 minutes of system start." <<
endl;
if (done_by(b)) // won't compile cout << "This is a very early computer!" << endl;
return 0; }
[snipped "a more realistic use case"]

Have I somehow drawn the clock/time/duration distinctions in the wrong places, or is the above a reasonable approach? I've had a look at the implementation of date_time and I think it might fit ok with the model I was thinking of where any time type is basically identical except for some sort of clock dependent trait/policy.
Using date_time this would be time_system. This looks like a reasonable place to start but the base_time seems to require that:
A time has a date component. A time has a zone.
I guess one could write a time_system that had only a vestigial concept of date and zone? Is there a way of making a pure day based date (no fancy julian date calcs)?
Regards Darryl Green.

Michael Glassford <glassfordm <at> hotmail.com> writes:
"Darryl Green" <darryl.green <at> unitab.com.au> wrote in message news:loom.20040422T041504-129 <at> post.gmane.org...
As an example using boost thread:
extern bool something_done; void start_something();
condition<up_time> cond;
I didn't comment on this when you mentioned it the first time because I wanted to think about it first, but I'd be pretty hesitant to turn the condition class into a template class merely for the sake of the timed_wait functions. I could see templating the timed_wait functions themselves if necessary, but templating the whole condition class seems to me like trying to solve the problem at the wrong level.
From The Open Group Base Specifications at http://www.opengroup.org/products/publications/catalog/c032.htm (follow the link to the free HTML version):

int pthread_condattr_setclock(pthread_condattr_t *attr, clockid_t clock_id);

and

int pthread_cond_init(pthread_cond_t *restrict cond, const pthread_condattr_t *restrict attr);

To follow this model directly, one would need to create an attributes object and pass it as a c'tor parameter when creating a condition variable. This works too - it just makes it a runtime rather than a compile-time error to mix timer types. I'd pick compile time, but I guess there might be cases where this is bad because of potential template explosion - but I'm not convinced. Note that it may require quite some trickery to get some or all of the clock types to work as expected, portably, so the template parameter on the condvar may be quite "reasonable" in that the code may actually need to be quite different (eg. actually use some sort of system realtime event to implement a timeout) for some clock types on some platforms.

Do you see a need to have 2 threads waiting on a condvar, one with a timeout next tuesday at 12:34pm EST, taking into account daylight saving, and the other precisely 1.23456789 seconds from now, using monotonic time? That might be nice, but afaics posix doesn't support doing this. I'm not sure if there is a profound reason why, or if it allows some form of scheduler optimisation/design (maybe the condvar is in some kernel/scheduler-maintained list, with the condvar having a list of waiting tasks sorted by timeout, which would be ugly if the timeouts could be wrt different clocks).

Do you think the different-clocks-for-timeouts-on-one-condvar use case is real? Important? If using different clocks is not supported, do you see any advantage to selecting the clock at time of construction vs. compile time?

Regards
Darryl.
participants (8)
-
"Magomed Abdurakhmanov
-
Alexander Terekhov
-
Anthony Williams
-
Darryl Green
-
Jeff Garland
-
maq@mail.ru
-
Michael Glassford
-
Russell Hind