
Christopher Kohlhoff wrote:
--- Andrew Schweitzer <a.schweitzer.grps@gmail.com> wrote:
*Maybe there should be a "run_forever" version of demuxer::run?
If you want it to run forever, just give it some "work":

    asio::demuxer d;
    asio::demuxer::work w(d);
    d.run();
Sounds good. [snip]
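(For anyone reading this on a newer asio, here is a minimal sketch of the same "work" idiom in current Boost.Asio spelling, with io_context and a work guard standing in for demuxer and demuxer::work; the thread and the one-second delay are purely illustrative.)

    #include <boost/asio.hpp>
    #include <chrono>
    #include <thread>

    int main()
    {
        boost::asio::io_context io;

        // The work guard plays the role of asio::demuxer::work: while it
        // exists, io.run() will not return, even with no handlers queued.
        auto guard = boost::asio::make_work_guard(io);

        // Release the guard from another thread so run() can eventually
        // return; without this, run() would block forever.
        std::thread t([&guard] {
            std::this_thread::sleep_for(std::chrono::seconds(1));
            guard.reset();
        });

        io.run();   // returns once the guard is reset and no work remains
        t.join();
    }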
*It looks like deadline_timer objects need to be dynamically allocated, right?
No, the deadline_timer object itself does not need to be dynamically allocated.
It looks to me like they stop themselves when they are destroyed.
The destructor implicitly cancels any outstanding asynchronous wait, yes. The handler for that wait operation is still dispatched.
I suppose what I meant was that in order to prevent the handler from being dispatched when the deadline_timer goes out of scope, you have to prevent the deadline_timer from being destroyed, which probably often means dynamically allocating it.

Now that I think about it... how are you expecting the deadline_timer object to be used? I assume it shouldn't just go out of scope. Since we might want to keep it around to cancel, presumably we shouldn't pass it to the handler to delete, or we could be canceling a deleted object... although this is what I'm actually doing in my code. So are you expecting some data structure to keep it around? And then how should it be cleaned up?

A related question: how do you ask whether a timer is still running? I think you could call expires_at() and compare it to the current time. There look to be two issues with this:
1) Will it crash if the timer has already expired? (I haven't tried it; I just took a quick look at the code.)
2) Oddly enough, getting the current time can be quite expensive, for example on WinCE (I think because you have to get data from the kernel). It might be a lot faster to just say "I'm expired" if impl_ is null. [snip]
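(One possible answer to the lifetime question, sketched in current Boost.Asio spelling; the shared_ptr ownership and the handler name are my own illustration, not anything the library prescribes: keep the timer in a shared_ptr, bind that pointer into the completion handler so the object cannot be destroyed while a wait is outstanding, and use the error code to tell a real expiry from a cancellation.)

    #include <boost/asio.hpp>
    #include <boost/bind/bind.hpp>
    #include <boost/date_time/posix_time/posix_time.hpp>
    #include <boost/make_shared.hpp>
    #include <boost/shared_ptr.hpp>
    #include <iostream>

    typedef boost::shared_ptr<boost::asio::deadline_timer> timer_ptr;

    // The bound shared_ptr keeps the deadline_timer alive until the handler
    // has run, so the handler never touches a destroyed object.
    void on_timeout(timer_ptr timer, const boost::system::error_code& ec)
    {
        if (ec == boost::asio::error::operation_aborted)
        {
            std::cout << "timer was canceled\n";
            return;
        }
        std::cout << "timer expired\n";
    }

    int main()
    {
        boost::asio::io_context io;

        timer_ptr timer = boost::make_shared<boost::asio::deadline_timer>(io);
        timer->expires_from_now(boost::posix_time::seconds(1));
        timer->async_wait(boost::bind(&on_timeout, timer,
            boost::asio::placeholders::error));

        // The rough "is it still running?" test discussed above: compare the
        // deadline against the current time (this does cost a clock read).
        bool still_pending = timer->expires_at() >
            boost::posix_time::microsec_clock::universal_time();
        std::cout << "still pending: " << still_pending << "\n";

        io.run();
    }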
*There aren't really unique IDs for timers, right? The address of the timer object must be unique as long as the timer is running, but after the timer completes it could be re-used for another timer, so that doesn't seem like a great idea.
This address is used internally as a cancellation token. Since the destruction of a deadline_timer automatically cancels the timer (and removes the token from the backend), it's not a problem if the address gets reused.
It won't be a problem for asio, but it could be a problem for user code that compares timer addresses to work out which timer fired or which one is being passed around, since the address could belong to a more recent incarnation of the timer.
Does anyone else think unique timer IDs belong in the library? I'm not sure it's a requirement, but most timer code I've worked on has used a unique ID. It's not hard for the application to provide this, but it is a bit more work.
What use case are you thinking of here? I have never had a need for a timer ID when instead I can use boost::bind to associate whatever data I need directly with the completion handler.
Good question. At this point I don't see our use case occurring with asio. It's what I'm used to, and it seems like a good idea, possibly just from habit, but maybe there's an underlying reason. I think it comes down to whether the user's context usually provides enough easily bindable data to differentiate between past and multiple current executions of the handler. Also, I think this data might need to be stored by the user outside of the deadline_timer, so that the user can decide which timer to cancel or check which timers are running. My sense is that by invoking the same code over and over again asynchronously we are just asking to be confused about which invocation we are in. It might be nice if the library just generated a unique 32- or 64-bit value for each timer. Or maybe that's more trouble than it's worth.

Here's our use case: timer completions go to a queue (unlike asio, which fires them immediately). Once they are in the queue they can't be canceled. So we had situations like this:
1) Start timer 1.
2) Timer 1 completes, goes to queue.
3) Just at that instant, cancel timer 1.
4) Start timer 2.
5) Pop timer 1 from the queue for processing.
Now it's very nice to be able to tell that it's timer 1 (to be ignored) and not timer 2 (to be handled).

On the surface at least, asio doesn't have this particular problem, since timers are not stuck in a queue. I think the problem arises in our case because the handler can execute after the timer has been canceled, whereas, if I read the asio code correctly, cancel either prevents the handler from ever executing (without the "aborted" code) or, if execution is already in progress, waits for it to complete via the wait on the select_reactor's mutex_. I think user code might still run into this problem if it doesn't lock correctly: if it changes any state, assuming that timer 1 won't execute, before canceling timer 1, the timer 1 handler could still execute and get confused. Maybe that's just an issue of good multi-threaded programming practice.
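(To make the boost::bind suggestion concrete, here is a sketch of how a generation counter of my own devising could stand in for the unique timer ID described above; current Boost.Asio spelling, and the retry_timer class and its names are hypothetical. Each (re)start bumps the counter, the current value is bound into the handler, and a completion whose bound value no longer matches is treated as stale.)

    #include <boost/asio.hpp>
    #include <boost/bind/bind.hpp>
    #include <boost/date_time/posix_time/posix_time.hpp>
    #include <iostream>

    // Hypothetical wrapper: the generation counter plays the part of a
    // library-provided unique timer ID. Restarting or canceling bumps the
    // counter, so a handler carrying an older value knows it is stale.
    class retry_timer
    {
    public:
        explicit retry_timer(boost::asio::io_context& io)
            : timer_(io), generation_(0) {}

        void start(boost::posix_time::time_duration d)
        {
            ++generation_;  // invalidates completions from any earlier start
            timer_.expires_from_now(d);
            timer_.async_wait(boost::bind(&retry_timer::on_wait, this,
                generation_, boost::asio::placeholders::error));
        }

        void cancel() { ++generation_; timer_.cancel(); }

    private:
        void on_wait(unsigned generation, const boost::system::error_code& ec)
        {
            if (generation != generation_ ||
                ec == boost::asio::error::operation_aborted)
            {
                std::cout << "ignoring stale or canceled timer "
                          << generation << "\n";
                return;
            }
            std::cout << "handling timer " << generation << "\n";
        }

        boost::asio::deadline_timer timer_;
        unsigned generation_;
    };

    int main()
    {
        boost::asio::io_context io;
        retry_timer t(io);

        t.start(boost::posix_time::milliseconds(50));   // "timer 1"
        t.cancel();                                     // its completion will be ignored
        t.start(boost::posix_time::milliseconds(100));  // "timer 2" is the one handled

        io.run();
    }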
Cheers,
Chris