RE: [boost] Re: Re: Future of threads (III)

On Behalf Of scott
Subject: RE: [boost] Re: Re: Future of threads (III)
Do you guys have any thoughts on how to "get work into" the class (now that it is truly a running thread)? Cos this is the essence of what an AO is. I think :-)
Cheers, Scott
AO's should either have a queue for work or hook into a queue. They are a bit like a worker pool with one worker ;-)

Another idiom is a future value, where you ask for a value and don't block until you "read" the value. ACE also has an implementation of this. You can think of it as a bit like overlapped I/O.

I also advocate a different, architecturally neutral style where you can have things that look like function calls but can be:
1. a function call
2. a call to a queue, work pool or active object
3. an IPC mechanism for same-machine transfer, e.g. shared memory
4. a network mechanism (TCP, UDP, TIB or something) for

If it can be enabled by a policy-driven approach then you can change the architecture of your application by simply changing policies.

I think David Abrahams' named parameters could help greatly with this approach, along with the serialization lib for marshalling support. This should provide a solid basis. I've also thought that perhaps boost::signals might be the correct approach to take for this, but I'm unsure.

To support this approach you also need mutex aspects that are also policy driven. I am preparing to submit such locking that builds on the existing mutexes and locks.

Regards,

Matt Hurd
_______________________
Susquehanna Pacific P/L
hurdm@sig.com
+61.2.8226.5029

On Mon, 16 Feb 2004 18:43:32 -0500 "Hurd, Matthew" <hurdm@sig.com> wrote:
AO's should either have a queue for work or hook into a queue. They are a bit like a worker pool with one worker ;-)
Another idiom is a future value, where you ask for a value and don't block until you "read" the value. ACE also has an implementation of this. You can think of it as a bit like overlapped I/O.
Again, as you have said previously, I think ACE has managed to handle these ideas fairly well. However, there still exist several ways in which their implementation may be improved, especially if it is decoupled from the rest of the lib. I have used ACE+TAO for years, and have grown accustomed to the baggage of the whole system, but I understand the reluctance of some. However, please note that to obtain the level of functionality that everyone seems to desire would necessitate a certain amount of baggage (and dependency) as well.
I think David Abrahams' named parameters could help greatly with this approach, along with the serialization lib for marshalling support. This should provide a solid basis. I've also thought that perhaps boost::signals might be the correct approach to take for this, but I'm unsure.
Indeed. In fact, I have added on a piece to Boost.Signals that allows general dispatching (still typesafe, and based on types of signals/slots). Boost.Signals dispatches through a specific signal type, and my extension allows any signal type. I have used this with thread policies to handle interthread communication between specific ACE threads and the main thread for wxWindows.
To support this approach you also need mutex aspects that are also policy driven. I am preparing to submit such locking that builds on the existing mutexes and locks.
As you can see, to push this requires almost as much underpinning as the same mechanism in ACE. Do not get me wrong, I am not saying that since it is in one place, it should not be in another. However, if Boost.Threads goes down this path, it would be quite naive to think that it can be done without all the drawbacks of current implementations of the same thing. That being said, I'd be willing to lend some time (and what some may, or may not, call expertise) to investigate the issue from a Linux point of view (most of the discussion so far has come from the Windows world, I think). -- Jody Hagins

-----Original Message----- From: boost-bounces@lists.boost.org [mailto:boost-bounces@lists.boost.org]On Behalf Of Hurd, Matthew Sent: Tuesday, February 17, 2004 12:44 PM To: Boost mailing list Subject: RE: [boost] Re: Re: Future of threads (III)
AO's should either have a queue for work or hook into a queue. They are a bit like a worker pool with one worker ;-)
yep.
Another idiom is a future value, where you ask for a value and don't block until you "read" the value. ACE also has an implementation of this. You can think of it as a bit like overlapped I/O.
Yes. This was also in the ActiveObject pattern. It was on the "client-side", i.e. available to the thread that submits Method Requests as a means to acquire the results. While I understand its place in the ActiveObject pattern, I would hope that if a boost::active_object came into existence, there would be a single (i.e. not _simple_) mechanism by which data passed from one thread to the other, with an associated notification. In a behavioural sense, the future mechanism wouldn't be appropriate for this. This way of thinking does require that all threads are active_objects, which may be too aggressive <shrug>. A small sample of empirical data says it's fine.
I also advocate a different, architecturally neutral style where you can have things that look like function calls but can be:
1. a function call
2. a call to a queue, work pool or active object
3. an IPC mechanism for same-machine transfer, e.g. shared memory
4. a network mechanism (TCP, UDP, TIB or something) for
I understand the reasons you ask for 1...4, but would like to add some other thoughts to see what you think.

An ActiveObject servant contains code. That code is an implementation of behaviour, and it's what (I believe) we tacitly want when we all say that "thread-inherited-classes" or "active objects" are a nice manner in which to deal with threads. (Note: more accurately, the pattern says that thread == scheduler and a scheduler executes servant code, but I think we can skip this detail.)

If the code is in the servant, what do 1 and 2 mean? My best matching of your thinking with the servant-based thinking would be that somehow the "client" can submit something into the "worker pool" that _identifies_ a function that is in the servant. It would be nice if this "work submission" could carry some arguments for presentation to the servant.

A detail that I am labouring is the separation of "calling" (I would much rather that this was "sending") and execution. Our active object has no synchronous interface with methods for us to call; it is simply a "work pool" that we add to. As soon as we make a synchronous interface available, we go down the road that leads to servant code in the client.

I suggest that notions of "function call" should be exorcised from discussions that are targeting active objects. That type of thinking is perfectly valid in some other thread (oops) but can scuttle work targeting active objects; the behaviour (i.e. the code) is inherently in the object. A client can only submit "work orders" or "Method Requests" or jobs or messages or signals.

I hope there is enough there to make a case :-)
If it can be enabled by a policy driven approach then you can change the architecture of your application by simply changing policies.
I think David Abrahams' named parameters could help greatly with this approach, along with the serialization lib for marshalling support. This should provide a solid basis. I've also thought that perhaps boost::signals might be the correct approach to take for this, but I'm unsure.
To support this approach you also need mutex aspects that are also policy driven. I am preparing to submit such locking that builds on the existing mutexes and locks.
<phew> That is quite a scope. But yes, I can see that ultimately it should cover all these dimensions. Maybe a phasing? Cheers, Scott

On Tue, 17 Feb 2004 15:27:21 +1300 "scott" <scottw@qbik.com> wrote:
While I understand its place in the ActiveObject pattern I would hope that if a boost::active_object came into existence, there would be a single (i.e. not _simple_) mechanism by which data passed from one thread to the other, with an associated notification. In a behavioural sense, the future mechanism wouldnt be appropriate for this.
The "typical" (i.e., most commonly implemented) method is to use a thread-safe message queue, coupled with a condition variable. While there are messages to be processed, the "active object" pulls them from the message queue and responds appropriately (the AO-specific logic). When there are no messages available, the AO sits on the condition variable, awaiting an indication of the arrival of a new message. I am sure you understand this, but I could not figure out any other interpretation of your question...
If the code is in the servant, what do 1 and 2 mean? My best matching of your thinking with the servant-based thinking would be that somehow the "client" can submit something into the "worker pool" that _identifies_ a function that is in the servant. It would be nice if this "work submission" could carry some arguments for presentation to the servant.
Right. Typically, the message in the queue contains all information necessary, and the AO simply calls a method, passing the message, and it is the implementation of that method which determines the functionality of the AO.
A detail that I am labouring is the separation of "calling" (would much rather that this was "sending") and execution. Our active object has no synchronous i/f with methods for us to call; it is simply a "work pool" that we add to. As soon as we make a sync i/f available then we go down that road that leads to servant code in the client.
Ahhh. This fits with the message queue protocol exactly.
I suggest that notions of "function call" should be exorcised from discussions that are targeting active objects. That type of thinking is perfectly valid in some other thread (oops) but can scuttle work targeting active objects; the behaviour (i.e. the code) is inherently in the object.
A client can only submit "work orders" or "Method Requests" or jobs or messages or signals.
I hope there is enough there to make a case :-)
I think what you are saying is right in line with traditional methods for implementing an AO, but I thought we were trying to find alternate methods, unless I misinterpreted the preceding communications. -- Jody Hagins Drinking is not a spectator sport. -- Jim Brosnan

scott <scottw <at> qbik.com> writes:
AO's should either have a queue for work or hook into a queue. They are a bit like a worker pool with one worker
yep.
(Aside: the Rhapsodia scheduler mentioned recently had an interesting implementation of the worker pool, which could presumably be as simple as a single thread serially processing scheduled tasks.)
Another idiom is a future value, where you ask for a value and don't block until you "read" the value. ACE also has an implementation of this. You can think of it as a bit like overlapped I/O.
Yes. This was also in the ActiveObject pattern. It was on the "client- side", i.e. available to the thread that submits Method Requests as a means to acquire the results.
While I understand its place in the ActiveObject pattern I would hope that if a boost::active_object came into existence, there would be a single (i.e. not _simple_) mechanism by which data passed from one thread to the other, with an associated notification. In a behavioural sense, the future mechanism wouldnt be appropriate for this.
To my way of thinking, the advantage of the active object is that all its methods are processed in a single thread, preventing concurrent access to any of its resources. Unless there is only a single resource, the active object denies opportunities for overlapping concurrency that would be inherent in other designs. The upside I can see is the simplicity for the client, in that they needn't be aware of any issues relating to concurrency.

Of course, an active object could internally contain any number of threads, provided that correct internal synchronisation was maintained.

Allowing the use of asynchronous I/O seems to break the abstraction, which leaves me wondering why the object should be active? (This is in the context of *local* objects, BTW - if the object is remote there is very little difference between a client/server solution and one using active objects, is there?)

Apologies in advance if I'm not getting something obvious...

Matt

-----Original Message----- From: boost-bounces@lists.boost.org [mailto:boost-bounces@lists.boost.org]On Behalf Of Matthew Vogt
Another idiom is a future value, where you ask for a value and don't block until you "read" the value. ACE also has an implementation of this. You can think of it as a bit like overlapped I/O.
Yes. This was also in the ActiveObject pattern. It was on the "client- side", i.e. available to the thread that submits Method Requests as a means to acquire the results.
While I understand its place in the ActiveObject pattern I would hope that if a boost::active_object came into existence, there would be a single (i.e. not _simple_) mechanism by which data passed from one thread to the other, with an associated notification. In a behavioural sense, the future mechanism wouldnt be appropriate for this.
To my way of thinking, the advantage of the active object is that all its methods are processed in a single thread, preventing concurrent access to any of its resources. Unless there is only a single resource, the active object denies opportunities for overlapping concurrency that would be inherent in other designs. The upside I can see is the simplicity for the client, in that they needn't be aware of any issues relating to concurrency.
Yeah baby. With some talk of the overheads, it's nice to touch on some benefits: simplicity of client and simplicity of servant (or active object).

I have accidentally implemented something very similar to the ActiveObject pattern. Recently I have had to apply that implementation within a codebase involving millions of lines of code - and it worked! There were expected benefits along the lines of what you mention, but there were some pretty wild ones as well. Can't really give a full explanation without consideration of my employer, but here's a summary.

Without an active object approach, development has to deal with the "normal" issues anyhow, e.g. encapsulation/OO and concurrency. Code actually "grows" according to certain rules that can't be ignored by anyone (or it don't work :-). Imposing the active object model "over the top" of this type of code went well because the model is a model of what we do! That sounds too cute, but there it is.

A class had been written that needed to be a thread - so it was. It had its own message queue and mechanisms for submitting work. There was a second class that was a proper encapsulation of significant function - it deserved to be a separate class. It was only ever accessed by the first class. After "activating" this code, the previous interfaces (pre-existing message queue and sync methods) were left entirely alone. But with the new (completely distinct) message queue, we could exchange messages with the first class (instance, of course) _and_ with the second (because it was a uniquely identified servant owned by the first class). This was a pleasant surprise.

Don't know that this case study reads very well, so I'll go out on a limb - the first class was a DNS server and the second was a DNS cache.
Of course, an active object could internally contain any number of threads, provided that correct internal synchronisation was maintained.
Allowing the use of asynchronous I/O seems to break the abstraction, which leaves me wondering why the object should be active? (This is in the context of *local* objects, BTW - if the object is remote there is very little difference between a client/server solution and one using active objects, is there?)
Apologies in advance if I'm not getting something obvious...
All bang on, as far as I can tell anyhow :-) We're all swimming up the same creek, I think. Cheers, Scott
participants (4)
-
Hurd, Matthew
-
Jody Hagins
-
Matthew Vogt
-
scott