RE: [boost] Re: [Threads] Simple active object wrapper, take 2

Hi Matty,
-----Original Message-----
From: Matthew Vogt [mailto:mvogt@juptech.com]
Subject: [boost] Re: [Threads] Simple active object wrapper, take 2
Actually, the paper does note that for performance reasons, the thread could be replaced with a thread pool. This requires support, however, in that the Method Request needs to be able to lock the resources needed to perform the task prior to doing so.
I haven't read the paper you mentioned, but I do have POSA volume 2 by Schmidt et al in front of me. Based on the description in it, your code definitely does (very cleanly) provide a way to implement the active object pattern.
In terms of my wrapper, the object wrapped needs to have 'hooks' for locking its resources, and the active<> wrapper class would need to have a policy class to perform the 'Scheduler' role, which knew enough to use the resource locking hooks.
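For concreteness, here is a minimal sketch of how a wrapper with a pluggable Scheduler policy might look. This is not the wrapper under discussion; the names (active<>, fifo_scheduler, enqueue, call) are invented for illustration, modern std::thread is used instead of the Boost.Threads interfaces, and the resource-locking hooks and return-value (future) handling are omitted.

#include <condition_variable>
#include <functional>
#include <mutex>
#include <queue>
#include <thread>

// Default scheduler policy: one servant thread, requests run in FIFO order.
class fifo_scheduler {
public:
    fifo_scheduler() : done_(false), worker_([this] { run(); }) {}
    ~fifo_scheduler() {
        enqueue([this] { done_ = true; });  // "poison" request: stop after it runs
        worker_.join();
    }
    void enqueue(std::function<void()> request) {
        {
            std::lock_guard<std::mutex> lock(m_);
            q_.push(std::move(request));
        }
        cv_.notify_one();
    }
private:
    void run() {
        while (!done_) {
            std::function<void()> request;
            {
                std::unique_lock<std::mutex> lock(m_);
                cv_.wait(lock, [this] { return !q_.empty(); });
                request = std::move(q_.front());
                q_.pop();
            }
            request();  // executed on the servant's single thread, in call order
        }
    }
    std::mutex m_;
    std::condition_variable cv_;
    std::queue<std::function<void()>> q_;
    bool done_;
    std::thread worker_;
};

// The wrapper: every call is turned into a queued request for the scheduler.
template <typename Servant, typename Scheduler = fifo_scheduler>
class active {
public:
    template <typename Method, typename... Args>
    void call(Method m, Args... args) {
        scheduler_.enqueue([this, m, args...] { (servant_.*m)(args...); });
    }
private:
    Servant servant_;      // declared first, so destroyed after the scheduler
    Scheduler scheduler_;
};

The point of making the Scheduler a policy is that a smarter (e.g. thread-pool) scheduler could be substituted without touching the wrapper or the servant's interface.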
I don't see a great deal of use for the thread-pool-based active object variant briefly described in the book, where the active object has a whole thread pool to itself and the object needs its own internal locking. This seems to me to be an implementation detail of the object itself - nothing to do with the active object model. Further, I have real difficulty imagining a circumstance in which I'd want all those threads sitting around dedicated to one AO (like I can talk...). However, it sounds like you are describing something different.

I can (if I try hard enough) imagine that I'd like to specify the level of concurrency allowed between methods of an object and have a scheduler clever enough to order the execution of those methods efficiently based on this (and probably on request priority as well) on a thread pool shared by multiple such AOs. A dynamic thread pool could then scale the number of threads used based on the actual observed/required concurrency.

However, I suspect that the above isn't going to fly, if only because it is likely to make the scheduler a bottleneck. Anyway, nobody seems to want to give me a box with enough processors and I/O to make this much fun - and I can't think of anything (useful) to run on it :-)

Is that anything like what you had in mind?

Does anyone actually re-use some of the more exotic variations of these patterns often enough that they consider a framework for implementing them to be anything more than a (fun?) exercise?

Regards,
Darryl.

On Fri, 20 Feb 2004 18:11:50 +1000, Darryl Green wrote
Does anyone actually re-use some of the more exotic variations of these patterns often enough that they consider a framework for implementing them to be anything more than a (fun?) exercise?
I can certainly imagine the need for multiple threads in an active object. Consider an object that does complex graphics rendering. That object might want to divide and conquer, taking advantage of multiple processors on a machine. But I think it isn't really very important -- if someone really needs it they can pick up the source and modify it :-) BTW, haven't had time to read the new version... Jeff

Darryl Green <Darryl.Green <at> unitab.com.au> writes:
Hi Matty,
Hi! Sorry, I didn't make the name-to-person connection before.
I don't see a great deal of use for the thread-pool-based active object variant briefly described in the book, where the active object has a whole thread pool to itself and the object needs its own internal locking. This seems to me to be an implementation detail of the object itself - nothing to do with the active object model. Further, I have real difficulty imagining a circumstance in which I'd want all those threads sitting around dedicated to one AO (like I can talk...). However, it sounds like you are describing something different.
Yes, I know what you're saying, since I'm also used to the client/server model where the concurrency is built-in, rather than bolted on. (Apology preemptively tendered to anyone who uses active objects and construes this as a denigration.) That said, I'm open to the idea that some problems are more easily solved by concealing the concurrency behind object boundaries. Further, if the object can perform tasks concurrently, and the scheduler is sufficiently sophisticated to use a thread group, then you do in fact have a server, the difference being that the communication is through in-process function calls rather than via IPC or network channels.
I can (if I try hard enough) imagine that I'd like to specify the level of concurrency allowed between methods of an object and have a scheduler clever enough to order the execution of those methods efficiently based on this (and probably on request priority as well) on a thread pool shared by multiple such AOs.
I don't think it's really a question of allowing a concurrency level between methods, but of permitting concurrency of execution while protecting the object's internal resources. I doubt that prioritisation can be used in the active object pattern, due to the access model of C++ object interactions; you can't invoke two methods on an object and have them executed in the opposite order to that in which you called them. Perhaps prioritisation of one client over another is useful, but that's not obvious to me.
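To illustrate that ordering property, a tiny hypothetical usage of the active<> sketch from earlier in the thread (logger is invented for the example, and the earlier sketch is assumed to be in scope):

#include <iostream>
#include <string>

// A class that "does X", written with no knowledge of threads.
class logger {
public:
    void write(std::string line) { std::cout << line << '\n'; }
};

int main() {
    active<logger> log;   // the servant gets its own thread via fifo_scheduler
    log.call(&logger::write, std::string("first"));
    log.call(&logger::write, std::string("second"));
    // Both requests run on the servant's thread; "first" is always printed
    // before "second", because requests execute in the order they were queued.
}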
A dynamic thread pool could then scale the number of threads used based on the actual observed/required concurrency.
However, I suspect that the above isn't going to fly, if only because it is likely to make the scheduler a bottleneck. Anyway, nobody seems to want to give me a box with enough processors and I/O to make this much fun - and I can't think of anything (useful) to run on it
Is that anything like what you had in mind?
No, I don't think so. If you take the characterisation I used earlier of a server with an in-process function call interface, then it doesn't matter at what scale you apply the pattern, and the scheduling doesn't need to be complex. I think it's quite generally applicable, although to use it you need to approach a problem with a particular viewpoint. Without using the active object pattern, you may think something like, "I'm going to need a service to handle requests of type 'X', and I'll make that a server using named pipes...", whereas with an active object wrapper, you might think, "I'll write a class that does 'X', and if it later needs to be used from multiple threads then I'll transform it into an active object to ensure thread safety...". And subsequently, you might think "The class that does 'X' is a bottle-neck; I had better add some mutexes to it and schedule it with a thread pool rather than have clients blocking on it..."
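A purely illustrative sketch of that last step - a Scheduler policy backed by a pool shared between several active objects. The names (shared_pool, pool_scheduler, post) are invented, the active<> sketch earlier would need to forward constructor arguments to the scheduler before this could actually be plugged in, and the servant must do its own internal locking, as discussed above.

#include <condition_variable>
#include <functional>
#include <mutex>
#include <queue>
#include <thread>
#include <vector>

// A fixed-size pool shared by many active objects (no dynamic resizing here).
class shared_pool {
public:
    explicit shared_pool(std::size_t n) {
        for (std::size_t i = 0; i < n; ++i)
            workers_.emplace_back([this] { run(); });
    }
    ~shared_pool() {
        {
            std::lock_guard<std::mutex> lock(m_);
            stopping_ = true;
        }
        cv_.notify_all();
        for (auto& w : workers_) w.join();
    }
    void post(std::function<void()> request) {
        {
            std::lock_guard<std::mutex> lock(m_);
            q_.push(std::move(request));
        }
        cv_.notify_one();
    }
private:
    void run() {
        for (;;) {
            std::function<void()> request;
            {
                std::unique_lock<std::mutex> lock(m_);
                cv_.wait(lock, [this] { return stopping_ || !q_.empty(); });
                if (q_.empty()) return;  // stopping and nothing left to do
                request = std::move(q_.front());
                q_.pop();
            }
            request();  // may run concurrently with other requests on the same servant
        }
    }
    std::mutex m_;
    std::condition_variable cv_;
    std::queue<std::function<void()>> q_;
    bool stopping_ = false;
    std::vector<std::thread> workers_;
};

// Drop-in replacement for fifo_scheduler: forwards requests to the shared pool.
class pool_scheduler {
public:
    explicit pool_scheduler(shared_pool& pool) : pool_(pool) {}
    void enqueue(std::function<void()> request) { pool_.post(std::move(request)); }
private:
    shared_pool& pool_;
};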
Does anyone actually re-use some of the more exotic variations of these patterns often enough that they consider a framework for implementing them to be anything more than a (fun?) exercise?
I don't know, but I think it's more a question of the approach taken, rather than the applicability of the concept. And, it is a fun exercise!
Regards Darryl.
Matt
participants (3)
- Darryl Green
- Jeff Garland
- Matthew Vogt