Kevin Wheatley wrote:
- multi threaded containers
like a generic work-item queue that buffers between threads and supports several size policies: fixed size (blocks when full), growable but bounded, and 'unlimited'. This would enable the pipeline model.
Yes. In my current project I'm using a queue like this:
template<typename T>
class queue
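The fragment above stops at the class head. A minimal sketch of the fixed-size policy mentioned earlier (producers block when the queue is full, consumers when it is empty) might look like the following, assuming C++11; the names `bounded_queue`, `push`, and `pop` are illustrative, not from the original post:

```cpp
#include <condition_variable>
#include <cstddef>
#include <deque>
#include <mutex>

// Sketch of a fixed-capacity work-item queue: push() blocks while
// the queue is full, pop() blocks while it is empty.
template <typename T>
class bounded_queue {
public:
    explicit bounded_queue(std::size_t capacity) : capacity_(capacity) {}

    void push(T item) {
        std::unique_lock<std::mutex> lock(mutex_);
        not_full_.wait(lock, [this] { return items_.size() < capacity_; });
        items_.push_back(std::move(item));
        not_empty_.notify_one();
    }

    T pop() {
        std::unique_lock<std::mutex> lock(mutex_);
        not_empty_.wait(lock, [this] { return !items_.empty(); });
        T item = std::move(items_.front());
        items_.pop_front();
        not_full_.notify_one();
        return item;
    }

private:
    std::size_t capacity_;
    std::deque<T> items_;
    std::mutex mutex_;
    std::condition_variable not_empty_, not_full_;
};
```

The growable-but-bounded policy would only change the predicate in `push()`; 'unlimited' drops the `not_full_` wait entirely.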
- locking_ptr
Isn't this what shared_ptr already does... I'd like to see a shared_ptr without locking :-)
I was thinking of a pointer that locks on dereference (maybe the name wasn't a good choice). It can be used to synchronize access to objects:

    foo f;
    synchronize_ptr<foo> p(&f, monitor);
    p->do_something(); // synchronized call to the method

Of course this isn't very efficient most of the time:

    p->do1();
    p->do2();
    p->do3();

This would lock/unlock three times, although a single lock/unlock around all three calls would have been sufficient. In this case it would be nice to have a lock mechanism similar to boost::weak_ptr, so that we avoid the repeated locks/unlocks:

    synchronize_ptr<T> p(&f); // synchronized access to f
    {
        lock_ptr<T> l = p.lock(); // blocks
        // we have exclusive access now
        l->do1();
        l->do2();
        l->do3();
    } // l goes out of scope -> release lock

p itself is not dereferenceable (just as boost::weak_ptr is not). Does this make any sense?
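One possible shape for these two classes, as a sketch rather than a definitive implementation (`std::mutex` stands in for the monitor, and all names are illustrative): `lock_ptr` holds the lock for its whole lifetime, while `synchronize_ptr::operator->` returns a temporary `lock_ptr`, so a single call is locked for exactly the duration of that call.

```cpp
#include <mutex>

// Holds the lock for its lifetime; several calls through the same
// lock_ptr share one lock/unlock pair.
template <typename T>
class lock_ptr {
public:
    lock_ptr(T* object, std::mutex& monitor)
        : object_(object), guard_(monitor) {}  // acquires the lock
    T* operator->() const { return object_; }  // already locked
private:
    T* object_;
    std::unique_lock<std::mutex> guard_;       // released in destructor
};

// Locks on dereference: operator-> returns a temporary lock_ptr,
// which lives until the end of the full expression, so the call
// runs under the lock and the lock is released right after it.
template <typename T>
class synchronize_ptr {
public:
    synchronize_ptr(T* object, std::mutex& monitor)
        : object_(object), monitor_(&monitor) {}

    lock_ptr<T> operator->() const { return lock_ptr<T>(object_, *monitor_); }

    // Explicit scoped lock, as in the weak_ptr-style example above.
    lock_ptr<T> lock() const { return lock_ptr<T>(object_, *monitor_); }

private:
    T* object_;
    std::mutex* monitor_;
};
```

This relies on the standard operator-> chaining rule: when `operator->` returns a class object, the compiler keeps applying `operator->` until it reaches a raw pointer.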
I'd maybe like something layered on the thread-group concept that handles event notifications for the group of threads, tied in with the work-queue idea. This should work generically, so that you would not need to know whether a single thread or a collection of them is available to do the work (Boss-Worker model).
It may be that you wish to share thread groups according to the type of work done, e.g. separating I/O threads from CPU-bound threads by putting them in different pools. Under the pipeline model you could then have a separate queue for each stage, which signals the appropriate pool of threads that there is work to do.
This is an off-the-top-of-my-head formulation, which is never a good idea in parallel programming... so it may need refinement :-)
Interesting. A kind of dispatcher that delegates queued work items to different worker pools. This should be easy to implement once a solid thread_pool implementation is available.

Sascha