bounded_blocking_queue, unbounded_blocking_queue

Hello,

I've been tasked to implement approximate C++ equivalents of

http://java.sun.com/j2se/1.5.0/docs/api/java/util/concurrent/ArrayBlockingQu...
http://java.sun.com/j2se/1.5.0/docs/api/java/util/concurrent/LinkedBlockingQ...
http://java.sun.com/j2se/1.5.0/docs/api/java/util/concurrent/ConcurrentHashM...

(and I'm generously allowed to use the Boost license.) What I have so far is (synopsis):

template<class V> class bounded_blocking_queue
{
public:

    // exclusive access

    explicit bounded_blocking_queue( size_t n );
    ~bounded_blocking_queue();

    // queries

    size_t size() const; // approximate queue size

    // non-blocking

    bool try_push( V const & v );
    bool try_pop( V & v );

    // blocking

    void push( V const & v );
    void pop( V & v );
};

with a "reference" two-lock implementation at

http://www.pdimov.com/cpp/bounded_blocking_queue.cpp

(with at least one race condition and at least two visibility bugs, but still...) and:

template<class V> class unbounded_blocking_queue
{
    explicit unbounded_blocking_queue();
    ~unbounded_blocking_queue();

    // non-blocking

    void push( V const & v );
    bool try_pop( V & v );

    // blocking

    void pop( V & v );
};

with a two-lock implementation at

http://www.pdimov.com/cpp/unbounded_blocking_queue.cpp

I appreciate any comments on the interface, the implementation, or whether it'd be a good idea to include these components in Boost.Threads. Thanks in advance for any feedback.

Incidentally, I want to join the "catch(...) in thread_proxy is EVIL" camp. While developing these queues the catch(...) clause ate all access violations, transforming them into different access violations in totally unrelated parts of the code, making debugging a pain. I've removed the catch(...) in my local copy, and I think that we need to do the same in the CVS.
Oh, and comments about the interface of the concurrent hash map are very much appreciated; my current line of thinking is that the only operation that is allowed to invalidate iterators should be rehash(), and it'd be the user's responsibility to call it during "quiet" periods when no other thread is holding an iterator or is inserting/removing elements. But if you have a better idea, please let me know!

--
Peter Dimov
http://www.pdimov.com

Peter Dimov wrote: [snip]
// blocking
void push( V const & v ); void pop( V & v ); };
There does not appear to be any mechanism to shut down an application which uses the blocking interface. Options that occur to me are:

1) destructor releases threads blocking on push / pop with:
   a) exception
   b) bool return
2) timeout parameter and bool return

I think I would go for 1a [snip]
Incidentally, I want to join the "catch(...) in thread_proxy is EVIL"
ditto

Keith Burton

Keith Burton wrote:
Peter Dimov wrote:
// blocking
void push( V const & v ); void pop( V & v ); };
There does not appear to be any mechanism to shut down an application which uses the blocking interface.
Options that occur to me are:

1) destructor releases threads blocking on push / pop with:
   a) exception
   b) bool return
2) timeout parameter and bool return

I think I would go for 1a
Yes... the lack of a general cancellation mechanism is a pain and I don't like the idea of every component providing its own (incompatible) version of that. But on the other hand this _is_ important in practice. I've decided to temporarily ignore it for now and work on the rest. Maybe boost::condition is the right place where the cancellation support needs to be implemented.

In one typical scenario that I have in mind, destructor unblocking doesn't buy much; the queue is at file scope and its destructor is being run on process exit, which will kill all blocked threads anyway. Awakening these will simply slow down the exit. But the option may be worth exploring.

You are probably right that the interface is missing timed_push and timed_pop in addition to try_push and try_pop.

"Peter Dimov" <pdimov@mmltd.net> writes:
I appreciate any comments on the interface
I have a related two-lock queue class I wrote a few years ago that uses a special set of nested handles to grant certain visibility and modification privileges to clients. These handles, called "front" or similar variations, are necessary to see values at the front or head of the queue. Creating one encapsulates blocking and waiting for at least one element to be present, holding of the head lock to prevent concurrent popping or mutation of the element, and optionally having the privilege to pop the current element and move to the next one before some other waiting client can.

The code uses ACE's mutex, condition, and lock ("ACE_Guard") classes, but the translation to a more boost-like set of primitives would be trivial. Find the code at

http://www.panix.com/~seh/temp/two_lock_queue.hh
http://www.panix.com/~seh/temp/error.hh
http://www.panix.com/~seh/temp/unconditional_pop.hh

I have mentioned some design rationale for this class before:

http://groups.google.com/group/comp.lang.c++.moderated/browse_frm/thread/1c8...

Note the following types:

o two_lock_queue::const_front
  Permits const access to the head element (via get())
o two_lock_queue::front
  Permits non-const access to the head element (via get())
  Permits popping the head element once (via release())
o two_lock_queue::renewable_front
  Permits non-const access to the head element (via get())
  Permits popping the head element (via pop())
  Permits advancing to the next element (via next())

Additionally, the queue exposes these member functions:

o pop_front
o push_back
o interrupt (optional)

I hope there are some ideas in there worth comparing or borrowing.

--
Steven E. Harris

Steven E. Harris wrote:
"Peter Dimov" <pdimov@mmltd.net> writes:
I appreciate any comments on the interface
I have a related two lock queue class I wrote a few years ago that uses a special set of nested handles to grant certain visibility and modification privileges to clients.
These handles, called "front" or similar variations, are necessary to see values at the front or head of the queue. Creating one encapsulates blocking and waiting for at least one element to be present, holding of the head lock to prevent concurrent popping or mutation of the element, and optionally having the privilege to pop the current element and move to the next one before some other waiting client can.
Thanks, I'll take a look at your rationale link. One argument against your enhancements is that (to the untrained eye) they seem to mandate a lock-based implementation. I wanted to permit a lock-free implementation, if possible.

"Peter Dimov" <pdimov@mmltd.net> writes:
One argument against your enhancements is that (to the untrained eye) they seem to mandate a lock-based implementation. I wanted to permit a lock-free implementation, if possible.
Guilty as charged. At the time I designed this class, I had never studied lock-free programming techniques.

The focus on these "front" handles came partly from thinking about std::stack's and std::queue's top()/front() and pop() separation. Just as exception safety requirements prevent these two operations from being merged, concurrency requirements prevent them from being separated -- without some kind of a lock. I haven't thought it through much, but I doubt the same idea (exclusive by-reference access to the queue head) could translate to a lock-free interface.

--
Steven E. Harris

Peter Dimov wrote:
Hello,
I've been tasked to implement approximate C++ equivalents of
http://java.sun.com/j2se/1.5.0/docs/api/java/util/concurrent/ArrayBlockingQu... http://java.sun.com/j2se/1.5.0/docs/api/java/util/concurrent/LinkedBlockingQ... http://java.sun.com/j2se/1.5.0/docs/api/java/util/concurrent/ConcurrentHashM...
I have produced a read/write mutex implementation as a side effect, available at http://www.pdimov.com/cpp/rw_mutex.cpp that is lock-free when there is no contention (i.e. when a reader attempts a lock and no writers are active, or when a writer attempts a lock and the lock is free.) It appears to work (famous last words.) If you see any subtle problems or can stress test it on a multiprocessor, please let me know.

"Peter Dimov" <pdimov@mmltd.net> writes:
I have produced a read/write mutex implementation as a side effect, available at
http://www.pdimov.com/cpp/rw_mutex.cpp
that is lock-free when there is no contention (i.e. when a reader attempts a lock and no writers are active, or when a writer attempts a lock and the lock is free.)
It appears to work (famous last words.) If you see any subtle problems or can stress test it on a multiprocessor, please let me know.
This implementation either starves writers or starves readers. Running your test code lets the writers all do some work, then the readers take over until they're all finished, then the writers alternate until they're all done.

Running my test harness against it (based on Howard's sample at http://home.twcny.rr.com/hinnant/cpp_extensions/rw_perf.html), I get lots of writer work, followed by a little reader work, then more writer work, then a little more reader work, such that all the writer threads finish considerably before the reader threads. This is a single-CPU machine.

Anthony
--
Anthony Williams
Software Developer
Just Software Solutions Ltd
http://www.justsoftwaresolutions.co.uk

Anthony Williams wrote:
"Peter Dimov" <pdimov@mmltd.net> writes:
I have produced a read/write mutex implementation as a side effect, available at
http://www.pdimov.com/cpp/rw_mutex.cpp
that is lock-free when there is no contention (i.e. when a reader attempts a lock and no writers are active, or when a writer attempts a lock and the lock is free.)
It appears to work (famous last words.) If you see any subtle problems or can stress test it on a multiprocessor, please let me know.
This implementation either starves writers or starves readers. Running your test code lets the writers all do some work, then the readers take over until they're all finished, then the writers alternate until they're all done.
Contended readers and all writers compete on the same mutex (mx_); I believe that this makes the implementation as fair as an ordinary mutex. If you run my test with all yields removed, you'll see that all threads finish more or less at the same time, approximately in the order in which they were started.

"Peter Dimov" <pdimov@mmltd.net> writes:
I have produced a read/write mutex implementation as a side effect, available at
http://www.pdimov.com/cpp/rw_mutex.cpp
that is lock-free when there is no contention (i.e. when a reader attempts a lock and no writers are active, or when a writer attempts a lock and the lock is free.)
wrlock() always locks not one, but *two* mutexes. I don't call that "lock free". It might be that the mutex implementation doesn't require an OS resource when there is no contention, but that's a separate matter.

Anthony
--
Anthony Williams
Software Developer
Just Software Solutions Ltd
http://www.justsoftwaresolutions.co.uk

Anthony Williams wrote:
"Peter Dimov" <pdimov@mmltd.net> writes:
I have produced a read/write mutex implementation as a side effect, available at
http://www.pdimov.com/cpp/rw_mutex.cpp
that is lock-free when there is no contention (i.e. when a reader attempts a lock and no writers are active, or when a writer attempts a lock and the lock is free.)
wrlock() always locks not one, but *two* mutexes. I don't call that "lock free".
Yes, you are right, of course. Only rdlock() is lock-free when no writers are active.
participants (4)

- Anthony Williams
- Keith Burton
- Peter Dimov
- Steven E. Harris