
hi matthew,
If Boost.Lockfree is accepted, it won't be merged into trunk before Boost.Atomic is accepted.
That's good to hear. So, um, when is Boost.Atomic going to be accepted? :)
boost.atomic should be treated on its own. personally i would encourage a fast-track review because it is an implementation of a c++0x feature, so there is no real need to evaluate the interface, for example ;)
- size()/empty(): These should be available and not just "for debugging." I'm aware that using these doesn't make a lot of sense in multithreaded programming (i.e. just because the queue is empty doesn't mean the producer is done), but the std adaptors have them so I think the lockfree versions should as well. A size() can be quite useful for displaying an (estimated) asynchronous counter, especially for the case of a filled-up-front fifo with multiple consumers. The front()/back() functions are understandably omitted though.
afaict, empty() cannot correctly be implemented for the fifo (unless using an atomic counter). size() would require an atomic integer (read: one more atomic operation, which will probably be rather expensive) and cannot be implemented accurately (one can only give a lower or upper bound).
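for illustration, an approximate size() would probably look something like this (just a sketch of one possible approach on top of the boost.atomic candidate, not what the library does):

    #include <boost/atomic.hpp>
    #include <cstddef>

    // sketch only: a counter bolted onto the queue, updated by the queue
    // operations themselves. every enqueue/dequeue pays one extra atomic
    // read-modify-write, and the value observed by size() may already be
    // stale, so it is only an estimate (a lower or upper bound).
    class approximate_size_sketch
    {
        boost::atomic<std::size_t> count_;

    public:
        approximate_size_sketch(void) : count_(0) {}

        void notify_enqueue(void) { count_.fetch_add(1); }
        void notify_dequeue(void) { count_.fetch_sub(1); }

        std::size_t size(void) const  { return count_.load(); }
        bool empty(void) const        { return size() == 0; }
    };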
- range constructors: it's often convenient to construct a container from another container; since the lockfree fifo is a real container and not just an adaptor, it should provide the standard container constructors.
here again i hesitate: stack and fifo use an internal freelist, which can be configured with a policy to be constant-sized (which is required for real-time systems where you cannot allocate memory). range constructors would therefore also require an additional argument to specify the number of nodes of the free-list.
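as an illustration, such a constructor would probably need a shape like this (hypothetical signature, not part of the current interface):

    // hypothetical: the extra argument sizes the internal (fixed-size) freelist,
    // since the number of nodes cannot be deduced from an input iterator range
    // without traversing it
    template <typename InputIterator>
    fifo(InputIterator begin, InputIterator end, std::size_t freelist_node_count);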
The only other things that bother me are the class template specializations. The documentation had me thinking for several minutes that the fifo.hpp reference was duplicated. It wasn't until I read the description that I understood the specialization. Which got me to thinking about a way to eliminate that redundancy: instead of a class template specialization, can dequeue's parameter be a template that must be constructible from T? That way you could have:

    fifo<Foo*> q;
    q.enqueue(new Foo);
    q.enqueue(new Foo);

    Foo* rawPtr;
    q.dequeue(rawPtr);

    shared_ptr<Foo> sharedPtr;
    q.dequeue(sharedPtr);

This would also support different pointer types, e.g. tr1::shared_ptr or std::unique_ptr or even some UDT.
this is an interesting idea! however it cannot be used to dequeue to a scoped_ptr. although i must admit that dequeueing to a scoped_ptr is debatable ...
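for reference, a minimal sketch of what such a member template could look like (dequeue_raw() is a made-up placeholder for whatever internal operation pops a raw pointer, not the actual interface):

    // hypothetical member template on fifo<T*>
    template <typename Handle>
    bool dequeue(Handle & ret)
    {
        T * node = 0;
        if (!dequeue_raw(node))
            return false;
        ret = Handle(node);   // works for raw pointers, shared_ptr, unique_ptr
        return true;          // fails to compile for scoped_ptr (not assignable)
    }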
- What is your evaluation of the documentation?
i won't comment on all of your suggestions: i simply agree with most of them (i am not a native speaker and the docs haven't been proofread, as i rewrote them after the boostcon pre-review)
What is the difference between guard and lock? Since this is lockfree and not guardfree, the docs could just have "lock" :)
yes and no: lock-free is not the same as non-blocking (according to the terminology of herlihy, which i am trying to follow). `obstruction-free' data structures would not be lock-free, but would not use locks, either.
Why is the is_lock_free() function needed? If it is needed, why can't it be static? What is a real-world use case for it? And doesn't the caching_freelist_t type count as lockfree even though it may lock for memory allocations?
atomic operations are not necessarily lock-free. boost.lockfree requires double-word compare-and-swap, which is not available on all architectures; ppc, for example, does not provide it. c++0x atomics define atomic<>::is_lock_free as a non-static member function (which is reasonable, because on some architectures a specific alignment is required, so atomic<> instances would be lock-free or not depending on their memory address). targeting c++0x, it cannot be static.

even worse: to give a fully correct answer, we would have to check every internal node, which is not possible if the internal nodes are allocated dynamically. and it is true that the caching_freelist_t may lock during memory allocations. so maybe, instead of a boolean, i shall return an enum with the states `lockfree', `blocking_on_allocation' and `blocking_on_atomic_operation' ...
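in code, that could look roughly like this (illustrative only, not the current interface):

    // hypothetical replacement for the boolean is_lock_free()
    enum lockfree_state
    {
        lockfree,                       // no operation ever blocks
        blocking_on_allocation,         // may block when the freelist has to allocate
        blocking_on_atomic_operation    // atomics are emulated, e.g. via a spinlock
    };

    lockfree_state is_lock_free(void) const;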
In reference.html, why are all three headers linked but only fifo.hpp is displayed? The hyperlinks are screwed up. When I click on fifo.hpp I want to go to the full page, not the class declaration. For that matter, what is the purpose of having a page just for the class declaration and its specialization? If you want to show class declarations, show them all on one page (with links to the full pages).
well, this is what quickbook/doxygen generates by default. i'd prefer to list all headers on reference.html, but i'd need some advice on how to achieve this.
What does it mean for a fifo node to be "lockfree"?
Why is it impossible for C++0x atomics to provide "a completely accurate implementation"? I think this was already asked and answered but it should be in the docs. And it's especially strange given your leading sentence "Boost.Lockfree depends on C++0x atomics". I'm probably missing something.
as stated above: is_lock_free is a per-instance member function of atomic<>. i am not sure whether i should really describe the reasons in detail, because that would go quite deep into the implementation details.
Why is dequeue_unsafe unsafe if the platform supports atomics?
the _unsafe operations are unsafe because they do not use atomics. the reason for this is performance: atomics are slow.
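a typical use would be a phase where the fifo is known not to be accessed concurrently, e.g. draining it after all worker threads have been joined (assuming here that dequeue_unsafe mirrors the signature of dequeue):

    // assumption: all producer/consumer threads have been joined, so the
    // fifo is touched by one thread only and the cheaper variant is fine
    Foo * p;
    while (q.dequeue_unsafe(p))
        delete p;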
What is the ringbuffer specialization for? Why have both the template-fixed-size and constructor-fixed-size versions?
i want to provide the same data structure both for specifying the size at compile-time and at run-time. the main reason for the compile-time version is that, with a power-of-two size, the index arithmetic can use bitmasks for performance.
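for illustration (a generic sketch, not the library's actual code):

    #include <cstddef>

    // capacity known at compile time and a power of two: wrap-around is one AND
    inline std::size_t next_index_pow2(std::size_t index, std::size_t capacity)
    {
        return (index + 1) & (capacity - 1);
    }

    // arbitrary run-time capacity: wrap-around needs a modulo (or a branch)
    inline std::size_t next_index_generic(std::size_t index, std::size_t capacity)
    {
        return (index + 1) % capacity;
    }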
The array methods for ringbuffer need examples. I actually have no idea how to use them by looking at the method signatures.
ok, will try to add
How is the range enqueue method non-blocking?
it does not enqueue all objects, but only tries to enqueue the first N objects. i will improve the docs on this point.

thanks for your review, i will try to adapt the docs according to your suggestions.

cheers,
tim