
hi julien, thanks for your review. a few comments!
*- What is your evaluation of the design?*
Usage of boost.lockfree is simple and elegant. I'd like to see some names changed; if I may express my vote, I'd say circular_buffer, queue and stack in the boost::lockfree namespace. I was thinking about adding an unsafe "view" of the structures that would allow normal container operations, while the main interface would only have "safe" functions. I support the proposition to make boost::lockfree work with boost::interprocess.
after some earlier discussion, i think the best names are queue, stack and spsc_queue (to make the single-producer/single-consumer restriction as explicit as possible). also, enqueue/dequeue will be renamed to push/pop so that all data structures share the same interface (std::queue also uses the names push/pop, so it is probably the best way to go).
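to illustrate, a rough sketch of the unified interface after the renaming. the class and header names here are the proposed ones, not what ships in the reviewed version, so treat this as an assumption rather than the final api:

    #include <boost/lockfree/queue.hpp>       // proposed names, not yet in the reviewed code
    #include <boost/lockfree/stack.hpp>
    #include <boost/lockfree/spsc_queue.hpp>

    int main()
    {
        boost::lockfree::queue<int>      q(128);  // multi-producer/multi-consumer queue
        boost::lockfree::stack<int>      s(128);  // multi-producer/multi-consumer stack
        boost::lockfree::spsc_queue<int> r(128);  // single-producer/single-consumer ringbuffer

        int out;
        q.push(1); q.pop(out);   // same push/pop interface everywhere,
        s.push(2); s.pop(out);   // operations return bool to signal success/failure
        r.push(3); r.pop(out);
    }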
"fifo" will be more useful in real-time software because it's performance is not as scalable as I was expecting: on a 4 core CPU it took more time to deal with the same amount of data with 4 threads than with 2 (half producers half consumers).
the reason for this is that different threads interfere with each other. i know of this problem, but i am not sure whether it is a real problem for anything other than benchmarks. a benchmark stresses the data structures in a rather unnatural way: in a real application one probably won't pipe millions of integers through a queue as fast as possible, but the producer threads will spend a lot of time generating the items and the consumer threads a lot of time processing them ;)
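for illustration, a minimal sketch of the kind of workload i have in mind, using the renamed interface from above. the work functions and the thread setup are placeholders/assumptions for the example, not part of the library:

    #include <boost/atomic.hpp>
    #include <boost/lockfree/queue.hpp>
    #include <boost/thread/thread.hpp>

    // placeholder "work" -- stands in for the real, time-consuming application logic
    int  generate_item(int i)   { return i * i; }
    void process_item(int item) { (void)item; }

    boost::lockfree::queue<int> work_queue(1024);
    boost::atomic<bool>         done(false);

    void producer()
    {
        for (int i = 0; i != 10000; ++i) {
            int item = generate_item(i);       // most of the time is spent here ...
            while (!work_queue.push(item))     // ... so pushes rarely collide
                boost::this_thread::yield();
        }
    }

    void consumer()
    {
        int item;
        while (!done) {
            while (work_queue.pop(item))
                process_item(item);            // ... and here
            boost::this_thread::yield();
        }
        while (work_queue.pop(item))           // drain what is left
            process_item(item);
    }

    int main()
    {
        boost::thread_group producers, consumers;
        for (int i = 0; i != 2; ++i) {
            producers.create_thread(&producer);
            consumers.create_thread(&consumer);
        }
        producers.join_all();
        done = true;
        consumers.join_all();
    }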
It's not verbose enough. There's a great deal of work in the code, but as has been stated many times, this is a delicate topic and more help is always welcome.
*- What is your evaluation of the potential usefulness of the library?*
I'm not developing real-time solutions myself, so I can't fully judge. On the other hand, a wait-free buffer is quite likely to be a very welcome solution for many non-real-time, high-performance applications (starting with IPC and boost::interprocess ;).
i've spent some time analysing whether using it via boost.interprocess is reasonable/feasible. the answer is: it depends on the CPU and the implementation of atomic<>. if the CPU is supported by boost.atomic, it is no problem. if the standard library provides atomic<>, it shouldn't be a problem either. however, if boost.atomic has to emulate the atomic operations, it will use a pool of spinlocks (the same spinlock pool that is used for the smart_ptr library). afaict these spinlocks won't be shared among the different processes ... unfortunately there is no way to catch this at compile-time :/

cheers, tim
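p.s. a minimal sketch of the kind of run-time check i mean, using std::atomic from c++11 purely for illustration -- the atomic types boost.lockfree actually uses internally are an implementation detail:

    #include <atomic>
    #include <cassert>
    #include <cstddef>

    int main()
    {
        std::atomic<std::size_t> probe(0);
        // if this assertion fails, atomic operations are emulated (e.g. via a
        // process-local spinlock pool) and the data structure must not be
        // placed in shared memory
        assert(probe.is_lock_free());
    }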