
Tim Blechmann <tim@klingt.org> writes:
Firstly, I'd like to help with this. I have a couple of lock-free queue implementations lying around from my book. I've attached a sample implementation that uses atomic reference-counting --- it would need to be "boostified", as it's written for C++0x, but that should be relatively straightforward. I don't make any claims for performance, but I believe it to be "correct", with no data races or undefined behaviour --- I *really* want to know if that's not the case, so I can correct it before the book goes to press.
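For reference, here is a minimal sketch of the split reference-counting idea. It uses a stack rather than the attached queue to keep it short, the names are illustrative rather than taken from the attachment, and whether std::atomic<counted_node_ptr> is itself lock-free depends on the platform providing a double-word compare-and-swap:

#include <atomic>
#include <memory>

template <typename T>
class ref_counted_stack
{
    struct node;
    struct counted_node_ptr
    {
        int external_count;   // references handed out while reachable from head
        node* ptr;
    };
    struct node
    {
        std::shared_ptr<T> data;
        std::atomic<int> internal_count;   // references returned after use
        counted_node_ptr next;
        explicit node(T const& value)
            : data(std::make_shared<T>(value)), internal_count(0)
        {}
    };
    std::atomic<counted_node_ptr> head;

    // Bump the external count so the head node cannot be freed while we use it.
    void increase_head_count(counted_node_ptr& old_counter)
    {
        counted_node_ptr new_counter;
        do
        {
            new_counter = old_counter;
            ++new_counter.external_count;
        } while (!head.compare_exchange_strong(old_counter, new_counter));
        old_counter.external_count = new_counter.external_count;
    }

public:
    ref_counted_stack()
    {
        counted_node_ptr empty = { 0, nullptr };
        head.store(empty);
    }

    void push(T const& value)
    {
        counted_node_ptr new_node = { 1, new node(value) };
        new_node.ptr->next = head.load();
        while (!head.compare_exchange_weak(new_node.ptr->next, new_node))
            ;
    }

    std::shared_ptr<T> pop()
    {
        counted_node_ptr old_head = head.load();
        for (;;)
        {
            increase_head_count(old_head);
            node* const ptr = old_head.ptr;
            if (!ptr)
                return std::shared_ptr<T>();
            if (head.compare_exchange_strong(old_head, ptr->next))
            {
                std::shared_ptr<T> res;
                res.swap(ptr->data);
                // We unlinked the node: fold the remaining external references
                // (minus the list's reference and our own) into the internal
                // count; whoever drops the count to zero deletes the node.
                int const count_increase = old_head.external_count - 2;
                if (ptr->internal_count.fetch_add(count_increase) == -count_increase)
                    delete ptr;
                return res;
            }
            else if (ptr->internal_count.fetch_sub(1) == 1)
            {
                delete ptr;
            }
        }
    }
};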
while the implementation of your queue looks good to me, i am a bit concerned about the use of dynamic memory allocation ... it is `lockfree' in terms of the data structure implementation, but since it allocates memory from the os (for both node and T), its behavior depends on the new/delete implementation.
Yes, with a poor allocator there will be a performance hit, and potentially locking inside the allocator. In the first instance I would just replace the global allocator with a proper MT allocator such as Hoard, and I hope that C++0x compilers will ship with such an allocator anyway. If allocator performance is still an issue, it can easily be replaced with a custom allocator designed for the specific use case.
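For instance, the queue could be parameterised on an allocator and rebind it for its internal node type, so that std::allocator can be swapped for Hoard or a pool allocator without touching the queue logic. This is only a sketch with placeholder names (lock_free_queue, node), not the attached code:

#include <memory>

template <typename T, typename Alloc = std::allocator<T> >
class lock_free_queue
{
    struct node
    {
        // data, atomic next pointer, ... as in the real implementation
    };

    typedef typename std::allocator_traits<Alloc>::
        template rebind_alloc<node> node_allocator;
    typedef std::allocator_traits<node_allocator> node_traits;

    node_allocator alloc;

    node* allocate_node()
    {
        node* n = node_traits::allocate(alloc, 1);
        node_traits::construct(alloc, n);
        return n;
    }

    void free_node(node* n)
    {
        node_traits::destroy(alloc, n);
        node_traits::deallocate(alloc, n, 1);
    }

    // push/pop would call allocate_node()/free_node() instead of new/delete
};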
in my implementation pop is lockfree in any case, while push is lockfree as long as the number of nodes does not exceed the preallocated node count ... while this may be negligible in the average case, it is quite significant for the worst case ...
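For reference, a rough sketch of that scheme: nodes live in a preallocated pool and are recycled through a lock-free free list, so returning a node never touches the heap, while taking a node falls back to ::operator new once the pool is exhausted. It is index-based purely to keep the ABA tagging simple, and it is a sketch rather than the actual implementation:

#include <atomic>
#include <cstdint>
#include <functional>
#include <new>

template <typename Node, std::uint32_t PoolSize = 1024>
class node_pool
{
    static const std::uint32_t null_index = PoolSize;

    // head packs {index, tag} into one 64-bit word so a single CAS updates
    // both; the tag is bumped on every take to sidestep the ABA problem
    // (modulo 32-bit wrap-around, which a production version would consider).
    std::atomic<std::uint64_t> head_;
    std::atomic<std::uint32_t> next_[PoolSize];        // per-node free-list links
    alignas(Node) unsigned char storage_[PoolSize][sizeof(Node)];

    static std::uint64_t pack(std::uint32_t index, std::uint32_t tag)
    {
        return (std::uint64_t(tag) << 32) | index;
    }

public:
    node_pool()
    {
        for (std::uint32_t i = 0; i != PoolSize; ++i)
            next_[i].store(i + 1 == PoolSize ? null_index : i + 1);
        head_.store(pack(0, 0));
    }

    void* take()                       // queue::push gets its node storage here
    {
        std::uint64_t old_head = head_.load();
        for (;;)
        {
            std::uint32_t const index = std::uint32_t(old_head);
            std::uint32_t const tag   = std::uint32_t(old_head >> 32);
            if (index == null_index)
                return ::operator new(sizeof(Node));   // pool exhausted: may lock
            std::uint64_t const new_head = pack(next_[index].load(), tag + 1);
            if (head_.compare_exchange_weak(old_head, new_head))
                return &storage_[index][0];
        }
    }

    void give_back(void* p)            // queue::pop returns nodes here, never allocates
    {
        unsigned char* const c     = static_cast<unsigned char*>(p);
        unsigned char* const begin = &storage_[0][0];
        unsigned char* const end   = begin + sizeof(storage_);
        if (std::less<unsigned char*>()(c, begin) || !std::less<unsigned char*>()(c, end))
        {
            ::operator delete(p);                      // overflow node from the heap
            return;
        }
        std::uint32_t const index = std::uint32_t((c - begin) / sizeof(Node));
        std::uint64_t old_head = head_.load();
        for (;;)
        {
            next_[index].store(std::uint32_t(old_head));   // link to the current head
            std::uint64_t const new_head = pack(index, std::uint32_t(old_head >> 32));
            if (head_.compare_exchange_weak(old_head, new_head))
                return;
        }
    }
};

On a 64-bit platform a single 64-bit CAS is enough here; with raw pointers instead of indices the same idea would need a double-word CAS or pointer tagging.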
Yes. You could amortize the cost by allocating chunks of nodes rather than single nodes, but then you need to track the chunks...

Anthony

--
Anthony Williams
Author of C++ Concurrency in Action | http://www.manning.com/williams
Custom Software Development | http://www.justsoftwaresolutions.co.uk
Just Software Solutions Ltd, Registered in England, Company Number 5478976.
Registered Office: 15 Carrallack Mews, St Just, Cornwall, TR19 7UL, UK