Greg Barron
Hi All,
I would like to refer you to this bug report: https://svn.boost.org/trac/boost/ticket/9203
I won't paste the full contents of the bug report into this message. The executive summary is that the algorithm asio::strand uses to allocate its implementation can, and does, hand the implementation of an existing strand to strands that are allocated later.
This can have the effect that handlers wrapped in strand (b) will actually be executed on strand (a).
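For illustration, a minimal sketch of the kind of program where this matters might look like the following. It is not taken from the ticket, and whether strands a and b actually end up sharing an implementation depends on how their addresses hash, so it illustrates the scenario rather than deterministically reproducing it:

#include <boost/asio.hpp>
#include <chrono>
#include <iostream>
#include <thread>

int main()
{
  boost::asio::io_service io;
  boost::asio::io_service::strand a(io);  // strand (a)
  boost::asio::io_service::strand b(io);  // strand (b)

  // A slow handler posted through strand (a).
  a.post([] {
    std::cout << "strand a handler starts\n";
    std::this_thread::sleep_for(std::chrono::seconds(2));
    std::cout << "strand a handler ends\n";
  });

  // A handler posted through strand (b). With two io_service threads and
  // distinct strand implementations it can run while the handler above is
  // still sleeping; if both strands were handed the same implementation,
  // it is serialized behind it instead.
  b.post([] { std::cout << "strand b handler runs\n"; });

  std::thread t([&io] { io.run(); });
  io.run();
  t.join();
  return 0;
}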
I consider this to be a bug, but if there is something I'm misunderstanding about the operation of strands, it would be great if you could help me clear it up.
If the consensus is that this is a bug, what are the chances of a fix being included in the 1.56 release? It is probably too late for 1.55.
I've implemented a workaround that could only be described as a hack, and I would be embarrassed to show it to anybody. But it gets me over my hump :).
Thanks!
Out of curiosity, have you tried #define BOOST_ASIO_ENABLE_SEQUENTIAL_STRAND_ALLOCATION? This allocates the strand implementations in a round-robin fashion rather than relying upon the hash function.

For posterity, the strand_service::construct implementation is as follows:

void strand_service::construct(strand_service::implementation_type& impl)
{
  boost::asio::detail::mutex::scoped_lock lock(mutex_);

  std::size_t salt = salt_++;
#if defined(BOOST_ASIO_ENABLE_SEQUENTIAL_STRAND_ALLOCATION)
  std::size_t index = salt;
#else // defined(BOOST_ASIO_ENABLE_SEQUENTIAL_STRAND_ALLOCATION)
  std::size_t index = reinterpret_cast<std::size_t>(&impl);
  index += (reinterpret_cast<std::size_t>(&impl) >> 3);
  index ^= salt + 0x9e3779b9 + (index << 6) + (index >> 2);
#endif // defined(BOOST_ASIO_ENABLE_SEQUENTIAL_STRAND_ALLOCATION)
  index = index % num_implementations;

  if (!implementations_[index].get())
    implementations_[index].reset(new strand_impl);
  impl = implementations_[index].get();
}

where salt_ is initialized to 0, and num_implementations is 193 by default.

The hashing function is pretty standard, using the golden-ratio-derived constant and the pointer hash that is also implemented the same way in boost.hash. The address of the actual strand is used as the initial index value, hashed via the common pointer hash, then combined with the salt. Effectively I believe this should be the same as using boost.hash on &impl and then hash_combine with the salt.

It seems that using the address of the strand as input may make it feasible to incidentally generate a collision, though I have not seen one myself. But of course, there will always be the possibility of two pointers that produce the same eventual hash. My personal use cases generally involve a large number of strands whose callbacks never block, so the occasional collision is of little consequence.

The alternative seems to be using BOOST_ASIO_ENABLE_SEQUENTIAL_STRAND_ALLOCATION with its caveats, or perhaps investigating alternative hash strategies that do not depend on the address or have better mixing properties. Even randomizing the initial salt_ might help, as the distribution of the indexes seems to improve as it moves away from zero, or updating salt_ in some way other than simply incrementing it.

As an aside, I have also investigated using lock-free and other concurrent data structures and patterns to reduce some locking within asio, and would be interested to see your approach.

-Adam D. Walling
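To make the boost.hash equivalence mentioned above concrete, here is a small standalone sketch (not part of asio or of this thread) that computes the slot index both ways. The salt value and the dummy object standing in for the strand implementation are arbitrary, and the match relies on boost::hash_combine using the same golden-ratio mixing as the inline code, which holds for Boost releases of this era; later releases changed the combiner:

#include <boost/functional/hash.hpp>
#include <cstddef>
#include <iostream>

int main()
{
  int dummy;             // hypothetical stand-in for a strand implementation object
  void* impl_address = &dummy;
  std::size_t salt = 7;  // arbitrary example value of salt_

  // The mixing as written in strand_service::construct.
  std::size_t index = reinterpret_cast<std::size_t>(impl_address);
  index += (reinterpret_cast<std::size_t>(impl_address) >> 3);
  index ^= salt + 0x9e3779b9 + (index << 6) + (index >> 2);

  // The same computation expressed with Boost.Hash primitives:
  // the pointer hash, then hash_combine with the salt.
  std::size_t seed = boost::hash<void*>()(impl_address);
  boost::hash_combine(seed, salt);

  // 193 == the default num_implementations.
  std::cout << "inline mixing:  " << index % 193 << "\n"
            << "via boost.hash: " << seed % 193 << "\n";
  return 0;
}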