Greg Barron writes:

> Out of curiosity, have you tried #define
> BOOST_ASIO_ENABLE_SEQUENTIAL_STRAND_ALLOCATION? This allocates the strand
> implementations in a round-robin fashion rather than relying upon the hash
> function.
For posterity, the strand_service::construct implementation is as follows:
    void strand_service::construct(strand_service::implementation_type& impl)
    {
      boost::asio::detail::mutex::scoped_lock lock(mutex_);

      std::size_t salt = salt_++;
    #if defined(BOOST_ASIO_ENABLE_SEQUENTIAL_STRAND_ALLOCATION)
      std::size_t index = salt;
    #else // defined(BOOST_ASIO_ENABLE_SEQUENTIAL_STRAND_ALLOCATION)
      std::size_t index = reinterpret_cast<std::size_t>(&impl);
      index += (reinterpret_cast<std::size_t>(&impl) >> 3);
      index ^= salt + 0x9e3779b9 + (index << 6) + (index >> 2);
    #endif // defined(BOOST_ASIO_ENABLE_SEQUENTIAL_STRAND_ALLOCATION)
      index = index % num_implementations;

      if (!implementations_[index].get())
        implementations_[index].reset(new strand_impl);
      impl = implementations_[index].get();
    }
where salt_ is initialized to 0, and num_implementations is BOOST_ASIO_STRAND_IMPLEMENTATIONS, which is 193 by default.
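For reference, both of those knobs can be overridden at build time. A hypothetical configuration (1031 here is just an arbitrary illustrative prime, not a recommendation) would define the macros before any Asio header is included, or pass them as -D compiler flags:

    // Hypothetical build configuration; must appear before any Asio header.
    #define BOOST_ASIO_ENABLE_SEQUENTIAL_STRAND_ALLOCATION  // round-robin allocation
    #define BOOST_ASIO_STRAND_IMPLEMENTATIONS 1031           // override the default pool size
    #include <boost/asio.hpp>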
The hashing function is pretty standard, using the golden-ratio derived constant 0x9e3779b9 and the pointer hash that is implemented the same way in boost.hash. The address of the actual strand implementation is used as the initial index value, hashed via the common pointer hash, then combined with the salt. Effectively, I believe this should be the same as applying boost.hash to &impl and then hash_combine with the salt.
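To make that equivalence concrete, here is a stand-alone sketch; index_for is a made-up helper, not part of Asio, and it assumes boost::hash<void*> still hashes a pointer as x + (x >> 3):

    #include <boost/functional/hash.hpp>
    #include <cstddef>

    // Hypothetical helper: compute a strand implementation index from the
    // strand's address and the running salt, using boost.hash primitives.
    std::size_t index_for(void* impl_address, std::size_t salt,
        std::size_t num_implementations)
    {
      // boost::hash<void*> computes x + (x >> 3), matching the first two
      // lines of the non-sequential branch in strand_service::construct.
      std::size_t index = boost::hash<void*>()(impl_address);

      // hash_combine folds the salt in as
      //   index ^= salt + 0x9e3779b9 + (index << 6) + (index >> 2);
      // which is the same mixing step the strand service performs.
      boost::hash_combine(index, salt);

      return index % num_implementations;
    }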
It seems that using the address of the strand as input may make it feasible to incidentally generate a collision, though I have not seen it happen in practice. Of course, there will always be the possibility of two pointers that end up hashing to the same index.
My personal use cases generally involve a large number of strands whose callbacks never block, so the occasional collision is of little consequence.
The alternative seems to be using BOOST_ASIO_ENABLE_SEQUENTIAL_STRAND_ALLOCATION with its caveats, or perhaps investigating alternative hash strategies that do not depend on the address or that have better mixing properties. Even randomizing the initial salt_ might help, as the distribution of the indexes seems to improve as the salt moves away from zero; or salt_ could be updated in some way other than simply incrementing it.
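Purely as an illustration of those last two ideas (these are not Asio functions; random_initial_salt and mix_index are made-up names, and the mixer is the well-known splitmix64 finalizer rather than anything Asio uses), a variant might look like:

    #include <cstddef>
    #include <cstdint>
    #include <random>

    // Hypothetical: seed the salt randomly instead of starting at zero.
    std::size_t random_initial_salt()
    {
      std::random_device rd;
      return static_cast<std::size_t>(rd());
    }

    // Hypothetical: mix the address and salt with the splitmix64 finalizer
    // instead of the shift/xor combine used by strand_service::construct.
    std::size_t mix_index(const void* impl_address, std::uint64_t salt,
        std::size_t num_implementations)
    {
      std::uint64_t x = reinterpret_cast<std::uintptr_t>(impl_address) + salt;
      x ^= x >> 30; x *= 0xbf58476d1ce4e5b9ULL;
      x ^= x >> 27; x *= 0x94d049bb133111ebULL;
      x ^= x >> 31;
      return static_cast<std::size_t>(x % num_implementations);
    }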
As an aside, I have also investigated using lock-free and other concurrent data structures and patterns to reduce some locking within asio, and would be interested to see your approach.
-Adam D. Walling
Adam,

Would you be able to elaborate on the caveats of using BOOST_ASIO_ENABLE_SEQUENTIAL_STRAND_ALLOCATION? My application has recently been encountering collisions with the default hash allocation scheme, resulting in poor concurrency. It seems that sequential allocation would eliminate the collisions, since the application uses fewer strands than are available per BOOST_ASIO_STRAND_IMPLEMENTATIONS.

However, I'm unclear on why hashing is the default to begin with; it seems that sequential allocation would always be better, since it gives an even distribution. What are the advantages and disadvantages of the two allocation schemes?

Thanks,
Chris