
I need assistance on how to properly use `boost::shared_mutex`. Suppose I have this routine, which I want to make thread-safe:

```cpp
class FunctionEvaluationCache {  // Assume singleton
    // Cache of results of Function::evaluate()
    std::unordered_map<Function, std::map<Input, int>> cache;
};

class Function {  // Copyable
    bool operator==(const Function& fn2);  // All Functions that compare equal share a cache entry

    int evaluate(const Input& in) {
        // Expensive operation, hence the cache
        if (cache.count(*this) == 0 || cache[*this].count(in) == 0) {
            // Not in cache: do the expensive operation,
            // then add the result to the cache
            int result = /* expensive operation */;
            cache[*this][in] = result;
        }
        return cache[*this][in];
    }
};
```

Clearly, the big obstacle to a thread-safe version of `Function::evaluate()` is the cache. I believe I will need a `boost::shared_mutex` for the cache as a whole, as well as one for each cache entry. What I'm not sure about is how to use `boost::shared_mutex`: the straightforward approach would be to expose the `boost::shared_mutex` instances publicly and modify `evaluate()` to acquire the proper locks, while the other line of thinking is to not expose the `boost::shared_mutex` instances at all and instead provide a set of methods that `evaluate()` can call (sketched below). But what I would really like is to not modify `evaluate()` at all. Is that possible?
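
To make the question concrete, here is a rough, untested sketch of what I mean by the second option: the cache hides its `boost::shared_mutex` behind its own methods. The names `lookup()` and `store()` are placeholders I made up; the sketch assumes the `Function` and `Input` types above plus a `std::hash<Function>` specialisation for the `unordered_map`, and for simplicity it uses a single mutex for the whole cache rather than one per entry.

```cpp
#include <map>
#include <unordered_map>
#include <boost/thread/shared_mutex.hpp>
#include <boost/thread/locks.hpp>

class FunctionEvaluationCache {  // Assume singleton, as above
public:
    // Reader path: take a shared (read) lock, so many threads can
    // query the cache concurrently. Returns true and fills `out`
    // on a cache hit, false on a miss.
    bool lookup(const Function& fn, const Input& in, int& out) {
        boost::shared_lock<boost::shared_mutex> read_lock(mutex_);
        auto fn_it = cache.find(fn);
        if (fn_it == cache.end()) return false;
        auto in_it = fn_it->second.find(in);
        if (in_it == fn_it->second.end()) return false;
        out = in_it->second;
        return true;
    }

    // Writer path: take an exclusive (write) lock before inserting.
    void store(const Function& fn, const Input& in, int result) {
        boost::unique_lock<boost::shared_mutex> write_lock(mutex_);
        cache[fn][in] = result;
    }

private:
    boost::shared_mutex mutex_;  // one mutex guarding the whole cache
    std::unordered_map<Function, std::map<Input, int>> cache;
};
```

With this design, `evaluate()` would call `lookup()` first and, on a miss, do the expensive work and then call `store()`, which is exactly the kind of change to `evaluate()` I'd prefer to avoid.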