
Batov, Vladimir wrote:
[Christopher Currie]
When in need of extreme memory efficiency and speed: I already have, from prior use, a scoped_try_lock 'l' around my mutex, currently unlocked. I need to lock my mutex in a blocking fashion. I do not wish to pay the memory and time cost of creating an instance of scoped_lock just to do this. I therefore call 'l.lock()' and not 'l.try_lock()'.
[Batov, Vladimir] I am sorry but I am far from being convinced that "extreme memory efficiency" can be achieved by saving on one lock.
For one lock, called once, no; for one lock created and destroyed repeatedly in a function that gets called thousands of times, possibly. I'll be the first to admit that I don't have any empirical evidence to support the scenario, though; I simply tossed it out there as an example of why one might wish to have the operation supported. I work on a project where new designs are being considered that eliminate mutexes entirely, because the performance penalty of locking them is too great.
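To make that scenario concrete, here's a rough sketch. The names (pending, drain_pending, the shared deque) are made up for illustration, and it assumes a scoped_try_lock can be constructed unlocked and then offers try_lock()/lock()/unlock(), as described above:

#include <boost/thread/mutex.hpp>
#include <deque>

boost::try_mutex mx;             // protects 'pending'
std::deque<int> pending;         // hypothetical shared work queue

void drain_pending()             // may be called thousands of times
{
    // One lock object, constructed unlocked and reused across iterations,
    // rather than constructing and destroying a scoped_lock on every pass.
    boost::try_mutex::scoped_try_lock l(mx, false);

    for (;;)
    {
        if (!l.try_lock())       // opportunistic attempt first...
            l.lock();            // ...then block on the *same* object -- the
                                 // l.lock() call under discussion
        if (pending.empty()) { l.unlock(); break; }
        int work = pending.front();
        pending.pop_front();
        l.unlock();

        (void)work;              // 'work' would be processed outside the lock
    }
}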
Locks are very transient creatures and are created on the stack. Commonly the stack is pre-allocated by the OS regardless of whether you use it or not. Consequently, even when you do not create another object on the stack, that memory is not available to you for anything else anyway.
Does this hold true in embedded environments, where memory is at a premium?
... Twisting design for the sake of perceived performance/memory gains is often (I'd say always) a very bad idea.
I'd contest the assertion that the design is (or is being) "twisted"; different design decisions were simply made to serve differently perceived goals. There's nothing wrong with designing with performance as a goal.
... For example, general memory allocation is often slow for me. Then, I write my own special memory allocator. I am not asking for the general-purpose allocator to be changed.
A fair argument, although as designed there is no facility to support writing my own special locking class; perhaps there should be. For example, say I'm writing a special queue class that supports separate locks for the enqueue and dequeue operations. Sometimes I know that there is going to be only one thread adding to the queue, so I'd like to specify that a dummy lock be used that doesn't actually lock the enqueue mutex. If there were a framework for a custom lock, I could support this (I've sketched the idea in a postscript below).

I'm willing to run with the idea that try_locks and timed_locks don't need to have anything in common with basic locks. So, taking it further, do we then need to have lock() and unlock() operations at all? Leaving them out enforces a usage paradigm: if you want to unlock your mutex, make sure the lock object is destroyed. The only drawback is that, in the case of a try or timed lock, you wouldn't be able to try again on the same lock object, but perhaps this is a design advantage, forcing users to clearly define their critical sections: if you can't get your lock, fail out with an error.

--
Christopher Currie <codemonkey@gmail.com>
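P.S. A rough sketch of the dummy-lock idea, just to illustrate. It assumes a minimal "construct-to-lock, destruct-to-unlock" concept; the names null_lock and two_lock_queue, and the classic two-lock linked-list layout, are made up for the example and are not anything in Boost.Threads today:

#include <boost/thread/mutex.hpp>

// A lock type with the same construct/destruct interface as
// boost::mutex::scoped_lock, but one that never touches the mutex.
struct null_lock
{
    explicit null_lock(boost::mutex&) {}
};

// A queue with separate head and tail mutexes (the classic two-lock layout
// with a dummy node). The enqueue-side lock type is a template parameter,
// so a single-producer user can plug in null_lock and skip the enqueue lock.
template <typename T, typename PushLock = boost::mutex::scoped_lock>
class two_lock_queue
{
    struct node { T value; node* next; node(const T& v) : value(v), next(0) {} };
public:
    two_lock_queue() : head_(new node(T())), tail_(head_) {}   // dummy node
    ~two_lock_queue() { while (node* n = head_) { head_ = n->next; delete n; } }

    void push(const T& v)
    {
        node* n = new node(v);
        PushLock lk(tail_mx_);     // null_lock here when only one thread pushes
        tail_->next = n;
        tail_ = n;
    }

    bool pop(T& v)
    {
        boost::mutex::scoped_lock lk(head_mx_);
        node* first = head_->next;
        if (!first) return false;  // empty
        v = first->value;
        delete head_;              // old dummy; 'first' becomes the new dummy
        head_ = first;
        return true;
    }

private:
    boost::mutex head_mx_;
    boost::mutex tail_mx_;
    node* head_;
    node* tail_;
};

Usage would look like two_lock_queue<int> q; in the general case, or two_lock_queue<int, null_lock> q; when the caller can guarantee a single producer.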