boost::mutex::scoped_lock conventions

Is there a convention about whether or not to explicitly call .unlock()? In the case where everything to the end of the scope should be locked, there is no NEED to do it, but isn't it a bit sloppy to leave locks locked? Much like not using braces on one-line 'if' statements, it could cause maintenance problems later. What's the current thinking?

Thanks, - Mark

P.S. I'm currently always unlocking, because sometimes I want to unlock before the scope ends. This makes it clear exactly what I intend.
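A minimal sketch of the two styles being debated, assuming the Boost.Threads interface of the time (scoped_lock acquires the mutex in its constructor, releases it in its destructor, and also exposes an unlock() member for early release):

#include <boost/thread/mutex.hpp>

boost::mutex m;

void implicit_unlock()
{
    boost::mutex::scoped_lock lock(m);
    // ... work that needs the mutex ...
}   // destructor releases the mutex here (RAII)

void explicit_unlock()
{
    boost::mutex::scoped_lock lock(m);
    // ... work that needs the mutex ...
    lock.unlock();   // release early, before the scope ends
    // ... work that does not need the mutex ...
}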

Mark Sizer said:
Is there a convention about whether or not to explicitly call .unlock()?
In the case where everything to the end of the scope should be locked, there is no NEED to do it, but isn't it a bit sloppy to leave locks locked?
Certainly not. That's the purpose of the RAII idiom.
Much like not using braces on one-line 'if' statements, it could cause maintenance problems later.
I wouldn't think so, but maybe you have a use case to share where you think it could? -- William E. Kempf

The hypothetical is easy enough:

void myclass::doSomething()
{
    scoped_lock lockData( _mutexData );
    <do something using the data>
}

After two years and three developers:

void myclass::doSomething()
{
    scoped_lock lockData( _mutexData );
    <do the original thing that needed the lock>
    <do all sorts of additional stuff not needing lock>
}

Of course we all know that no programmer would ever be so lazy as to modify code with which he was not completely familiar (I REALLY need thread IDs! - sound eerily familiar? [btw: TSS is working great.]). No one would ever make one method do several different things, either. All code is kept optimally factored over years of development <chortle>.

In an ideal world, unlocking is rarely necessary. In the real world, I think it helps define the developer's intent. At the very least the next programmer is presented with the obvious choice of putting the new code before or after the "unlock". He can still do it wrong, but it has to be a conscious choice.

Haven't you ever tracked down this bug (usually introduced by those who think indentation is for wimps):

if ( <condition> )
{
    <do true>
}
else
    <do false>

that becomes:

if ( <condition> )
{
    <do true>
}
else
    <do false>
    <do more false> // oops!

Same pattern, different situation. If I ever get around to creating a language, it will be indent-sensitive. Screw the punctuation ('{', 'begin', '(', etc...).

Thanks, - Mark

P.S. On the same note, does anyone indent inside locks (I don't)?

William E. Kempf wrote:
Mark Sizer said:
Is there a convention about whether or not to explicitly call .unlock()?
In the case where everything to the end of the scope should be locked, there is no NEED to do it, but isn't it a bit sloppy to leave locks locked?
Certainly not. That's the purpose of the RAII idiom.
Much like not using braces on one-line 'if' statements, it could cause maintenance problems later.
I wouldn't think so, but maybe you have a use case to share where you think it could?

Same pattern, different situation. If I ever get around to creating a language, it will be indent sensitive. Screw the punctuation ('{', 'begin', '(', etc...).
incidentally - and this is *way* off-topic - python's control flow is defined by indentation. iirc, it works just like you'd want. however, i should offer the disclaimer that that's pretty much all i know about python, other than the fact that ESR likes it: http://www.linuxjournal.com/article.php?sid=3882 -Ryan ------------------------------- "...real recognize real." -Rakim

Mark Sizer said:
The hypothetical is easy enough:
void myclass::doSomething()
{
    scoped_lock lockData( _mutexData );
    <do something using the data>
}
After two years and three developers:
void myclass::doSomething()
{
    scoped_lock lockData( _mutexData );
    <do the original thing that needed the lock>
    <do all sorts of additional stuff not needing lock>
}
This doesn't illustrate it for me, for several reasons:

1) Doing more stuff that doesn't need to be locked doesn't necessarily mean dire consequences. We all know that holding a lock too long *can* lead to problems, but generally it's only a problem of performance.

2) If you have a block of code in which things grow to the point that you'd not be able to easily see the scoped lock, and thus might make this mistake, then you've got more maintenance issues than explicit unlocking is going to help you with.

3) Generally, most functions/blocks will be factored in such a way that the mutex *would* have to be held for the entire scope. It's a rare case in which you'd have a function like you illustrate above, and such cases are generally self-evident from the beginning and don't result from maintenance changes.
Of course we all know that no programmer would ever be so lazy as to modify code with which he was not completely familiar (I REALLY need thread IDs! - sound eerily familiar? [btw: TSS is working great.]). No one would ever make one method do several different things, either. All code is kept optimally factored over years of development <chortle>.
What do thread IDs have to do with this? But yes, I generally do believe that properly designed code won't have any of the characteristics you sarcastically seem to indicate does occur. If you work with code like that, you have much worse problems than holding onto a lock longer than you should will help prevent. Do you call reset() explicitly on all of your auto_ptr's? I'm not going to tell anyone else that they shouldn't call unlock() explicitly. If you think it will help during maintenance, then by all means, do so. But I'm not convinced enough to recommend this to others.
In an ideal world, unlocking is rarely necessary.
I hope you mean explicit unlocking!
In the real world, I think it helps define the developer's intent. At the very least the next programmer is presented with the obvious choice of putting the new code before or after the "unlock". He can still do it wrong, but it has to be a conscious choice.
In my own experience, it's been a conscious choice to do so with implicit unlocking as well. I've never made the mistake you illustrate above. It's probably due to the fact that when dealing with synchronization, you *HAVE* to fully understand the code being synchronized, lest you create deadlocks or race conditions, so such areas of code are kept short, explicit and well documented.
Haven't you ever tracked down this bug (usually introduced by those who think indentation is for wimps):

if ( <condition> )
{
    <do true>
}
else
    <do false>
that becomes:
if ( <condition> )
{
    <do true>
}
else
    <do false>
    <do more false> // oops!
Same pattern, different situation.
I don't see it as the same pattern at all.
If I ever get around to creating a language, it will be indent sensitive. Screw the punctuation ('{', 'begin', '(', etc...).
Try Python.
P.S. On the same note, does anyone indent inside locks (I don't)?
Yes:

void foo()
{
    {
        boost::mutex::scoped_lock lock(mutex);
        // code
    }
    // other code
}

Not quite what you meant, I know, but I think this illustrates the point about RAII not necessarily leading to the problem you see. Locks should generally be held for as short a period of time as possible, which means short blocks, even if artificial. Short code blocks combined with the need to carefully analyze synchronization leads to little chance of making the mistake you illustrate. -- William E. Kempf

"William E. Kempf" wrote: [...]
means short blocks, even if artificial. Short code blocks combined with the need to carefully analyze synchronization leads to little chance of making the mistake you illustrate.
But explicit unlocking (also "RAII" based) sometimes IS "needed".

http://terekhov.de/DESIGN-futex-CV.cpp

~futex_condvar()
{
    mutex::guard guard( m_mutex );
    assert( m_waiters[0] == m_wakeups );
    while ( m_waiters[0] )
    {
        int ftx = m_futex = EOC();
        mutex::release_guard release_guard( guard );
        cancel_off_guard no_cancel;
        m_futex.wait( ftx );
    }
}

regards, alexander.
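The release_guard above drops a lock that is already held and re-acquires it when it goes out of scope. This is not Terekhov's actual class; a minimal sketch of the idea, written against std::mutex purely for illustration, might look like:

#include <mutex>

// Inverse of a scoped lock: unlocks an already-held mutex on construction
// and re-locks it in the destructor, even when the block exits via an exception.
template <typename Mutex>
class scoped_unlock
{
public:
    explicit scoped_unlock(Mutex& m) : m_mutex(m) { m_mutex.unlock(); }
    ~scoped_unlock() { m_mutex.lock(); }   // re-acquire on scope exit
private:
    Mutex& m_mutex;
};

Such a helper is typically used inside a loop that must temporarily release the mutex while waiting on something, exactly as the futex_condvar destructor does.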

Alexander Terekhov said:
"William E. Kempf" wrote: [...]
means short blocks, even if artificial. Short code blocks combined with the need to carefully analyze synchronization leads to little chance of making the mistake you illustrate.
But explicit unlocking (also "RAII" based) sometimes IS "needed".
No one claimed otherwise. -- William E. Kempf

"William E. Kempf" wrote:
Alexander Terekhov said:
"William E. Kempf" wrote: [...]
means short blocks, even if artificial. Short code blocks combined with the need to carefully analyze synchronization leads to little chance of making the mistake you illustrate.
But explicit unlocking (also "RAII" based) sometimes IS "needed".
No one claimed otherwise.
Yeah. The intent was to show the usage of "release_guard"... and to give you just one more hint with respect to currently missing functionality (sync) in boost::~condition(). Well, it didn't seem to work. OK, http://tinyurl.com/btdd -- please read the message 5213... starting at <quote>We later "adjusted"...</quote>. Thank you. regards, alexander. -- Pro-"pthread_null/compare/hash" lobbying association.

Alexander Terekhov said:
"William E. Kempf" wrote:
Alexander Terekhov said:
"William E. Kempf" wrote: [...]
means short blocks, even if artificial. Short code blocks combined
with the need to carefully analyze synchronization leads to little chance of making the mistake you illustrate.
But explicit unlocking (also "RAII" based) sometimes IS "needed".
No one claimed otherwise.
Yeah. The intent was to show the usage of "release_guard"... and to give you just one more hint with respect to currently missing functionality (sync) in boost::~condition(). Well, it didn't seem to work. OK, http://tinyurl.com/btdd -- please read the message 5213... starting at <quote>We later "adjusted"...</quote>. Thank you.
I'm going to try this one time. Say what you mean to say, or don't say anything at all. Chasing down your links, especially when you very often post a link to something that only links to what you're really interested in saying!, is unproductive use of my (or anyone else's) time. In this case, I missed the only relevant point in your posting, because the "release_guard" was hidden in code and never mentioned in your text. And there was *NO* mention of the "missing functionality in boost::~condition()". If you intended me to find that by following a wild goose chase of links in your posts, at the very least you had darn well better tell me what I'll be looking for in that chase. If you *really* want to make your case, you'll actually do what everyone else does, and say what you want to say in your post, rather than ask someone to go on a wild goose chase, but at this point I'll settle for just being told what I'm looking for. -- William E. Kempf

"William E. Kempf" wrote: [...]
I'm going to try this one time.
Say what you mean to say, or don't say anything at all.
This time, I want to say that you should finally join the Austin group

Alexander Terekhov said:
"William E. Kempf" wrote: [...]
I'm going to try this one time.
Say what you mean to say, or don't say anything at all.
This time, I want to say that you should finally join the Austin group
and start your activity there with some explanation why the AND the upcoming <cthread> header should really define something like pthread_null(), pthread_compare() and pthread_hash(), extern "C++" pthread_once()-and-etc.-with-function-ptrs from <cthread> aside for a moment. I'd just say that this needs to be done in order to provide some "symmetry" with respect to <thread>/boost.thread stuff, or something like that.
Slightly better communication than you've done in the past... but:

1) What is pthread_null() (I can certainly guess on this one, but guessing could make a fool of both of us)?

2) What is pthread_compare() and how does it differ from pthread_equal() (again, I could guess)?

3) What is pthread_hash() (probably the easiest to guess... but again...)?

4) Why do *you* think any of these are necessary?

5) What "needs to be done in order to provide 'symmetry' with respect to <thread>/boost.thread stuff"? My participation in the Open Group? My understanding of what they're doing? The inclusion of the above functions to POSIX threads?

This is pretty much the first time I've heard of these functions, so asking me to champion them without more meat is still not the best form of "saying what you mean".

BTW: If the above functions are what I think they are, I can provide them in Boost.Threads without anything being changed in the POSIX standard. (In fact, the only one not provided for in the implementation in thread_dev is thread_null(), assuming I'm making the right guesses as to what these functions are. It means specifying some requirements on the ID in the documentation, but the implementation already provides this. I've debated thread_null(), as I do see uses for it, but that would require a backwards incompatible change to the interface, so I've got to make the decision carefully.) So there's not a lot of reason for me to champion them in the Open Group, though there is reason for me to be interested in any debate or resolution that occurs there. -- William E. Kempf

"William E. Kempf" wrote: [...]
1) What is pthread_null() (I can certainly guess on this one, but guessing could make a fool of both of us)?
extern "C" pthread_t pthread_null() { return pthread_t(); }
2) What is pthread_compare() and how does it differ from pthread_equal() (again, I could guess)?
extern "C" int pthread_compare(pthread_t tid1, pthread_t tid2) { return tid1 < tid2 ? -1 : tid1 > tid2 ? +1 : 0; }
3) What is pthread_hash() (probably the easiest to guess... but again...)?
extern "C" size_t pthread_hash(pthread_t tid) {
return std::hash
4) Why do *you* think any of these are necessary?
``Why not?'' ;-)
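A hedged illustration of the convenience argument: with a total order over pthread_t, thread IDs can key a std::map even though pthread_t is formally opaque. pthread_compare() is only Terekhov's proposal, so this sketch supplies an equivalent helper inline and assumes a platform where pthread_t is an ordered scalar type (true on Linux, not guaranteed by POSIX).

#include <map>
#include <pthread.h>

// Stand-in for the proposed pthread_compare(); works only where pthread_t
// supports operator< (e.g. an integral or pointer type).
static int compare_tids(pthread_t a, pthread_t b)
{
    return a < b ? -1 : b < a ? +1 : 0;
}

struct tid_less
{
    bool operator()(pthread_t a, pthread_t b) const
    {
        return compare_tids(a, b) < 0;
    }
};

// Example use: per-thread bookkeeping keyed by thread ID.
std::map<pthread_t, int, tid_less> work_count_by_thread;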
5) What "needs to be done in order to provide 'symmetry' with respect to <thread>/boost.thread stuff"? My participation in the Open Group? My understanding of what they're doing? The inclusion of the above functions to POSIX threads?
Well, pls see above and below.
This is pretty much the first time
I've 'mentioned' it here before -- in the 'Pro-"pthread_null/compare/hash" lobbying association' sig.
I've heard of these functions, so asking me to champion them without more meat is still not the best form of "saying what you mean".
But it's somewhat better than before, right? ;-)
BTW: If the above functions are what I think they are, I can provide them in Boost.Threads without anything being changed in the POSIX standard.
We need a POSIX.C++ standard, that's why you and others should join the Austin group. (check out some messages on the "Austin Off Topic Discussion" reflector)
(In fact, the only one not provided for in the implementation in thread_dev is thread_null(), assuming I'm making the right guesses as to what these functions are. It means specifying some requirements on the ID in the documentation, but the implementation already provides this. I've debated thread_null(), as I do see uses for it, but that would require a backwards incompatible change to the interface, so I've got to make the decision carefully.) So there's not a lot of reason for me to champion them in the Open Group, though there is reason for me to be interested in any debate or resolution that occurs there.
http://tinyurl.com/buj9 regards, alexander. -- typedef std::kinda_dumb_thread_ptr pthread_t;

Alexander Terekhov said:
5) What "needs to be done in order to provide 'symmetry' with respect to <thread>/boost.thread stuff"? My participation in the Open Group? My understanding of what they're doing? The inclusion of the above functions to POSIX threads?
Well, pls see above and below.
This is pretty much the first time
I've 'mentioned' it here before -- in the 'Pro-"pthread_null/compare/hash" lobbying association' sig.
That is hardly "mentioning" it. You can't expect someone to even read your sig. And what is in your sig is hardly enough information to warrant the term "mentioned", even if I had read it.
I've heard of these functions, so asking me to champion them without more meat is still not the best form of "saying what you mean".
But it's somewhat better than before, right? ;-)
Better, but hardly good enough. Look... I respect your knowledge on this subject, but for your knowledge to be useful to anyone you have to work on your delivery.
BTW: If the above functions are what I think they are, I can provide them in Boost.Threads without anything being changed in the POSIX standard.
We need a POSIX.C++ standard, that's why you and others should join the Austin group. (check out some messages on the "Austin Off Topic Discussion" reflector)
Why do we need that? I'm not trying to say that it's a bad idea, but POSIX is not a standard that's universally adopted, and is focused on language extensions. It's more than worth considering what the POSIX standard says and does by Boost.Threads, and it would be folly for me to recommend to the C++ standards committee any library that violated POSIX in any way, or was counter to POSIX, or couldn't leverage current or future POSIX standards. But that doesn't mean that I should have a vested interest in shaping a POSIX.C++ standard. All that said, I will at least be reading about this, so despite my frustration, I'll thank you for the heads up.
(In fact, the only one not provided for in the implementation in thread_dev is thread_null(), assuming I'm making the right guesses as to what these functions are. It means specifying some requirements on the ID in the documentation, but the implementation already provides this. I've debated thread_null(), as I do see uses for it, but that would require a backwards incompatible change to the interface, so I've got to make the decision carefully.) So there's not a lot of reason for me to champion them in the Open Group, though there is reason for me to be interested in any debate or resolution that occurs there.
Another classic example of your poor communication skills. This link leads to a lengthy discussion. It will take me a significant time to read the full thread, and even more time to figure out why I should even care (in relation to *this* thread of discussion). At least tell me what I'm looking for. Better yet, summarize the point you want to make by referencing this discussion. Your method *might* save you a minute or two in posting, but it will cost me many factors more than that in figuring out what you want to say. If you don't care enough about what you're saying to actually say it, why should I care enough about it to spend the time trying to figure it out? -- William E. Kempf

William E. Kempf said:
I'm not trying to say that it's a bad idea, but POSIX is not a standard that's universally adopted, and is focused on language extensions.
OK, saying it's focused on "language extensions" isn't 100% accurate, but I think you can understand what I meant here. -- William E. Kempf

"William E. Kempf" wrote: [...]
BTW: If the above functions are what I think they are, I can provide them in Boost.Threads without anything being changed in the POSIX standard.
We need a POSIX.C++ standard, that's why you and others should join the Austin group. (check out some messages on the "Austin Off Topic Discussion" reflector)
Why do we need that?
I'm not trying to say that it's a bad idea, but POSIX is not a standard that's universally adopted, and is focused on language extensions. It's more than worth considering what the POSIX standard says and does by Boost.Threads, and it would be folly for me to recommend to the C++ standards committee any library that violated POSIX in any way, or was counter to POSIX, or couldn't leverage current or future POSIX standards. But that doesn't mean that I should have a vested interest in shaping a POSIX.C++ standard.
Here's an illustration. (no link this time; see c.p.t. for details) <quote>
Ok, ok. I agree that a strictly conforming *application* should rely on guaranteed thread termination and always-unwinding. Perhaps we have a "scope" problem here, again. All I want is that a strictly conforming *function* shall not rely on unwinding IF it can be invoked within a C++ scope/functions that could restrict propagation and unwinding using exception specifications. There just ought to be some loophole/hint for that, don't you think so?
No, I don't, because none of this is even remotely within the scope of the POSIX standard. POSIX deals with thread cleanup handlers, which are called before the thread terminates. Period. There's no finalization; there's no chance of process termination. It is completely irrelevant to "strictly conforming" POSIX implementations or applications what might happen if it were, hypothetically, possible for an "unhandled" "exception" to terminate the process, because neither "unhandled" nor "exception" are meaningful concepts. IF there were a "C++ binding to POSIX", and IF that binding said that the mechanism for POSIX cleanup was actually "C++" exception propagation, this would need to be covered. But that hasn't happened and very likely won't happen. </quote> regards, alexander. -- "It's basically pthreads with a "we're not pthreads" attitude.." -- http://tinyurl.com/c73l

Alexander Terekhov said:
"William E. Kempf" wrote: [...]
BTW: If the above functions are what I think they are, I can provide them in Boost.Threads without anything being changed in the POSIX standard.
We need a POSIX.C++ standard, that's why you and others should join the Austin group. (check out some messages on the "Austin Off Topic Discussion" reflector)
Why do we need that?
I'm not trying to say that it's a bad idea, but POSIX is not a standard that's universally adopted, and is focused on language extensions. It's more than worth considering what the POSIX standard says and does by Boost.Threads, and it would be folly for me to recommend to the C++ standards committee any library that violated POSIX in any way, or was counter to POSIX, or couldn't leverage current or future POSIX standards. But that doesn't mean that I should have a vested interest in shaping a POSIX.C++ standard.
Here's an illustration. (no link this time; see c.p.t. for details)
<quote>
Ok, ok. I agree that a strictly conforming *application* should rely on guaranteed thread termination and always-unwinding. Perhaps we have a "scope" problem here, again. All I want is that a strictly conforming *function* shall not rely on unwinding IF it can be invoked within a C++ scope/functions that could restrict propagation and unwinding using exception specifications. There just ought to be some loophole/hint for that, don't you think so?
No, I don't, because none of this is even remotely within the scope of the POSIX standard. POSIX deals with thread cleanup handlers, which are called before the thread terminates. Period. There's no finalization; there's no chance of process termination. It is completely irrelevant to "strictly conforming" POSIX implementations or applications what might happen if it were, hypothetically, possible for an "unhandled" "exception" to terminate the process, because neither "unhandled" nor "exception" are meaningful concepts.
IF there were a "C++ binding to POSIX", and IF that binding said that the mechanism for POSIX cleanup was actually "C++" exception propagation, this would need to be covered. But that hasn't happened and very likely won't happen.
</quote>
From my perspective (and I would assume the perspective of the C++ committee), what's important is that anything I do in Boost.Threads needs to be compatible with POSIX (as well as other threading systems), but that's really it. I don't have any vested interest in extending POSIX for any reason.

The motivations are backwards here, though. If the C++ language adopts a threading library, POSIX systems will have a lot of motivation for defining a POSIX C++ binding, or at the very least, making a particular implementation's POSIX binding compatible with the C++ threading.

So, I'm interested in what's going on, but I'm not a good candidate for helping champion any proposals you're making for POSIX. -- William E. Kempf

"William E. Kempf" wrote: [...]
The motivations are backwards here, though. If the C++ language adopts a threading library, POSIX systems will have a lot of motivation for defining a POSIX C++ binding, or at the very least, making a particular implementation's POSIX binding compatible with the C++ threading.
How about moving this discussion to c.p.t.?
regards,
alexander.
--
"// Possible implementation for

Alexander Terekhov wrote:
"William E. Kempf" wrote: [...]
The motivations are backwards here, though. If the C++ language adopts a threading library, POSIX systems will have a lot of motivation for defining a POSIX C++ binding, or at the very least, making a particular implementation's POSIX binding compatible with the C++ threading.
How about moving this discussion to c.p.t.?
Well, just in case... <Forward Quoted> David Butenhof wrote:
Alexander Terekhov wrote:
"William E. Kempf" wrote: The motivations are backwards here, though. If the C++ language adopts a threading library, POSIX systems will have a lot of motivation for defining a POSIX C++ binding, or at the very least, making a particular implementation's POSIX binding compatible with the C++ threading.
Right now, the C++ language has, by default and convention, a POSIX binding; 1003.1-2001. The C and C++ languages are sufficiently interoperable that this presents only a few restrictions on the use by C++ code, around exceptions and member functions. OK, so the thread start routine needs to be 'extern "C"' -- a minor inconvenience. OK, so there's no portable standard on interoperability between POSIX cleanup and C++ exceptions, and I'll resist suggesting that only an idiot would fail to make them completely compatible and interoperable; but at least most people can be educated to realize that they ought to be.
The big hurdle for a true C++ binding is that the current state of affairs is "good enough" for most people, and the political process of developing a full native C++ binding would be painful. (Remember, it's not just saying that the thread::create method takes a class member at which the thread will start... it means reviewing every method and template in the STL to determine which have thread safety requirements, and deciding precisely what those requirements are and how to meet them. Then there's the matter of cancellation points... and so forth.)
When and if the C++ standard adds true thread support, that will be, by default and in practice, the thread binding for C++; whether the underlying thread environment is POSIX, Win32, or something else. This is great, as long as it doesn't do or say anything stupid, but it still leaves a few loopholes because inevitably people will continue to write applications that mix languages. Mixing C and C++ has never been a problem; but if the thread model in C++ is radically different, it could become a problem. Furthermore, there's a missing piece that neither POSIX 1003.1-2001 plus ISO C++ 2005 (or whatever), or even 1003.1-2001 plus a hypothetical "1003.C++" will necessarily (or even likely) supply -- and that's how the two interoperate.
If C++ or 1003.C++ says that thread::cancel raises an exception, and 1003.1 says that pthread_cancel() invokes cleanup handlers, does that mean that cancelling a thread with pthread_cancel() will trigger catch(...), or even destructors? Well, maybe not. This could more easily be solved with a 1003.C++, perhaps, since at least the two standards are in a family. Since the C++ standard is unlikely to mention POSIX any more than now, it's unlikely to provide any guarantees.
Perhaps that would provide an opportunity for a smaller POSIX project, though; a PROFILE that would chink the holes where the two walls meet. In effect, specifying a "POSIX platform" supporting both threads and C++ that simply says "C++ cancellation is the same as POSIX cancellation", "POSIX cleanup handlers are logically and semantically the same as C++ object destructors", and "POSIX cancellation is visible as a C++ exception".
-- /--------------------[ David.Butenhof@hp.com ]--------------------\ | Hewlett-Packard Company Tru64 UNIX & VMS Thread Architect | | My book: http://www.awl.com/cseng/titles/0-201-63392-2/ | \----[ http://homepage.mac.com/dbutenhof/Threads/Threads.html ]---/
regards,
alexander. < playing "arabic telephone", an electronic one ;-) >
--
"// Possible implementation for

Alexander Terekhov said:
Alexander Terekhov wrote:
"William E. Kempf" wrote: [...]
The motivations are backwards here, though. If the C++ language adopts a threading library, POSIX systems will have a lot of motivation for defining a POSIX C++ binding, or at the very least, making a particular implementation's POSIX binding compatible with the C++ threading.
How about moving this discussion to c.p.t.?
Well, just in case... <Forward Quoted>
Thanks... I currently can't access c.p.t. in any reasonable manner. I'm working to rectify this, but in the mean time, I appreciate the cross post.
David Butenhof wrote:
Alexander Terekhov wrote:
"William E. Kempf" wrote: The motivations are backwards here, though. If the C++ language adopts a threading library, POSIX systems will have a lot of motivation for defining a POSIX C++ binding, or at the very least, making a particular implementation's POSIX binding compatible with the C++ threading.
Right now, the C++ language has, by default and convention, a POSIX binding; 1003.1-2001. The C and C++ languages are sufficiently interoperable that this presents only a few restrictions on the use by C++ code, around exceptions and member functions. OK, so the thread start routine needs to be 'extern "C"' -- a minor inconvenience. OK, so there's no portable standard on interoperability between POSIX cleanup and C++ exceptions, and I'll resist suggesting that only an idiot would fail to make them completely compatible and interoperable; but at least most people can be educated to realize that they ought to be.
The cleanup issues are some of the bigger ones, IMHO. And my experience indicates there are a lot of people out there that Mr. Butenhof would consider "idiots", I think. But I agree with what he's said.
The big hurdle for a true C++ binding is that the current state of affairs is "good enough" for most people, and the political process of developing a full native C++ binding would be painful. (Remember, it's not just saying that the thread::create method takes a class member at which the thread will start... it means reviewing every method and template in the STL to determine which have thread safety requirements, and deciding precisely what those requirements are and how to meet them. Then there's the matter of cancellation points... and so forth.)
Most STL libraries are thread-safe today, so the analysis there wouldn't be too difficult. It just needs to be stated explicitly in the standard. Cancellation points are another issue... but I don't think C++ will add too many to the list already provided by POSIX.
When and if the C++ standard adds true thread support, that will be, by default and in practice, the thread binding for C++; whether the underlying thread environment is POSIX, Win32, or something else. This is great, as long as it doesn't do or say anything stupid, but it still leaves a few loopholes because inevitably people will continue to write applications that mix languages. Mixing C and C++ has never been a problem; but if the thread model in C++ is radically different, it could become a problem.
Absolutely agreed. I've said all along that Boost.Threads has to be very aware of what POSIX says. We can not deviate in any way from POSIX that will result in conflicts between the threading systems.
Furthermore, there's a missing piece that neither POSIX 1003.1-2001 plus ISO C++ 2005 (or whatever), or even 1003.1-2001 plus a hypothetical "1003.C++" will necessarily (or even likely) supply -- and that's how the two interoperate.
Agreed, but that's the case today when mixing languages. There are a lot of areas which are left unspecified, even when the languages being mixed are C and C++.
If C++ or 1003.C++ says that thread::cancel raises an exception, and 1003.1 says that pthread_cancel() invokes cleanup handlers, does that mean that cancelling a thread with pthread_cancel() will trigger catch(...), or even destructors? Well, maybe not. This could more easily be solved with a 1003.C++, perhaps, since at least the two standards are in a family. Since the C++ standard is unlikely to mention POSIX any more than now, it's unlikely to provide any guarantees.
No guarantees. But with a C++ definition, one can at least hope that implementations will try and deal with these cross-language binding issues in some reasonable manner.
Perhaps that would provide an opportunity for a smaller POSIX project, though; a PROFILE that would chink the holes where the two walls meet. In effect, specifying a "POSIX platform" supporting both threads and C++ that simply says "C++ cancellation is the same as POSIX cancellation", "POSIX cleanup handlers are logically and semantically the same as C++ object destructors", and "POSIX cancellation is visible as a C++ exception".
Reasonable things for POSIX to do, or at least consider, IMO. -- William E. Kempf

"William E. Kempf" wrote: [...]
How about moving this discussion to c.p.t.?
Well, just in case... <Forward Quoted>
Thanks... I currently can't access c.p.t. in any reasonable manner. I'm working to rectify this, but in the meantime, I appreciate the cross post.
http://news.cis.dfn.de might help. ;-)
I'll wait a day or two and post a reply addressing some of your points
to comp.programming.threads. You might also want to keep an eye on the
discussion of "validity" of the following DCSI** pattern (not exposing
thread_specific_ptr::release() or thread_specific_ptr::reset() to the
clients... Butenhof believes that since <quote>You at least need to be
sure all threads are "done" with the current key and will never read it
again, because pthread_key_delete() cannot invalidate the key</quote>,
the clients ("individual threads") should <quote>confirm that
synchronization by clearing their value for the key</quote>).
class stuff { /* ... */ };
class thing {
public:
thing(/* ... */) : /* ... */ stuff_shared_ptr(0) /* ... */ { /*...*/ }
~thing() { /* ... */ delete stuff_shared_ptr; /* ... */ }
/* ... */
const stuff & stuff_instance();
/* ... */
private:
/* ... */
mutex stuff_mtx;
stuff * stuff_shared_ptr;
thread_specific_ptr
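The quoted class is cut off by the archive, so what follows is not Terekhov's original code, just a minimal sketch of the thread-specific caching idea being discussed, using Boost.Threads' thread_specific_ptr with a no-op cleanup function so the per-thread slot only caches the shared pointer and never owns it:

#include <boost/thread/mutex.hpp>
#include <boost/thread/tss.hpp>

class stuff { /* ... */ };

namespace { void no_op_cleanup(stuff*) {} }   // TSS slot caches, never deletes

class thing
{
public:
    thing() : m_shared(0), m_cache(&no_op_cleanup) {}
    ~thing() { delete m_shared; }   // callers must ensure no thread still uses it

    const stuff & stuff_instance()
    {
        stuff * p = m_cache.get();                     // per-thread cached pointer
        if ( !p )
        {
            boost::mutex::scoped_lock lock( m_mutex ); // slow path: first use per thread
            if ( !m_shared )
                m_shared = new stuff;                  // first thread ever creates it
            p = m_shared;
            m_cache.reset( p );                        // cache for this thread
        }
        return *p;
    }

private:
    boost::mutex m_mutex;
    stuff * m_shared;
    boost::thread_specific_ptr<stuff> m_cache;
};

Butenhof's caveat quoted above applies here too: before the thing (and its shared stuff) is destroyed, every thread must be done with its cached pointer.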

That's interesting. Is it really necessary to have a release_guard that re-locks in the destructor? What's the point?

I could read the spec, but I'll ask instead: Isn't the compiler allowed to optimize away some scoping? Is it required that objects be destructed in the order they were constructed (if they're in the same stack frame)? If an exception is thrown while the release_guard is in scope, is it guaranteed that the release_guard will be destructed before the guard? Bad things will happen if it's done out-of-order.

I thought I understood most of these issues, but reading your stuff on this list makes me feel like a newbie.

Thanks, - Mark

Alexander Terekhov wrote:
"William E. Kempf" wrote: [...]
means short blocks, even if artificial. Short code blocks combined with the need to carefully analyze synchronization leads to little chance of making the mistake you illustrate.
But explicit unlocking (also "RAII" based) sometimes IS "needed".
http://terekhov.de/DESIGN-futex-CV.cpp
~futex_condvar()
{
    mutex::guard guard( m_mutex );
    assert( m_waiters[0] == m_wakeups );
    while ( m_waiters[0] )
    {
        int ftx = m_futex = EOC();
        mutex::release_guard release_guard( guard );
        cancel_off_guard no_cancel;
        m_futex.wait( ftx );
    }
}
regards, alexander.

Mark Sizer wrote:
That's interesting.
Is it really necessary to have a release_guard that re-locks in the destructor? What's the point?
The point is to save some key strokes, basically.
I could read the spec, but I'll ask instead: Isn't the compiler allowed to optimize away some scoping?
The compiler is not allowed to do that (see 3.7.2/3).
Is it required that objects be destructed in the order they were constructed (if they're in the same stack frame)?
See 6.7/2 and 6.6/2.
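For what it's worth, the guarantee those clauses give is that automatic objects in the same scope are destroyed in reverse order of construction, which is what makes the guard/release_guard pairing safe. A tiny standalone illustration (names are mine, not from the code above):

#include <cstdio>

struct A { A() { std::puts("A constructed"); } ~A() { std::puts("A destroyed"); } };
struct B { B() { std::puts("B constructed"); } ~B() { std::puts("B destroyed"); } };

int main()
{
    A a;   // constructed first...
    B b;   // ...constructed second
}          // prints: A constructed, B constructed, B destroyed, A destroyed

The same ordering holds during stack unwinding when an exception is thrown, so a release_guard declared after a guard is destroyed (and re-locks) before the guard's destructor runs.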
If an exception is thrown ...
Then rather interesting things COULD happen (please try not to miss the "it's time to fix the standard" link below ;-) ).

<Forward Inline>
-------- Original Message --------
Newsgroups: comp.programming.threads
Subject: Re: __attribute__((cleanup(function)) versus try/finally

David Butenhof wrote: [...]
--- in C/POSIX module ---
void cleanup(void *) { printf("hello\n"); }
void c_f() { pthread_cleanup_push(cleanup, 0); pthread_exit(0); pthread_cleanup_pop(0); }
--- in C++ 'main' module ---
struct object { ~object() { printf("hello\n"); } };
void f() throw() { object o; c_f(); }
int main() { f(); }
How many times will we see "hello"?
I see it three times, because that's how many times you typed it. (Was that a trick question? ;-) )
;-)
Seriously, though, I think that's very much the crux of your argument, isn't it?
Yep.
I would expect "hello" to be output by the POSIX cleanup handler as c_f() is unwound in response to pthread_exit(). THAT is clearly specified by current standards.
NO! I can see nothing in the POSIX standard that would prohibit the following implementation of pthread_exit():

extern "C" void pthread_exit(void * ptr) { std::thread_exit(ptr); }

using something along the lines of (from the "std" namespace):

class thread_termination_request : public std::exception ...
class thread_cancel_request : public std::thread_termination_request ...
class thread_exit_request : public std::thread_termination_request ...

template<typename T> class thread_exit_value : public std::thread_exit_request ...

template<typename T> void thread_exit(T value)
{
    assert(std::thread_self().can_exit_with<T>());
    throw thread_exit_value(value);
}

< as an aside: Attila, do you follow me? >

I see almost-no-problems** catching and "finalizing" ANY of these exceptions. If one can catch-and-finalize "thread termination" and cause an abnormal process termination right after "finalizing" it, I don't see why this can't be done by the implementation at throw point due to ES violation.

**) http://www.opengroup.org/austin/mailarchives/austin-group-l/msg05202.html (Subject: XSH ERN 77, 81 and 82)
Now, though, we face what may be a philosophical issue, if it's not carefully tied down by the C++ standard (which I haven't read, much less analyzed in detail). If C++ is required to have 2-phase exceptions AND if
It isn't required, currently.
an implementation is not allowed to SEARCH (phase 1) through f()'s empty throw() specification, then I would expect to see std::unexpected() fire before the second "hello" can be written by o's destructor.
NO! Before the FIRST "hello"! (I know that you know that the answer I wanted is a sort of /pthread_/null of "hello" ;-) ). Because the "handler" injected by pthread_cleanup_push() shall be modeled upon a C++ destructor, not archaic try/finally.
However, you have suggested that a single phase unwind implementation is allowed, and in such an implementation I would expect o's destructor to run, printing a second "hello", and THEN for std::unexpected() to fire as the runtime attempts to unwind through the empty throw() specification.
Yes. Well, please take a look at: http://groups.google.com/groups?selm=3EC0ECAA.6520B266%40web.de (Subject: Exception handling... it's time to fix the standard)
I can tell you that the C++ implementation on both Tru64 UNIX and OpenVMS, compiled and run with default options, print "hello" twice and THEN abort with unexpected().
That's exactly what the C++ standard currently mandates, so to speak.
(Proving that, as I said, while OpenVMS has always supported 2-phase exceptions and recommends cleanup on the unwind phase, not everyone actually uses it that way. ;-) )
Yeah, unfortunately, that seems to be true with respect to the majority of C++ committee members too.
I can also say that on Tru64 UNIX (I didn't bother going through the extra gyrations to get and analyze a process core dump on OpenVMS), the core file leads one to the f() frame (though there's terminate and abort and raise and all that stuff on top of it) so that someone diagnosing the abort might be led to examine the f() function and notice the empty throw() specification. (While OpenVMS reports that "terminate or unexpected" has been invoked, on Tru64 you get only "Resources lost(coredump)".)
This at least meets my minimal requirements, that the cleanup and unwind be properly synchronized. I'd be vexed at an implementation that maintained a separate stack of cleanup handlers, for example, and called them without actually unwinding the call stack, so that the final core file might show c_f() as the active frame even though cleanup had occurred out through f(). I'd be annoyed if the core file showed NO active frames, because there'd be no clue to what happened.
Yes.
I also understand that YOU would prefer that std::unexpected() would fire without running ANY cleanup OR unwinding any frames as the SEARCH exception pass (phase 1) ran up against the empty throw() specification.
YES!
I'm inclined to agree with you philosophically, but with one foot (plus a heel of the other foot) planted firmly in the real world, I'd worry about the consequences of breaking external invariants by suddenly adding a requirement that ~o() not be run in this case. Even your trivial example shows where this could cause problems, because you do have an external invariant. The output of this program might be redirected into a file that might be used as a benchmark for testing (or for other purposes),
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
Oh, http://groups.google.com/groups?selm=3EA6B22B.50C5B37%40web.de
#include <cassert> // The C++ stuff
#include
and the change in output from "hello\nhello\n" to "hello\n" between one version of the C++ runtime and another could indeed be an issue.
Yes, I understand. But please note that propagation of exceptions (unwinding) when "no matching handler found" is implementation-defined (well, ES aside) in the current C++ standard. Portably, folks just can't rely on always-unwind... if they don't use catch(...) and/or the "current" version of ES. The funny thing is that exception specs are considered sort-of "harmful" by many "prominent" members of the C++ community and aren't recommended.

http://www.gotw.ca/publications/mill22.htm
http://www.boost.org/more/lib_guide.htm#Exception-specification

regards, alexander. -- http://groups.google.com/groups?selm=3D3C0BCA.A5E2F8B2%40web.de

Thanks. I like your tight scoping. It does what I want (maintenance-wise), looks tidy, and avoids the redundant unlock. I've adopted it.
void foo()
{
    {
        boost::mutex::scoped_lock lock(mutex);
        // code
    }
    // other code
}
- Mark

P.S. You must work either with very new code or with much better than average programmers if you haven't run into maintenance nightmares revolving around people changing what they don't understand or writing 6000-line methods.

William E. Kempf wrote:
Mark Sizer said:
The hypothetical is easy enough:
void myclass::doSomething()
{
    scoped_lock lockData( _mutexData );
    <do something using the data>
}
After two years and three developers:
void myclass::doSomething()
{
    scoped_lock lockData( _mutexData );
    <do the original thing that needed the lock>
    <do all sorts of additional stuff not needing lock>
}
This doesn't illustrate it for me, for several reasons:
1) Doing more stuff that doesn't need to be locked doesn't necessarily mean dire consequences. We all know that holding a lock too long *can* lead to problems, but generally it's only a problem of performance.
2) If you have a block of code in which things grow to the point that you'd not be able to easily see the scoped lock, and thus might make this mistake, then you've got more maintenance issues than explicit unlocking is going to help you with.
3) Generally, most functions/blocks will be factored in such a way that the mutex *would* have to be held for the entire scope. It's a rare case in which you'd have a function like you illustrate above, and such cases are generally self-evident from the beginning and don't result from maintenance changes.
Of course we all know that no programmer would ever be so lazy as to modify code with which he was not completely familiar (I REALLY need thread IDs! - sound eerily familiar? [btw: TSS is working great.]). No one would ever make one method do several different things, either. All code is kept optimally factored over years of development <chortle>.
What do thread IDs have to do with this?
But yes, I generally do believe that properly designed code won't have any of the characteristics you sarcastically seem to indicate does occur. If you work with code like that, you have much worse problems than holding onto a lock longer than you should will help prevent.
Do you call reset() explicitly on all of your auto_ptr's?
I'm not going to tell anyone else that they shouldn't call unlock() explicitly. If you think it will help during maintenance, then by all means, do so. But I'm not convinced enough to recommend this to others.
In an ideal world, unlocking is rarely necessary.
I hope you mean explicit unlocking!
In the real world, I think it helps define the developer's intent. At the very least the next programmer is presented with the obvious choice of putting the new code before or after the "unlock". He can still do it wrong, but it has to be a conscious choice.
In my own experience, it's been a conscious choice to do so with implicit unlocking as well. I've never made the mistake you illustrate above. It's probably due to the fact that when dealing with synchronization, you *HAVE* to fully understand the code being synchronized, lest you create deadlocks or race conditions, so such areas of code are kept short, explicit and well documented.
Haven't you ever tracked down this bug (usually introduced by those who think indentation is for wimps):

if ( <condition> )
{
    <do true>
}
else
    <do false>
that becomes:
if ( <condition> )
{
    <do true>
}
else
    <do false>
    <do more false> // oops!
Same pattern, different situation.
I don't see it as the same pattern at all.
If I ever get around to creating a language, it will be indent sensitive. Screw the punctuation ('{', 'begin', '(', etc...).
Try Python.
P.S. On the same note, does anyone indent inside locks (I don't)?
Yes:
void foo()
{
    {
        boost::mutex::scoped_lock lock(mutex);
        // code
    }
    // other code
}

Not quite what you meant, I know, but I think this illustrates the point about RAII not necessarily leading to the problem you see. Locks should generally be held for as short a period of time as possible, which means short blocks, even if artificial. Short code blocks combined with the need to carefully analyze synchronization leads to little chance of making the mistake you illustrate.

This question sparked another "usage convention" question in my memory that came up around the office lately. I've been slowly introducing the members of my development team to a subset of Boost.Threads and trying to get them to use RAII thinking when implementing their locks.

There seem to be two schools of thought on how to synchronize access to a class. One school, primarily from the developers who have experience in Java, tends to write classes that do their own mutex locking:

class CalleeLocked
{
private:
    typedef boost::mutex mutex_type;
    mutex_type m_mutex;

public:
    void synchronizedFunction()
    {
        mutex_type::scoped_lock lock( m_mutex );
        // ... do work ...
    }
};

The other school, to which I must admit I belong, believes that it should be the caller's responsibility to perform the locking:

class CallerLocked
{
public:
    typedef boost::mutex mutex_type;

    void unsynchronizedFunction()
    {
        // ... do work ...
    }

    mutex_type & myMutex() { return m_mutex; }

private:
    mutex_type m_mutex;
};

void Caller( CallerLocked & c )
{
    CallerLocked::mutex_type::scoped_lock lock( c.myMutex() );
    c.unsynchronizedFunction();
}

The Callee lockers believe that it is more important for the class to implement the locking, because the class knows what data it has that requires synchronized access, and that putting the locking inside the class means that you only have to get it right once. The Caller lockers argue that adding locks to all your class functions introduces overhead that is unnecessary if you never share instances between threads, and that making the locking explicit in the caller makes it somewhat easier to prevent, or at least track down, deadlock errors.

Is there a community consensus about this? I can imagine that there are cases where either might be appropriate, but I'd like to hear people's experiences with both implementations.

Thanks in advance,
Christopher Currie

Christopher Currie said:
This question sparked another "usage convention" question in my memory that came up around the office lately. I've been slowly introducing the members of my development team to a subset of Boost.Threads and trying to get them to use RAII thinking when implementing their locks.
There seem to be two schools of thought on how to synchronize access to a class. One school, primarily from the developers who have experience in Java, tends to write classes that do their own mutex locking:
Please note that Java allows both types of synchronization for objects.
class CalleeLocked
{
private:
    typedef boost::mutex mutex_type;
    mutex_type m_mutex;

public:
    void synchronizedFunction()
    {
        mutex_type::scoped_lock lock( m_mutex );
        // ... do work ...
    }
};
The other school, to which I must admit I belong, believes that it should be the caller's responsibility to perform the locking:
class CallerLocked
{
public:
    typedef boost::mutex mutex_type;

    void unsynchronizedFunction()
    {
        // ... do work ...
    }

    mutex_type & myMutex() { return m_mutex; }

private:
    mutex_type m_mutex;
};

void Caller( CallerLocked & c )
{
    CallerLocked::mutex_type::scoped_lock lock( c.myMutex() );
    c.unsynchronizedFunction();
}
The Callee lockers believe that it is more important for the class to implement the locking, because the class knows what data it has that requires synchronized access, and that putting the locking inside the class means that you only have to get it right once. The Caller lockers argue that adding locks to all your class functions introduces overhead that is unnecessary if you never share instances between threads, and that making the locking explicit in the caller makes it somewhat easier to prevent, or at least track down, deadlock errors.
Setting aside access to external, global data: inside a class method, if any of the data needs protecting, all of it does. So I'm not sure I agree with that argument.

The argument about added overhead, even when instances are not accessed by multiple threads, is a legitimate concern. There's a design pattern for this, where the mutex type used is a template parameter and a "null mutex" which does no synchronization can be chosen in cases where there will be no sharing.

I'm not sure I agree that making the locking explicit at the call site will make it any easier to prevent or track down deadlock errors. A bigger issue with internal locking that you've not mentioned is that this approach limits the granularity of the lock to the method call. This is one reason why Java supports both types of synchronization.

Andrei Alexandrescu has a nice article on this very subject at http://www.informit.com/isapi/product_id~{E3967A89-4E20-425B-BCFF-B84B6DEED6CA}/session_id~{021CB913-CD4D-49C4-9B32-4240CC8BA93C}/content/index.asp. -- William E. Kempf
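A hedged sketch of the "mutex type as a template parameter" pattern mentioned above; null_mutex is an illustrative name here, not an existing Boost.Threads class:

#include <boost/thread/mutex.hpp>

struct null_mutex
{
    struct scoped_lock
    {
        explicit scoped_lock(null_mutex&) {}   // no-op "lock"
    };
};

template <typename Mutex = boost::mutex>
class counter
{
public:
    counter() : m_value(0) {}

    void increment()
    {
        typename Mutex::scoped_lock lock(m_mutex);   // callee-side locking...
        ++m_value;
    }

private:
    Mutex m_mutex;   // ...which costs nothing when Mutex is null_mutex
    long m_value;
};

// counter<> shared_counter;            // internally synchronized
// counter<null_mutex> private_counter; // no locking overhead for single-threaded use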

Christopher Currie wrote:
This question sparked another "usage convention" question in my memory that came up around the office lately. I've been slowly introducing the members of my development team to a subset of Boost.Threads and trying to get them to use RAII thinking when implementing their locks.
There seem to be two schools of thought on how to synchronize access to a class. One school, primarily from the developers who have experience in Java, tends to write classes that do their own mutex locking: snipped... The Callee lockers believe that it is more important for the class to implement the locking, because the class knows what data it has that requires synchronized access, and that putting the locking inside the class means that you only have to get it right once. The Caller lockers argue that adding locks to all your class functions introduces overhead that is unnecessary if you never share instances between threads, and that making the locking explicit in the caller makes it somewhat easier to prevent, or at least track down, deadlock errors.
Is there a community consensus about this? I can imagine that there are cases where either might be appropriate, but I'd like to hear peoples experiences with both implementations.
I think the Callee approach is the right one if the class might need a lock. However, I would combine this with an approach which allows the Caller to turn the Callee's internal locking ability on or off. It does seem silly to require the Caller to implement the necessary locking each time when the Callee can implement it once and allow a simple function call to turn it on or off.

OTOH, as a Callee, I would need to be very sure that the usage of my class really needs locking in a multi-threaded environment before I implemented the necessary locking as an option of my class. And of course I would use whatever preprocessor information I could gather to make sure that I do not include my internal locking when my class's functionality is compiled for a single-threaded environment.
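A sketch of the "let the caller switch the callee's locking on or off" idea; this is purely illustrative, not an established Boost.Threads facility, with the flag fixed at construction for simplicity:

#include <boost/thread/mutex.hpp>

class widget
{
public:
    explicit widget(bool synchronize) : m_synchronize(synchronize) {}

    void update()
    {
        if (m_synchronize)
        {
            boost::mutex::scoped_lock lock(m_mutex);   // callee-side locking, opt-in
            do_update();
        }
        else
        {
            do_update();   // caller promised single-threaded use
        }
    }

private:
    void do_update() { /* ... */ }

    const bool m_synchronize;
    boost::mutex m_mutex;
};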
participants (6)
- Alexander Terekhov
- Christopher Currie
- Edward Diener
- Mark Sizer
- Ryan Barrett
- William E. Kempf