Locking mechanisms other than scoped_lock
Hello,

I'm currently trying to achieve the following: a series of threads handle jobs (threads A). At the end of each job, a statistics object is placed in a container. This container is later read by another thread (thread B), which is dedicated to continuously reading statistics data from the other threads and parsing those statistics.

Now, since it is quite possible that a thread handles at least 1,000 jobs before the statistics data is actually read, I want it to hold a mutex lock on the data at all times. The class also has an internal volatile boolean which flags whether another thread (thread B) wants to read from the container. If so, the writing thread (threads A) releases the lock, after which thread B will pick up the lock, read out the data and reset the boolean flag. After releasing the lock, threads A will immediately try to re-acquire it.

However, as far as I can see, a scoped_lock will not be the most fantastic solution for this approach. Or am I missing something really crucial about a scoped_lock being able to hold a lock outside a certain scope? :)

Thanks in advance for any suggestions about a locking type... :)

Regards,

Leon Mergen
http://www.solatis.com/
Leon Mergen wrote:
However, as far as I can see, a scoped_lock will not be the most fantastic solution for this approach. Or am I missing something really crucial about a scoped_lock being able to hold a lock outside a certain scope? :)
You can always allocate it on the heap and hold it in a smart pointer.

--
Dave Abrahams
Boost Consulting
www.boost-consulting.com
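For illustration, here is a minimal sketch of that suggestion, assuming Boost.Thread's boost::mutex with its scoped_lock held through a boost::shared_ptr; the StatsWriter class and its member names are hypothetical and not part of any of the posts.

#include <boost/thread/mutex.hpp>
#include <boost/shared_ptr.hpp>

// Hypothetical helper for the writer side: the lock's lifetime is tied to
// a smart pointer instead of a lexical scope.
class StatsWriter
{
public:
    explicit StatsWriter(boost::mutex& m)
        : mutex_(m)
        , lock_(new boost::mutex::scoped_lock(m))   // acquire and keep the lock
    {
    }

    void release_lock()
    {
        lock_.reset();   // destroys the scoped_lock, unlocking the mutex
    }

    void reacquire_lock()
    {
        lock_.reset(new boost::mutex::scoped_lock(mutex_));   // blocks until locked again
    }

private:
    boost::mutex& mutex_;
    boost::shared_ptr<boost::mutex::scoped_lock> lock_;
};

Resetting the pointer releases the mutex, and re-seating it re-acquires, so the lock can outlive any particular scope while still being released automatically when the StatsWriter is destroyed.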
Leon Mergen wrote:
Hello,
I'm currently trying to achieve the following:
A series of threads handle jobs (threads A). At the end of each job, a statistics object is placed in a container. This container is later read by another thread (thread B), which is dedicated to continuously reading statistics data from the other threads and parsing those statistics.
Now, since it is quite possible that a thread handles at least 1,000 jobs before the statistics data is actually read, I want it to hold a mutex lock on the data at all times.
How long do these jobs take? Unless they take less than 100,000 cycles (that's about 50 microseconds on a current processor) I would be surprised if the use of a mutex when adding a statistics object would slow things down much, given that you seem to be saying the mutex will generally be uncontended.
The class also has an internal volatile boolean, which flags whether another thread (thread B) wants to read from the container. If so, the writing thread (threads A) releases the lock,
When does the writing thread poll this flag?
after which thread B will pick up the lock, read out the data and reset the boolean flag. After releasing the lock, threads A will immediately try to re-acquire it. <snip>
You might find that this works on your development machine, but it will likely fail elsewhere. There is no guarantee that releasing a mutex that's blocking another thread will wake that other thread.

Ben.
Ben Hutchings wrote:
after which thread B will pick up the lock, read out the data and reset the boolean flag. After releasing the lock, threads A will immediately try to re-acquire it. <snip>
You might find that this works on your development machine, but it will likely fail elsewhere. There is no guarantee that releasing a mutex that's blocking another thread will wake that other thread.
This is an interesting question. The situation is that thread B waits on a mutex and thread A first releases the mutex and then locks it again. POSIX says that:

If there are threads blocked on the mutex object referenced by mutex when pthread_mutex_unlock() is called, resulting in the mutex becoming available, the scheduling policy is used to determine which thread shall acquire the mutex.

I interpret this as saying that if B is waiting on the lock when A releases it, B will acquire the lock, as it's the only thread waiting on it. Am I wrong?

On the other hand, I don't understand why the flag is needed. Obviously, there are some performance concerns which are not made clear in the original post.

- Volodya
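For concreteness, a minimal sketch of the situation under discussion, assuming Boost.Thread's boost::mutex and the lock()/unlock() members of its scoped_lock; the writer_loop/reader_loop functions and the int payload are purely illustrative.

#include <boost/thread/mutex.hpp>
#include <vector>

boost::mutex stats_mutex;   // protects 'stats'
std::vector<int> stats;     // stand-in for the statistics container

// Thread A: holds the mutex while working and only briefly releases it
// after each job, giving thread B a chance to grab it.
void writer_loop()
{
    boost::mutex::scoped_lock lk(stats_mutex);   // held across the whole batch
    for (;;)
    {
        stats.push_back(42);   // "job result" placeholder
        lk.unlock();           // brief window for thread B
        lk.lock();             // immediately re-acquire
    }
}

// Thread B: simply blocks until it manages to acquire the mutex.
void reader_loop()
{
    boost::mutex::scoped_lock lk(stats_mutex);   // may block for a long time
    // ... read and clear 'stats' here ...
}

Whether thread B actually acquires the mutex during that brief window between unlock() and lock() is exactly the scheduling-policy question raised above.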
Vladimir Prus wrote:
Ben Hutchings wrote:
after which thread B will pick up the lock, read out the data and reset the boolean flag. After releasing the lock, threads A will immediately try to re-acquire it.
<snip>
You might find that this works on your development machine, but it will likely fail elsewhere. There is no guarantee that releasing a mutex that's blocking another thread will wake that other thread.
This is an interesting question. The situation is that thread B waits on a mutex and thread A first releases the mutex and then locks it again. POSIX says that:
If there are threads blocked on the mutex object referenced by mutex when pthread_mutex_unlock() is called, resulting in the mutex becoming available, the scheduling policy is used to determine which thread shall acquire the mutex.
I interpret this as saying that if B is waiting on the lock when A releases it, B will acquire the lock, as it's the only thread waiting on it. Am I wrong? <snip>
I think you are right, but Boost.Threads isn't just a wrapper for pthreads.

Also, it strikes me now that there is also no guarantee that thread B is blocked by the time thread A unlocks the mutex; it could be descheduled immediately after it sets the volatile flag, allowing A to unlock and relock straight away. I don't expect the results of this to be disastrous, as A will presumably give B another chance the next time it polls the flag, but it's not a very reliable means of communication.

Ben.
Ben Hutchings wrote:
Vladimir Prus wrote:
Ben Hutchings wrote:
after which thread B will pick up the lock, read out the data and reset the boolean flag. After releasing the lock, threads A will immediately try to re-acquire it.
<snip>
You might find that this works on your development machine, but it will likely fail elsewhere. There is no guarantee that releasing a mutex that's blocking another thread will wake that other thread.
This is an interesting question. The situation is that thread B waits on a mutex and thread A first releases the mutex and then locks it again. POSIX says that:
If there are threads blocked on the mutex object referenced by mutex when pthread_mutex_unlock() is called, resulting in the mutex becoming available, the scheduling policy is used to determine which thread shall acquire the mutex.
I interpret this as saying that if B is waiting on the lock when A releases it, B will acquire the lock, as it's the only thread waiting on it. Am I wrong? <snip>
I think you are right, but Boost.Threads isn't just a wrapper for pthreads.
I wonder if Windows semantics differ... the docs for ReleaseMutex are not very clear on this.
Also, it strikes me now that there is also no guarantee that thread B is blocked by the time thread A unlocks the mutex; it could be descheduled immediately after it sets the volatile flag, allowing A to unlock and relock straight away.
Yes, that's what I thought too.
I don't expect the results of this to be disastrous, as A will presumably give B another chance the next time it polls the flag, but it's not a very reliable means of communication.
I interpret the intentions like this:

1. The simplest way is for thread A to push data after each job is done, and for thread B to get the data whenever it wants to, using a mutex to protect the data (a minimal sketch of this follows below).

2. Another approach is for thread A to always hold the mutex and briefly unlock it after each job is done. Thread B will wait on the mutex, and eventually A will unlock it while B is waiting, so B will acquire the mutex and do its job.

3. An even more complex approach is to make A unlock the mutex only if some variable is set. In that case, the situation where B sets the flag, A releases and re-locks the mutex, and then B tries to lock the mutex is harmless -- B will get the mutex on the next cycle. However, this is a rather complicated design!

- Volodya
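As a concrete reference for approach 1, here is a minimal sketch assuming Boost.Thread's boost::mutex; the StatisticsQueue class, its member names and the int payload are illustrative placeholders, not anything taken from the posts.

#include <boost/thread/mutex.hpp>
#include <vector>

// Hypothetical container shared between the job threads (A) and the
// statistics thread (B); every access is a short critical section.
class StatisticsQueue
{
public:
    // Called by threads A after each job.
    void push(int stat)
    {
        boost::mutex::scoped_lock lk(mutex_);
        items_.push_back(stat);
    }

    // Called by thread B whenever it wants the accumulated data.
    std::vector<int> take_all()
    {
        boost::mutex::scoped_lock lk(mutex_);
        std::vector<int> result;
        result.swap(items_);   // grab everything, leave the queue empty
        return result;
    }

private:
    boost::mutex mutex_;
    std::vector<int> items_;
};

Each critical section is just a push or a swap, so contention should stay negligible even with thousands of jobs between reads.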
On Mar 19, 2005, at 1:29 PM, Leon Mergen wrote:
Hello,
I'm currently trying to achieve the following:
A series of threads handle jobs (threads A). At the end of each job, a statistics object is placed in a container. This container is later read by another thread (thread B), which is dedicated to continuously reading statistics data from the other threads and parsing those statistics.
Now, since it is quite possible that a thread handles at least 1,000 jobs before the statistics data is actually read, I want it to hold a mutex lock on the data at all times. The class also has an internal volatile boolean which flags whether another thread (thread B) wants to read from the container. If so, the writing thread (threads A) releases the lock, after which thread B will pick up the lock, read out the data and reset the boolean flag. After releasing the lock, threads A will immediately try to re-acquire it.
However, as far as I can see, a scoped_lock will not be the most fantastic solution for this approach. Or am I missing something really crucial about a scoped_lock being able to hold a lock outside a certain scope? :)
Thanks in advance for any suggestions about a locking type... :)
This sounds like a good application for condition variables: http://www.boost.org/doc/html/condition.html

-Howard
Howard Hinnant wrote:
On Mar 19, 2005, at 1:29 PM, Leon Mergen wrote:
[...]
Now, since it is quite possible that a thread handles at least 1,000 jobs before the statistics data is actually read, I want it to hold a mutex lock on the data at all times. [...]
This sounds like a good application for condition variables:
Yes. Mutexes should never be used to block a thread until some event occurs. Their purpose is to serialize access to a shared resource, and you should strive to make the section between the lock and the unlock as short as possible. The design of scoped_lock tries to point you in that direction. Condition variables are the proper primitive to make a thread wait for an event.
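A minimal sketch of the condition-variable approach being recommended, assuming the classic Boost.Thread interface (boost::mutex, boost::mutex::scoped_lock and boost::condition from <boost/thread/condition.hpp>); the report/consume functions and the shared stats vector are illustrative names only.

#include <boost/thread/mutex.hpp>
#include <boost/thread/condition.hpp>
#include <vector>

boost::mutex mtx;
boost::condition data_ready;   // signalled when new statistics are available
std::vector<int> stats;        // protected by mtx

// Threads A: lock only long enough to push one result, then notify the reader.
void report(int stat)
{
    boost::mutex::scoped_lock lk(mtx);
    stats.push_back(stat);
    data_ready.notify_one();
}

// Thread B: sleeps until there is something to read; no flag, no polling.
void consume()
{
    boost::mutex::scoped_lock lk(mtx);
    while (stats.empty())        // loop guards against spurious wakeups
        data_ready.wait(lk);     // atomically releases mtx and waits
    std::vector<int> batch;
    batch.swap(stats);
    lk.unlock();                 // parse outside the critical section
    // ... parse 'batch' here ...
}

Thread B sleeps inside wait() until notified, so the volatile flag and the long-held lock both become unnecessary.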
participants (6)
- Ben Hutchings
- David Abrahams
- Howard Hinnant
- Leon Mergen
- Peter Dimov
- Vladimir Prus