
Guys, could you please clarify whether the following usage of weak_ptr is thread safe:

// thread A:
shared_ptr<Foo> p = wp.lock();

// thread B:
p.reset();

I believe it's not thread safe, as it looks very similar to example 3 from the shared_ptr documentation:

// thread A
p = p3; // reads p3, writes p

// thread B
p3.reset(); // writes p3; undefined, simultaneous read/write

Am I right in my assumption? And if so, could you please give some recommendations for the following use case. I have a networking thread which owns a set of net session objects stored as shared_ptrs. I also have another thread where some entities have weak_ptr members pointing to the net session objects of the networking thread. From time to time net sessions are erased, and I think that when an entity object in the other thread tries to lock() and access a net session at that very moment, a segfault happens.

Thanks in advance.

--
Best regards, Pavel
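P.S. Schematically the setup looks like this (a minimal sketch; NetSession and Entity are made-up names, and only the networking thread touches the sessions container):

#include <boost/shared_ptr.hpp>
#include <boost/weak_ptr.hpp>
#include <set>

struct NetSession
{
    void process() {}
};

// Networking thread: sole owner of the sessions.
std::set< boost::shared_ptr<NetSession> > sessions;

// Entity living in another thread: observes one session without owning it.
struct Entity
{
    boost::weak_ptr<NetSession> session;

    void poll()
    {
        // lock() returns an empty shared_ptr if the session has already
        // been erased by the networking thread, so the result is checked
        // before use.
        boost::shared_ptr<NetSession> s = session.lock();
        if (s)
            s->process();
        // otherwise the session is gone and we simply skip it
    }
};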

On Wednesday 22 July 2009, Pavel Shevaev wrote:
It's safe, presuming your "p" in thread A is a different shared_ptr object from the "p" reset in thread B. They can safely be copies sharing ownership of the same object.
In that unsafe case, threads A and B are manipulating the same shared_ptr object.
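Schematically, the difference is the following (a fragment for illustration only; the function names are made up, and the unsafe pair must not actually be run concurrently):

#include <boost/shared_ptr.hpp>
#include <boost/weak_ptr.hpp>

struct Foo {};

boost::shared_ptr<Foo> p(new Foo);  // owned by thread B
boost::weak_ptr<Foo>   wp(p);       // observed by thread A

// Unsafe: both threads touch the *same* shared_ptr object p.
void threadA_unsafe() { boost::shared_ptr<Foo> a = p; }  // reads p
void threadB_unsafe() { p.reset(); }                     // writes p -> data race

// Safe: each thread uses its own object; wp and p only share the same use count.
void threadA_safe() { boost::shared_ptr<Foo> a = wp.lock(); }  // reads wp only
void threadB_safe() { p.reset(); }                             // writes p only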

Oh, sorry for the confusing example, it should be clearer:

// thread A:
shared_ptr<Foo> a = wp.lock(); // wp is a weak pointer to the object owned by p in thread B

// thread B:
p.reset();

What I'm actually asking is whether it's thread safe to call wp.lock() in one thread and p.reset() in another one simultaneously.

--
Best regards, Pavel

Is Http Server 3 a good approach for this?

http://www.boost.org/doc/libs/1_39_0/doc/html/boost_asio/examples.html#boost...

I've got a server that listens on a socket. It receives requests, does CPU-bound work, then writes out a response. So I'd like it to handle N requests at a time, where N is the number of cores. HTTP Server 3 looks like it does this, but a few questions:

- Is this the best approach?
- What (practically speaking) will happen to HTTP Server 3 if all of its threads are busy processing requests and, say, 10 requests arrive? Will they fail? Will the OS queue them up?

- Alex
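P.S. For reference, as far as I understand it the threading part of HTTP Server 3 boils down to roughly the following shape (a simplified sketch based on the example's pattern; run_service is just my helper wrapper, and the hardware_concurrency() fallback is mine too):

#include <boost/asio.hpp>
#include <boost/bind.hpp>
#include <boost/thread.hpp>

// Plain wrapper so boost::bind doesn't need to pick an io_service::run overload.
void run_service(boost::asio::io_service* io)
{
    io->run();
}

int main()
{
    boost::asio::io_service io_service;

    // ... set up the acceptor and start the first async_accept here, as the
    // example server does; the pending accept keeps run() from returning ...

    // One worker thread per core; every thread calls run() on the same
    // io_service, so up to N handlers (requests) execute concurrently.
    unsigned n = boost::thread::hardware_concurrency();
    if (n == 0)
        n = 2;  // hardware_concurrency() may return 0 when it can't tell

    boost::thread_group pool;
    for (unsigned i = 0; i < n; ++i)
        pool.create_thread(boost::bind(&run_service, &io_service));

    pool.join_all();
    return 0;
}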
participants (4)
- Alex Black
- Frank Mori Hess
- Pavel Shevaev
- Peter Dimov