
On 09/11/2012 19:05, Brink, Robert (GE Aviation, US) wrote:
I'm running Boost 1.52 on Windows 7, Visual Studio 2010.
I noticed in sync\interprocess_mutex.hpp there is a windows mutex implementation that is hidden behind the BOOST_INTERPROCESS_FORCE_GENERIC_EMULATION macro. The comment indicates that it is "Experimental".
Could someone shed some light on what is Experimental about it and what the general plans for this feature are?
I was having a problem where many processes were all spin-locking while allocating shared memory, which I traced to the old mutex implementation for shared memory.
Which kind of problem? Spinlocks are quite simple, so unless it's a performance-related issue, I can't imagine what could be wrong with spinlock-based mutexes.
I enabled the experimental feature in my local build (by commenting out the above macro in the detail\workaround.hpp file) and it seems to fix my problems. My performance is much improved. But what are the risks of using this implementation? I'm a little nervous about depending on something marked as "Experimental".
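For anyone else wanting to try this, the toggle is roughly the following (a sketch based on the description above; the exact guard in detail/workaround.hpp may differ between Boost versions):

```cpp
// In boost/interprocess/detail/workaround.hpp:
// while this macro is defined, Interprocess uses the generic spinlock-based
// emulation on Windows; commenting it out enables the experimental
// native-Windows synchronization implementation.
#define BOOST_INTERPROCESS_FORCE_GENERIC_EMULATION
```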
A better Windows implementation was needed, but it's not easy to write. I don't know of any Windows emulation of PTHREAD_PROCESS_SHARED; Cygwin does not even try to emulate it.

The implementation is not optimized, I had no time to test it thoroughly, and in some aspects it's a bit tricky (each process stores the already created synchronization handles in a header-only singleton implementation that stores the synchronization map into the current and maximum counts of a named semaphore!). It also creates Windows named resources (named mutexes and semaphores) on the fly, and I don't know if we'll hit some process- or kernel-related limits with this approach.

If you and some Interprocess users can test it and give some feedback, we could make it the default in a future Boost release, but this will create a binary incompatibility on Windows systems, so we must be sure this implementation is better than the old one. Before we break the ABI, we need to make sure performance is correct. The current implementation is suboptimal in many aspects (a hash and a map lookup to obtain the handle of the primitive is overkill IMHO, and performance might worsen as the number of synchronization primitives grows). We may need to write a spinlock-based fast path and use named synchronization primitives only in case of contention.

I'm glad to hear it worked for you, but I can't guarantee that the implementation is well tested, and the ABI will surely change. Any feedback you could give me would be very valuable.

Best,

Ion