
Hello everyone, The review of the proposed Boost Shmem library, written by Ion Gaztañaga, starts today (February 6th, 2006) and ends after 10 days (February 16, 2006).
Documentation for the library can be found online here: http://tinyurl.com/e2os3
Both the documentation and the library itself can be downloaded here: http://tinyurl.com/9t5qy
---------------------------------------------------
About the library:
The Shmem library simplifies the use of shared memory and provides STL compatible allocators that allow STL containers to be placed in shared memory. It also offers helper classes such as offset pointers, process shared mutexes, condition variables, and named semaphores. Library users can apply all STL algorithms and utilities to objects created in shared memory. Shmem also aims to be a portable implementation, unifying UNIX and Windows shared memory object creation behind simple wrapper classes.
Shmem offers the following to the user:
- Portable synchronization primitives for shared memory, including shared mutexes and shared condition variables.
- Dynamic allocation of portions of a shared memory segment.
- Named allocation in shared memory. Shmem can create objects in shared memory and associate them with a C string, offering a mechanism similar to a named new/delete. Created objects can be found by other processes using the Shmem framework.
- Offset pointers that can be safely placed in shared memory to point to another object in the same shared memory segment, even if the memory is mapped to a different address.
- STL compatible shared memory allocators so that STL containers can be placed in shared memory. Shmem also offers a pooled node allocator to save and optimize shared memory allocation in node containers.
- STL compatible containers for systems whose STL implementations cannot deal with shared memory. The user can map shared memory to different addresses and still use STL compatible containers and algorithms. This is useful for storing common data or implementing shared memory databases. These containers also show how STL compatible containers can use shared memory STL compatible allocators to be placed in shared memory. Shmem also offers the basic_string pseudo-container so that full-powered C++ strings can be used in shared memory.
---------------------------------------------------
Please always state in your review whether you think the library should be accepted as a Boost library!
Additionally, please consider giving feedback on the following general topics:
- What is your evaluation of the design?
- What is your evaluation of the implementation?
- What is your evaluation of the documentation?
- What is your evaluation of the potential usefulness of the library?
- Did you try to use the library? With what compiler? Did you have any problems?
- How much effort did you put into your evaluation? A glance? A quick reading? In-depth study?
- Are you knowledgeable about the problem domain?
Fred Bertsch
Review Manager

Documentation for the library can be found online here: http://tinyurl.com/e2os3
This is a mini-review based on a quick read of the documentation. Generally I thought the documentation was good and thorough.
1. I wanted to see at least a paragraph describing what kinds of problems shared memory solves. I assume it is for inter-process communication on the same machine. What are the pros/cons compared to using TCP/IP sockets, and pipes? Sockets have the advantage that they work beyond the local machine, but presumably shared memory is quicker: I wanted to see a number to say just how much quicker.
2. "Performance of Shmem" is better called "Optimization Options". http://ice.prohosting.com/newfunk/boost/libs/shmem/doc/html/shmem/performanc...
3. In the quick guide (http://ice.prohosting.com/newfunk/boost/libs/shmem/doc/html/shmem/quick_guid...):
3a. in the first example code you have "(void)offset;" on a line by itself. Is this a typo? If not it is unusual so needs more explanation. The same for "(void)msg;" in the second example.
3b. I'd have liked to see a full example showing the interprocess communication. Or at least expand out "//Copy message to buffer" to show writing a string into the buffer.
4. I didn't understand the first sentence in the limitations.html page: "Shmem wants to be portable across multiple operating systems so that it can not count with an operating systems that maps shared memory to the same base address in all processes in the system" I think I'm stuck on "can not count with".
Please always state in your review, whether you think the library should be accepted as a Boost library!
A "low-weight" yes.
- What is your evaluation of the documentation?
It looks professional.
- What is your evaluation of the potential usefulness of the library?
I don't know. For applications I can think of I would instead choose to use sockets, so I can easily move part of an application to another machine, or across a cluster. Also I can easily write part of an application in a different language.
- How much effort did you put into your evaluation? A glance? A quick reading? In-depth study?
About 30 minutes reading the docs.
- Are you knowledgeable about the problem domain?
No. Darren

Hi Darren.
"Shmem wants to be portable across multiple operating systems so that it can not count with an operating systems that maps shared memory to the same base address in all processes in the system"
I think I'm stuck on "can not count with".
I believe the OP meant 'can not count on the OS mapping a shared memory segment to the same base memory address in all processes in the system that connect to it'. Some OSes (like HP-UX) do this, while others (like Windows) do not (at least not by default), so it is not something a portable library can rely on. Hope this helps. Best regards, Jurko Gospodnetic

Hi Darren,
Generally I thought the documentation was good and thorough.
Thanks!
1. I wanted to see at least a paragraph describing what kinds of problems shared memory solves. I assume it is for inter-process communication on the same machine. What are the pros/cons compared to using TCP/IP sockets, and pipes? Sockets have the advantage that they work beyond the local machine, but presumably shared memory is quicker: I wanted to see a number to say just how much quicker.
At least UNIX sockets/pipes are built using shared memory, so it should be fast! Think about a database in shared memory: it will always be faster to do a lookup directly than to send a message to an application and wait for a response. With sockets, you must first serialize the data to a buffer, the OS copies that to internal memory and after that to another process' memory, and the other process must deserialize it. With shared memory there is just one copy. Apart from that, shared memory is used to construct more advanced IPC mechanisms, like message queues/pipes and UNIX domain sockets, so it's a basic building block for more advanced mechanisms. Anyway, I have no problem adding a paragraph describing this.
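[As a concrete illustration of the single-copy argument (this is plain POSIX, not the Shmem interface, with error handling omitted): the writer stores a struct directly into a mapped segment, and a reader that maps the same name sees the fields with no serialize/copy/deserialize step.]
#include <fcntl.h>      // shm_open flags
#include <sys/mman.h>   // shm_open, mmap
#include <unistd.h>     // ftruncate

struct Record { int id; double value; };

int main()
{
    // Writer side; error handling omitted for brevity.
    int fd = shm_open("/demo_segment", O_CREAT | O_RDWR, 0600);
    ftruncate(fd, sizeof(Record));
    void *addr = mmap(0, sizeof(Record), PROT_READ | PROT_WRITE, MAP_SHARED, fd, 0);
    Record *r = static_cast<Record*>(addr);
    r->id = 1;
    r->value = 3.14;   // a reader mapping "/demo_segment" reads these fields
                       // directly from the same pages: just one copy exists
    return 0;
}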
2. "Performance of Shmem" is better called "Optimization Options". http://ice.prohosting.com/newfunk/boost/libs/shmem/doc/html/shmem/performanc...
Yes, it's a better name. Performance should talk about numbers, not about where you should optimize. Thanks.
3. In the quick guide (http://ice.prohosting.com/newfunk/boost/libs/shmem/doc/html/shmem/quick_guid...):
3a. in the first example code you have "(void)offset;" on a line by itself. Is this a typo? If not it is unusual so needs more explanation. The same for "(void)msg;" in the second example.
Sorry, it's a copy-paste from a compilable example, where I use that to avoid "unused variable" warnings. I will remove it. Thanks.
3b. I'd have liked to see a full example showing the interprocess communication. Or at least expand out "//Copy message to buffer" to show writing a string into the buffer.
The problem is that there is no standard C++ IPC mechanism. I could use the boost::shmem::shared_message_queue to send the pointer and also introduce this class.
4. I didn't understand the first sentence in the limitations.html page:
"Shmem wants to be portable across multiple operating systems so that it can not count with an operating systems that maps shared memory to the same base address in all processes in the system"
I think I'm stuck on "can not count with".
Sorry about my bad English. As other boosters have explained, it tries to say, in Jurko's words: 'can not count on the OS mapping a shared memory segment to the same base memory address in all processes in the system that connect to it'.
- What is your evaluation of the potential usefulness of the library?
I don't know. For applications I can think of I would instead choose to use sockets, so I can easily move part of an application to another machine, or across a cluster. Also I can easily write part of an application in a different language.
Shared memory databases, and construction of advanced IPC mechanisms (pipes, message queues...) with lower overhead. Thanks and regards, Ion

I'll be posting a more complete review later on, but I thought I'd post a few preliminary questions early.
First, there tend to be create and open member functions on a lot of the objects in the library. This forced Ion to create partially constructed objects with the constructor, but it allows the use of return values instead of exceptions for error handling. I personally do not like partially constructed objects, and I do not mind exceptions that are thrown if there's a serious problem detected. I'm curious what the rationale was for that decision. Do the create/open functions fail frequently in typical use cases?
Second, there isn't as much type safety as there could be in a lot of these classes. For example, shared_message_queue does not have a template parameter to determine what is stored in the queue. Instead, its send and receive member functions take void*'s. Is there a good reason for this? I suppose another process could use the same shared_message_queue with another type, but I'd really like to see some type safety within the same process. -Fred

Fred Bertsch wrote:
Second, there isn't as much type safety as there could be in a lot of these classes. For example, shared_message_queue does not have a template parameter to determine what is stored in the queue. Instead, its send and receive member functions take void*'s. Is there a good reason for this? I suppose another process could use the same shared_message_queue with another type, but I'd really like to see some type safety within the same process.
Moving a reinterpret_cast from user code, where it's visible, to the queue implementation, where it's hidden, decreases type safety instead of increasing it. A typed interface is only meaningful if the queue enforces type safety, perhaps by encoding the type (and ideally the compiler version) somehow.

On 2/7/06, Peter Dimov <pdimov@mmltd.net> wrote:
Fred Bertsch wrote:
Second, there isn't as much type safety as there could be in a lot of these classes. For example, shared_message_queue does not have a template parameter to determine what is stored in the queue. Instead, its send and receive member functions take void*'s. Is there a good reason for this? I suppose another process could use the same shared_message_queue with another type, but I'd really like to see some type safety within the same process.
Moving a reinterpret_cast from user code, where it's visible, to the queue implementation, where it's hidden, decreases type safety instead of increasing it. A typed interface is only meaningful if the queue enforces type safety, perhaps by encoding the type (and ideally the compiler version) somehow.
I will admit that the queue will have trouble enforcing type safety across process boundaries. However, the queue can certainly enforce type safety *within* one process. It can make sure, for example, that exactly one type is written to a particular queue and exactly one type is read from the queue. Guaranteeing that the same type is written and read is more of a problem. I'll admit that it would certainly be better if it could write out a type_info::name or something when the queue is created or opened, but it's hard to imagine how that could work without putting in workarounds for each supported compiler. Actually, you'd have to write out compiler switches as well as the compiler version if you wanted it to be even slightly safe. I don't think that's worth doing. -Fred

On 2/7/06, Fred Bertsch <fred.bertsch@gmail.com> wrote:
Second, there isn't as much type safety as there could be in a lot of these classes. For example, shared_message_queue does not have a template parameter to determine what is stored in the queue. Instead, its send and receive member functions take void*'s. Is there a good reason for this? I suppose another process could use the same shared_message_queue with another type, but I'd really like to see some type safety within the same process.
After rereading the documentation and reading the code, I think I was confused when I wrote this. It appears that the shared_message_queue cannot be used for objects created with the named_shared_object::construct member function. A memcpy is done on the buffer passed into send, so objects cannot be sent. Thus, a void* is correct. I do think that the documentation should be improved on that. I wasn't sure until I dug into the source code. If there are other places in the shmem library that don't support objects, I'd like to see those documented as well. Shmem is the first library I've seen that tries to support passing a C++ object through shared memory to another process. It seems like a really good idea, and I'd be happier if it were supported throughout the library. Maybe the shared_message_queue should support only offset_ptr's to objects that are created elsewhere in the named_shared_object?

Hi Fred,
After rereading the documentation and reading the code, I think I was confused when I wrote this. It appears that the shared_message_queue cannot be used for objects created with the named_shared_object::construct member function. A memcpy is done on the buffer passed into send, so objects cannot be sent. Thus, a void* is correct.
I do think that the documentation should be improved on that. I wasn't sure until I dug into the source code. If there are other places in the shmem library that don't support objects, I'd like to see those documented as well.
Surely the documentation can be improved to show that shared_message_queue is a plain, byte-copying message queue between processes, just like localhost UDP socket messaging. The meaning of the bytes should be agreed between applications; the queue just forwards byte packets. The queue is an example of a higher level IPC mechanism built using Shmem primitives (shared memory, shared memory conditions and mutexes). You can pass structured data using the shared message queue, just as the example in that section shows. You can build all your objects in a user buffer and byte-serialize them through the message queue. This is different from building them in shared memory, but as with shared memory, both processes must be ABI-compatible.
Shmem is the first library I've seen that tries to support passing a C++ object through shared memory to another process. It seems like a really good idea, and I'd be happier if it were supported throughout the library. Maybe the shared_message_queue should support only offset_ptr's to objects that are created elsewhere in the named_shared_object?
I think that a message queue like this should not care about the contained bytes. If you need a queue of objects of the same type, it's clear that you can build an STL-like object queue on top of shared_message_queue doing your own casts. If that is a need, I'm ready to implement a "shared_object_queue". But these passed objects must be self-contained; I mean, you can't pass a boost::shmem::vector<> because I would need to know how to byte-serialize it. However, you can use named_heap_object, construct all your complex data in a single buffer, and byte-copy it to another process to pass all the information. Regards, Ion
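[To make the "self-contained, byte-copyable objects only" restriction concrete, here is a rough sketch of the typed layer Ion says users can build on top of shared_message_queue. The send/receive signatures are my assumption (the thread only says they take void*), so treat this purely as an illustration.]
#include <cstddef>
#include <boost/static_assert.hpp>
#include <boost/type_traits/is_pod.hpp>

// Hypothetical typed layer over a byte-copying queue.
template<class T, class ByteQueue>
class typed_queue
{
    // Only self-contained, byte-copyable data is safe, because the queue
    // memcpy's the buffer between processes.
    BOOST_STATIC_ASSERT(boost::is_pod<T>::value);
    ByteQueue &m_q;
public:
    explicit typed_queue(ByteQueue &q) : m_q(q) {}
    // Assumed underlying signatures: send(const void*, std::size_t) and
    // receive(void*, std::size_t), both returning bool.
    bool send(const T &msg)   { return m_q.send(&msg, sizeof(T)); }
    bool receive(T &msg)      { return m_q.receive(&msg, sizeof(T)); }
};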

Hi fred,
First, there tend to be create and open member functions on a lot of the objects in the library. This forced Ion to create partially constructed objects with the constructor, but it allows the use of return values instead of exceptions for error handling. I personally do not like partially constructed objects, and I do not mind exceptions that are thrown if there's a serious problem detected. I'm curious what the rationale was for that decision. Do the create/open functions fail frequently in typical use cases?
Can you elaborate a bit on this? I can't understand well what you are trying to point out. Thanks Ion

This is a fairly minor point, in my opinion, and I don't think I can say which way is better for shmem, but I'll try to explain it a bit better. For example, named_shared_object can only be created through its default constructor. Once created this way, you probably can't actually use your named_shared_object. Instead, you can complete the construction of the object by using open, create, or one of the other methods that "construct" the object. An error is indicated by the return value of these member functions. I don't actually know the name of this technique, and I suspect my point would be clearer if I knew its name. The alternative technique is to have several constructors or static member functions in named_shared_object. These would create an opened or created named_shared_object if nothing bad happened, and they would throw an exception if an error came up. I mostly brought this up because few things in boost use the first technique, so someone might have a strong opinion about it. -Fred On 2/7/06, Ion Gaztañaga <igaztanaga@gmail.com> wrote:
Hi fred,
First, there tend to be create and open member functions on a lot of the objects in the library. This forced Ion to create partially constructed objects with the constructor, but it allows the use of return values instead of exceptions for error handling. I personally do not like partially constructed objects, and I do not mind exceptions that are thrown if there's a serious problem detected. I'm curious what the rationale was for that decision. Do the create/open functions fail frequently in typical use cases?
Can you elaborate a bit on this? I can't understand well what you are trying to point out.
Thanks
Ion
-- F

For example, named_shared_object can only be created through its default constructor. Once created this way, you probably can't actually use your named_shared_object. Instead, you can complete the construction of the object by using open, create, or one of the other methods that "construct" the object. An error is indicated by the return value of these member functions. I don't actually know the name of this technique, and I suspect my point would be clearer if I knew its name.
The alternative technique is to have several constructors or static member functions in named_shared_object. These would create an opened or created named_shared_object if nothing bad happened, and they would throw an exception if an error came up.
Ok. I think I can provide both approaches so that those who don't want to (or can't, because of a restricted embedded environment) use exceptions have an alternative. Regards, Ion
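[A minimal sketch (hypothetical names, not the actual Shmem interface) of what offering both styles in one class might look like:]
#include <stdexcept>

// Hypothetical class showing both styles side by side: a throwing RAII
// constructor, and default construction followed by an open() that reports
// failure through its return value.
class shared_segment
{
    bool m_open;
public:
    shared_segment() : m_open(false) {}            // two-phase: construct empty...
    bool open(const char *name)                    // ...then open; no exceptions
    {
        m_open = (name != 0);                      // stub: real code would map the segment
        return m_open;
    }

    explicit shared_segment(const char *name)      // RAII: open in the constructor,
        : m_open(false)                            // throw if it fails
    {
        if (!open(name))
            throw std::runtime_error("shared_segment: open failed");
    }

    ~shared_segment() { /* unmap/close if m_open */ }
};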

Ion Gaztañaga <igaztanaga@gmail.com> writes:
Ok. I think I can provide both approaches so that those who don't want to (or can't, because of a restricted embedded environment) use exceptions have an alternative.
< C R I N G E > If you really *must* provide 2-phase construction, then please make it a special mode that can be enabled with an #ifdef. -- Dave Abrahams Boost Consulting www.boost-consulting.com

Ok. I think I can provide both approaches so that those who don't want to (or can't, because of a restricted embedded environment) use exceptions have an alternative.
< C R I N G E >
If you really *must* provide 2-phase construction, then please make it a special mode that can be enabled with an #ifdef.
I see you have strong opinions, like me. So, no ifdefs, please. I would like to point out that, as it's using the same model as Shmem, fstream from the standard library is also broken. If I can do:
std::fstream file;
//... some code
file.open(); //The destructor closes the file
Is this extremely harmful code? If that's ok, what's the difference with:
boost::shmem::shared_memory segment;
//... some code
segment.open(); //The destructor closes the segment
Is "std::fstream file;" a "half-baked", "zombie" object? If you have to use only the constructor to open the shared memory, I can't make boost::shmem::shared_memory a member of a class if the class has a different lifetime than the shared memory. Even if I have the same lifetime, maybe I have to do some validations/operations to obtain the parameters for the segment that can't be done (or would be really ugly and complicated to do) in the member initialization list. Using an RAII-only approach I have to dynamically allocate it. And to open another segment I can't reuse the object; I have to destroy it and allocate a new one. A default "empty" state (which is not the same as "zombie", because the class is fully constructed) also allows move semantics between classes. If you can use only the constructor and the destructor to open/close the segment, you can't move the object to the target, since the source object can't be in a constructed state without owning the shared memory. I really don't like the "you should ONLY do this" approach. That's because what is cool now is demonized some years later. I want choice. I don't like the approach of boost::thread, where I have to create a new boost::thread object every time I want to launch a thread. And I'm not the only one. With boost::thread, I can't have member boost::thread objects in classes (or I have to use optional<>-like hacks). No silver bullet. No Good Thing (tm). If RAII is considered a good approach, I agree with you that I should provide it. But if fstream is good enough for the standard library, I think it can be a good model for Shmem without ifdefs. Apart from this little discussion, now that you are in the Shmem review thread, I would love it if you could find time to review it. I think that your C++ knowledge would be very valuable for a library that tries to put STL containers in shared memory and memory mapped files. Regards (and awaiting your review), Ion

Ion Gaztañaga wrote:
Ok. I think I can provide both approaches so that those who don't want to (or can't, because of a restricted embedded environment) use exceptions have an alternative.
< C R I N G E >
If you really *must* provide 2-phase construction, then please make it a special mode that can be enabled with an #ifdef.
I see you have strong opinions like me. So, no ifdefs, please. I would like to point out that as it's using the same model as Shmem, fstream from the standard library is also broken. If I can do:
std::fstream file;
//... some code
file.open(); //The destructor closes the file
Is this extremely harmful code? If that's ok, what's the difference with:
boost::shmem::shared_memory segment;
//... some code
segment.open(); //The destructor closes the segment
Is "std::fstream file;" a "half-baked", "zombie" object? If you only
In my opinion 'yes'. Look at the frequency of comp.lang.c++ posts related to problems of the stream being in an unanticipated state when attempting to re-open it. I don't think streams are one of the most exemplary of standard library classes.
have to use the constructor to open the shared memory I can't make boost::shmem::shared_memory a member of a class if the class has a different lifetime than the shared memory. Even if I have the same
Then use the accepted idiom of dynamically creating a new instance.
lifetime, maybe I have to do some validations/operations to obtain the parameters for the segment that can't be made (or would be really ugly and complicate to do) in the member initialization list. Using RAII
Can you show specific examples?
only approach I have to dynamically allocate it. And to open another segment I can't reuse the object, I have to destroy it and allocate a new one.
Can you show how that cost would be prohibitive versus open/close?
Default "empty" state (which is not the same as "zombie", because the class is fully constructed) allows also move semantics between classes. If you can only only the constructor and the destructor to open/close the segment you can't move the object to the target, since the source object can't be in a constructed state without owning the shared memory.
I'm not familiar enough with the requirements of move semantics to be able to judge that.
I really don't like the "you should ONLY do this" approach. That's because what now is cool, some years later is demonized. I want choice. I don't like the approach of boost::thread, so I have to create a new boost::thread object every time I want to launch a thread. And I'm not the only one. With boost::thread, I can't have member boost::thread objects in classes (or I have to use optional<> like hacks)
No silver bullet. No Good Thing (tm). If RAII is considered a good approach, I agree with you that I should provide it. But if fstream is good enough for the standard library, I think it can be a good model for Shmem without ifdefs.
I don't view RAII as a fad that someone thought was cool. RAII has been proven to avoid many problems that are pandemic with two-phase-construction. I've also seen that RAII reduces the amount of error checking code required to handle attempts to call functions while the object is not in a valid state. It also simplifies the interface and required documentation. As is, you would need to clearly document what combinations of conditions are required in order to be able to use every member function. Jeff Flinn

In my opinion 'yes'. Look at the frequency of comp.lang.c++ posts related to problems of the stream being in an unanticipated state when attempting to re-open it. I don't think streams are one of the most exemplary of standard library classes.
But that's because the flags are not reset after a close, not because of the two-phase initialization. If close() guarantees the default-constructed state, there are no such problems.
have to use the constructor to open the shared memory I can't make boost::shmem::shared_memory a member of a class if the class has a different lifetime than the shared memory. Even if I have the same
Then use the accepted idiom of dynamically creating a new instance.
Why? If I provide both RAII and default+open, you can choose which to use. I'm not saying that RAII can't be used. I'm saying that with Shmem you can use the method you want. Dynamic allocation also requires external management against exceptions (smart pointers or whatever). That is the same management you need with two-phase initialization (a final rollback/release) to get a rollback if an exception is thrown.
lifetime, maybe I have to do some validations/operations to obtain the parameters for the segment that can't be made (or would be really ugly and complicate to do) in the member initialization list. Using RAII
Can you show specific examples?
only approach I have to dynamically allocate it. And to open another segment I can't reuse the object, I have to destroy it and allocate a new one.
Can you show how that cost would be prohibitive versus open/close?
Clearly, for Shmem primitives it is not prohibitive. You don't map one million segments. But as a general "Good Thing" rule, as it's presented here, it's clear that dynamic allocation is a serious overhead (even worse if you use shared_ptr).
I don't view RAII as a fad that someone thought was cool. RAII has been proven to avoid many problems that are pandemic with two-phase-construction. I've also seen that RAII reduces the amount of error checking code required to handle attempts to call functions while the object is not in a valid state. It also simplifies the interface and required documentation. As is, you would need to clearly document what combinations of conditions are required in order to be able to use every member function.
I won't dispute that RAII is very convenient and useful. I use RAII very often and I try to avoid try/catch by using destructors to free resources. But this is like the problem of providing mutex locks ONLY through lockers. In some small software companies I know (mine included), exceptions are not recommended (allowed) in the code. That may be because of misinformation, but that's the reality. Even some programmers (I'm one of them) don't like the exception model at all. And I agree with you that I need more documentation for two-phase construction. Do you think that providing both methods will lead programmers to the "bad way"?

Ion Gaztañaga wrote:
I won't dispute that RAII is very convenient and useful. I use RAII very often and I try to avoid try/catch by using destructors to free resources. But this is like the problem of providing mutex locks ONLY through lockers. In some small software companies I know (mine included), exceptions are not recommended (allowed) in the code. That may be because of misinformation, but that's the reality. Even some programmers (I'm one of them) don't like the exception model at all. And I agree with you that I need more documentation for two-phase construction. Do you think that providing both methods will lead programmers to the "bad way"?
100% agreement from my side! Your current design is the only clean way to allow the user to choose whether to use exceptions or not. As said in my review, your documentation fails to mention this intent. RAII can be implemented in a matter of minutes in the form of a convenience wrapper. So I really don't get the point of this discussion. Just my €-,02. Regards, Tobias
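[For what it's worth, a sketch of the kind of convenience wrapper being discussed; all names are hypothetical, and it assumes the wrapped two-phase class exposes open()/close() with open() returning bool.]
#include <stdexcept>

// Turns a two-phase class (default constructor plus open()/close()) into an
// RAII, throwing one.
template<class TwoPhase, class OpenArg>
class raii_wrapper
{
    TwoPhase m_obj;
public:
    explicit raii_wrapper(const OpenArg &arg)
    {
        if (!m_obj.open(arg))
            throw std::runtime_error("raii_wrapper: open failed");
    }
    ~raii_wrapper() { m_obj.close(); }

    TwoPhase       &get()       { return m_obj; }
    const TwoPhase &get() const { return m_obj; }
};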

Tobias Schwinger <tschwinger@neoscientists.org> writes:
Ion Gaztañaga wrote:
I won't dispute that RAII is very convenient and useful. I use RAII very often and I try to avoid try/catch by using destructors to free resources. But this is like the problem of providing mutex locks ONLY through lockers. In some small software companies I know (mine included), exceptions are not recommended (allowed) in the code. That may be because of misinformation, but that's the reality. Even some programmers (I'm one of them) don't like the exception model at all. And I agree with you that I need more documentation for two-phase construction. Do you think that providing both methods will lead programmers to the "bad way"?
100% agreement from my side!
Your current design is the only clean way to allow the user to choose whether to use exceptions or not.
Really, the ONLY clean way? Are you confident you've imagined all possibilities? -- Dave Abrahams Boost Consulting www.boost-consulting.com

David Abrahams wrote:
Tobias Schwinger <tschwinger@neoscientists.org> writes:
Ion Gaztañaga wrote:
I won't dispute that RAII is very convenient and useful. I use RAII very often and I try to avoid try/catch by using destructors to free resources. But this is like the problem of providing mutex locks ONLY through lockers. In some small software companies I know (mine included), exceptions are not recommended (allowed) in the code. That may be because of misinformation, but that's the reality. Even some programmers (I'm one of them) don't like the exception model at all. And I agree with you that I need more documentation for two-phase construction. Do you think that providing both methods will lead programmers to the "bad way"?
100% agreement from my side!
Well, I don't generally dislike the exception model...
Your current design is the only clean way to allow the user to choose whether to use exceptions or not.
Really, the ONLY clean way? Are you confident you've imagined all possibilities?
;-) Granted, maybe not. s/only/a/ Still, your "#ifdef suggestion" seems much worse to me -- it might have been just provocative rhetoric rather than a serious proposal, I guess... Regards, Tobias

Tobias Schwinger <tschwinger@neoscientists.org> writes:
Still, your "#ifdef suggestion" seems much worse to me -- it might have been just provocative rhetoric rather than a serious proposal, I guess...
No, it was serious. But if you don't like #ifdefs there are two other proposals on the table that address the problem without using them. -- Dave Abrahams Boost Consulting www.boost-consulting.com

David Abrahams wrote:
Tobias Schwinger <tschwinger@neoscientists.org> writes:
Still, your "#ifdef suggestion" seems much worse to me -- it might have been just provocative rhetoric rather than a serious proposal, I guess...
No, it was serious. But if you don't like #ifdefs there are two other proposals on the table that address the problem without using them.
I have nothing against #ifdefs in general and it has nothing to do with personal taste: #ifdeffing (what a wonderful neologism) an interface means that client code that wants to work regardless of whether the macro is defined or not requires #ifdefs as well. Further, it seems to me a very bad idea to have the interface that's inside a compiled library file depend on aspects other than what's mangled into the filename (as you most certainly can imagine, things can get especially ugly with shared libraries). Regards, Tobias

Tobias Schwinger <tschwinger@neoscientists.org> writes:
David Abrahams wrote:
Tobias Schwinger <tschwinger@neoscientists.org> writes:
Still, your "#ifdef suggestion" seems much worse to me -- it might have been just provocative rhetoric rather than a serious proposal, I guess...
No, it was serious. But if you don't like #ifdefs there are two other proposals on the table that address the problem without using them.
I have nothing against #ifdefs in general and it has nothing to do with personal taste:
#ifdeffing (what a wonderful neologism) an interface means: client code that wants to work regardless of whether the macro is defined or not requires #ifdefs as well.
Client code that wants to work whether exceptions are turned on or not also requires #ifdefs, no matter what you do with the interface to this particular library. -- Dave Abrahams Boost Consulting www.boost-consulting.com

David Abrahams wrote:
Tobias Schwinger <tschwinger@neoscientists.org> writes:
David Abrahams wrote:
Tobias Schwinger <tschwinger@neoscientists.org> writes:
Still, your "#ifdef suggestion" seems much worse to me -- it might have been just provocative rhetoric rather than a serious proposal, I guess...
No, it was serious. But if you don't like #ifdefs there are two other proposals on the table that address the problem without using them.
I have nothing against #ifdefs in general and it has nothing to do with personal taste:
#ifdeffing (what a wonderful neologism) an interface means: client code that wants to work regardless of whether the macro is defined or not requires #ifdefs as well.
Client code that wants to work whether exceptions are turned on or not also requires #ifdefs, no matter what you do with the interface to this particular library.
To ensure we understand each other correctly: a non-throwing interface can always be used, regardless of whether exceptions are available or not, right? A non-throwing interface may even be attractive when generally using exceptions, in case you would enclose the initialization with a try-catch block anyway. In the particular case of this library the initialization is likely to happen at application start-up, so it isn't too difficult to imagine cases for both the RAII and the success-return approach to come in handy. I just noticed Boost.Build supports optional exception handling (at least for some compilers), so the second point in my previous post probably isn't too much of an issue. Still -- I can't help but find it cleaner to have another class for RAII which simply won't be available if BOOST_NO_EXCEPTIONS is defined, rather than to have a class interface shape-shift based on compiler settings or deficiencies. No dogmatism about exceptions from my side in any way; I believe, however, it's a good thing to have (at least some) libraries that allow turning exceptions off -- it adds to the value of Boost because it welcomes users working with resource-critical execution environments, and AFAIK it's impossible to implement zero-overhead stack unwinding (you're the exception expert, so please correct me if I'm wrong). BTW. I hope we're not in a flame-war ;-), are we? I also hope it doesn't seem like it... Regards, Tobias

Tobias Schwinger <tschwinger@neoscientists.org> writes:
David Abrahams wrote:
Tobias Schwinger <tschwinger@neoscientists.org> writes:
David Abrahams wrote:
Client code that wants to work whether exceptions are turned on or not also requires #ifdefs, no matter what you do with the interface to this particular library.
To ensure we understand each other correctly: a non-throwing interface can always be used, regardless of whether exceptions are available or not, right?
Whoops. OK, that's right.
A non-throwing interface may even be attractive when generally using exceptions, in case you would enclose the initialization with a try-catch block anyway. In the particular case of this library the initialization is likely to happen at application start-up, so it isn't too difficult to imagine cases for both the RAII and the success-return approach to come in handy.
If you get a failure at application start-up and you *don't* throw an exception, what do you hope your application will do? Continue without shared memory?
I just noticed Boost.Build supports optional exception handling (at least for some compilers), so the second point in my previous post probably isn't too much of an issue. Still -- I can't help but find it cleaner to have another class for RAII which simply won't be available if BOOST_NO_EXCEPTIONS is defined, rather than to have a class interface shape-shift based on compiler settings or deficiencies.
Great, that was the 2nd of my 3 suggestions for addressing this problem.
No dogmatism about exceptions from my side in any way; I believe, however, it's a good thing to have (at least some) libraries that allow turning exceptions off -- it adds to the value of Boost because it welcomes users working with resource-critical execution environments
Absolutely. No argument from me.
and AFAIK it's impossible to implement zero-overhead stack unwinding (you're the exception expert, so please correct me if I'm wrong).
If by "zero-overhead stack unwinding" you mean exception handling that has no speed cost unless an exception is thrown, then consider yourself corrected.
BTW. I hope we're not in a flame-war ;-), are we? I also hope it doesn't seem like it...
No, I don't think we are. -- Dave Abrahams Boost Consulting www.boost-consulting.com

On 2/12/06, David Abrahams <dave@boost-consulting.com> wrote: [snip]
and AFAIK it's impossible to implement zero-overhead stack unwinding (you're the exception expert, so please correct me if I'm wrong).
If by "zero-overhead stack unwinding" you mean exception handling that has no speed cost unless an exception is thrown, then consider yourself corrected.
This means that it is impossible to implement stack unwind with no speed cost when an exception is not thrown? I'm definitely not an expert, not even a newbie in this field probably, but I thought it was possible. Or did I misunderstand your statement? -- Felipe Magno de Almeida

and AFAIK it's impossible to implement zero-overhead stack unwinding (you're the exception expert, so please correct me if I'm wrong). If by "zero-overhead stack unwinding" you mean exception handling that has no speed cost unless an exception is thrown, then consider yourself corrected.
This means that it is impossible to implement stack unwind with no speed cost when an exception is not thrown? I'm definitely not an expert, not even a newbie in this field probably, but I thought it was possible. Or did I misunderstand your statement?
I don't know anything about exception implementation, but I suppose that the code must mark some "check-points" to know how many objects are already constructed, to know how many destructors it must call when the exception occurs. I suppose that can be implemented as an integer/pointer increment or assignment. But the code must add something to the normal path to know what to do when the exception occurs. Regards, Ion

Ion Gaztañaga <igaztanaga@gmail.com> writes:
and AFAIK it's impossible to implement zero-overhead stack unwinding (you're the exception expert, so please correct me if I'm wrong). If by "zero-overhead stack unwinding" you mean exception handling that has no speed cost unless an exception is thrown, then consider yourself corrected.
This means that it is impossible to implement stack unwind with no speed cost when an exception is not thrown? I'm definitely not an expert, not even a newbie in this field probably, but I thought it was possible. Or did I misunderstand your statement?
I don't know anything about exception implementation,
Then please, with all due respect, don't speculate.
but I suppose that the code must mark some "check-points" to know how many objects are already constructed, to know how many destructors it must call when the exception occurs. I suppose that can be implemented as an integer/pointer increment or assignment. But the code must add something to the normal path to know what to do when the exception occurs.
No. All the necessary information is contained in the program counter at the point where the exception is thrown. -- Dave Abrahams Boost Consulting www.boost-consulting.com

Felipe Magno de Almeida <felipe.m.almeida@gmail.com> writes:
On 2/12/06, David Abrahams <dave@boost-consulting.com> wrote:
[snip]
and AFAIK it's impossible to implement zero-overhead stack unwinding (you're the exception expert, so please correct me if I'm wrong).
If by "zero-overhead stack unwinding" you mean exception handling that has no speed cost unless an exception is thrown, then consider yourself corrected.
This means that it is impossible to implement stack unwind with no speed cost when an exception is not thrown?
What do you mean "implement stack unwind?" If an exception is not thrown, there is no unwinding, so naturally unwinding has no cost when no exception is thrown.
I'm definitely not an expert, not even a newbie in this field probably, but I thought it was possible. Or did I misunderstand your statement?
I think you misunderstood. It is possible to implement EH so that there is no cost at runtime until an exception is thrown. When compared with code that implements the same error handling functionality by means other than exceptions, the code using exceptions can even run faster when no error is detected. In practice we've observed code using EH running about 1000x slower when an error /is/ detected, but that's expected to be a very rare occurrence. A good EH implementation optimizes for the no-error case. -- Dave Abrahams Boost Consulting www.boost-consulting.com

G'day all. Tobias Schwinger <tschwinger <at> neoscientists.org> writes:
RAII can be implemented in a matter of minutes in the form of a convenience wrapper.
If this is true, then there's no reason not to provide the wrapper as standard, and document the intent. I'm tempted to weakly reject the incorporation into Boost pending this wrapper layer being written. Cheers, Andrew Bromage

Hi Andrew,
I'm tempted to weakly reject the incorporation into Boost pending this wrapper layer being written.
I think this discussion has gone too far. I have never talked about NOT providing RAII. If you re-read my previous posts, I intend to provide RAII *and* the current approach in all classes before releasing the library. So you *will* have RAII without wrapper classes. Cheers, Ion

"Ion Gaztañaga" wrote:
If you really *must* provide 2-phase construction, then please make it a special mode that can be enabled with an #ifdef.
I see you have strong opinions like me. So, no ifdefs, please. I would like to point out that as it's using the same model as Shmem, fstream from the standard library is also broken.
Filesystem operations are quite expected to fail. The classes were designed before exceptions were fully accepted (I think) and would not have been successful if they didn't work on systems w/o exceptions.
-----------
Anyway: shmem can use a factory method returning an object acting like a smart pointer. This would hide the construction/initialisation phases and work in an exception-less environment. For systems forced to live without a heap, this factory method can take as a parameter a buffer char[sizeof(shmem)] in which the shmem object gets instantiated. The ability to provide a "heap" for such systems should be listed on the "What I can do with shmem" docs page. /Pavel
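[A rough sketch of what Pavel's factory idea could look like; every name is invented for illustration, and cleanup/ownership details are left out.]
#include <new>        // placement new

// A two-phase worker class...
class segment
{
    bool m_ok;
public:
    segment() : m_ok(false) {}
    bool open(const char *name)       // stub: real code would map the named segment
    { m_ok = (name != 0); return m_ok; }
};

// ...a smart-pointer-like handle that is empty on failure...
class segment_handle
{
    segment *m_p;
public:
    explicit segment_handle(segment *p = 0) : m_p(p) {}
    segment *operator->() const { return m_p; }
    operator bool() const       { return m_p != 0; }
};

// ...and a factory that hides the two construction phases. It reports errors
// without exceptions, and the caller supplies the storage, so it also works
// on systems without a heap.
segment_handle create_segment(const char *name, void *storage /* >= sizeof(segment) */)
{
    segment *s = new (storage) segment;
    return s->open(name) ? segment_handle(s) : segment_handle();
}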

Ion Gaztañaga <igaztanaga@gmail.com> writes:
Ok. I think I can provide both approaches so that those who don't want to (or can't, because of a restricted embedded environment) use exceptions have an alternative.
< C R I N G E >
If you really *must* provide 2-phase construction, then please make it a special mode that can be enabled with an #ifdef.
I see you have strong opinions like me. So, no ifdefs, please.
Then consider making the class with 2-phase initialization a different type.
I would like to point out that as it's using the same model as Shmem, fstream from the standard library is also broken.
I never said that was broken. I did say that the guarantees it offers are so weak that reasoning about correctness becomes more difficult than it would otherwise be.
If I can do:
std::fstream file;
//... some code
file.open(); //The destructor closes the file
Is this extremely harmful code? If that's ok, what's the difference with:
boost::shmem::shared_memory segment;
//... some code
segment.open(); //The destructor closes the segment
Is "std::fstream file;" a "half-baked", "zombie" object?
Even though I never used the term half-baked, I will accept it. In fact, it's perfectly apt. The object can't be used in all the normal ways one uses a file.
If you only have to use the constructor to open the shared memory I can't make boost::shmem::shared_memory a member of a class if the class has a different lifetime than the shared memory.
Use boost::optional or scoped_ptr.
Even if I have the same lifetime, maybe I have to do some validations/operations to obtain the parameters for the segment that can't be made (or would be really ugly and complicate to do) in the member initialization list.
Sorry, I can't visualize what you're describing.
Using RAII only approach I have to dynamically allocate it.
There's always optional.
And to open another segment I can't reuse the object, I have to destroy it and allocate a new one.
There's no reason you can't make it reopenable.
Default "empty" state (which is not the same as "zombie", because the class is fully constructed) allows also move semantics between classes.
Having such an empty "destructible-only" state is an unfortunate but necessary side effect of move optimization, but it is not necessary to make such objects available for immediate use as the ctor does. After moving away, the object is impossible for the user to touch (unless move() has been used explicitly).
If you can use only the constructor and the destructor to open/close the segment, you can't move the object to the target, since the source object can't be in a constructed state without owning the shared memory.
I really don't like the "you should ONLY do this" approach.
I don't know what you mean.
That's because what now is cool, some years later is demonized. I want choice. I don't like the approach of boost::thread, so I have to create a new boost::thread object every time I want to launch a thread. And I'm not the only one. With boost::thread, I can't have member boost::thread objects in classes (or I have to use optional<> like hacks)
No silver bullet. No Good Thing (tm). If RAII is considered a good approach, I agree with you that I should provide it. But if fstream is good enough for the standard library, I think it can be a good model for Shmem without ifdefs.
That logic is flawed. There are lots of examples of bad design in the standard library. fstream is hardly the worst.
Apart from this, little discussion, now that you are in the Shmem review thread, I would love if you could find time to review it. I think that your C++ knowledge is very interesting for a library that tries to put STL containers in shared memory and memory mapped files.
What I've been discussing here is one of the few deep aspects of my C++ knowledge, and one of the few aspects that you're not likely to get from others -- i.e. it is to some extent my unique contribution -- yet it seems like you're not really very interested in it. That doesn't leave me feeling very encouraged about further participation. -- Dave Abrahams Boost Consulting www.boost-consulting.com

David Abrahams <dave@boost-consulting.com> writes:
Ion Gaztañaga <igaztanaga@gmail.com> writes:
Ok. I think I can provide both approaches so that those who don't want to (or can't, because of a restricted embedded environment) use exceptions have an alternative.
< C R I N G E >
If you really *must* provide 2-phase construction, then please make it a special mode that can be enabled with an #ifdef.
I see you have strong opinions like me. So, no ifdefs, please.
Then consider making the class with 2-phase initialization a different type.
Or consider providing a function that tests whether initialization was successful and is guaranteed to always return true unless exceptions are disabled. I'm sure given time we could think of even more ways to address this without providing a default ctor. The point is that the guarantees made to those who use a C++ compiler shouldn't be weakened just in order to accomodate people who want to disable standard C++ features. -- Dave Abrahams Boost Consulting www.boost-consulting.com

Hi David, I think this Shmem review has become a general C++ usage/pattern flame war that does not benefit Boost or Shmem itself. So I will try to make some comments about your response. Due to my bad English, please don't read any irony or provocation into my post. I don't know enough English to do that.
If you really *must* provide 2-phase construction, then please make it a special mode that can be enabled with an #ifdef. I see you have strong opinions like me. So, no ifdefs, please.
Then consider making the class with 2-phase initialization a different type.
What I want to know is whether you would consider it a reason to reject the library if Shmem provides *both* RAII *and* open/close functions.
If you only have to use the constructor to open the shared memory I can't make boost::shmem::shared_memory a member of a class if the class has a different lifetime than the shared memory.
Use boost::optional or scoped_ptr.
Isn't optional a form of two-phase initialization? The optional may or may not be constructed, and you have to check that before using it. I'm trying to avoid dynamic allocation, so scoped_ptr is not a good enough solution in my opinion.
Even if I have the same lifetime, maybe I have to do some validations/operations to obtain the parameters for the segment that can't be made (or would be really ugly and complicate to do) in the member initialization list.
Sorry, I can't visualize what you're describing.
I meant that a class that wants to contain a pure-RAII object, without dynamic allocation or optional, has to initialize that object in the member initializer list. Imagine that, depending on the constructor arguments and on another already-constructed member, you want to open, open_or_create or create a RAII object. Since you have to initialize raii in the initializer list, you can only call one constructor form. And, for example, to open raii I need 3 arguments, and to create the raii resource I need 4.
class Holder
{
   RAII raii;
   public:
   Holder(/*some conditions*/)
      : raii(/*Create or open depending on arguments, and other temporary results*/)
   {}
};
If I have two-phase construction (error handling omitted):
class Holder
{
   TwoPhase twophase;
   public:
   Holder(/*some conditions*/)
      : twophase()
   {
      /*Depending on the passed conditions, open or create*/
      if(/*...*/)
         twophase.open(/*3 arguments*/);
      else
         twophase.create(/*4 arguments*/);
      //If we throw, the twophase destructor will free resources
   }
};
I think that two-phase initialization is sometimes easier for the programmer and clearer when following the code. I can use optional, but the syntax is uglier (I have to use operator->) and I think a simple member is easier to understand.
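[For comparison, a sketch of the dynamic-allocation alternative Dave suggests, applied to the same Holder. RAII here is a hypothetical class whose constructors throw on failure; its argument lists are invented for illustration.]
#include <cstddef>
#include <string>
#include <boost/scoped_ptr.hpp>

class RAII
{
public:
    RAII(const char *, std::size_t)        { /* "open": throws on failure */ }
    RAII(const char *, std::size_t, int)   { /* "create": throws on failure */ }
};

class Holder
{
    boost::scoped_ptr<RAII> m_raii;
public:
    Holder(bool create_new, const std::string &name)
    {
        // Validations and temporary results can be computed here, in the
        // constructor body, before choosing which RAII constructor to run.
        if (create_new)
            m_raii.reset(new RAII(name.c_str(), 4096, 7));
        else
            m_raii.reset(new RAII(name.c_str(), 4096));
    }
};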
Using RAII only approach I have to dynamically allocate it.
There's always optional.
Like I've said, optional provides two-phase initialization, which is something you want to avoid, no? An empty optional is clearly "half-baked".
And to open another segment I can't reuse the object, I have to destroy it and allocate a new one.
There's no reason you can't make it reopenable.
You are right. But I've understood that some boosters wanted a pure constructor/destructor approach to open/close semantics, with no reopen(). I may have understood it wrong, though.
Default "empty" state (which is not the same as "zombie", because the class is fully constructed) allows also move semantics between classes.
Having such an empty "destructible-only" state is a an unfortunate but necessary side effect of move optimization, but it is not necessary to make such objects available for immediate use as the ctor does. After moving away, the object is impossible for the user to touch (unless move() has been used explicitly).
I agree. But I only wanted to point out that move semantics need an empty state. I agree that, normally, once an object has been moved from, you clearly don't want to use it. But consider a std::set of std::string objects that you have to fill from a container of C strings (I think this problem was in Alexandrescu's last CUJ article). If you want to avoid any overhead you can write:
std::string temp_string;
auto it = source.begin(), itend = source.end();
for( ; it != itend; ++it){
   temp_string = *it;
   string_set.insert(std::move(temp_string));
}
Clearly, reuse of the moved-from object can produce optimal code with move semantics. Imagine now a container of shared_memory or file_mappings (which are noncopyable but moveable). I think that a container of shared_memory elements is more efficient than a container of smart_ptr<shared_memory>, and you don't need to go through the pointer syntax. So I think that the reuse of a moved object can produce optimal code. And surely we will discover more uses for this "moved recycling" concept.
If you can use only the constructor and the destructor to open/close the segment, you can't move the object to the target, since the source object can't be in a constructed state without owning the shared memory.
I really don't like the "you should ONLY do this" approach.
I don't know what you mean.
When the first reviews commented on the absence of RAII, I immediately proposed both the RAII *and* the open/close approach. The first one would involve exceptions and the second one return values. But I understood that you were proposing *only* the RAII approach, considering an additional two-phase approach a bad design. So I understood that you wanted to force me to use a RAII approach under the "we should keep the programmer away from this" excuse. I repeat: I *want* to provide RAII. But not *only* RAII.
That's because what now is cool, some years later is demonized. I want choice. I don't like the approach of boost::thread, so I have to create a new boost::thread object every time I want to launch a thread. And I'm not the only one. With boost::thread, I can't have member boost::thread objects in classes (or I have to use optional<> like hacks)
No silver bullet. No Good Thing (tm). If RAII is considered a good approach, I agree with you that I should provide it. But if fstream is good enough for the standard library, I think it can be a good model for Shmem without ifdefs.
That logic is flawed. There are lots of examples of bad design in the standard library. fstream is hardly the worst.
Yes. But at that time, surely they were designed following the "good practices" of the day. Otherwise they wouldn't be in the standard. Only time will tell whether the "only RAII" approach will be broken by future language features.
Apart from this, little discussion, now that you are in the Shmem review thread, I would love if you could find time to review it. I think that your C++ knowledge is very interesting for a library that tries to put STL containers in shared memory and memory mapped files.
What I've been discussing here is one of the few deep aspects of my C++ knowledge, and one of the few aspects that you're not likely to get from others -- i.e. it is to some extent my unique contribution -- yet it seems like you're not really very interested in it. That doesn't leave me feeling very encouraged about further participation.
I *really* understand your reasons, and I *am* interested: that's why I *will* provide RAII (without wrappers, I repeat). But I also want the freedom to offer an exception-less, two-phase initialization so that the programmer has freedom to choose, without ifdefs, and is ready for "move recycling". If RAII is so good, the programmer will use it. But I want people with no exception support (which is common in restricted environments) or who are exception-allergic to have an alternative. I think that apart from this aspect of your C++ knowledge, there is plenty of your C++ knowledge that you can use in other aspects of the library. And I'm really interested in them. Especially if they are related to correct uses or better idioms like the previous one. And I want to support those idioms. I just want to provide some alternatives apart from your advice. Regards, Ion

Ion Gaztañaga <igaztanaga@gmail.com> writes:
Hi David,
I think this Shmem review has become a general C++ usage/pattern flame war
I find that a rather self-fulfilling statement. It's insulting to characterize my expression of concern for this design principle as flame.
that does not benefit Boost or Shmem itself.
I think if you tried to learn something from my comments, Shmem's design would in fact benefit.
So I will try to make some comments about your response. Due to my bad English, please don't read any irony or provocation into my post. I don't know enough English to do that.
If you really *must* provide 2-phase construction, then please make it a special mode that can be enabled with an #ifdef.
I see you have strong opinions like me. So, no ifdefs, please.
Then consider making the class with 2-phase initialization a different type.
What I want to know is if you consider a library rejection reason if Shmem provides *both* RAII *and* open/close functions.
I consider a design that specifically accommodates a version of C++ with some of its features turned off at the expense of guarantees that one can otherwise achieve -- especially if that expense is completely avoidable -- cause for concern. And based on your response to my concerns so far I would be inclined to worry about the future of the library and your responsiveness to other legitimate concerns. All that would tend to bias me towards voting against this library. It's not a reason for a "no" vote in and of itself, but if I had any energy left to do an actual review, I would certainly be motivated to find other areas where the design looked problematic to me.
If you only have to use the constructor to open the shared memory I can't make boost::shmem::shared_memory a member of a class if the class has a different lifetime than the shared memory.
Use boost::optional or scoped_ptr.
Isn't optional a two phase initialization?
That's built into the condition you're trying to achieve: "if the class has a different lifetime than the shared memory..."
The optional may be or not constructed, and you have to check that before using optional. I'm trying to avoid dynamic allocation so scoped_ptr is not a good enough solution in my opinion.
The scoped pointer or optional would be employed by the *user* of shmem who wants to achieve that difference in lifetime. [BTW, I'm not sure that avoiding dynamic allocation is an appropriate goal for Shmem -- not that it matters, since I'm not suggesting you build dynamic allocation into your library.]
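(For instance, a minimal sketch of the lifetime decoupling Dave describes -- the raii_segment class and its constructor signature are made up here, not Shmem's:)

#include <boost/scoped_ptr.hpp>
#include <cstddef>
#include <string>

// Hypothetical RAII-only segment: the constructor maps, the destructor unmaps.
class raii_segment
{
public:
   raii_segment(const std::string & /*name*/, std::size_t /*size*/) { /* map here   */ }
   ~raii_segment()                                                   { /* unmap here */ }
};

class server
{
   boost::scoped_ptr<raii_segment> segment_;   // lifetime decided by the user
public:
   void start() { segment_.reset(new raii_segment("/MySharedMemory", 65536)); }
   void stop()  { segment_.reset(); }          // closed before 'server' dies, if wanted
};

int main()
{
   server s;
   s.start();
   s.stop();
}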
I meant that a class that wants to contain a pure-RAII object without dynamic allocation/optional has to initialize the object in the member initializer list.
So?
Imagine that, depending on the constructor arguments and on other already-constructed members, you want to open, open_or_create or create a RAII object. Since you have to initialize raii in the initializer list, you can only use a constructor type.
What is a "constructor type?"
And for example, to open raii I need 3 arguments, and to create the raii resource I need 4.
class Holder
{
   RAII raii;
   public:
   Holder(/*some conditions*/)
      : raii(/*Create or open depending arguments, and other temporary results*/)
   {}
};
So make your constructors more flexible.

class Holder
{
   RAII raii;
   public:
   Holder(/*some conditions*/)
      : raii( generate_initializer( arguments and other temporary results ) )
   {}
};

If you use the parameter library it's especially easy to accommodate this sort of interface.
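(A rough sketch of this initializer-object idea -- generate_initializer, segment_init and RAII are hypothetical names; the point is only that the create-or-open decision runs before the member is constructed:)

#include <cstddef>
#include <string>

struct segment_init                    // everything the RAII constructor needs
{
   bool        create;
   std::string name;
   std::size_t size;
};

// Free function: arbitrary create-or-open logic runs *before* the member is built.
inline segment_init generate_initializer(bool fresh_start, const std::string &name)
{
   segment_init init;
   init.create = fresh_start;
   init.name   = name;
   init.size   = fresh_start ? 65536 : 0;   // size only matters on creation
   return init;
}

class RAII                              // stand-in for the RAII segment class
{
public:
   explicit RAII(const segment_init &init) { (void)init; /* open or create here */ }
};

class Holder
{
   RAII raii;
public:
   explicit Holder(bool fresh_start)
      : raii(generate_initializer(fresh_start, "/MySharedMemory"))   // single argument
   {}
};

int main() { Holder h(true); (void)h; }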
Using RAII only approach I have to dynamically allocate it.
There's always optional.
Like I've said, optional provides two-phase initialization, which is something you want to avoid, no?
Not when you are trying to achieve "different lifetimes." That's building two-phase into the problem statement!
An empty optional is clearly "half-baked".
Only as much as a null pointer is.
And to open another segment I can't reuse the object, I have to destroy it and allocate a new one.
There's no reason you can't make it reopenable.
You are right. But I've understood that some boosters wanted a pure constructor/destructor approach to open/close semantics, with no reopen(). I may have understood it wrong, though.
I have no objection to reopen, FWIW.
Default "empty" state (which is not the same as "zombie", because the class is fully constructed) allows also move semantics between classes.
Having such an empty "destructible-only" state is an unfortunate but necessary side effect of move optimization, but it is not necessary to make such objects available for immediate use as the constructor does. After moving away, the object is impossible for the user to touch (unless move() has been used explicitly).
I agree. But I only wanted to point out that move semantics need an empty state. I agree that it's clear that normally, once moved, you clearly don't want to use it.
Not only "don't want to" but normally, "can't." That's my point. The empty state does not need to be considered by programmers of normal code.
But consider a std::set of std::string objects that you have to fill from a container of c-strings (I think this problem was in Alexandrescu's last CUJ article). If you want to avoid any overhead you can:
std::string temp_string;
auto it = source.begin(), itend = source.end();
for(; it != itend; ++it) {
   temp_string = *it;
   string_set.insert(std::move(temp_string));
}
A moved-from object is not necessarily assignable.
Clearly, reusing the moved-from object can produce optimal code with move semantics.
And just why is this more efficient than string_set.insert(std::string(*it)) ?? Maybe it's not worth answering; I think this is really beside the point.
Imagine now a container of shared_memory or file_mapping objects (which are noncopyable but movable). I think that a container of shared_memory elements is more efficient than a container of smart_ptr<shared_memory>, and you don't need to go through the pointer syntax.
?? Syntax has no runtime cost.
So I think that the reuse of a moved object can produce optimal code. And surely we will discover more uses for this "moved recycling" concept.
Sounds like premature optimization to me.
If you can only use the constructor and the destructor to open/close the segment, you can't move the object to the target, since the source object can't be in a constructed state without owning the shared memory.
I really don't like the "you should ONLY do this" approach.
I don't know what you mean.
When the first reviews commented on the absence of RAII, I immediately proposed both the RAII *and* the open/close approach. The first one would involve exceptions and the second one return values. But I've understood that you were proposing *only* the RAII approach, considering an additional two-phase possibility a bad design.
It usually is bad design, yes. And in this case I don't see any evidence that Shmem needs an exception to the rule more than any other class. All the arguments you've used would lead me to put two-phase initialization interfaces in every class.
So I've understood that you wanted to force me to use a RAII approach
I can't force you to do anything.
under "we should keep the programmer away from this" excuse. I repeat: I *want* to provide RAII. But not *only* RAII.
There's no excuse, and it's not about keeping the programmer away from anything. There's a good argument, and it has to do with being able to reason about the code.
That's because what is cool now gets demonized some years later. I want choice. I don't like the approach of boost::thread, so I have to create a new boost::thread object every time I want to launch a thread. And I'm not the only one. With boost::thread, I can't have member boost::thread objects in classes (or I have to use optional<>-like hacks).
No silver bullet. No Good Thing (tm). If RAII is considered a good approach, I agree with you that I should provide it. But if fstream is good enough for the standard library, I think it can be a good model for Shmem without ifdefs.
That logic is flawed. There are lots of examples of bad design in the standard library. fstream is hardly the worst.
Yes. But at that time, surely they were designed under a "good practices" approach. Otherwise they wouldn't be in the standard. Only time will tell whether the "only RAII" approach won't be broken by future language features.
So let's throw all good design guidelines out the window, because future unforeseen language changes may make them obsolete. Sorry, now I *am* getting sarcastic. This is starting to seem pointless.
Apart from this little discussion, now that you are in the Shmem review thread, I would love it if you could find time to review it. I think that your C++ knowledge would be very valuable for a library that tries to put STL containers in shared memory and memory mapped files.
What I've been discussing here is one of the few deep aspects of my C++ knowledge, and one of the few aspects that you're not likely to get from others -- i.e. it is to some extent my unique contribution -- yet it seems like you're not really very interested in it. That doesn't leave me feeling very encouraged about further participation.
I *really* understand your reasons,
It doesn't sound like that, so far.
and I *am* interested: that's why I *will* provide RAII (without wrappers, I repeat).
Then you clearly don't understand me at all. I'm arguing that interfaces that easily lead to zombie states make components harder to use and code harder to think about. Just providing a *way* to avoid the zombie state does not help me know that I can operate on such an object that I'm passed by reference without first making sure it's not a zombie.
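(To make the concern concrete, a minimal sketch -- stand-in class and a hypothetical is_open() query -- of the extra precondition every function inherits once a zombie state is possible:)

#include <cassert>

// Stand-in for any class with a default-constructed "zombie" state.
class segment_like
{
   bool open_;
public:
   segment_like() : open_(false) {}              // zombie until open() succeeds
   bool open(const char *) { open_ = true; return true; }
   bool is_open() const    { return open_; }
};

// Every function taking such an object by reference now carries the extra
// precondition "not a zombie", which every caller has to remember:
void use(segment_like &seg)
{
   assert(seg.is_open());   // must be checked (or documented) everywhere
   /* ... real work ... */
}

int main()
{
   segment_like s;
   // use(s);               // would assert: the object was never opened
   s.open("/MySharedMemory");
   use(s);
}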
But I also want the freedom to offer an exception-less, two-phase initialization so that the programmer has freedom to choose, without ifdefs, and is ready for "move recycling".
There are ways to get that without 2-phase construction.
If RAII is so good, the programmer will use it. But I want people with no exception support (which is common in restricted environments) or who are exception-allergic to have an alternative.
Then allow them to check for successful construction after the fact, or if you *really* must, provide a different type that has the 2-phase feature.
I think that apart from this aspect of your C++ knowledge, there is plenty of your C++ knowledge that you can use in other aspects of the library. And I'm really interested in them. Especially if they are related to correct uses or better idioms like the previous one. And I want to support those idioms. I just want to provide some alternatives apart from your advice.
I think I've used up most of my available energy on this one. :( -- Dave Abrahams Boost Consulting www.boost-consulting.com

Hi David,
I find that a rather self-fulfilling statement. It's insulting to characterize my expression of concern for this design principle as flame.
I've not tried to insult you. If that was the case, please forgive me.
that does not benefit Boost or Shmem itself.
I think if you tried to learn something from my comments, Shmem's design would in fact benefit.
What I want to know is if you consider a library rejection reason if Shmem provides *both* RAII *and* open/close functions.
I consider a design that specifically accomodates a version of C++ with some of its features turned off at the expense of guarantees that one can otherwise achieve -- especially if that expense is completely avoidable -- cause for concern. And based on your response to my concerns so far I would be inclined to worry about the future of the library and your responsiveness to other legitimate concerns. All that would tend to bias me towards voting against this library. It's not a reason for a "no" vote in and of itself, but if I had any energy left to do an actual review, I would certainly be motivated to find other areas where the design looked problematic to me.
I see. Obviously, since I've been defending the other alternative, I don't see the double alternative as so dangerous. But I see that the current open/close-*only* design is dangerous. After thinking about it a bit, if boosters think that offering both ways is too dangerous (well, some others have expressed the opposite view and want alternatives) I'm ready to remove the open/close functions and implement a RAII-only final version. But surely someone won't agree with that, and they might have good reasons. But I repeat: I propose a RAII-only version. I don't think this aspect is the most important aspect of Shmem.
The optional may be or not constructed, and you have to check that before using optional. I'm trying to avoid dynamic allocation so scoped_ptr is not a good enough solution in my opinion.
The scoped pointer or optional would be employed by the *user* of shmem who wants to achieve that difference in lifetime.
Yes. But I wanted to avoid forcing Shmem users to use dynamic allocation or optional if they want two-phase initialization. Re-reading the proposed C++ paper N1883, Kevlin Henney wrote about a threader-joiner architecture similar to the smart pointer/new combination:

class threader
{
   public:
   template<typename nullary_function>
   joiner operator()(nullary_function threadable);
   ...
};

template<typename threadable>
joiner<return_type<threadable>::type> thread(threadable function)
{  return threader()(function);  }

int main ()
{
   //Class approach
   threader run;
   joiner wait = run(first_task);
   //or function approach
   joiner wait2 = thread(second_task);
}

Couldn't an alternative be something similar with Shmem primitives:

shared_memory_handle shm = shared_memory(/*open, open/create overloads*/);

shm would be like the proposed joiner: CopyConstructible, Assignable, DefaultConstructible and shareable (via an internal shared_ptr, for example), so that I could have a shared_memory_handle member in a class that can be initialized when I want. Much like a shared_ptr member that can be default constructed but only initialized through an external "new T" operation. I see that since shared_memory_handle has a default constructor it can also be a two-phase initialization class (much like shared_ptr), but do you think this architecture is acceptable or do you think it has the same problems as the previous one? I'm trying to offer some alternatives I've seen in proposed WG21 papers (but I'm ready to offer a RAII-only interface, as I've said).
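(A very rough sketch of that handle idea -- all names here are hypothetical, with the shared ownership expressed through boost::shared_ptr:)

#include <boost/shared_ptr.hpp>
#include <cstddef>
#include <string>

class shared_memory_impl            // the real RAII object: ctor maps, dtor unmaps
{
public:
   shared_memory_impl(const std::string & /*name*/, std::size_t /*size*/) {}
};

class shared_memory_handle          // copyable, assignable, default-constructible
{
   boost::shared_ptr<shared_memory_impl> impl_;
public:
   shared_memory_handle() {}                                        // empty handle
   explicit shared_memory_handle(boost::shared_ptr<shared_memory_impl> impl)
      : impl_(impl) {}
   bool valid() const { return impl_.get() != 0; }
};

// Factory playing the role of the proposed shared_memory(...) call.
inline shared_memory_handle open_shared_memory(const std::string &name, std::size_t size)
{
   return shared_memory_handle(
      boost::shared_ptr<shared_memory_impl>(new shared_memory_impl(name, size)));
}

int main()
{
   shared_memory_handle shm  = open_shared_memory("/MySharedMemory", 65536);
   shared_memory_handle copy = shm;      // both refer to the same mapping
   (void)copy;
}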
And for example, to open raii I need 3 arguments, and to create raii resource I need 4.
class Holder { RAII raii; public: Holder(/*some conditions*/) : raii(/*Create or open depending arguments, and other temporary results*/) {} };
So make your constructors more flexible.
class Holder { RAII raii; public: Holder(/*some conditions*/) : raii( generate_initializer( arguments and other temporary results ) ) {} };
If you use the parameter library it's especially easy to accomodate this sort of interface.
I see. I can wrap all parameters in a single object to avoid this problem.
Then you clearly don't understand me at all. I'm arguing that interfaces that easily lead to zombie states make components harder to use and code harder to think about. Just providing a *way* to avoid the zombie state does not help me know that I can operate on such an object that I'm passed by reference without first making sure it's not a zombie.
Well, someone can pass you a shared_ptr that is empty, and we don't forbid an empty shared_ptr. I think that functions can establish preconditions to say what kind of parameters they expect.
I think that apart from this aspect of your C++ knowledge, there is plenty of your C++ knowledge that you can use in other aspects of the library. And I'm really interested in them. Especially if they are related to correct uses or better idioms like the previous one. And I want to support those idioms. I just want to provide some alternatives apart from your advice.
I think I've used up most of my available energy on this one. :(
Ok. But don't say you were not invited to the party ;-) Now, seriously, I'm ready to offer a RAII-only approach if the above described alternative is also considered a bad design. I would only need a class/library that you think implements this RAII approach well (what kind of exceptions it throws, and so on) to have a good model. I do understand your concerns and I think that Shmem has much more to offer than this initialization problem. Regards, Ion

Ion Gaztañaga <igaztanaga@gmail.com> writes:
The optional may be or not constructed, and you have to check that before using optional. I'm trying to avoid dynamic allocation so scoped_ptr is not a good enough solution in my opinion.
The scoped pointer or optional would be employed by the *user* of shmem who wants to achieve that difference in lifetime.
Yes. But I wanted to avoid forcing Shmem users to use dynamic allocation or optional if they want two-phase initialization.
They're never forced; they can code their own optional. But why are you worried about requiring a user to do that?
Re-reading the proposed C++ N1883 paper Kevlin Henney wrote about threader-joiner architecture similar to smart pointer/new combination:
class threader
{
   public:
   template<typename nullary_function>
   joiner operator()(nullary_function threadable);
   ...
};

template<typename threadable>
joiner<return_type<threadable>::type> thread(threadable function)
{  return threader()(function);  }

int main ()
{
   //Class approach
   threader run;
   joiner wait = run(first_task);
   //or function approach
   joiner wait2 = thread(second_task);
}
Couldn't an alternative be something similar with Shmem primitives:
shared_memory_handle shm = shared_memory(/*open, open/create overloads*/);
shm would be like the proposed joiner: CopyConstructible, Assignable, DefaultConstructible and shareable (via an internal shared_ptr, for example), so that I could have a shared_memory_handle member in a class that can be initialized when I want. Much like a shared_ptr member that can be default constructed but only initialized through an external "new T" operation.
I see that since shared_memory_handle has a default constructor it can also be a two-phase initialization class (much like shared_ptr), but do you think this architecture is acceptable or do you think it has the same problems as the previous one?
Seems like the same problems to me, unless you provide a different type that offers the guarantee that it's either constructed or it doesn't exist. But then I consider the 2-phase interface to be premature generalization. Getting an interface where a zombie is possible should be the explicit choice. Getting an interface where no zombies are possible should be the default.
I'm trying to offer some alternatives I've seen in proposed WG21 papers (but I'm ready to offer a RAII only interface, as I've said).
IMO unless there's an especially good reason not to (and I haven't seen one here) it's important to offer an RAII-only interface.
Then you clearly don't understand me at all. I'm arguing that interfaces that easily lead to zombie states make components harder to use and code harder to think about. Just providing a *way* to avoid the zombie state does not help me know that I can operate on such an object that I'm passed by reference without first making sure it's not a zombie.
Well, someone can pass you shared_ptr that is empty and we don't forbid an empty shared_ptr.
Yes, that's part of the pointer idiom, so it makes sense there. A shmem is not a pointer. For that purpose you can use shared_ptr<shmem> or intrusive_ptr<shmem> or optional<shmem> or...
I think that functions can establish preconditions to say what kind of parameters they expect.
Adding the precondition "it must be a non-zombie shmem object" to most functions that operate on shmems is not an attractive idea. Every time you strengthen preconditions it makes an interface harder to understand and work with effectively. Your library should take responsibility for that, rather than pushing responsibility off on client coders.
I think that apart from this C++ aspect of your knowledge, there is plenty of your C++ knowledge that you can use in other aspects of the library. And I'm really interested in them. Specially, if they are related to correct uses or better idioms like the previous one. And I want to support those idioms. I just want to provide some alternatives apart from your advices.
I think I've used up most of my available energy on this one. :(
Ok. But don't say you were not invited to the party ;-) Now, seriously, I'm ready to offer a RAII-only approach if the above described alternative is also considered a bad design. I would only need a class/library that you think implements this RAII approach well (what kind of exceptions it throws, and so on) to have a good model.
Throw the same ones you would throw from a failed 2nd phase initialization. And so on? I don't know what you have in mind.
I do understand your concerns and I think that Shmem has much more to offer than this initialization problem.
Great; I hope we can get it into Boost. -- Dave Abrahams Boost Consulting www.boost-consulting.com

David Abrahams <dave@boost-consulting.com> writes:
Yes. But I wanted to avoid forcing Shmem users to use dynamic allocation or optional if they want two-phase initialization.
They're never forced; they can code their own optional. But why are you worried about requiring a user to do that?
I mean, of course why are you worried about requiring a user to use boost::optional or to code her own? -- Dave Abrahams Boost Consulting www.boost-consulting.com

"David Abrahams" wrote:
[BTW, I'm not sure that avoiding dynamic allocation is an appropriate goal for Shmem -- not that it matters, since I'm not suggesting you build dynamic allocation into your library.]
Many strong/weak real-time systems forbid the use of dynamic allocation. One of Shmem's targets is real-time systems. /Pavel

At 1:57 PM +0100 2/12/06, Pavel Vozenilek wrote:
"David Abrahams" wrote:
[BTW, I'm not sure that avoiding dynamic allocation is an appropriate goal for Shmem -- not that it matters, since I'm not suggesting you build dynamic allocation into your library.]
Many strong/weak real-time systems forbid the use of dynamic allocation.
One of Shmem's targets is real-time systems.
/Pavel
The shmem library fundamentally provides two facilities: 1. a portable interface to a collection of related IPC mechanisms (shared memory and memory mapped files, shared synchronization objects), 2. object allocation from a specific block of memory. In my experience, systems which forbid all use of dynamic allocation wouldn't touch processes with the proverbial ten foot pole, making the shmem IPC mechanisms largely moot for them. The object allocation mechanisms provided by shmem are, surprise, dynamic allocation. If a system design forbids dynamic allocation, there is nothing magical about the shmem library's allocation mechanisms that would make it more acceptable in such a system. So I don't find this argument convincing, speaking as someone who does a lot of soft/hard real-time programming for a living, and is actively using the shmem library.

"Kim Barrett" wrote:
The object allocation mechanisms provided by shmem are, surprise, dynamic allocation. If a system design forbids dynamic allocation, there is nothing magical about the shmem library's allocation mechanisms that would make it more acceptable in such a system.
shmem, being so limited, may provide lower and easier-to-estimate upper limits than the dynamic allocation provided by the OS/compiler RTL. (The stress is on /may/; I don't know whether anyone will rely on such an assumption.) /Pavel

At 11:42 PM +0100 2/12/06, Pavel Vozenilek wrote:
"Kim Barrett" wrote:
The object allocation mechanisms provided by shmem are, surprise, dynamic allocation. If a system design forbids dynamic allocation, there is nothing magical about the shmem library's allocation mechanisms that would make it more acceptable in such a system.
shmem, being so limited, may provide lower and easier-to-estimate upper limits than the dynamic allocation provided by the OS/compiler RTL.
The shmem allocators are probably inappropriate for that. For one thing, they are designed so that their internal data structures can live in shared memory, using offset_ptr and the like. That's just unnecessary overhead if one isn't actually using shared memory. It is also irrelevant to the two-phase vs RAII construction question for the shared memory objects.

Ion Gaztañaga wrote:
Like I've said, optional provides two-phase initialization, which is something you want to avoid, no? An empty optional is clearly "half-baked".
Um, no, not really. Optional is a value type object. Even after default construction, it has a valid and usable state - it contains nothing. Due to the purpose and design of optional, this makes perfect sense. A shared memory that contains nothing, on the other hand, does not make sense. Sebastian Redl

Ion Gaztañaga wrote: ...
I meant that a class that wants to contain a pure-RAII object without dynamic allocation/optional has to initialize the object in the member initializer list. Imagine that, depending on the constructor arguments and on other already-constructed members, you want to open, open_or_create or create a RAII object. Since you have to initialize raii in the initializer list, you can only use a constructor type. And for example, to open raii I need 3 arguments, and to create the raii resource I need 4.
class Holder
{
   RAII raii;
   public:
   Holder(/*some conditions*/)
      : raii(/*Create or open depending arguments, and other temporary results*/)
   {}
};
If I have two phase construction (error handling omitted):
class Holder
{
   TwoPhase twophase;
   public:
   Holder(/*some conditions*/)
      : twophase()
   {
      /*Depending on passed conditions, open or create*/
      if(/*condition*/)
         twophase.open(/*3 arguments*/);
      else
         twophase.create(/*4 arguments*/);
      //If we throw, the twophase destructor will free resources
   }
};
Looking at this from the RAII perspective, I'd say, hmm, what if I have a constructor that opens if the resource is already available, and otherwise asks the OS for a new one? IIRC, that's what Microsoft does for named IPC objects. This simplifies the usage of the library from the user's perspective. Jeff Flinn
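(For illustration, one shape such a constructor set could take -- the tag names are invented for this sketch, not Shmem's:)

#include <cstddef>
#include <string>

struct open_only_t      {};   // tags selecting the behaviour at construction time
struct create_only_t    {};
struct open_or_create_t {};

const open_only_t      open_only      = open_only_t();
const create_only_t    create_only    = create_only_t();
const open_or_create_t open_or_create = open_or_create_t();

class segment
{
public:
   segment(open_only_t,      const std::string & /*name*/)                       { /* attach or throw   */ }
   segment(create_only_t,    const std::string & /*name*/, std::size_t /*size*/) { /* create or throw   */ }
   segment(open_or_create_t, const std::string & /*name*/, std::size_t /*size*/) { /* either, as needed */ }
};

int main()
{
   segment s(open_or_create, "/MySharedMemory", 65536);
   (void)s;
}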

Fred Bertsch <fred.bertsch@gmail.com> writes:
This is a fairly minor point, in my opinion, and I don't think I can say which way is better for shmem, but I'll try to explain it a bit better.
For example, named_shared_object can only be created through its default constructor. Once created this way, you probably can't actually use your named_shared_object. Instead, you can complete the construction of the object by using open, create, or one of the other methods that "construct" the object. An error is indicated by the return value of these member functions. I don't actually know the name of this technique, and I suspect my point would be clearer if I knew its name.
If I'm understanding you correctly, that's known as "two-phase construction" and it's generally a very bad idea, because it weakens the class invariants and -- consequently -- the assumptions that code can make about the object's state. It often means that the class has a special "zombie" state that you always have to check for before operating on it. The opposite of this technique is RAII.
The alternative technique is to have several constructors or static member functions in named_shared_object. These would create an opened or created named_shared_object if nothing bad happened, and they would throw an exception if an error came up.
I mostly brought this up because few things in boost use the first technique, so someone might have a strong opinion about it.
I'm surprised this hasn't drawn more fire. I'll be happy to condemn it -- even before I've seen the details of the code -- and risk that I've completely misinterpreted this in which case I'll have to apologize. -- Dave Abrahams Boost Consulting www.boost-consulting.com

Ion Gaztañaga wrote:
Hi fred,
First, there tend to be create and open member functions on a lot of objects in the library. This forced Ion to create partially constructed objects with the constructor, but it allows the use of return values instead of exceptions for error handling. I personally do not like partially constructed objects, and I do not mind exceptions that are thrown if there's a serious problem detected. I'm curious what the rationale was for that decision. Do the create/open functions fail frequently in typical use cases?
Can you elaborate a bit on this? I can't quite understand what you are trying to point out.
I think he's pointing out the lack of RAII. From your first example:

named_shared_object segment;

if(!segment.create("/MySharedMemory",   //segment name
                   65536)){             //segment size in bytes
   return -1;
}

void * shptr = segment.allocate(1024/*bytes to allocate*/);

segment.deallocate(shptr);

I'd rather see:

try {
   named_shared_object segment("/MySharedMemory", 65536);
   shmem::shared_ptr lPtr = segment.allocate(1024);
}
catch( const shmem::exception& aExc ) { ... }

Where the shared_ptr has a deleter calling named_shared_object::deallocate. I'd also think that the 'shared_memory' should not be destructed until the last shmem::shared_ptr is released. Which brings up the issue that I could not easily find any discussion of lifetime requirements in the documentation. Thanks, Jeff

Hi Jeff,
I think he's pointing out the lack of RAII. From your first example:
named_shared_object segment;
if(!segment.create("/MySharedMemory",   //segment name
                   65536)){             //segment size in bytes
   return -1;
}
void * shptr = segment.allocate(1024/*bytes to allocate*/);
segment.deallocate(shptr);
I'd rather see:
try {
   named_shared_object segment("/MySharedMemory", 65536);
   shmem::shared_ptr lPtr = segment.allocate(1024);
}
catch( const shmem::exception& aExc ) { ... }
I see. Anyway, in the first example, if "allocate" throws, the segment destructor will automatically unmap and destroy the segment. If you think that a one-step constructor is better, there is no problem adding it, just like fstream, which has a default constructor but can also open the file in the constructor. The only minor point is that we have to say whether we want to create, connect or open_or_create the segment in the constructor, so we need an extra constructor.
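(Roughly the fstream model, sketched with a made-up class name and member functions -- not the actual Shmem interface:)

#include <cstddef>
#include <string>

class segment_sketch
{
   bool open_;
public:
   segment_sketch() : open_(false) {}                        // like std::fstream()
   segment_sketch(const std::string &name, std::size_t size) // like std::fstream(name)
      : open_(false)
   {  create(name, size);  }
   bool create(const std::string & /*name*/, std::size_t /*size*/)
   {  open_ = true; return open_;  }                         // would map the segment here
   bool is_open() const { return open_; }
   ~segment_sketch() { /* unmap if open_ */ }
};

int main()
{
   segment_sketch a;                          // two-phase: default-construct, then create()
   a.create("/SegmentA", 65536);
   segment_sketch b("/SegmentB", 65536);      // one-step: RAII-style constructor
   (void)b;
}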
Which brings up the issue that I could not easily find any discussion of lifetime requirements in the documentation.
Ok. It's true that the segment must be open while you are using it; maybe I should point that out with some lifetime examples. Thanks, Ion

"Jeff Flinn" wrote:
I think he's pointing out the lack of RAII. From your first example:
named_shared_object segment;
if(!segment.create("/MySharedMemory",   //segment name
                   65536)){             //segment size in bytes
   return -1;
}
void * shptr = segment.allocate(1024/*bytes to allocate*/);
segment.deallocate(shptr);
I'd rather see:
try {
   named_shared_object segment("/MySharedMemory", 65536);
   shmem::shared_ptr lPtr = segment.allocate(1024);
}
catch( const shmem::exception& aExc ) { ... }
Where the shared_ptr has a deleter calling named_shared_object::deallocate. I'd also think that the 'shared_memory' should not be destructed until the last shmem::shared_ptr is released.
Rather than yet another underdocumented shared pointer with a subtly different name it should be:

boost::shared_ptr<void, shmem::deleter> p = segment.allocate(1024);

and

void* p = segment_allocate(1024, shmem::manual_lifetime);

And there could be a debug mode flag that checks the data is deallocated in the same mode it was allocated.
Which brings up the issue that I could not easily find any discussion of lifetime requirements in the documentation.
Yes, explicit lifetime info is missing. /Pavel

Pavel Vozenilek wrote:
"Jeff Flinn" wrote:
I think he's pointing out the lack of RAII. From your first example:
named_shared_object segment;
if(!segment.create("/MySharedMemory",   //segment name
                   65536)){             //segment size in bytes
   return -1;
}
void * shptr = segment.allocate(1024/*bytes to allocate*/);
segment.deallocate(shptr);
I'd rather see:
try {
   named_shared_object segment("/MySharedMemory", 65536);
   shmem::shared_ptr lPtr = segment.allocate(1024);
}
catch( const shmem::exception& aExc ) { ... }
Where the shared_ptr has a deleter calling named_shared_object::deallocate. I'd also think that the 'shared_memory' should not be destructed until the last shmem::shared_ptr is released.
Rather than yet another underdocumented shared pointer with subtly different name it should be:
boost::shared_ptr<void, shmem::deleter> p = segment.allocate(1024);
Yes that was my intent.
and
void* p = segment_allocate(1024, shmem::manual_lifetime);
Did you mean for the above to be a free function? Is there really a need for this? Is there precedent with other boost libs dealing in unsafe raw pointers?
And there could be a debug mode flag that checks the data is deallocated in the same mode it was allocated.
By 'mode' do you mean whether the shared_ptr or the raw pointer was returned? Thanks, Jeff

"Jeff Flinn" wrote:
void* p = segment_allocate(1024, shmem::manual_lifetime);
Did you mean for the above to be a free function? No, it's a typo; "." was intended. I am very bad at typing.
And there could be a debug mode flag that checks the data is deallocated in the same mode it was allocated.
By 'mode' do you mean whether the shared_ptr, or raw pointer was returned?
I mean the allocator would set a flag somewhere in shared memory and the deallocator would check it and assert() if needed. I plan to write more about it in a review if I find time. ------ In case I don't find time for a review, take my default position as "accept into Boost". I have been following shmem development for quite a long time and the issues I had with it got resolved for the better. /Pavel

Rather than yet another underdocumented shared pointer with subtly different name it should be:
boost::shared_ptr<void, shmem::deleter> p = segment.allocate(1024);
Hopefully someone will correct me, but I don't think that shared_ptr can have a custom allocator to allocate the reference count in the shared memory. -Fred

Hopefully someone will correct me, but I don't think that shared_ptr can have a custom allocator to allocate the reference count in the shared memory.
If you want to store the reference count and the shared_ptr itself in shared memory so that the shared_ptr is shared between *processes*, I'm working on a boost::shmem::shared_ptr so that you can store a shared_ptr in shared memory (and construct containers of shared_ptr). The reference count would also be allocated in the same segment. Obviously, since in shared memory I have some problems (the deleter should be a template parameter, since we can't use virtual functions; we need a parameter to know from where to allocate the reference count; etc.) the interface will be a bit different. But you will need to wait a bit ;-) Regards, Ion

Hi Pavel,
Rather than yet another underdocumented shared pointer with subtly different name it should be:
boost::shared_ptr<void, shmem::deleter> p = segment.allocate(1024);
I don't know if this has changed in CVS, but in my Boost version boost::shared_ptr has only one template parameter and the deleter is passed in the constructor and dynamically (and polymorphically) created.
void* p = segment_allocate(1024, shmem::manual_lifetime);
What do you want to express with this?
Where the shared_ptr has a deleter calling named_shared_object::deallocate. I'd also think that the 'shared_memory' should not be destructed until the last shmem::shared_ptr is released.
To maintain the shared memory open while you have allocated fragments you can try this (I've not compiled it, so it can have errors):

struct shmem_deleter
{
   boost::shared_ptr<named_shared_object> m_named_shared_object;

   shmem_deleter(const boost::shared_ptr<named_shared_object> &segment)
      : m_named_shared_object(segment)
   {}

   void operator()(void *ptr)
   {  m_named_shared_object->deallocate(ptr);  }
};

int main ()
{
   shared_ptr<named_shared_object> segment(new named_shared_object);
   segment->create(/*...*/);
   shared_ptr<void> buffer (segment->allocate(1024), shmem_deleter(segment));
   return 0;
}

The idea is that the deleter of the shared pointer of an allocation holds a shared pointer to the segment (the deleter is shared between all copies of buffer). When all shared pointers are deleted, the segment will be destroyed if there are no more shared_ptr<named_shared_object> objects around pointing to the same named_shared_object. The same can be used with named objects; we just have to change the deleter's destruction function to m_named_shared_object->destroy_ptr(ptr). Ion

"Pavel Vozenilek" <pavel_vozenilek@hotmail.com> writes:
Rather than yet another underdocumented shared pointer with subtly different name it should be:
boost::shared_ptr<void, shmem::deleter> p = segment.allocate(1024);
One of the coolest things about shared_ptr is that the deleter is not part of the type. There is no deleter template parameter. That would be: boost::shared_ptr<void> p = segment.allocate(1024, shmem::deleter()); -- Dave Abrahams Boost Consulting www.boost-consulting.com

On 2/6/06, Fred Bertsch <fred.bertsch@gmail.com> wrote: Please always state in your review, whether you think the library should be
accepted as a Boost library!
An enthusiastic YES. Additionally please consider giving feedback on the following general
topics:
- What is your evaluation of the design?
The design is very good.
- What is your evaluation of the implementation?
I agree with others that there should be RAII versions of the constructors of various classes like shared_memory and named_shared_object that throw if they fail.

Another concern, addressed clearly in the documentation, is the necessity of reproducing the synchronization primitives of Boost.Thread and the xtime class in Shmem. This is yet another case where a library that tries to be thread-aware, without necessarily being multithreaded itself, has had to do this. From what I can gather from some grepping, Boost.Pool (pool/detail/mutex.hpp) and Boost.Regex (regex/pending/static_mutex.hpp) contain their own mutex implementations as well. Also, the proposed Logging library (which was pulled before the review was complete) and ASIO both contain some elemental synchronization classes that could be a more fundamental part of Boost. I think it is high time for the separation of the synchronization primitives of Boost.Thread into a separate library before they are duplicated again. This duplication has the follow-on issues of reduced portability for the library that reproduces them and the possibility of bugs being introduced.

I'm also intrigued by the reproduction of many Standard Library containers due to implementation restrictions (under Current Limitations, "Problems with most STL implementations"). Ion, can you include a listing of platforms for which the Shmem versions of containers must be used and ones where the Standard Library containers can be used with the Shmem allocator? Are there any of the latter? A table would be quite useful here. For compilers with a reasonable hope of getting changes fed into the Standard Library implementation (e.g. gcc) it might be worthwhile to contact the maintainers about these limitations to see if they can be removed.

- What is your evaluation of the documentation?

Very good and quite complete. Some minor questions/quibbles:
From Current Limitations / "Be careful with static class members":
"Static members are not dangerous if they are just constant variables initialized when the process starts, but they don't change at all (for example, when used like enums)." I'm not sure I understand the second half of this sentence. What does it mean to use a static variable like an enum? Maybe I'm just being thick-headed. The Introduction states: "Shmem also offers the basic_string pseudo-container to use full-powered C++ strings in shared memory." But this class isn't documented. I see it in the header <boost/shmem/containers/string.hpp>. Actually it appears that none of the headers in the shmem/containers dir are mentioned explicitly in the documentation. They are touched on briefly in "Shmem and containers in shared memory", but this documentation might benefit from some expansion. - What is your evaluation of the potential usefulness of the library? I think it has great potential, especially if the memory segments can be made to grow (fingers crossed!). - Did you try to use the library? With what compiler? Did you have any
problems?
I didn't actually use the library other than to compile and run the tests w/gcc 3.3.4 on Linux. I did run into one error trying to compile the tests with a CVS version of Boost from this morning: "g++" -c -DBOOST_ALL_NO_LIB=1 -g -O0 -fno-inline -pthread -Wall -ftemplate-depth-255 -I"../../../bin/boost/libs/shmem/test" -I "/home/nbde52d/src/boost-regression/boost" -o "../../../bin/boost/libs/shmem/test/private_node_allocator_test.test/gcc/debug/threading-multi/private_node_allocator_test.o" "private_node_allocator_test.cpp" "/usr/bin/objcopy" --set-section-flags .debug_str=contents,debug "../../../bin/boost/libs/shmem/test/private_node_allocator_test.test/gcc/debug/threading-multi/private_node_allocator_test.o" /home/nbde52d/src/boost-regression/boost/boost/shmem/allocators/private_node_allocator.hpp: In function `void boost::shmem::swap(boost::shmem::private_node_allocator<boost::shmem::detail::shmem_list_node<priv_node_allocator_t>, 64, boost::shmem::detail::segment_manager<wchar_t, boost::shmem::simple_seq_fit<boost::shmem::shared_mutex_family, boost::shmem::offset_ptr<void, boost::shmem::offset_1_null_ptr> >, boost::shmem::flat_map_index> >&, boost::shmem::private_node_allocator<boost::shmem::detail::shmem_list_node<priv_node_allocator_t>, 64, boost::shmem::detail::segment_manager<wchar_t, boost::shmem::simple_seq_fit<boost::shmem::shared_mutex_family, boost::shmem::offset_ptr<void, boost::shmem::offset_1_null_ptr> >, boost::shmem::flat_map_index> >&)': /home/nbde52d/src/boost-regression/boost/boost/shmem/detail/utilities.hpp:45: instantiated from `void boost::shmem::detail::swap(T&, T&) [with T = boost::shmem::private_node_allocator<boost::shmem::detail::shmem_list_node<priv_node_allocator_t>, 64, boost::shmem::detail::segment_manager<wchar_t, boost::shmem::simple_seq_fit<boost::shmem::shared_mutex_family, boost::shmem::offset_ptr<void, boost::shmem::offset_1_null_ptr> >, boost::shmem::flat_map_index> >]' /home/nbde52d/src/boost-regression/boost/boost/shmem/containers/list.hpp:317: instantiated from `void boost::shmem::detail::shmem_list_alloc<T, A, true>::swap(boost::shmem::detail::shmem_list_alloc<T, A, true>&) [with T = int, A = priv_node_allocator_t]' /home/nbde52d/src/boost-regression/boost/boost/shmem/containers/list.hpp:598: instantiated from `void boost::shmem::list<T, A>::swap(boost::shmem::list<T, A>&) [with T = int, A = priv_node_allocator_t]' /home/nbde52d/src/boost-regression/boost/boost/shmem/containers/list.hpp:792: instantiated from `void boost::shmem::list<T, A>::sort(StrictWeakOrdering) [with StrictWeakOrdering = boost::shmem::list<int, priv_node_allocator_t>::value_less, T = int, A = priv_node_allocator_t]' /home/nbde52d/src/boost-regression/boost/boost/shmem/containers/list.hpp:774: instantiated from `void boost::shmem::list<T, A>::sort() [with T = int, A = priv_node_allocator_t]' private_node_allocator_test.cpp:95: instantiated from here /home/nbde52d/src/boost-regression/boost/boost/shmem/allocators/private_node_allocator.hpp:198: error: call of overloaded `swap( boost::shmem::offset_ptr<boost::shmem::detail::segment_manager<wchar_t, boost::shmem::simple_seq_fit<boost::shmem::shared_mutex_family, boost::shmem::offset_ptr<void, boost::shmem::offset_1_null_ptr> >, boost::shmem::flat_map_index>, boost::shmem::offset_1_null_ptr>&, boost::shmem::offset_ptr<boost::shmem::detail::segment_manager<wchar_t, boost::shmem::simple_seq_fit<boost::shmem::shared_mutex_family, boost::shmem::offset_ptr<void, boost::shmem::offset_1_null_ptr> >, boost::shmem::flat_map_index>, 
boost::shmem::offset_1_null_ptr>&)' is ambiguous /home/ietdev/tools/linux-i686/gcc-3.3.4/include/c++/3.3.4/bits/stl_algobase.h:121: error: candidates are: void std::swap(_Tp&, _Tp&) [with _Tp = boost::shmem::offset_ptr<boost::shmem::detail::segment_manager<wchar_t, boost::shmem::simple_seq_fit<boost::shmem::shared_mutex_family, boost::shmem::offset_ptr<void, boost::shmem::offset_1_null_ptr> >, boost::shmem::flat_map_index>, boost::shmem::offset_1_null_ptr>] /home/nbde52d/src/boost-regression/boost/boost/shmem/detail/utilities.hpp:43: error: void boost::shmem::detail::swap(T&, T&) [with T = boost::shmem::offset_ptr<boost::shmem::detail::segment_manager<wchar_t, boost::shmem::simple_seq_fit<boost::shmem::shared_mutex_family, boost::shmem::offset_ptr<void, boost::shmem::offset_1_null_ptr> >, boost::shmem::flat_map_index>, boost::shmem::offset_1_null_ptr>] - How much effort did you put into your evaluation? A glance? A quick
reading? In-depth study?
I've read the documentation going back to Ion's version 0.3 and re-read it from the Review Materials. Overall, a few hours of reading and thinking. - Are you knowledgeable about the problem domain? A little bit. -- Caleb Epstein caleb dot epstein at gmail dot com

Hi Caleb,
Please always state in your review, whether you think the library should be
accepted as a Boost library! An enthusiastic YES.
- What is your evaluation of the design? The design is very good.
Thanks!
I agree with others that there should be RAII versions of the constructors of various classes like shared_memory and named_shared_object that throw if they fail.
You will have them in all classes in the final version.
Another concern, addressed clearly in the documentation, is the necessity of reproducing the synchronization primitives of Boost.Thread and the xtime class in Shmem. This is yet another case where a library that tries to be thread-aware, without necessarily being multithreaded itself, has had to do this. From what I can gather from some grepping, Boost.Pool(pool/detail/mutex.hpp) and Boost.Regex (regex/pending/static_mutex.hpp) contains their own mutex implementation as well. Also, the proposed Logging library (which was pulled before the review was complete) and ASIO both contain some elemental synchronization classes that could be a more fundamental part of Boost.
I think it is high time for the separation of the synchronization primitives of Boost.Thread into a separate library before they are duplicated again. This duplication has the follow-on issues of reduced portability for the library that reproduces them and the possibility of bugs being introduced.
I agree. The problem is that Boost Thread seems to be in stand-by mode, so I don't know if we will have any chance to change the situation.
I'm also intrigued by the reproduction of many Standard Library containers due to implementation restrictions (under Current Limitations, "Problems with most STL implementations"). Ion, can you include a listing of platforms for which the Shmem versions of containers must be used and ones where the Standard Library containers can be used with the Shmem allocator?
As far as I know there is no single platform that supports Shmem/smart pointer allocators. Dinkumware STL was close, though. SGI-derived ones (STLport, libstdc++) use raw pointers so they are also incompatible. I don't know about Metrowerks and RogueWave-Apache; maybe Metrowerks won't be far.
Are there any of the latter? A table would be quite useful here. For compilers with a reasonable hope of getting changes fed into the Standard Library implementation (e.g. gcc) it might be worthwhile to contact the maintainers about these limitations to see if they can be removed.
I have already discussed this with Howard Hinnant and Paolo Carlini from libstdc++ in a private mail. They want to support shared memory allocators but the work is not trivial, since raw pointers are everywhere. I'm ready to help them.
From Current Limitations / "Be careful with static class members":
"Static members are not dangerous if they are just constant variables initialized when the process starts, but they don't change at all (for example, when used like enums)."
I'm not sure I understand the second half of this sentence. What does it mean to use a static variable like an enum? Maybe I'm just being thick-headed.
I just wanted to say that if you have "static const int a = 5" (like enum { a = 5 };) declarations that are just constant integral values, there is no harm. Each process will read its own copy, and since both will be the same (supposing they use the same version of the class, of course) there is no problem.
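(In code, the safe case looks like this -- each process sees an identical constant baked into its own copy of the class:)

struct settings
{
   static const int max_entries = 64;   // same idea as: enum { max_entries = 64 };
   // A *mutable* static, by contrast, would exist separately in every process
   // and silently diverge, which is what the limitation warns about:
   // static int current_entries;
};

int main()
{
   char buffer[settings::max_entries];  // used only as a compile-time constant
   (void)buffer;
   return 0;
}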
The Introduction states:
"Shmem also offers the basic_string pseudo-container to use full-powered C++ strings in shared memory."
But this class isn't documented. I see it in the header <boost/shmem/containers/string.hpp>. Actually it appears that none of the headers in the shmem/containers dir are mentioned explicitly in the documentation. They are touched on briefly in "Shmem and containers in shared memory", but this documentation might benefit from some expansion.
Sorry about the containers. They are not documented at all, since they are all standard C++ containers in the boost::shmem namespace. The only new container family is the flat_map family derived from Loki's AssocVector. I agree that I should document that at least. I don't know if documentation for the others is needed since they are standard C++ containers in the boost::shmem namespace. But if this is what boosters want, I will document them (but it's quite hard work).
- What is your evaluation of the potential usefulness of the library?
I think it has great potential, especially if the memory segments can be made to grow (fingers crossed!).
I will try to write a small paper about a multi-segment shared memory architecture (I mean allocating new segments when we need more memory) and the problems I've found when trying to construct one. But this will be after the Shmem review ends (and, if it's accepted, after I make the requested changes).
- Did you try to use the library? With what compiler? Did you have any
problems?
I didn't actually use the library other than to compile and run the tests w/gcc 3.3.4 on Linux. I did run into one error trying to compile the tests with a CVS version of Boost from this morning:
I don't have access to gcc 3.3.4 right now but I will try to discover what's going wrong. If you can find any solution, please let me know. Thanks for your review! Ion

I agree. The problem is that Boost Thread seems to be in stand-by mode, so I don't know if we will have any chance to change the situation.
I think this is an important issue for Boost in general!
Are there any of the latter? A table would be quite useful here. For compilers with a reasonable hope of getting changes fed into the Standard Library implementation (e.g. gcc) it might be worthwhile to contact the maintainers about these limitations to see if they can be removed.
I have already discussed this with Howard Hinnant and Paolo Carlini from libstdc++ in a private mail. They want to support shared memory allocators but the work is not trivial, since raw pointers are everywhere. I'm ready to help them.
OK, so users need to live with the shmem containers for a while. That's OK. I think you should state this clearly at some point: "No Standard Library implementation currently available supports Shmem this way" or words to that effect. I just wanted to say that if you have "static const int a = 5" (like
enum { a = 5 };) declarations that are just constant integral values, there is no harm. Each process will read their own copy and since both will be the same (supposing they use the same version of the class, of course) there is no problem.
Perhaps the last clause of that sentence should be re-written, or stricken entirely, as it seems not to say much. Sorry about containers. They are not documented at all, since all are
standard C++ containers in boost::shmem namespace. The only new container family is the Loki AssocVector derived flat_map family. I agree that I should document that at least. I don't know if documentation for others is needed since they are standard C++ containers in boost::shmem namespace. But if this is what boosters want, I will document them (but it's quite a hard work).
It makes no sense to reproduce Standardese for the classes that simply mimic Standard Library containers, but I would certainly be interested to see some documentation for the flat_* containers and related classes. I will try to write a small paper about multi-segment shared memory
architecture (I mean allocating new segments when we need more memory) and the problems I've found when trying to construct them. But this will be after Shmem review ends (and if it's accepted, after I make the requested changes).
I look forward to it. Plane boarding. Skiing trip is finally here! -- Caleb Epstein caleb dot epstein at gmail dot

I haven't looked at the library, but I'd like to raise a minor point. I think the name "shmem" is bad as it doesn't tell you what the library is about - most Boost libraries have more descriptive names. "Shmem" sounds to me like a letter of the Hebrew alphabet rather than the name of a Boost library. Couldn't it be called "shared_memory" or something similar instead? Paul

Hi Paul,
I haven't looked at the library, but I'd like to raise a minor point. I think the name "shmem" is bad as it doesn't tell you what the library is about - most Boost libraries have more descriptive names. "Shmem" sounds to me like a letter of the Hebrew alphabet rather than the name of a Boost library. Couldn't it be called "shared_memory" or something similar instead?
Suggestions are welcome. There are other libraries with the Shmem name (for example, for programming multiprocessor shared memory issues), so I'm not the only one with bad taste ;-) Regards, Ion

Ion Gaztañaga wrote:
Hi Paul,
I haven't looked at the library, but I'd like to raise a minor point. I think the name "shmem" is bad as it doesn't tell you what the library is about - most Boost libraries have more descriptive names. "Shmem" sounds to me like a letter of the Hebrew alphabet rather than the name of a Boost library. Couldn't it be called "shared_memory" or something similar instead?
Suggestions are welcome. There are other libraries with the Shmem name (for example, for programming multiprocessor shared memory issues), so I'm not the only one with bad taste ;-)
I prefer "shared_memory" as well, especially considering the recent threads on this list and the user list discussing new boost users not being able to easily/quickly finding what facilities are available. Had I not been introduced to the name shmem at my current employer(it's even worse, we have a shmipc library) I wouldn't have known what your library offered even though I've been using memory mapped files under windows for quite some time. Jeff Flinn

Hi Jeff,
I prefer "shared_memory" as well, especially considering the recent threads on this list and the user list discussing new boost users not being able to easily/quickly finding what facilities are available. Had I not been introduced to the name shmem at my current employer(it's even worse, we have a shmipc library) I wouldn't have known what your library offered even though I've been using memory mapped files under windows for quite some time.
We can take it further. Since this library is also about memory mapped files, a message queue, named objects, containers and so on, maybe it's more related to inter-process communications (IPC) than to shared memory alone. Or maybe IPC is an even worse name? Regards, Ion

Ion Gaztañaga wrote:
Hi Paul,
I haven't looked at the library, but I'd like to raise a minor point. I think the name "shmem" is bad as it doesn't tell you what the library is about - most Boost libraries have more descriptive names. "Shmem" sounds to me like a letter of the Hebrew alphabet rather than the name of a Boost library. Couldn't it be called "shared_memory" or something similar instead?
Suggestions are welcome. There are other libraries with the Shmem name (for example, for programming multiprocessor shared memory issues), so I'm not the only one with bad taste ;-)
Ah, I wasn't aware of that. Well, shared_memory is the obvious choice, or shared_mem perhaps. Both are descriptive and simple - I don't think you have to be too adventurous when it comes to naming libraries :-) Paul

Paul Giaccone <paulg@cinesite.co.uk> writes:
I haven't looked at the library, but I'd like to raise a minor point. I think the name "shmem" is bad as it doesn't tell you what the library is about - most Boost libraries have more descriptive names. "Shmem" sounds to me like a letter of the Hebrew alphabet rather than the name of a Boost library.
Yeah, it's barely even pronounceable for a native english speaker that doesn't also speak Hebrew ;-) -- Dave Abrahams Boost Consulting www.boost-consulting.com

| -----Original Message----- | From: boost-bounces@lists.boost.org | [mailto:boost-bounces@lists.boost.org] On Behalf Of David Abrahams | Sent: 10 February 2006 13:52 | To: boost@lists.boost.org | Subject: Re: [boost] [Review] Shmem | | Paul Giaccone <paulg@cinesite.co.uk> writes: | | > I haven't looked at the library, but I'd like to raise a | minor point. I | > think the name "shmem" is bad as it doesn't tell you what | the library is | > about - most Boost libraries have more descriptive names. "Shmem" | > sounds to me like a letter of the Hebrew alphabet rather | than the name | > of a Boost library. |Yeah, it's barely even pronounceable for a native english speaker that |doesn't also speak Hebrew ;-) As "a native english speaker that doesn't also speak Hebrew" this is my only contribution to this review: Boost.Shared_memory or Boost.Shared -- if it can apply to more than RAM memory? PLEASE. Paul -- Paul A Bristow Prizet Farmhouse, Kendal, Cumbria UK LA8 8AB Phone and SMS text +44 1539 561830, Mobile and SMS text +44 7714 330204 mailto: pbristow@hetp.u-net.com http://www.hetp.u-net.com/index.html http://www.hetp.u-net.com/Paul%20A%20Bristow%20info.html

Ion- There is some missing documentation for shmem. basic_named_shared_object is missing the member functions that are in basic_named_object_impl. I wouldn't have brought this up except that I've got a question about some of them. (Actually, Google's cache has some documentation on some of this. Is that out of date?) When you create a named_shared_object, your examples do so on the stack in the creating (or opening) process. From looking at the code, it appears that a named_shared_object::segment_manager is created in the shared memory. These segment_managers are also undocumented, but it appears that they do many (all?) of the things that can be done through the named_shared_object. I'm hoping to use one of these. I'm hoping to store an offset_ptr to the segment_manager in a shared object that I'm creating. The object would then be able to use the segment_manager to allocate and deallocate shared memory. Is that the correct use for these things? Am I going down the wrong track somewhere? -Fred

Hi Fred,
There is some missing documentation for shmem. basic_named_shared_object is missing the member functions that are in basic_named_object_impl. I wouldn't have brought this up except that I've got a question about some of them. (Actually, Google's cache has some documentation on some of this. Is that out of date?)
I think the problem is that the doxygen-quickbook-boostbook toolchain does not show inherited functions. The functions are already commented in the source code. What I can do for the final version is to implement all functions in the derived classes as forwarding functions to the base class and document them, so they appear in the reference section. In the documentation where I present named_shared_object, you can see all the functions and the comments for them. If you have any questions about them, please ask.
When you create a named_shared_object, your examples do so on the stack in the creating (or opening) process. From looking at the code, it appears that a named_shared_object::segment_manager is created in the shared memory. These segment_managers are also undocumented, but it appears that they do many (all?) of the things that can be done through the named_shared_object.
Yes, the segment manager, as the documentation in the architecture chapter says, is created in the shared memory. So it's the entity that really does all the work. named_shared_object and other front-ends create the back-end memory and forward all functions to the segment manager. I should document it, because it's a class that the user may need. Maybe I should pull it out of the detail namespace.
I'm hoping to use one of these. I'm hoping to store an offset_ptr to the segment_manager in a shared object that I'm creating. The object would then be able to use the segment_manager to allocate and deallocate shared memory.
Yes, that's the idea. You can have a relative pointer to the segment manager, so that you have all allocation functions available from an object placed in shared memory. That's the way all Shmem STL-like allocators work. Regards, Ion
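To make the idea concrete, a minimal sketch of the pattern (offset_ptr, segment_manager and named_shared_object are the library names discussed above; the allocate/deallocate member names and everything else here are illustrative assumptions, not the documented interface):

   //Hypothetical object stored inside the segment.  It keeps a relative
   //pointer to the segment manager of its own segment, so it can allocate
   //and free shared memory no matter where the segment is mapped.
   struct shared_buffer_owner
   {
      typedef named_shared_object::segment_manager segment_manager;

      offset_ptr<segment_manager> mp_mngr;   //survives remapping to another address

      explicit shared_buffer_owner(segment_manager *mngr) : mp_mngr(mngr) {}

      void *acquire(std::size_t bytes)
      {  return mp_mngr->allocate(bytes);  }    //assumed member name

      void release(void *ptr)
      {  mp_mngr->deallocate(ptr);  }           //assumed member name
   };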

I vote to accept the library into Boost. I have been following shmem development for a long time and most of the issues I had were resolved. It covers an important but so far neglected area. The potential for tools based on shmem is large. Ion has created an extensive (1 MB of sources) and powerful tool and I have full belief he will maintain and improve it further. For practical reasons I do not recommend attempting to split the library into separate parts. What I do not like is described in detail below. Most of my objections fit into two groups: 1. The documentation is /very/ dense with information and may easily scare a reader. Examples to play with and pictures/diagrams would lessen the mental toll on the reader. 2. (From someone who is not a native English speaker.) The current names are hard to understand and remember and lack intuitive meaning. Some words like "shared", "named" or "object" are overused. I think naming conventions should be discussed as the most important issue of the library, since shmem has a real chance to establish a de-facto standard for C++ IPC and process synchronization. /Pavel
__________________________________________________________ 1. index.html: "portable synchronization primitives" ==>> "portable interprocess synchronization primitives" --------------- Need for a glossary: the first page mentions several terms that are not really well defined: * dynamic allocation * segment * named allocation * base address (not everyone may know it) These terms should link to a glossary page when they are used for the first time, a la: <a ref="....">dynamic allocation</a> (<a ref="...#...">in glossary</a>)
__________________________________________________________ 2. Naming: IMO the naming convention should be reconsidered. It is rather hard to intuitively feel the difference between "named_shared_object" and "named_user_object", for example. I am not an English speaker so I don't dare to suggest alternatives, but the current names are quite confusing to me. Anyway, all naming conventions should be explained in the glossary. I have an uneasy feeling about these names: * named_shared_object (vague for me) * segment_manager (the word manager is overused) * segment (reminds me too much of the x86 segments) * mixing of "object" and "class" together
__________________________________________________________ 3. quick_guide.html: as was suggested, a smart pointer should be used in the first example. I am against adding a new typedef into shmem; seeing shared_ptr<void, shmem::deleter> gives me an immediate clue, while shmem_ptr_whatever is something one needs to look up. ----------------- The code snippets may have a link to the full source code of an example next to them. Especially for the first few snippets this will encourage novices to play and get immediate feedback. -------------- Which reminds me: there should be a page listing all examples with a short description of what is important and useful in each example. It is a fast and cheap way of learning and it is used e.g. in multi-index. ------------------ Code snippet about retrieving a named object from shmem (the fourth): the need for std::pair when extracting the object should be explained in a comment. It is unexpected at first sight. ------------------ The word "offset_ptr" should be linked to the reference. Generally, the docs should be interlinked as much as possible. ------------------ An example with offset_ptr AND a named object may be added to assure the first-time reader that these tools are orthogonal.
------------------- The title: "Creating vectors in shared memory with different base addresses" should be "Shmem mapped to different base addresses? Standard STL containers cannot be used. Solution: shmem's own container set." Long but self-explaining. Medieval publishers titled their books in a similar way and it sold well for centuries. ------------------- Just a pedagogical nit. The snippets show: int main() { ... if (!...) { return -1; } Depending on the OS the value (-1) would produce strange effects. Use 1. ------------------- Nits for snippet #6: * the name ShmemAllocator should be shmem_allocator_t * alloc_inst should be "my_allocator" or so (the word "inst" means something in Prolog/Mercury)
__________________________________________________________ 4. offset_ptr.html: the "offset_1_null_ptr" policy is sufficient for all purposes. Since I cannot think of any use case where another pointer policy is required, I recommend removing them. This should decrease code complexity and, most importantly, the mental load on the user.
__________________________________________________________ 5. limitations.html: more hyperlinks. ------------- In the sentence "References will only work if memory is ..." the word "only" should be bold and red. ------------ Perhaps this page should be divided into two parts: * shmem is always mapped to the same address * shmem can be mapped to different addresses and the limitations listed for each case.
__________________________________________________________ 6. rationale.html: the text about containers repeats what was already said before. While repeating helps one to remember better, this is the third time in the last three pages (quick guide/limitations/rationale).
__________________________________________________________ 7. concepts.html: should be named glossary and expanded. Some texts here sound strange: "...memory algorithm is an object..." A picture/diagram would help here. The bolding is overused. I feel uneasy about the term "front-end" but have no suggestion.
__________________________________________________________ 8. oswrappers.html: "Base shmem classes" may be a better name, and the classes may be named "XYZ_base". The word "basic" is also used two docs pages later and it is very confusing (to me). --------------- The functions "create_with_func/open_with_func/...": I still didn't get the need for these (I am slow), but there should be a leaf page with an example and a typical list of situations where this feature is necessary. The code snippet should link to this "more" leaf page. ---------------- Perhaps the construction/initialisation dichotomy for mapped_file is not needed. The class looks cheap enough to use a constructor/destructor for it. --------------- Huge (> 4 GB) files and the mapped_file class: I am not sure what size_t is on current 64 bit systems. To be absolutely, positively safe I would use fileoff_t everywhere instead of size_t. ------------------- (Absolutely uninformed guess follows.) If process A crashes in the middle of a shmem operation (I expect this to happen during development), will all the mutexes/shared memories/etc. be destroyed automatically after process B is closed? Or will a reboot be needed to clean up the system? In the second case, could some API be added to explicitly destroy any named remnants? This may also be useful for self-restarting applications. -------------------- Use of the terms "process-shared" and "named". It is confusing (for me) and it took me a while to get it.
I suggest adding short info at the top of the page describing the distinction between these two approaches in tabular form. Should I pick names, I would use "XYZ placed inside shmem" and "XYZ identified by name". --------------------- Generally, this page (oswrappers.html) is /extremely/ dense with information. It covers stuff that is taught during a whole university semester. At the least, the reader should be warned at the top of the page that a lot of time is needed to grok it. Links to examples and class diagrams would help here.
__________________________________________________________ 9. named_shared_object.html: The title may be: "Named user data placed in shmem" or something that forms a full sentence. I have trouble parsing the current name. ----------------- Seeing the whole class synopsis is a bit of a frightening experience (a blob on two pages). Perhaps a mini-synopsis with just the template parameters and a few of the most important functions (or groups of functions) could be shown, only then followed by the full class. Another possibility would be SGI STL-like documentation with a table and detailed comments below the table. ----------------- Title: "Common named shared object classes" ==> "Default specialisation of class doing named shared memory allocation" ;-) I do not like the group "object classes" and do not like that something with "object" in its name is a class. I think this should be discussed. ------------------ Links to examples and class diagrams would help here. ------------------ Instead of having boost::shmem::anonymous_instance I would prefer to have a function construct_unnamed(...) or construct_anonymous(...). Similarly construct_unique(....). ------------------ The discussion of "index types" feels like something so advanced that it should be moved further down in the docs. Anyway, the current documentation is too low on details and actual purpose. Missing is a /table/ comparing each type and where it shines.
__________________________________________________________ 9. stl_allocators.html: A very dense page. A table at the top comparing the advantages of each allocator type would help. A link to the header(s) implementing the allocators should be present so the reader could jump there quickly. I guess every allocator type is used in at least one of the shmem containers: there could be a link to show this as an example.
__________________________________________________________ 10. containers_explained.html: An overview of Boost libraries compatible with shmem could be added (I suspect multi-index is compatible, while, say, smart pointer generally isn't). ------------- The last code snippet may be better if separated into smaller and more focused parts like "all in shmem", "nothing in shmem", "whatever in between" and so on.
__________________________________________________________ 11. customizing_boost_shmem.html: This should be separated into three parts: algorithms, custom allocators and indexes. ---------------- The previous pages in the docs should have hyperlinks like "see how to build your own XYZ (very advanced)" pointing here. -------------- Generally, interlinking should be used much more.
__________________________________________________________ 12. beyond_shared_memory.html: This name suggests super-advanced functionality far beyond the needs or abilities of anyone. I think what is presented here are very useful tools that could be used separately from any IPC and synchronisation. Complete examples would be very helpful here, as well as careful wording that doesn't suggest a need to use anything "shared" (in the OS sense).
---------- The revelation of memory mapped files feels too late at this point. Too many readers won't get that far or will skip the "beyond". The note about file mapping in the initial parts of the docs should have an interlink pointing here. ---------- I remember Pablo Halpern's proposal for a new allocator (on comp.std.c++). One of the features was the ability to 'move' objects passed into a container under the allocator of this container. At the moment of writing I have only a vague recollection but I think it could be possible (via shmem containers): * to create complex data structures in specified memory areas (without the need to modify too much user code). * to "(de)serialize" complex data structures. (As I said, these are vague ideas from the past.)
__________________________________________________________ 13. streams.html: there could possibly be overlap with the Boost.Iostreams library. At this moment I am not able to comment on it more.
__________________________________________________________ 14. shmem_smart_ptr.html: A teaser in the form of a mini-example should be at the top of the page. ------------- For examples I suggest always using full qualification for boost::shmem::scoped_ptr (and the other one as well). The danger of making a mistake is too large.
__________________________________________________________ 15. shared_message_queue.html: The mention of named_user_object should have a backlink and a comment that this means a user-buffer-related tool. ------ There's no explicit info on what happens if the buffer-is-too-small error occurs: will it read part of the message, will it stay in the queue or will it be discarded? This, and info on whether a message can get fragmented, should be added. ------ The double construction/initialisation steps may be merged into one (no strong opinion, just a feeling). ------ Functions like how-big-is-the-next-message() or peek() may be added. I guess the people who will use shmem will be the ones who happily use such tricks. ------ The "MyLess" in the example is not used consistently and should be removed for clarity. ------- Names like "mg1", "mg2" should be longer. ----- Information on the atomicity of the queue should be stated explicitly in the introduction paragraph. --------- I wonder whether and how it would be possible to build an 'object'-passing queue on top of shared_message_queue (subject to the necessary limitations). I know such a tool can easily get misused, but it would be a first step towards an Erlang-like message passing framework.
__________________________________________________________ 16. architecture.html: Diagrams picturing the levels of the library and the interaction between the levels would be handy. ---------- Information on how the lifetime of OS resources is managed should be here (especially for POSIX).
__________________________________________________________ 17. performance.html: Examples would be helpful. Examples that show a slow way as well as examples tuned for performance.
__________________________________________________________ 18. open_issues.html Namespaces: I do not like the idea of more than one namespace as it would only complicate an already feature-rich tool. If it is possible to implement (transparently to the user) a "type" checking then add it.
__________________________________________________________ 19. future_improvements.html Security attributes - IMHO adding them is almost trivial (I know, I know) and they should be added before the library is thrown to public.
__________________________________________________________ 20.
All shmem exceptions should have one common parent. The name sem_exception is too short. lock_exception should be deadlock_exception. -------------- Possibly shmem::bad_alloc may inherit from both the common parent and std::bad_alloc. This would make sense if a release of "normal" memory could have a positive effect on the availability of "shared" memory. Just an idea.
__________________________________________________________ 21. Wishes for additional functionality: * ability to enumerate named objects from shmem. Useful for debugging/troubleshooting. * debugging support: for example the first bytes of shmem should be fixed constants and assert()s should be everywhere to check that user code didn't overwrite them. Possibly the metadata may have a CRC stored and /always/ checked against. It may sound like paranoia but multiprocess troubleshooting is such a pain that any measure that helps to catch a bug is justified. * in "debug" mode the shared memory areas may be surrounded by guard areas (possibly too hard) and the memory of a deleted object may be filled with 0xCC. * primitive "transaction like" functionality for shmem. Scenario: - a shmem segment exists - I make a copy (clone) of the shmem data - many operations with shmem are executed - something may fail, but the ability to revert to a well-defined state (in the application sense, not in the low level C++ sense) is impossible at this moment - then the stored copy of shmem is written back into the shmem segment, restoring the initial, well defined state.
__________________________________________________________ 22. Wishes for the docs: * a page named "What I can do with shmem" with one-liner descriptions and links to examples or detailed docs. This should cover small practical tasks like using a process-wide mutex, adding data into a shared queue etc. * the current documentation should visually distinguish parts showing problems, mistakes and potentially wrong uses of shmem. A picture of a bomb is typically used for this purpose. * possibly a visual separation could be used for "basic" and "advanced" topics or for levels like "implementation details", "basic functionality" and "higher level tools". * the docs may have a section on typical (or expected) user mistakes, for example: - "what if I retrieve a named object but use the wrong type" * the docs should say whether it is possible to somehow block shmem from working properly. E.g. both processes crash and leave a named OS resource hanging around; restarted processes will fail to connect to the resource or it will be damaged beyond repair. Discovering such corner cases late in a project can be very unpleasant. * Pictures of inheritance diagrams could be generated relatively easily by Doxygen. In many places such pictures would come in handy.
__________________________________________________________ EOF

Hi Pavel,
I vote to accept the library into Boost.
Good start! ;-)
What I do not like is described in detail below. Most of my objections fit into two groups:
1. The documentation is /very/ dense with information and may easily scare a reader.
Examples to play with and pictures/diagrams would lessen the mental toll on the reader.
I agree. I will try to add some diagrams.
2. (From someone who is not a native English speaker.)
Currently used names are hard to understand and remember and lack intuitive meaning.
Some words like "shared", "named" or "object" are overused
They are not my favorites but I couldn't find better ones! I also want there to be a relationship between the front-ends, since all offer similar functions but each one operates on a different type of memory (named_shared_object, named_mfile_object...). Now we have to find a name for an object that creates shared memory/a memory mapped file and allows allocation of named objects there. I'm open to changing it.
__________________________________________________________ 1. index.html: "portable synchronization primitives" ==>> "portable interprocess synchronization primitives"
Ok
Need for glossary: the first page mentions several terms that are not really well defined:
* dynamic allocation * segment * named allocation * base address (not everyone may know it)
Well, maybe it's that I can't see it through the eyes of a newbie. I will try to define them (as accurately as I can).
I have uneasy feeling from these names: * named_shared_object (vague for me) * segment_manager (the word manager is overused) * segment (too much reminds me the x86 segments) * mixing of "object" and "class" together
Sorry, but I couldn't find better ones. Open to change if someone proposes good alternatives.
__________________________________________________________ 3. quick_guide.html: as it was suggested a smart pointer should be used in the first example.
I am against adding new typedef into shmem, seeing
shared_ptr<void, shmem::deleter>
gives me immediate clue, while
shmem_ptr_whatever
is something one needs to look up.
Ok, I wanted to keep the example simple, but I don't think a shared_ptr will hurt.
-----------------
The code snippets may have link to full source code of an example next to them.
Especially for first few snippets this will encourage novices to play and get immediate feedback.
Good idea. Since they are taken from real code it's not difficult.
Code snippet about retrieving named object from shmem (the fourth): the need for std::pair when extracting the object should be explained in a comment.
Ok.
------------------- The title: "Creating vectors in shared memory with different base addresses"
should be
"Shmem mapped to different base adresses? Standard STL containers cannot be used. Solution: shmem's own container set."
Long but self-explaining.
I find your phrase too long, but I will think of a better alternative.
------------------- Just a pedagogical nit. The snippets show:
int main() { ... if (!...) { return -1; }
Depending on OS the value (-1) would produce strange effects. Use 1.
Strange effects? I didn't know that. Which type of effects? I can change all return errors to 1.
-------------------
Nits for snippet #6:
* the name ShmemAllocator should be shmem_allocator_t * alloc_inst should be "my_allocator" or so (the word "inst" means something in Prolog/Mercury)
No problem.
__________________________________________________________ 4. offset_ptr.html:the "offset_1_null_ptr" policy is sufficient for all purposes.
Since I cannot think of any use case where an other pointer policy is required I recommend to remove them. This should decrease code complexity and most importantly mental load on the user.
I know you have been against this from the beginning. But I will try once more to resist. ;-)
__________________________________________________________ 5. limitations.html: more hyperlinks. "References will only work if memory is ..." the word "only" should be bold and red.
Bold will be enough!
__________________________________________________________ 6. rationale.html: the text about containers repeats what was already said before.
While repeating helps one to remember better it is for the third time on last three pages (quick guide/limitations/rationale)
Right. I will try to say it only once.
--------------- The functions "create_with_func/open_with_func/..."
I still didn't got need for these (I am slow) but there should be a leaf page with an example and typical list of situations where this feature is necessary.
These functions allow atomic initialization of objects in the shared memory when connecting/creating, so that two processes can create AND initialize shared memory without race conditions. The alternative would be to lock an external named mutex. It is used, for example, in shared_message_queue: if two processes open_or_create the message queue, the creator, apart from creating the shared memory segment, will initialize the queue members safely. Others have requested similar functionality for named_shared_object, so they can create the segment and atomically create some named objects. It's a matter of taste, I think.
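Roughly, the idea looks like this (pseudocode only: the real create_with_func/open_with_func signatures are in the reference, and the callable shown here, including its bool argument and the commented call, is an assumption):

   //Callable executed atomically while the segment is being created/opened,
   //so two racing processes cannot both initialize the queue members.
   struct initialize_members
   {
      void operator()(bool created) const
      {
         if(created){
            //we won the race: construct the queue members here
         }
         //otherwise another process has already initialized them
      }
   };

   //named_shared_object segment;
   //segment.open_or_create_with_func(name, size, initialize_members());  //call shape assumed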
--------------- Huge (> 4 GB) files and mapped_file class:
I am not sure what size_t is on current 64 bit systems.
To be absolutely, positively, safe I would use fileoff_t everywhere instead of size_t.
I use size_t when referring to memory, since size_t is enough to address the whole memory range. When referring to file offsets, I use fileoff_t. I think this is safe (if there is any 64-bit expert here, please correct me if I'm wrong).
-------------------
(Absolutely uninformed guess follows.)
If process A crashes in the middle of a shmem operation (I expect this to happen during development), will all the mutexes/shared memories/etc. be destroyed automatically after process B is closed?
Good question. The answer is... it depends. I can't guarantee cleanup when a process crashes, because I can't register rollback functions in C++. In Windows, the shared memory is automatically released by the OS. In Unix the file-like interface does not do this. And this is really annoying. I haven't found a solution for this (yes, I could handle all signals including SIGSEGV, but that would leave UNIX signals unusable for the user). If any POSIX expert can help I would appreciate it. This is, in my opinion, the weakest point of the library. The behavior of the shared memory is correct when all goes well. But when a process crashes, I can't do anything.
In second case, could some API be added to destroy any named remnants on explicitly?
I think this is a good idea, so that we can make more robust applications. I would need to register all objects in a process-level map (a static object, perhaps) when I create them. However, this singleton-like interface can create problems if we create static named objects (when will we have a good singleton in C++?)
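Whatever API the library ends up exposing for this, on POSIX systems the underlying call for removing a leftover named segment is shm_unlink (this sketch assumes the segment was created through shm_open under that name, which may or may not match Shmem's actual backing):

   #include <sys/mman.h>   //shm_unlink; link with -lrt on some systems
   #include <cstdio>

   //Remove a possibly stale named segment left behind by a crashed process.
   //Harmless if the segment no longer exists.
   void remove_stale_segment(const char *name)
   {
      if(shm_unlink(name) == 0)
         std::printf("removed stale segment %s\n", name);
   }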
-------------------- Use of term "process-shared" and "named". It is confusing (for me) and it took me a while to get it.
I suggest to add short info on the top of the page describing distinction between these two approaches in tabular form.
Ok.
--------------------- Generally, this page (oswrappers.html) is /extremely/ dense with information.
It covers stuff that is taught during whole university semester.
I know, but I don't intend to explain these concepts. I already assume the programmer knows something about them.
At least reader should be warned on the top of page that lot of time is needed to grok it.
Ok.
__________________________________________________________ 9. named_shared_object.html: Perhaps a mini-synopsis with just template parameters and few most important functions (or groups of functions) could be shown and only then followed by the full class.
Ok.
----------------- Title:
"Common named shared object classes"
==>
"Default specialisation of class doing named shared memory allocation"
I will try to find an alternative.
I do not like the group "object classes" and do not like that something with "object" in name is a class.
I think this should be discussed.
I agree. But I ran out of ideas some time ago.
__________________________________________________________ 9. stl_allocators.html:
A very dense page. A table on the top comparing advantages of each allocator type would help.
Ok
A link to header(s) implementing the allocators should be present so reader could jump there quickly.
Ok
__________________________________________________________ 10. containers_explained.html:
An overview of Boost libraries compatible with shmem could be added (I suspect multi-index is compatible, while, say, smart pointer generally isn't).
Sorry, but I'm not aware of any.
The last code snippet may be better if separated into smaller and focused parts like "all in shmem", "nothing in shmem", "whatever inbetween" and so on.
Ok
__________________________________________________________ 12. beyond_shared_memory.html:
This name suggests super-advanced functionality far beyond needs or abilities of anyone.
Any name suggestion?
The relevation of memory mapped files feels too late, at this moment. Too many readers won't get that far or will skip the "beyond".
You are right. Maybe I should mention it just after the named_shared_object explanation, since it offers basically the same features.
__________________________________________________________ 13. streams.html: there could be possibly overlap with Boost.Iostreams library.
I don't know Boost.Iostreams well. Anyway, I think bufferstream and vectorstream are very general tools that could replace many scanf/printf calls in C++ code allergic to stringstream overhead.
__________________________________________________________ 14. shmem_smart_ptr.html:
For examples I suggest to always use full qualification for boost::shmem::scoped_ptr (and the other one as well).
Ok
__________________________________________________________ 15. shared_message_queue.html:
The mention of named_user_object should have backlink and a comment that this means user-buffer-related tool.
------
There's no explicit info on what happens if the buffer-is-too-small error occurs: will it read part of the message, will it stay in the queue or will it be discarded?
It will stay in the queue.
This and info whether message can get fragmented should be added.
A message will never be fragmented. I will write that.
The double construction/initialisation steps may be merged into one (no strong opinion, just a feeling).
I will add it as an exception-throwing alternative.
------ Functions like: how-big-is-the-next-message() or peek() may be added.
Ok. With "peek" you mean copying the message but without extracting from the queue?
The "MyLess" in the example is not used consistently, should be removed for clarity. The names like "mg1", "mg2" should be longer.
Ok.
----- Information on atomicity of the queue should be told explicitly inside introduction paragraph.
Ok
__________________________________________________________ 16. architecture.html:
Diagrams picturing the levels of the library and interaction between the levels would be handy.
Ok
----------
Information how lifetime of OS resources is managed should be here (especially for Posix).
Ok
__________________________________________________________ 19. future_improvements.html
Security attributes - IMHO adding them is almost trivial (I know, I know) and they should be added before the library is thrown to public.
But how will you unify POSIX/Windows security attributes? A hard issue, in my opinion.
__________________________________________________________ 20. All shmem exception should have one common parent.
Name sem_exception is too short.
lock_exception should be deadlock_exception.
lock_exception is named after the boost::thread name.
__________________________________________________________ 21. Wishes for additional functonality:
* ability to enumerate named objects from shmem. Useful for debugging/troubleshooting.
I could fill (atomically) a vector with info about types. I will look at this.
* debugging support:
Ok, but this is quite a big task. I would like to implement them in future versions of Shmem.
* primitive "transaction like" functionality for shmem.
Scenario: - shmem segment exists - I do copy (clone) of the shmem data - many operations with shmem are executed - something may fail but ability to revert into well-defined state (in application sense, not in low level C++ sense) is impossible at this moment - then the stored copy of shmem will be written back into shmem segment, restoring the initial, well defined state.
Uf. I think this is beyond my knowledge!
* the current documentation should visually distinguish parts showing problems, mistakes and potentially wrong uses of shmem.
Picture of a bomb is typically used for this purpose.
Ok. Thanks for the review! I've left some comments out of the reply, but I think you will be happy if I address "only" those from above. It's just that replying to your reviews is very hard work! ;-) Regards, Ion

"Ion Gaztañaga" wrote:
Currently used names are hard to understand and remember and lack intuitive meaning.
Some words like "shared", "named" or "object" are overused
They are not my favorites but I couldn't find better ones! I also want there to be a relationship between the front-ends, since all offer similar functions but each one operates on a different type of memory (named_shared_object, named_mfile_object...). Now we have to find a name for an object that creates shared memory/a memory mapped file and allows allocation of named objects there. I'm open to changing it.
E.g. "shared_memory" is wasted for some internal class. Perhaps this method could be used: 1. select the most important class in the lib and assign it the most fitting, short name (I suspect named_shared_object -> shared_memory) 2. select the next most important class and assign as much fitting name as possible. 3. and so on ... __________________________________________________________
Just a pedagogical nit. The snippets show:
int main() { ... if (!...) { return -1; }
Depending on OS the value (-1) would produce strange effects. Use 1.
Strange effects? I didn't know that. Which type of effects? I can change all return errors to 1.
Some OSes may use only the lowest 8 bits of the returned value. That was the case for DOS and possibly Windows. __________________________________________________________
4. offset_ptr.html:the "offset_1_null_ptr" policy is sufficient for all purposes.
Since I cannot think of any use case where an other pointer policy is required I recommend to remove them. This should decrease code complexity and most importantly mental load on the user.
I know you were since the beginning against this. But I will try once more to resist. ;-)
To "resist" means finding an use case where such pointer is necessary. __________________________________________________________
If process A crashes in the middle of a shmem operation (I expect this to happen during development), will all the mutexes/shared memories/etc. be destroyed automatically after process B is closed?
Good question. The answer is... it depends. I can't guarantee cleanup when a process crashes, because I can't register rollback functions in C++. In Windows, the shared memory is automatically released by the OS. In Unix the file-like interface does not do this. And this is really annoying. I haven't found a solution for this (yes, I could handle all signals including SIGSEGV, but that would leave UNIX signals unusable for the user). If any POSIX expert can help I would appreciate it.
This is, in my opinion, the weakest point of the library. The behavior of the shared memory is correct when all goes well. But when a process crashes, I can't do anything.
Perhaps a tool can be provided to do cleanup in debug mode: the name of every created shmem/mutex etc. will be stored in a file and the file will be deleted on normal application close. When the application restarts it will try to read the file; if it finds it, it will destroy what is recorded there. With tmpnam() it should be portable and safe for all practical purposes. For debugging this should ensure a clean system every time the application starts. __________________________________________________________
In second case, could some API be added to destroy any named remnants on explicitly?
I think this is a good idea, so that we can make more robust applications. I would need to register all objects in a process-level map (a static object, perhaps) when I create them. However, this singleton-like interface can create problems if we create static named objects (when will we have a good singleton in C++?)
Jason Hise: chaos[at]ezequal.com. Don't know current state. __________________________________________________________
12. beyond_shared_memory.html:
This name suggests super-advanced functionality far beyond needs or abilities of anyone.
Any name suggestion?
"Named allocation in a user supplied buffer." (I like complete sentences.) "Named allocation" could perhaps be one of the basic key terms. It says what and how (up to a point). __________________________________________________________
13. streams.html: there could be possibly overlap with Boost.Iostreams library.
I don't know Boost.Iostream well. Anyway, I think bufferstream and vectorstream are very general tools that could replace many scanf/printf functions of C++ code alergic to stringstream overhead.
I've managed to use it only after a long struggle. I don't know how active J. Turkanis is these days; perhaps he may be asked.
Functions like: how-big-is-the-next-message() or peek() may be added.
Ok. With "peek" you mean copying the message but without extracting from the queue?
Yes. __________________________________________________________
19. future_improvements.html
Security attributes - IMHO adding them is almost trivial (I know, I know) and they should be added before the library is thrown to public.
But how will you unify POSIX/Windows security attributes? I hard issue, in my opinion.
Good old way:

   #ifdef WINDOWS
   void shmem_create(....., LPSECURITY_ATTRIBUTES* security);
   #else
   void shmem_create(....., int security);
   #endif

I would even avoid the defaults so people will be forced to take a look at it. __________________________________________________________
* primitive "transaction like" functionality for shmem.
Scenario: - shmem segment exists - I do copy (clone) of the shmem data - many operations with shmem are executed - something may fail but ability to revert into well-defined state (in application sense, not in low level C++ sense) is impossible at this moment - then the stored copy of shmem will be written back into shmem segment, restoring the initial, well defined state.
Uf. I think this is beyond my knowledge!
I think just a function to make a copy of the shmem memory:

   shared_ptr<char> shared_memory::clone()
   {
      shared_ptr<char> result(new char[total_size]);   //note: an array really wants shared_array or a custom deleter
      memcpy(....);
      return result;
   }

No locking/checking/whatever. If the application screws something up, the shmem memory will be overwritten and voila, we are back. ------------------------------ I'll try how BCB works with shmem, possibly during the weekend. /Pavel
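A self-contained version of that snapshot/restore idea (the base address and size are placeholders for whatever the segment object reports, and, as Ion notes in his reply, this is only sensible for plain data without OS resources inside):

   #include <vector>
   #include <cstring>
   #include <cstddef>

   //Raw snapshot of a mapped segment.
   std::vector<char> snapshot_segment(const void *base, std::size_t size)
   {
      std::vector<char> copy(size);
      std::memcpy(&copy[0], base, size);
      return copy;
   }

   //Write the saved bytes back, restoring the earlier application state.
   //No locking or consistency checks, exactly as proposed above.
   void restore_segment(void *base, const std::vector<char> &copy)
   {
      std::memcpy(base, &copy[0], copy.size());
   }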

Pavel Vozenilek wrote:
"Ion Gazta�aga" wrote:
Currently used names are hard to understand and remember and lack intuitive meaning.
Some words like "shared", "named" or "object" are overused They are not my favorite but I couldn't find better ones! I want also to be a relationship between the front-ends, since all offer similar functions but each ones operates in a different type of memory (named_shared_object, named_mfile_object...). Now we have to find a name to an object that creates shared memory/memory mapped file and allows allocation of named objects there. I'm open to change it.
E.g. "shared_memory" is wasted for some internal class.
What would you call a class that just does what boost::shmem::shared_memory does? It's a wrapper around the OS that just allocates a shared memory segment. Perhaps, for named_shared_yyy we can use something like shared_memory_xxx / mmapped_file_xxx / heap_memory_xxx / user_buffer_xxx. The problem is finding the xxx part.
To "resist" means finding an use case where such pointer is necessary.
A case where you want to address the whole segment byte by byte using a char pointer placed in the segment, since then the pointer can access the whole segment.
Perhaps a tool can be provided to do cleanup in debug mode:
Name of every created shmem/mutex etc will be stored in a file and the file will be deleted on normal application close.
When application restarts it will try to read the file, if it finds it it will destroy what is recorded there.
With tmpnam() it should be portable and safe for all practical purposes.
This supposes that only one application creates a segment. And that might not be true.
__________________________________________________________
12. beyond_shared_memory.html:
This name suggests super-advanced functionality far beyond needs or abilities of anyone. Any name suggestion?
Named allocation in user supplied buffer.
(I like complete sentences.)
The "named allocation" could be perhaps one of basic key terms. It says what and how (up to point)
Maybe "beyond shared memory" is a bit pretentious. I will think an alternative.
Good old way:
#ifdef WINDOWS void shmem_create(....., LPSECURITY_ATTRIBUTES* security); #else void shmem_create(....., int security); #endif
But can you have a common security attribute set across Windows/Unix? I'm not talking about compilation but semantics.
I think just a function to do copy of shmem memory
shared_ptr<char> shared_memory::close() { shared_ptr<char> result(new char[total_size]); memcpy(....); return result; }
No locking/checking/whatever.
You need to see whether synchronization objects can be safely copied, changed and afterwards overwritten with a char buffer. For user objects fully constructed in shared memory (without external dependencies on objects from outside the segment or OS resources) that can be true, but with resources from the operating system... who knows.
------------------------------ I'll try how BCB works with shmem, possibly during the weekend.
Thanks. Ion

Feature request to make a fixed base address usable. Generally it is impossible to ensure newly created shared memory will be mapped to the same address, because the address space gets fragmented almost instantly. I have used one workaround: * create a dummy main executable that does nothing but create/open the shared memory at the given address. * Since the process does nothing else (no other DLLs loaded, no dynamic allocations, no statics initialisation, nothing), it is ensured this request will succeed. * then a DLL with the main application code is called and passed the shared memory. This DLL does all the work, loads all other DLLs etc. Shmem may provide a function open() which takes ownership of existing shared memory, as if it had been created there. The difference from a user supplied buffer is that lifetime management of the shared memory block is kept inside the library. The trick could be a nice item in a "How do I..." hints page. /Pavel

Vote: ----- I believe the library should be made part of Boost, definitely! Usefulness: ----------- There is great potential in this submission. With this easy-to-use and highly extensible framework available, I would not want to touch any system API for shared memory directly. The library does not only provide access to shared memory, but also very useful machinery for general memory management. So I think it is (or at least should be) very useful to a wide audience. Design / Documentation: ----------------------- The library's name isn't exactly perfect. It's abbreviated, hard to pronounce and doesn't cover the whole functional range of the submission. Maybe Boost should contain a memory subsection (just like there is a "numeric" subsection and a "functional" subsection) and Boost.Pool would be another candidate -- or better, since the library provides enough memory management infrastructure that can be used with regular memory, split Shmem (and maybe merge it with Pool to some degree) to form Boost.Memory. I believe the library could generally benefit from a tighter Boost integration (of course it's a wise choice to do this after it has been accepted). Probably it is possible to get rid of some of the duplication (more below). Static member variables are local to a process -- is there an easy way to provide the semantics of a static member variable across process boundaries (a static member that proxies a named object, perhaps)? If yes, it would be quite useful and should be added. Many of the library's classes use "success results" for error handling (rather than throwing exceptions). I think it's OK for low-level stuff, as exception handling can be an expensive luxury in some contexts, but it should be said in the documentation. Probably it's handy to have a wrapper/subclass that adds throw-exception-on-failure semantics. The documentation states that the current STL implementations can't handle smart pointers through their allocators. It also mentions that VC7.1 is close. So what about VC8? And BTW, did you try to improve things by writing bug reports to the developers of libstdc++, STLport and maybe Dinkumware? Did you consider utilizing Boost.PtrContainer? shared_message_queue could use some templated overloads for the send and receive functions to - map pointers to offsets, - submit values and arrays thereof as memory dumps, and - provide optional type checking to compare sent and expected types. It would allow one to easily manufacture complex serialized protocols. A very useful data structure to have in a shared memory context would be a discriminated union such as Boost.Variant. There are no security attributes -- what are the defaults then? Can I emit machine code into shared memory and have another process jump into it? I'd like to (but might not be representative of the average user) -- it should be mentioned what the defaults are, though. Further, I can't really see how security attributes on memory segments relate to security attributes on files and why supporting them means so much work. Please tell me! Documentation structure: ------------------------ The "Concepts and definitions" part of the documentation seems misplaced to me (at this point of reading it doesn't help me a lot). Maybe it's something for the appendix. I'd also like to see a few more links within the documentation. The ownership pointers should be mentioned somewhere in the overview.
Implementation: --------------- I don't like that 1-hack for the NULL values in offset_ptr and I don't understand why it's necessary. If NULL is outside the memory segment -- so what? NULL should never be dereferenced anyway... There should be a "just works" solution and there's no need for parametrization, here. Maybe I don't understand the problem correctly. Evaluation effort, experience with the submission: -------------------------------------------------- The evaluation is based on a good reading of the documentation and I haven't used the library yet but am certainly looking forward to it. In case I get to it during the review period I'll post an update.

I don't like that 1-hack for the NULL values in offset_ptr and I don't understand why it's necessary. If NULL is outside the memory segment -- so what? NULL should never be dereferenced anyway... There should be a "just works" solution and there's no need for parametrization, here. Maybe I don't understand the problem correctly.
That's a good point. Is it possible to have a null offset_ptr point to NULL? (-this, essentially) It might not be. I can imagine potential problems with this on a CPU architecture that placed registers near 0, though.
The evaluation is based on a good reading of the documentation and I haven't used the library yet but am certainly looking forward to it. In case I get to it during the review period I'll post an update.
That would be really valuable! We've only had a few reviews so far from folks who have actually used the library. It would be nice to get some more. -Fred

Fred Bertsch wrote:
I don't like that 1-hack for the NULL values in offset_ptr and I don't understand why it's necessary. If NULL is outside the memory segment -- so what? NULL should never be dereferenced anyway... There should be a "just works" solution and there's no need for parametrization, here. Maybe I don't understand the problem correctly.
That's a good point. Is it possible to have a null offset_ptr point to NULL? (-this, essentially) It might not be. I can imagine potential problems with this on a CPU architecture that placed registers near 0, though.
You mean hardware registers like vector interrupts? Well, if you don't dereference there's no problem, and if you dereference you have to debug anyway... But my suggestion has a very fundamental flaw: when an offset pointer in process A sets the offset to '-this' (resulting in NULL), the offset pointer in process B would add its (very own) 'this' and *not* get NULL. Sorry for my stupidity attack, Tobias

"Tobias Schwinger" wrote: I don't like that 1-hack for the NULL values in offset_ptr and I don't understand why it's necessary. If NULL is outside the memory segment -- so what? NULL should never be dereferenced anyway... There should be a "just works" solution and there's no need for parametrization, here. Maybe I don't understand the problem correctly.
It is about converting an offset value into a pointer. Offset 0 means that the data didn't move, i.e. its absolute address stayed the same. Offset 1 always meaning NULL is based on the assumption that no one will ever use a pointer pointing inside itself, to its own second byte, a la:

   char* p;
   p = (char*)&p + 1;

I argue somewhere that this is enough for absolutely all practical situations and that the other, more complicated types of offset pointer can be removed from shmem. /Pavel
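For readers new to the trick, a bare-bones illustration of the convention (this is not Shmem's implementation, just the idea reduced to a char pointer):

   #include <cstddef>

   //Stores the distance from the pointer object itself to the pointee.
   //The value 1 is reserved to mean "null": no sane pointer ever points
   //to the second byte of itself, as argued above.
   class offset_char_ptr
   {
      std::ptrdiff_t m_offset;
   public:
      offset_char_ptr(char *p = 0)
      {  m_offset = p ? p - reinterpret_cast<char*>(this) : 1;  }

      char *get()
      {  return m_offset == 1 ? 0 : reinterpret_cast<char*>(this) + m_offset;  }
   };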

Pavel Vozenilek wrote:
"Tobias Schwinger" wrote:
Maybe I don't understand the problem correctly.
char* p; p = (char*)&p + 1;
OK, now I get it. And I don't think it's that bad anymore.
I argue somewhere that this is enough for absolutely all practical situations and that other, more complicated types of offset pointer can be removed from shmem.
Right. Who would seriously want to point to the second byte of an address? Thanks, Tobias

My review is based on a very light skim through everything. Sorry it's not more comprehensive, but if/when I get time I definitely want to play with this library!
***Design*** Shmem is limited in scope. It's a low level building block for implementing local inter-process object transfer. AFAICS shmem doesn't concern itself much with the protocols of IPC. It won't scale to distributed communications. Is there scope for proprietary IPC mechanisms (such as COM, DDE, and CORBA) to be built on top of it? The restrictions placed on the types of objects that can be placed in shared memory are understandable, but heavy, and presumably can't be checked? Now there must be a special definition of a shareable object outside the C++ class, e.g. like a COM interface. In fact there is some evidence that this is occurring with classes such as basic_named_shared_object. However, should these objects not be designed for a distributed environment as well as a local one? If shmem doesn't provide protocols for building objects that can be passed over a network in a scalable way, then objects using its mechanisms directly will be of limited use, so the larger picture should be taken into account now.
***Offset Pointers*** Couldn't pointer_offsets be a class? This would mean overloadable functions and type safety, rather than ptrdiff_t, which I'm guessing is an int or something? --------------
***Did you try to use it, with what compiler?*** Tested out the ProcessA/ProcessB example in VC7.1. It could do with some documentation as to how it should work. C++ IPC is unfamiliar territory! Eventually I figured out that running ProcessA and then ProcessB in separate Command Prompt windows should have the desired effect... which it did. It's fun! ----------
***Documentation*** It seems a lot of effort has been put into the documentation. It's noted that Ion is not a native English speaker, but it reads OK in spite of that. It probably needs to be better organised though. The Concepts and Definitions section should be split into a Concepts section and a Definitions section. Traditionally definitions are put near the start of the docs so the user knows where they are before defined words are used. Definitions should be expanded A Lot ... to include many more entities such as mutex, semaphore, shared-memory, named-shared-memory, offset-pointer, process etc. etc. Even if you think you know what they mean, some words need to be defined so the reader can see what the author means in this particular context. The quick guide for the impatient could then benefit from having unfamiliar terms hyperlinked to their definitions. ----------
***Construction*** On the construction issue, only because it's been brought to light recently.

   named_shared_object segment;
   //Create shared memory
   if(!segment.open_or_create(shMemName, memsize)){
      std::cout << "error creating shared memory\n";
      return -1;
   }

could (presumably?) be replaced with

   try{
      //Create shared memory
      named_shared_object segment(shMemName, memsize);
   }
   catch( shmem_exception & e){
      std::cout << "error creating shared memory\n";
      return -1;
   }

which I would prefer. ---------------------
***Should it be a boost library?*** You betcha. Yes indeed, it should definitely be a Boost library. C++ needs this type of library badly. It is all very low level however, and I would like to see a higher level abstraction for designing objects that can be passed about over a network too; shmem objects should then try to conform to that, with shmem's role maybe being a building block for it. Needs to be discussed. If it is in the docs... sorry, I missed it!
regards Andy Little

Andy Little wrote: NOTE: I haven't had time to look at this library yet, only read a few of the reviews.
The restrictions placed on the types of objects that can be placed in shared memory are understandable, but heavy, and presumably can't be checked? Now there must be a special definition of a shareable object outside the C++ class, e.g. like a COM interface. In fact there is some evidence that this is occurring with classes such as basic_named_shared_object. However, should these objects not be designed for a distributed environment as well as a local one? If shmem doesn't provide protocols for building objects that can be passed over a network in a scalable way, then objects using its mechanisms directly will be of limited use, so the larger picture should be taken into account now.
The shmem library will provide a common data block between process A and process B. From the shmem library's PoV, this is just a block of data whose contents are user defined. That is OK from the PoV of this library, as it should just be concerned with the memory sharing and the synchronisation thereof. If process A is writing data to that block, process B should be aware of that and not start reading until process A signals that all the data is written (especially if process A is writing the data incrementally). Similarly, process A should not be allowed to write to that memory while process B is reading from it. With this in place, transferring C++ objects across processes becomes easy if you know the type of object being passed. Boost already has a mechanism to do this (serialization), so you could make use of the Boost.Serialization library to persist the objects to a shmem memory block rather than a file. Therefore, the only thing that shmem needs to provide is a serialization archive that works for its shared memory resources. What would be nice is if the I/O part of the serialization could be separate from the archive format. (I am not sure this is possible with the current serialization library.) That way, you could do: object --> binary archive --> shmem | shmem --> binary archive --> object or object --> xml archive --> shmem | shmem --> xml archive --> object where shmem provides what is needed by the different archives in terms of I/O. Does the shmem library provide an I/O stream library sink and source? If it did, then it would be easy to serialize data using existing C++ streams.
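A rough sketch of what such a sink could look like today using Boost.Iostreams' array device over a raw buffer (the buffer pointer and size stand in for whatever shmem hands back; none of this is part of the Shmem library):

   #include <boost/iostreams/stream.hpp>
   #include <boost/iostreams/device/array.hpp>
   #include <boost/archive/text_oarchive.hpp>
   #include <cstddef>

   namespace io = boost::iostreams;

   //Serialize any Boost.Serialization-enabled object into a raw buffer,
   //e.g. one obtained from a shared memory segment.
   template<class T>
   void serialize_into_buffer(char *base, std::size_t size, const T &object)
   {
      io::stream<io::array_sink> out(base, base + size);   //treats the buffer as an ostream
      boost::archive::text_oarchive archive(out);
      archive << object;
      //the reading process would use io::array_source and text_iarchive
   }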
**Construction ***
On the construction issue, only because it's been brought to light recently:
    named_shared_object segment;
    //Create shared memory
    if(!segment.open_or_create(shMemName, memsize)){
        std::cout << "error creating shared memory\n";
        return -1;
    }
could (presumably?) be replaced with
    try{
        //Create shared memory
        named_shared_object segment(shMemName, memsize);
    }
    catch( shmem_exception & e){
        std::cout << "error creating shared memory\n";
        return -1;
    }
which I would prefer.
What happens if you have a class called shared_data that uses shmem, and you use it like this:

    shared_data data;

    int main()
    {
        return 0;
    }

If this throws an exception, your application will crash and it could be difficult to determine why. Doing something like that, you should at least provide an error policy, so you could report a sensible error and exit gracefully. For example:

    struct shmem_exit_policy
    {
        static void creation_error()
        {
            std::cout << "The application failed to initialize." << std::endl;
            assert( false );
            std::exit( -1 );
        }
    };
***Should it be a boost library**
You betcha. Yes indeed, it should definitely be a Boost library; C++ needs this type of library badly. It is all very low level, however, and I would like to see a higher-level abstraction for designing objects that can be passed about over a network too, which Shmem objects should then try to conform to; Shmem's role would then be as a building block for that. Needs to be discussed. If it is in the docs, sorry I missed it!
If shmem provides Boost.Serialization support and iostreams support then it would be easy to leverage those libraries to provide the high-level passing of objects you describe. - Reece

Reece Dunn wrote:
what happens if you have a class called shared_data that uses shmem, and use it like this:
shared_data data;
int main() { return 0; }
If this throws an exception, your application will crash and it could be difficult to determine why.
This is true for all classes. It doesn't matter whether shared_data uses shmem or not.

Peter Dimov wrote:
Reece Dunn wrote:
what happens if you have a class called shared_data that uses shmem, and use it like this:
shared_data data;
int main() { return 0; }
If this throws an exception, your application will crash and it could be difficult to determine why.
This is true for all classes. It doesn't matter whether shared_data uses shmem or not.
True, but we were discussing shmem and making it throw an exception if allocation fails. Throwing exceptions does make error handling easier in that the error reporting can be done in one place, but you have to be more careful about how you use classes that can throw. You could write my example as:

    class shared_data
    {
        boost::shared_ptr< shmem > shared;
    public:
        shared_data()
        {
            try { ... }
            catch( shmem_exception & ) { report_error_and_exit(); }
            catch( std::bad_alloc & )  { report_error_and_exit(); }
        }
    };

so you wouldn't need to have an error policy. This would work for other cases like this, not just using shmem. - Reece

Hi Reece,
If process A is writing data to that block, process B should be aware of that and not start reading until process A signals that all the data is written (especially if process A is writing the data incrementally). Similarly, process A should not be allowed to write to that memory while process B is reading from it.
With this in place, transferring C++ objects across processes becomes easy if you know the type of object being passed. Boost already has a mechanism to do this (serialization), so you could make use of the Boost.Serialization library to persist the objects to a shmem memory block rather than a file. Therefore, the only thing that Shmem needs to provide is a serialization archive that works for its shared memory resources.
You can do that, or you can construct the object directly in shared memory if both processes share the same ABI. In this case there is no serialization. But this approach is more appropriate when you want to share some complex data (for example, a database) between all processes.
What would be nice is if the I/O part of the serialization could be separate from the archive format. (I am not sure this is possible with the current serialization library.) That way, you could do:
object --> binary archive --> shmem | shmem --> binary archive --> object
or
object --> xml archive --> shmem | shmem --> xml archive --> object
where shmem provides what is needed by the different archives in terms of I/O.
Shmem offers two general stream-like classes, bufferstream and vectorstream. The first one directly serializes data via operator << to a user-provided buffer (which can be a shared memory buffer). You are protected against buffer overflows with classic iostream error handling. The second one serializes data via operator << to any character vector (a shared memory boost::shmem::vector<>, for example). The difference is that vectorstream reallocates the vector as you insert data. Take a look at the Shmem documentation at this address: http://ice.prohosting.com/newfunk/boost/libs/shmem/doc/html/shmem/streams.ht...
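For readers who want to see roughly what this looks like in code, here is a minimal sketch based on the description above. The header path, namespace, and exact bufferstream constructor are my assumptions, so check the streams page linked above before relying on them:

    #include <cstddef>
    #include <ostream>
    // Assumed header and namespace; see the Shmem streams documentation.
    #include <boost/shmem/streams/bufferstream.hpp>

    void write_message(char *shared_buffer, std::size_t buffer_size)
    {
        // Formats text directly into the user-provided (shared memory) buffer;
        // on overflow the stream's error state is set, as with any iostream.
        boost::shmem::bufferstream out(shared_buffer, buffer_size);
        out << "ticks: " << 42 << std::ends;
        if(!out){
            // The buffer was too small; handle the error here.
        }
    }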
If shmem provides Boost.Serialization support and iostreams support then it would be easy to leverage those libraries to provide the high-level passing of objects you describe.
I agree. I would need to see what I need to do to have Serialization support, that is, what the requirements of a Serialization archive are. Regards, Ion

Ion Gaztañaga wrote:
If shmem provides Boost.Serialization support and iostreams support then it would be easy to leverage those libraries to provide the high-level passing of objects you describe.
I agree. I would need to see what I need to do to have Serialization support, that is, what the requirements of a Serialization archive are.
LOL - if all else fails you could look at the documentation for the serialization library. Robert Ramey

"Ion Gaztañaga" wrote:
If shmem provides Boost.Serialization support and iostreams support then it would be easy to leverage those libraries to provide the high-level passing of objects you describe.
I agree. I would need to see what I need to do to have Serialization support, that is, what the requirements of a Serialization archive are.
Since shared memory can be /huge/ there should also be a mechanism to retrieve the raw data piece by piece (for further processing). Having only serialization would mean doubling memory usage when the shmem gets serialized. /Pavel

Reece Dunn wrote:
Andy Little wrote:
With this in place, transferring C++ objects across processes becomes easy if you know the type of object being passed. Boost already has a mechanism to do this (serialization), so you could make use of the Boost.Serialization library to persist the objects to a shmem memory block rather than a file. Therefore, the only thing that Shmem needs to provide is a serialization archive that works for its shared memory resources.
Actually, serialization uses I/O provided by an underlying stream. So all one would need is a variation on std::strstream which uses shmem rather than the program memory - and you'd be all done!
What would be nice is if the I/O part of the serialization could be separate from the archive format. (I am not sure this is possible with the current serialization library.)
This is the way it is now - see the docs.

That way, you could do:
object --> binary archive --> shmem | shmem --> binary archive --> object
or
object --> xml archive --> shmem | shmem --> xml archive --> object
where shmem provides what is needed by the different archives in terms of I/O.
Does the shmem library provide an I/O stream library sink and source? If it did, then it would be easy to serialize data using existing C++ streams.
as well as the serialization library. Robert Ramey

Robert Ramey wrote:
Reece Dunn wrote:
Andy Little wrote:
With this in place, transferring C++ objects across processes becomes easy if you know the type of object being passed. Boost already has a mechanism to do this (serialization), so you could make use of the Boost.Serialization library to persist the objects to a shmem memory block rather than a file. Therefore, the only thing that Shmem needs to provide is a serialization archive that works for its shared memory resources.
Actually, serialization uses I/O provided by an underlying stream. So all one would need is a variation on std::strstream which uses shmem rather than the program memory - and you'd be all done!
Just use boost::iostreams::stream< boost::iostreams::array_source > using the memory pointer and size from the shared memory as the stream used to construct the archive. Jeff Flinn
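A rough sketch of Jeff's suggestion, assuming the pointer and size have already been obtained from the mapped segment (the helper name is made up for illustration):

    #include <cstddef>
    #include <boost/archive/binary_iarchive.hpp>
    #include <boost/iostreams/device/array.hpp>
    #include <boost/iostreams/stream.hpp>

    // Hypothetical helper: rebuild an object that another process has
    // serialized into the mapped region described by buffer/size.
    template <class T>
    void load_from_shared_buffer(const char *buffer, std::size_t size, T &object)
    {
        namespace io = boost::iostreams;
        io::array_source source(buffer, size);   // device reading the mapped bytes
        io::stream<io::array_source> is(source); // standard istream over that device
        boost::archive::binary_iarchive archive(is);
        archive >> object;
    }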

"Reece Dunn" wrote
what happens if you have a class called shared_data that uses shmem, and use it like this:
shared_data data;
int main() { return 0; }
If this throws an exception, your application will crash and it could be difficult to determine why.
The other alternative is that you construct the object, it's in a bad state, but because you made the darn thing a global variable you forget to check the state before use, and your application crashes or just hangs in an endless loop. I don't know an answer except to change the design. The global object without exceptions means less argument passing into functions but more tedious and error-prone checking of state within each function that references it. The current practice in C++ design is to create objects when you want them, isn't it? Changing the above design is a question of changing to:

    int main()
    {
        try{
            shared_data data;
            use(data); // data guaranteed to be in a good state on entry
            // if here data still good
            ...
        }
        catch(shmem_exception & e){
            std::cout << "shmem failed\n";
            return Error;
        }
    }

Are there particular situations where a design can't be modified to not use global data? FWIW none of the shmem examples use global variables, though they could do as currently designed.

OTOH I believe std::basic_ostream and istream handle the situation by means of "sentries" in their associated functions [C++PL 3rd Ed 21.3.8], though in that example flags are set which the user must check. My preferred option is that (if possible!) a user of data in the above example is guaranteed that s/he is either getting an object in a good state or an exception has been thrown, and doesn't need to remember to check the state at their end before each use. Of course if s/he can modify the state then we are back in the same position, but again the policy should be that any user action that is about to put the object in a bad state will rather throw an exception. The kind of object that violently informs me it's being put into a bad state is preferable IMO to the one that silently continues.

That said, if there are situations such as the no-exceptions platforms, then the behaviour should be suitably modified as a workaround only on those platforms, otherwise we are just denied the use of a well-used language feature simply because it's not universally supported. AFAIK the practice of not allowing exceptions in e.g. embedded use is only unofficially supported, isn't it?

regards Andy Little

Hi Andy,
***Design***
Shmem is limited in scope. It's a low-level building block for implementing local inter-process object transfer. AFAICS Shmem doesn't concern itself much with the protocols of IPC, and it won't scale to distributed communications. Is there scope for proprietary IPC mechanisms (such as COM, DDE, and CORBA) to be built on top of it?
You are right. Shmem is focused on classic inter-process mechanisms like shared memory/memory-mapped files, and it assumes ABI-compatible C++ clients. However, it can be used to build complex binary-serialized messages sent over a network. But I haven't put any effort into a distributed approach, mainly because I would be reinventing the wheel. I think that for a distributed approach you could use boost::serialization over TCP/IP. Shmem is focused on raw, maximum-efficiency transport between processes.
The restrictions placed on the types of objects that can be placed in shared memory are understandable, but heavy, and presumably can't be checked?
Well, I could add some is_polymorphic<> checks. However, I don't know how to detect member pointers and references.
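As an illustration of the kind of compile-time check Ion mentions, a guard along these lines could sit in whatever function creates objects in shared memory (the function name is hypothetical; only the Boost type traits and static assert facilities are real):

    #include <boost/static_assert.hpp>
    #include <boost/type_traits/is_polymorphic.hpp>

    // Hypothetical guard: reject types with virtual functions, whose vtable
    // pointers would not be valid when the segment is mapped by another process.
    template <class T>
    void assert_shareable()
    {
        BOOST_STATIC_ASSERT( !boost::is_polymorphic<T>::value );
    }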
There must now be a special definition of a shareable object outside the C++ class, e.g. like a COM interface. In fact there is some evidence that this is occurring with classes such as basic_named_shared_object. However, should these objects not be designed for a distributed environment as well as a local one? If Shmem doesn't provide protocols for building objects that can be passed over a network in a scalable way, then objects using its mechanisms directly will be of limited use, so the larger picture should be taken into account now.
*** Offset Pointers ***
Couldn't pointer_offsets be a class? This would mean overloadable functions and type safety, rather than ptrdiff_t, which I'm guessing is an int or something?
That seems a good idea. You prefer some kind of handle or identifier that can identify an object in shared memory and that can be passed between processes. I think it's a good idea to encapsulate it, since in a multi-segment architecture the unique identifier of an object in a multi-segment group could be a more complex object. I would need to change the get_offset_from_address()/get_address_from_object() functions of named_shared_object to a get_handle_from_address()/get_address_from_handle() approach. Then you can pass the handle of an object through an IPC mechanism instead of raw ptrdiff_t values.
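A minimal sketch of what such a handle type might look like (everything here is purely illustrative; neither the class name nor the multi-segment field is part of Shmem's current interface):

    #include <cstddef>

    // Illustrative only: a typed handle identifying an object within a shared
    // memory segment, instead of passing around a raw ptrdiff_t.
    class shared_object_handle
    {
    public:
        explicit shared_object_handle(std::ptrdiff_t offset = 0)
            : m_offset(offset) {}

        std::ptrdiff_t offset() const { return m_offset; }

        // A multi-segment design could also carry a segment identifier here.

    private:
        std::ptrdiff_t m_offset;
    };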
----------
*Documentation*
It seems a lot of effort has been put into the documentation. It's noted that Ion is not a native English speaker, but it reads OK in spite of that. It probably needs to be better organised though. The Concepts and Definitions section should be split into a Concepts section and a Definitions section. Traditionally definitions are put near the start of the docs so the user knows where they are before defined words are used. The definitions should be expanded a lot, to include many more entities such as mutex, semaphore, shared memory, named shared memory, offset pointer, process, etc.
I will try to improve that.
----------
**Construction ***
On the construction issue, only because it's been brought to light recently:
Ready to implement RAII as you know.
***Should it be a boost library**
You betcha. Yes indeed, it should definitely be a Boost library; C++ needs this type of library badly. It is all very low level, however, and I would like to see a higher-level abstraction for designing objects that can be passed about over a network too, which Shmem objects should then try to conform to; Shmem's role would then be as a building block for that. Needs to be discussed. If it is in the docs, sorry I missed it!
The idea of Shmem is to start building a better shared message queue, and a pipe-like stream. After that, maybe an interprocess datagram/stream mechanism similar to sockets. A Solaris-doors-like mechanism would be fine too. And apart from this, I want to do some research on a multi-segment Shmem architecture so that we can allocate new shared memory segments automatically when a segment is full of data. That would also require a special pointer type able to point from one segment to another when every segment can be mapped at a different address in each process. This could be a good base for multi-mapped-file databases. A lot of work to do, as you can see. Ion

Hi Ion,

One thing about Shmem concerns me greatly. AFAIK shared memory is accessed by a string handle. What, then, when using open_or_create, is to stop two totally unrelated applications from accessing the same shared memory area by accident, or by malice, because they used the same string handle? Am I missing something?

regards Andy Little

Hi Andy,
One thing about Shmem concerns me greatly. AFAIK shared memory is accessed by a string handle. What, then, when using open_or_create, is to stop two totally unrelated applications from accessing the same shared memory area by accident, or by malice, because they used the same string handle?
There is no way to stop this. It is the same as if two unrelated processes open and write to the same file. What can you do to stop this? Regards, Ion

Ion Gaztañaga wrote:
One thing about Shmem concerns me greatly. AFAIK shared memory is accessed by a string handle. What, then, when using open_or_create, is to stop two totally unrelated applications from accessing the same shared memory area by accident, or by malice, because they used the same string handle?
There is no way to stop this. It is the same as if two unrelated processes open and write to the same file. What can you do to stop this?
Not much. Security attributes can restrict access to specific users or specific groups, but if the offending process has the same security status as the legitimate one, there's no way to stop the access. The best you can do to prevent accidental access is to use unique prefixes for the name, similar to Java packages or XML namespaces, e.g. a domain you control. Then you just need to make sure that your company doesn't produce two unrelated applications that use the same name, but that ought to be manageable. But as far as malicious interference goes, all IPC methods are rather weak in my experience: named pipes, message queues, Win32 messages, global named synchronization objects, they all suffer from the same problem. Sebastian Redl
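A tiny illustration of the naming convention Sebastian describes (the domain and segment name below are placeholders, not anything defined by Shmem):

    // A reverse-domain prefix makes accidental collisions between unrelated
    // applications unlikely, much like Java package names.
    const char * const shared_memory_name = "com.example.myapp.telemetry_segment";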

Please always state in your review, whether you think the library should be accepted as a Boost library!
Yes.
- What is your evaluation of the design/implementation?
Seems good, though bear in mind my experience of C++ is quite limited relative to most members of this group :-).
- What is your evaluation of the documentation?
It would be useful if there were more example docs, e.g. something that introduces features more advanced than those covered in "Quick Guide for the Impatient", but which isn't too in-depth. I was also initially confused as to what the library could do; I got the impression that, e.g., memory-mapped files were just an example of what you might want to implement using the shmem interface, and was pleasantly surprised to discover that in fact the functionality was included in the library.
- What is your evaluation of the potential usefulness of the library?
Very useful, especially as it supports STL containers, and is cross-platform.
- Did you try to use the library?
I compiled all the examples with MS Visual Studio 2003, and tried 2 examples: processA/B and shmemmmapped_file.
- How much effort did you put into your evaluation?
I briefly skimmed through the documentation.
- Are you knowledgeable about the problem domain?
No. As another poster mentioned, adding network communication might be a useful future addition, providing an easy-to-use interface. Simon

- What is your evaluation of the documentation?
It would be useful if there were more example docs, e.g. something that introduces features more advanced than those covered in "Quick Guide for the Impatient", but which isn't too in-depth.
Ok, I will try to improve that section.
I was also initially confused as to what the library could do; I got the impression that, e.g., memory-mapped files were just an example of what you might want to implement using the shmem interface, and was pleasantly surprised to discover that in fact the functionality was included in the library.
Yes, memory mapped files are not given enough importance in the documentation, and this is something to correct. Thanks for the review, Ion

This is in response to the Shmem formal review request. I have used Ion's library for the last 6 months developing several multi-threaded applications using gcc 3.4.4 on RH Linux WS4, and I continue to use it on a daily basis. I have found the library to be extremely well designed and documented. I can say with confidence that Shmem works very well, including all the shared memory sync primitives. I have created thousands of C++ objects in shared memory using multiple server applications and multiple client updater applications, all running simultaneously and accessing the same objects at update rates close to 100 Hz. The shared sync primitives work flawlessly.

The only area of frustration I have experienced is the use of containers in shared memory, which Ion describes in his documentation. He has provided a good set of containers, describes their usage under "STL containers in shared memory", and explains why he has provided them, but it sure would be nice if STL implementations would use typedefs for allocator pointers so that any STL container could be used in shared memory.

Overall, this would be an excellent addition to the Boost library. Harold
participants (21)
- Andrew J Bromage
- Andy Little
- Caleb Epstein
- Darren Cook
- David Abrahams
- Felipe Magno de Almeida
- Fred Bertsch
- Harold Pirtle
- Ion Gaztañaga
- Jeff Flinn
- Jurko Gospodnetic
- Kim Barrett
- Paul A Bristow
- Paul Giaccone
- Pavel Vozenilek
- Peter Dimov
- Reece Dunn
- Robert Ramey
- Sebastian Redl
- Simon Li
- Tobias Schwinger