At Fri, 27 Aug 2010 09:44:39 -0800, Robert Ramey wrote:
David Abrahams wrote:
At Thu, 26 Aug 2010 16:05:41 -0800, Robert Ramey wrote:
Dave Abrahams wrote:
BoostPro Computing, http://boostpro.com Sent from coveted but awkward mobile device
If one were using heterogeneous machines, I could understand the use of MPI types. But as I understand it, the MPI serialization presumes that the machines are binary compatible.
You're mistaken.
So I'm just not seeing this.
Start reading here: http://www.boost.org/doc/libs/1_39_0/doc/html/mpi/tutorial.html#mpi.skeleton... and all will be revealed
lol - I've read that several times.
I always wonder, when you write that, whether you're physically laughing out loud. That's OK; don't spoil the mystique ;-)
I actually do think I snicker a little bit. Sort of one "snick".
The above is exactly the type of thing that provokes this reaction. Reading your comment, one has to conclude that
a) you think I haven't read it but am expressing an opinion anyway;
b) you think that the documentation is clear and complete;
c) I'm somehow responsible for not finding/reading/understanding it.
You make this kind of presumption all the time.
It's a weakness of mine, I admit, for which I apologize. But I wasn't doing that in this case. I was a bit terse because, as I noted above, the message was sent from an awkward mobile device. Actually,

a) I did think you hadn't read it, but I was trying to be helpful by pointing at the part of the doc that relates to your question. I thought you were saying "I don't get it; please help me understand," not that you were "expressing an opinion."

b) I haven't read all the documentation. I did presume it was complete, but not that it was clear, hence http://groups.google.com/group/boost-list/msg/d14a38f3c55613b6

c) I don't see how anything I wrote could lead to that conclusion.
It doesn't bother me, but it always provokes at least one "snick".
I just never found it to be very revealing. The word "skeleton" seemed pretty suggestive. It's still not clear to me how such a thing can work between heterogeneous machines. For example, suppose I have an array of 2-byte integers and they each need to get transformed, one by one, into a 4-byte integer because that's the closest MPI data type,
I think you don't understand what MPI datatypes do.
This is true. I suppose that's one reason why the documentation made no sense to me. They just looked like special types meant to identify primitives across differing architectures.
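(For illustration: an MPI datatype is not so much a tag for a primitive as a description of a memory layout -- element counts, offsets, and element types -- which the MPI implementation can use to pack, send, and convert a structure between heterogeneous machines. A minimal sketch using the standard MPI C API; the Particle struct and make_particle_type function are invented for the example:)

    #include <mpi.h>

    struct Particle {
        double position[3];
        int    id;
    };

    // Build an MPI datatype describing Particle's layout (call after MPI_Init).
    // Once committed, the type can be used to send whole Particles; the MPI
    // implementation handles any representation conversion between machines.
    MPI_Datatype make_particle_type() {
        Particle p;
        int          blocklengths[2] = {3, 1};
        MPI_Datatype types[2]        = {MPI_DOUBLE, MPI_INT};
        MPI_Aint     displacements[2], base;

        MPI_Get_address(&p, &base);
        MPI_Get_address(&p.position, &displacements[0]);
        MPI_Get_address(&p.id, &displacements[1]);
        displacements[0] -= base;
        displacements[1] -= base;

        MPI_Datatype particle_type;
        MPI_Type_create_struct(2, blocklengths, displacements, types,
                               &particle_type);
        MPI_Type_commit(&particle_type);
        return particle_type;
    }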
<snip> Interesting information which one should consider adding to the MPI documentation. </snip>
Actually, I wish you had given me some detailed feedback. I was thinking of writing it up in such a way that the Boost.MPI docs could point at it, but I can't tell whether it connected for you or not.
And since Boost.MPI and Boost.Serialization are so closely related, I think it's especially important that *you* understand.
I disagree. Boost MPI depends upon Boost.Serialization but not the other way around.
I didn't say they were interdependent, just that they were closely related. But anyway, I think you're being shortsighted: the success of tools built on top of Boost.Serialization is an important indicator of the correctness and genericity of its design. Boost.Serialization *is* dependent on Boost.MPI for a portion of its userbase, and to the extent that you are interested in supporting that portion of the userbase and enabling anything like Boost.MPI to exist, the things you can/should do to Boost.Serialization depend on the requirements of Boost.MPI.
I shouldn't have to understand Boost.MPI, just as I can't be expected to understand all the uses to which Boost.Serialization might be applied.
Welcome to generic library design! :-) You have to decide what your application domain(s) is/are. Do you want to serve people who are trying to save/load their desktop applications' files? Do you want to serve people who want to save/load XML? Do you want to serve people who want to checkpoint the state of long-running calculations? Do you want to serve people who are trying to implement RPC? etc...
Trying to do this, aside from the time involved, might well be counterproductive in that it can trick one into coupling things which would otherwise not be coupled. For all these reasons I've refrained from investing a lot of time in understanding MPI as it relates to serialization. I'm happy with Matthias' efforts and commitment to supporting his library and don't want to muck up the works.
You can't be a generic serialization library without reference to real applications. Maybe you still haven't really decided whether serving Boost.MPI and its users is something you want to do. I think that might account for a good deal of the recurring friction we experience. It would be a good idea to settle on an answer to that question, along with the question of what other applications you're willing to support.
I only really have a few observations/suggestions at this point.
a) It would be helpful if there were a way to test the serialization of the archive classes in Boost.MPI without having MPI installed. If this is not possible, it would seem to me that the serialization is intertwined with the data transport, rather than being separated from it as in the iostream/streambuf design. This would look like a design flaw to me.
Helpful to whom? I agree this is all a good idea, but I'm not sure how this relates to anything else we're discussing.
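(For illustration of the separation point a) has in mind: with the ordinary Boost.Serialization archives, a round trip can be tested against nothing more than an in-memory stream, with no transport involved at all. Whether the Boost.MPI packed archives can be exercised in a similarly transport-free way is exactly the open question. A minimal sketch using the standard text archives; the value being serialized is made up for the example:)

    #include <boost/archive/text_oarchive.hpp>
    #include <boost/archive/text_iarchive.hpp>
    #include <boost/serialization/string.hpp>
    #include <cassert>
    #include <sstream>
    #include <string>

    // Round-trip a value through an archive whose only "transport" is an
    // in-memory stream: no MPI, no file, just the archive logic itself.
    int main() {
        std::stringstream buffer;
        std::string original = "hello, archive", restored;

        {
            boost::archive::text_oarchive oa(buffer);
            oa << original;
        }   // archive flushed on destruction
        {
            boost::archive::text_iarchive ia(buffer);
            ia >> restored;
        }
        assert(restored == original);
    }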
b) User experience seems to show that archive construction/destruction is a significant performance issue when a new archive is made for each data transmission. On the other hand, one has to do this, since the current archive implementation tracks addresses of serialized objects, so the same archive can't be used to send the same structure (maybe with changed data) multiple times.
I can't understand how you reach that conclusion. In my long explanation I thought I made it clear that Boost.MPI and its users get a great deal of their performance advantage from exactly that: sending the same structure multiple times with the same MPI type map.
Given that MPI has a focus on performance, I wonder if this has been considered. I looked at the documentation, code, and examples, and it wasn't obvious to me how this question was addressed, if at all.
We've obviously misunderstood one another somewhere along the way. It would be good if we could get that cleared up.
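(For reference, the skeleton/content mechanism from the tutorial linked earlier addresses exactly this kind of reuse: the structure is transmitted once, and subsequent transmissions reuse the resulting MPI type map and carry only the data. A sketch along the lines of the tutorial example, with the list contents and iteration counts invented for illustration:)

    #include <boost/mpi.hpp>
    #include <boost/serialization/list.hpp>
    #include <list>
    namespace mpi = boost::mpi;

    int main(int argc, char* argv[]) {
        mpi::environment  env(argc, argv);
        mpi::communicator world;

        std::list<int> values(100, 0);
        if (world.rank() == 0) {
            // Send the structure (number and layout of elements) exactly once...
            world.send(1, 0, mpi::skeleton(values));
            mpi::content c = mpi::get_content(values);
            for (int step = 0; step < 10; ++step) {
                // ...then repeatedly send only the data, reusing the same
                // MPI type map; the values may change between iterations.
                world.send(1, 1, c);
            }
        } else if (world.rank() == 1) {
            world.recv(0, 0, mpi::skeleton(values));
            mpi::content c = mpi::get_content(values);
            for (int step = 0; step < 10; ++step)
                world.recv(0, 1, c);
        }
    }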
c) I think the above information regarding how MPI and serialization fit together in Boost.MPI would be a worthy addition to the MPI documentation. AND it's already written!
It has been my intention to get it out in a more public place than this mailing list. Again, detailed feedback would be helpful. It can't already be perfect, or the misunderstanding cited in b) wouldn't have arisen.
You should know that all your efforts to educate me are not wasted.
Glad to hear it.

--
Dave Abrahams
BoostPro Computing
http://www.boostpro.com