
Matthias Troyer wrote:
Hi Robert,
On 25 Jul 2010, at 10:28, Robert Ramey wrote:
Matthias Troyer wrote:
Then please demonstrate how to implement an archive that actually does anything sensible and supports pointers, etc., without depending on what you call implementation details. The only way is implementing all the functionality from scratch.
Here's what to do:
a) derive from common archive instead of binary_archive. This is what all the other archives do. At this level and above there is only interface and not implementation. By doing this you will have total control of the relationship between the native types as represented in memory and those written to the storage. The interface at this level hasn't been frozen in any specification - but as far as I can recall - it's never been changed since the beginning.
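For illustration, here is a minimal sketch of the kind of archive (a) describes, modeled on the skeleton shown in the serialization library's documentation. The class and member names (demo_oarchive, m_os) are made up, and details such as the exact save/save_override signatures vary somewhat between library releases.

    #include <cstddef>
    #include <ostream>
    #include <boost/archive/detail/common_oarchive.hpp>

    class demo_oarchive :
        public boost::archive::detail::common_oarchive<demo_oarchive>
    {
        // give the serialization library access to the private save() templates
        friend class boost::archive::save_access;

        std::ostream & m_os;

        // every primitive funnels through here, so the archive alone decides
        // how a native in-memory type is rendered to the storage; the
        // bookkeeping types (version_type, class_id_type, ...) also arrive
        // here and may need their own overloads
        template<class T>
        void save(const T & t){
            m_os << t << ' ';
        }
    public:
        demo_oarchive(std::ostream & os) :
            boost::archive::detail::common_oarchive<demo_oarchive>(0),
            m_os(os)
        {}
        // archives are expected to support this function
        void save_binary(void * address, std::size_t count){
            m_os.write(static_cast<const char *>(address),
                       static_cast<std::streamsize>(count));
        }
    };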
This still does not solve the basic issue I'm trying to tell you about. The problem we have now does not come about because of deriving from binary_archive.
OK - I had thought the problem came from the fact that the size of certain types as stored in the binary archive is now not the same as the native size of the type. For example, version_type is internally 32 bits but is stored as 8, 16, or 32 bits depending on the version of the archive. It had not occurred to me until just now that the mpi library might be dependent on the size of the type. I suppose I jumped to the wrong conclusion, as I first noticed the problem when I made the rendering of version_type in the file different from the size of version_type as stored in memory.
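A rough, self-contained illustration of the mismatch described above: the value occupies 32 bits in memory, but the archive chooses the width written to storage from the archive's own version. The widths and version thresholds below are made up, not the library's actual ones.

    #include <cstdint>
    #include <ostream>

    void save_version(std::ostream & os, std::uint32_t v, unsigned int archive_version){
        if(archive_version < 7){                      // old archives: one byte
            std::uint8_t w = static_cast<std::uint8_t>(v);
            os.write(reinterpret_cast<const char *>(&w), sizeof(w));
        }
        else if(archive_version < 8){                 // later: two bytes
            std::uint16_t w = static_cast<std::uint16_t>(v);
            os.write(reinterpret_cast<const char *>(&w), sizeof(w));
        }
        else{                                         // current: the full four bytes
            os.write(reinterpret_cast<const char *>(&v), sizeof(v));
        }
    }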
Even the common archive uses version_type, etc. In order to implement an archive I will need to know the list of these primitive types that I have to support and their interface.
Contrary to what you say, the interface to those types has changed now.
Hmmm - I did change the size of the types. And I did restrict the usage of the types, as opposed to STRONG_TYPEDEF, which permits anything - conversions, arithmetic, etc. I did this to make the system more robust and to avoid getting surprised by "automatic" behavior. None of the archive classes in the library "complained" about these restrictions. (Actually, not quite true - but the complaints were easily fixed and made the code more robust by minimizing conversions, etc.) And the truth is, it just never occurred to me that other archives might perform these operations on things like version_type, class_type, etc. If it had occurred to me, I likely would have assumed that any fixes would be trivial, as they were in my case.
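For illustration, the contrast being described might look roughly like this; the type names are made up, and the strong_typedef header has moved between Boost releases.

    #include <boost/strong_typedef.hpp>

    // a strong typedef still behaves almost like the underlying integer
    BOOST_STRONG_TYPEDEF(unsigned int, loose_version)

    // a restricted wrapper only permits the operations deliberately provided,
    // so accidental conversions and arithmetic in client code fail to compile
    class strict_version {
        unsigned int t;
    public:
        explicit strict_version(unsigned int v = 0) : t(v) {}
        unsigned int get() const { return t; }
        bool operator<(const strict_version & rhs) const { return t < rhs.t; }
    };

    void demo(){
        loose_version a(3);
        a = a + 1;                  // compiles: implicit conversion plus arithmetic
        strict_version b(3);
        // b = b + 1;               // error: no such "automatic" behavior
        unsigned int n = b.get();   // the only way out is explicit
        (void)n;
    }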
If you declare those types to be implementation details, then I still cannot implement an archive without making the "mistake" of relying on implementation details.
I think what I'm not seeing is why you need to rely upon how these types are implemented. The binary archive has to do this since there is the question of historical archives to be addressed. But in your case I don't see where the problem is coming from. It seems to me that the only connection between mpi_archives and specific types would be to skip version_type and class_id_optional_type in the mpi_archive class, since you don't need them.
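A sketch only of the kind of tweak meant here, written as members of an archive class like the demo_oarchive sketched earlier; the exact save_override signature (the trailing int in particular) differs between library releases.

    void save_override(const boost::archive::version_type & /* t */, int){
        // deliberately write nothing - the mpi peers agree on versions out of band
    }
    void save_override(const boost::archive::class_id_optional_type & /* t */, int){
        // deliberately write nothing - the optional class id is not needed
    }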
b) look into this issue of requiring default constructibility of types. Generally, default constructibility is not a requirement of serializable types. I suspect that this came about by accident and I also suspect it would be easy to fix.
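The claim that serializable types need not be default constructible refers to the serialization library's documented save_construct_data/load_construct_data hooks. A sketch with a made-up class:

    #include <new>                                   // placement new
    #include <boost/serialization/serialization.hpp>

    class gps_position {                             // illustrative class only
        int degrees;
    public:
        gps_position(int d) : degrees(d) {}          // note: no default constructor
        int get_degrees() const { return degrees; }
        template<class Archive>
        void serialize(Archive &, const unsigned int){}  // nothing else to save
    };

    namespace boost { namespace serialization {

    template<class Archive>
    void save_construct_data(Archive & ar, const gps_position * t, const unsigned int){
        const int d = t->get_degrees();
        ar << d;                                     // save the constructor arguments
    }

    template<class Archive>
    void load_construct_data(Archive & ar, gps_position * t, const unsigned int){
        int d;
        ar >> d;
        ::new(t) gps_position(d);                    // construct in place, no default ctor
    }

    }} // namespace boost::serialization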
Sure, it can be fixed by a redesign of the MPI datatype creation. I just fear that such a redesign of a core part of the library a few days before a new release is not a good idea.
I already checked in a modification which makes the version_type default constructor public. I did this in the effort to get things over the hump, and I don't think it's a big deal even though I'm not crazy about it. If this were the only issue there would be no problem here. Update - John Maddock has suggested a similar change in STRONG_TYPEDEF which I could do. But now I look at the Sun results and all these compile errors are gone - so I assume you made some sort of adjustment here. It's become clear to me that a definitive solution to this won't happen in the next few days. It's really going to take more time. This is not because I think that it's a lot of work; it's just that the required back and forth is a time-consuming process which can't be hurried. Sort of like a chess match where each side has to think about what the best move is.
c) look into the possibility of factoring out the MPI part so that the archive can be tested independently of the MPI facility. For example, if there were an mpibuf similar to filebuf, then the mpi_archive could be verified orthogonally to mpi. The mpi_archive would in fact become a "super_binary" archive - which would presumably be even faster than the current binary one and might have applicability beyond mpi itself.
All the other archives do the above so I don't think these enhancements would be very difficult. Benefits would be:
a) make things more robust - independent of the binary archive. The binary archive is sometimes hard to follow because the types actually used are sometimes hidden behind typedefs, so it's hard to see what's going on.
b) make things more easily testable on all platforms.
c) make mpi_archive useful for things (which of course I can't foresee) beyond just mpi.
I'm focused on getting past this current problem. And I think that implementing this suggestion is the only practical way to do it. I realize that this is a pain, but on the upside it would make a huge improvement to the mpi_archive at quite a small investment of effort.
Actually it seems you do not understand MPI well.
lol - at last one thing we can agree on!
Your proposal is not feasible for either of the archives we use: The "content" mechanism just sends from memory to memory, never going through any buffer. The packed archives use the MPI library to pack a buffer. Both require an MPI library.
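For readers unfamiliar with MPI, roughly what the packed approach amounts to in the plain MPI C API (an illustration only, not Boost.MPI's code): the MPI library itself packs values into a byte buffer and ships it, so an MPI library really is required.

    #include <mpi.h>
    #include <vector>

    // pack two values into one MPI-managed byte buffer, then send it;
    // a real implementation would size the buffer with MPI_Pack_size
    void send_packed(int value, double weight, int dest, int tag, MPI_Comm comm){
        std::vector<char> buffer(64);
        int position = 0;
        MPI_Pack(&value, 1, MPI_INT, &buffer[0],
                 static_cast<int>(buffer.size()), &position, comm);
        MPI_Pack(&weight, 1, MPI_DOUBLE, &buffer[0],
                 static_cast<int>(buffer.size()), &position, comm);
        MPI_Send(&buffer[0], position, MPI_PACKED, dest, tag, comm);
    }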
No dispute here. I tried to compile mpi_archive and took a cursory look at the code. I have no idea how feasible my suggestions are; I just thought they might make things better. Feel free to ignore them. I think I said this, but I got the idea that the skeleton presumed that the size of the data as stored in the binary archive was the same as the size of the data type. This used to be true, but I had to break that to maintain compatibility with historical archives. So if I'm wrong about the skeleton, and it's only a question of either my adding operations to version_type etc. or your tweaking your code to use only the subset of operations that these types now permit, then the problem is much smaller than I thought.
Robert Ramey