
Robert, this thread is beginning to ramble. I won't be able to keep up with it if we keep going back and forth like this.
At Tue, 27 Jul 2010 13:02:29 -0800, Robert Ramey wrote:
David Abrahams wrote (at Tue, 27 Jul 2010 11:58:19 -0800):
try this example, and see how well your library deals with it.
struct X {
    operator short() const { return 0; }  operator short&() const { static short s = 0; return s; }
    operator long() const { return 0; }   operator long&() const { static long l = 0; return l; }
};
In concept requirements, the use of convertibility almost always causes problems.
As written, this would work fine. Since it is not a primitive, the default serialization would insist upon the existence of a serialize function.
Then it wouldn't work fine. It's neither a primitive nor does it have a serialize function. You wrote:
Note that it could have a non-intrusive serialize function.
But it doesn't.
So I guess it would be correct to say that whether the above is serializable would depend upon other information not present in the above example.
I don't see any way to verify this via concepts.
If you mean with BCCL, it's *trivial* to do.
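For example, something along these lines would do it. This is just a sketch; the Serializable name and the ar & t requirement are my illustration here, not your library's actual trait:

#include <boost/concept/usage.hpp>
#include <boost/concept/assert.hpp>
#include <boost/archive/text_oarchive.hpp>

// Sketch of a Serializable check: a saving archive must accept the type
// via operator&.  BCCL never constructs this class; it only type-checks
// the usage function, so the reference members are never initialized.
template <class Archive, class T>
struct Serializable
{
    BOOST_CONCEPT_USAGE(Serializable)
    {
        ar & t;
    }
private:
    Archive& ar;
    T& t;
};

BOOST_CONCEPT_ASSERT((Serializable<boost::archive::text_oarchive, int>));  // fine
// The same assertion with X in place of int fails to compile, because no
// serialize() for X can be found; that is exactly the verification in question.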
I believe that any type implicitly convertible to a C++ primitive type (type and reference) is a serializable type.
and X contradicts that. We can go around and around on this until your definition of Serializable is solid, and I'm even willing to do so if that's what it takes to help you get this right.
The offer stands.
Actually, the convertibility isn't stated in the documentation or concept. It's just that when I made the archive models, convertibility reduced/eliminated most of the code. I just plowed on and finished the job. So I suppose the concept as stated isn't accurate.
Doesn't surprise me.
The current documentation doesn't say anything about convertibility. It just happened to be true for the internal types used by the library. It is only this which raises the question as to whether the concept as stated needs to be changed.
I think I disagree with you; there are lots of reasons that the concepts as stated should come into question. The most glaring one is that your documentation says these things:

1. being primitive is sufficient to make a type Serializable
2. a saving archive ar has to support ar << x for all instances x of any Serializable type
3. using serialization traits, any user type can also be designated as "primitive"

but gives no other clue about how to get the value into or out of an arbitrary primitive type. What that means for an archive implementor is that he is required to support serialization of a (potentially unbounded) set of primitive types for which there is no API that will let him figure out how to appropriately write instances into his archive and read them out again. Even though this is an issue of un-achievable semantics (not syntax), using BCCL would actually uncover this problem, because your archives would fail when primitive archetypes were serialized.
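To make #3 concrete, here is my own sketch (token is a type I just made up, not something from your library or docs):

#include <boost/serialization/level.hpp>

// A user-defined type with no stream operators, no conversions, and no
// serialize() function...
struct token { int bits; };

// ...which the traits mechanism nevertheless lets a user designate as
// "primitive".  Per 1. and 2. above, every saving archive now has to
// support ar << t for a token t, yet the concepts give the archive
// author no interface for getting the value out of a token or back in.
BOOST_CLASS_IMPLEMENTATION(token, boost::serialization::primitive_type)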
And there is precedent for this. shared_ptr is NOT a serializable type as described by the concepts - and never can be.
Perhaps not with those concept definitions, if you're unwilling to put its serialize function in namespace boost. But I don't see the relevance anyway.
The implemented archives include special code for shared_ptr to work around this and make it serializable anyway. Given the alternatives, I felt this was the best course, even though it somewhat muddles the question of exactly what is serializable.
So I think it's accurate to say that the current concepts describe sufficient requirements for serializability but not necessary ones.
I don't see how that could possibly be accurate. Sufficient requirements are a superset of the necessary ones. If you described sufficient requirements, I could see no problem, provided those requirements were implementable. Operations constrained by the concepts would use more than absolutely necessary, but models of the concepts would provide more than absolutely necessary. No conflict.
The iterators library does in fact use it (though probably not everywhere it should). The Graph library uses it all over the place.
I just looked again. I found ONE file in all of boost which includes boost/concept/requires.hpp. (That was in boost/graph/transitive_reduction.hpp)
Well, you're looking for the wrong header. I don't know why you thought that particular header was the key to everything, but look for boost/concept* and you'll find boost/concept_check.hpp and boost/concept_archetype.hpp, and you'll also find whole library headers devoted just to defining concept checking classes and archetypes, like boost/graph/distributed/concepts.hpp. This information is all available if you read http://www.boost.org/doc/libs/1_43_0/libs/concept_check/using_concept_check.... and glance quickly at http://www.boost.org/doc/libs/1_43_0/libs/concept_check/reference.htm
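The usual pattern is only a few lines; roughly this (illustrative only, using the stock InputIterator check and archetype from those two headers):

#include <boost/concept_check.hpp>      // BOOST_CONCEPT_ASSERT, boost::InputIterator
#include <boost/concept_archetype.hpp>  // boost::input_iterator_archetype

// Constrain a function template with a concept check...
template <class Iter>
void my_algorithm(Iter first, Iter last)
{
    BOOST_CONCEPT_ASSERT((boost::InputIterator<Iter>));
    // ... algorithm body elided ...
}

// ...and compile-test it against an archetype: if this instantiation
// compiles, my_algorithm isn't silently relying on operations beyond
// the stated requirements.
void concept_coverage_test()
{
    boost::input_iterator_archetype<int> it;
    my_algorithm(it, it);
}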
For what it's worth, based on these discussions (and not a recent look at your docs, admittedly) I _think_ I can identify at least one problem with your specification and your idea of what is a proper implementation detail. Please tell me if I'm wrong:
I've now looked at the docs and confirmed what I said here.
You require archives to handle all primitive types, yet there is a large class of such types for which you say the interface that creates instances, and gets and sets their values, is a private implementation detail.
I haven't needed getters/setters for any serialized types. In fact, the whole code base only has maybe two.
I didn't say anything about getters and setters. I said "the interface that gets and sets their values." An interface that sets the value might be the assignment operator. An interface that gets the value might be a conversion to int.
The documentation refers to primitive C++ types. These are all assignable, and a reference can be taken to them.
Yes, but it also says that any user-defined type can be made primitive, and the library supplies a whole bunch of library-defined primitive types of its own!
The current documentation says nothing about convertibility, so I think it's correct as it stands.
Unless you don't really want to allow people to write archives, it can't be.
If you don't specify how to create a value of any given primitive type, how is he supposed to deserialize it?
These types (e.g. class_id_type, etc.) are in fact created in the base archive implementation. References to these types are serialized, so the serialization doesn't have to construct them.
Forget construction; I purposefully said "create a value." If you pass my deserialize function a reference to T expecting me to fill the referenced object up with some value, but don't tell me about the interface for setting the value of a T (what types can be assigned into T, for example), then I'm up a creek, paddle-less.
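In code, the person writing the loading side is stuck guessing. Here's a sketch; load_primitive and WrapperType are stand-ins I just made up, not names from your library:

#include <istream>

// WrapperType stands in for one of the library-defined "primitive"
// types.  Nothing in the stated concepts says which, if any, of these
// lines is guaranteed to be legal.
template <class WrapperType>
void load_primitive(std::istream& is, WrapperType& t)
{
    int raw;
    is >> raw;                 // was the value even written as an int?
    t = raw;                   // is assignment from int part of the interface?
    // t = WrapperType(raw);   // or only explicit construction from int?
    // is >> t;                // or does it have a stream extractor?
}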
Is anyone else implementing archives other than you? If not, he's the only serious consumer you have of the archive concept. As the person in control of both sides of that contract, you're not going to notice these kinds of problems if you don't have solid concept definitions and concept checking in place, because you are free to (unintentionally) make changes that subtly alter the contract.
This is true and admittedly a problem.
This comes down to one thing: you need to decide what your public APIs are, and you need to have tests for all of them that don't make any assumptions beyond what's specified in the API. Maybe it would be easier to achieve if someone else were writing the tests.
Great - any volunteers?
Nail down what you think the concepts actually are, since you've said several times in this thread that you need to make adjustments. Then I will write some tests to reveal their brokenness [almost nobody, including me, gets concepts right without going through this exercise, so don't take it personally that I say they're broken]. I can't guarantee complete coverage but I can almost guarantee that I can reveal some holes.

--
Dave Abrahams
BoostPro Computing
http://www.boostpro.com