
Quoting David Sankel <camior@gmail.com>:
> The serialization library describes its feature as a function that converts a supported object to a sequence of bytes and back into an "equivalent structure". The meaning of "equivalent structure" unfortunately isn't defined in the documentation.
> Consider the following example:
> struct Z { A * a; B b; };
> If we consider syntactic equivalence, the result of serializing and deserializing this would be something like:
> Z{ a=0x029348af, b=B{...} }
> If we use semantic equivalence, the round trip could result in any number of things, including:
> Z { a=new A{...}, b=B{...} }
> or even:
> Z {}
As long as the semantics of a member are limited to that member and don't interact with other members, they can be specified in introspect(), e.g. member<Z,A *,&Z::a,shared_pointer>(). shared_pointer is a typedef that, combined with the defaults, results in an instantiation of semantics<Set,IndirSemantics>, namely semantics<unique,semantics<mpl::set<polymorphic,shared> > >, which means that the deserialize function can treat the pointer itself as a unique data member, but the pointee might be shared with another pointer and be of a derived type. The user can add user-defined semantics there and use them in algorithm overrides.

This obviously is only useful when the semantics of one member don't depend on another member (e.g. introspect() of std::vector would be such a case). Or, similar to one of your examples: class optional{ bool is_initialized; T t; }; when "is_initialized" == false, "t" doesn't matter for "equivalence". Although it would be possible to define a new descriptor like member<...>() that covers that case, I think a class like that should not have an introspect() function but should override the algorithms instead.
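Roughly, a self-contained sketch of what that could look like. Only the names member<>, semantics<>, shared_pointer, the semantic tags and introspect() come from the description above; the definitions below are placeholders invented so the example compiles, not an actual interface:

#include <boost/mpl/set.hpp>

// semantic tags (placeholder definitions)
struct unique {};       // the value itself is an independent data member
struct shared {};       // the pointee may be referenced by other pointers
struct polymorphic {};  // the pointee may be of a derived type

template<class Set, class IndirSemantics = void>
struct semantics {};

// what the shared_pointer typedef mentioned above expands to
typedef semantics<unique, semantics<boost::mpl::set<polymorphic, shared> > >
    shared_pointer;

template<class Class, class Type, Type Class::*Ptr,
         class Semantics = semantics<unique> >
struct member {};

struct A {};
struct B {};

struct Z {
    A * a;
    B b;

    // lists the members together with their semantics; algorithms like
    // serialization visit these descriptors
    template<class Visitor>
    void introspect(Visitor & v) {
        v(member<Z, A *, &Z::a, shared_pointer>());
        v(member<Z, B, &Z::b>());
    }
};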
> I argue that a traversable concept should not mix syntax and assumed semantics, but should instead be strictly syntactic. The algorithms that are included in the library should mainly work on this level.
That's good in theory, but how would the user specify semantics separately from introspect(), especially common semantics like object tracking? If he had to override the algorithms every time a pointer is supposed to point to a unique or shared object (whichever is the non-default), there would have to be an override for almost every type containing a pointer.
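To illustrate the scale of that, a sketch; serialize() and archive here are made-up stand-ins for whatever the algorithm actually looks like:

struct archive { /* stand-in for the algorithm's state */ };

struct node     { node * next; };   // next points to a possibly shared object
struct document { char * title; };  // title points to a unique object

// generic algorithm, driven by introspect()
template<class T>
void serialize(archive &, T &) { /* visit members generically */ }

// without per-member semantics, each type needs a hand-written override
// whose only purpose is to state how its pointers are meant to be treated:
void serialize(archive &, node & n)     { /* track n.next as shared, recurse */ }
void serialize(archive &, document & d) { /* write *d.title as a unique object */ }

int main() {
    archive ar;
    node n = { 0 };
    document d = { 0 };
    serialize(ar, n);
    serialize(ar, d);
}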
> I don't know whether the reflections are semantic or syntactic. I do think a clear separation of syntax and semantics is important to writing clear code. Perhaps you can show what a shared_ptr "implementation of the concept" looks like?
It depends on the shared_ptr implementation whether you'd actually implement introspect() for shared_ptr, but I guess you mean supporting shared_ptr in a specific algorithm like apply_recursive, which seems to be comparable to scrap++:
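Roughly along these lines. This is a self-contained sketch: apart from the name apply_recursive, everything here is invented for illustration, and the real algorithm would walk the members reported by introspect() instead of just applying the functor at the leaf:

#include <boost/shared_ptr.hpp>
#include <iostream>

// leaf case: just apply the functor (the real algorithm would recurse
// through the members listed by introspect() here)
template<class F, class T>
void apply_recursive(F f, T & t) {
    f(t);
}

// shared_ptr support as an algorithm override: descend through the
// indirection; a real implementation would also track the pointee so an
// object shared by several pointers is visited only once
template<class F, class T>
void apply_recursive(F f, boost::shared_ptr<T> & p) {
    if (p)
        apply_recursive(f, *p);
}

struct print {
    template<class T>
    void operator()(T const & t) const { std::cout << t << '\n'; }
};

int main() {
    boost::shared_ptr<int> p(new int(42));
    apply_recursive(print(), p);  // prints 42
}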
> All the algorithms included in the library that work on a semantic level (like equality, serialization, etc.) might have a default implementation that works only on the syntax (although this is scary), but ought to at least be overridable for any structure where the syntax doesn't match the semantics. For example, consider the function inc that increments an "int" in-place.
> template<typename F, typename T> T applyToAll( F f, T t );
> struct X { int a; /* Invariant: a is always 0 */ };
if "a" is defined as immutable ("const") it is ignored by a "apply_members" functor that only takes non-const references. in general however, a class that has a member that doesn't behave like an independent, mutable, data member should not implement introspect(). so when an algorithm is used on that type a compiler error is generated until an override for that type and that algorithm is implemented.