
Maciej Sobczak wrote:
Hi,
The boost.serialization library seems to use end-of-stream in the underlying istream object to denote the end-of-archive.
Why does it seem that way? It would certainly be contrary to my intention.
This equivalence might make sense with files where the stream is open for a short time and really associated with a single archive,
but seems to be cumbersome when used with streams that are supposed to be long-lived (network sessions?) and used for transmission of many separate archives.
I don't see that this would be a problem. What is the matter with the following?

    ostream os("pipename or whatever");
    // first archive
    {
        ?_oarchive oa(os);
        oa << ...;
    } // archive is destroyed here - stream remains open and available
    // second archive
    {
        ?_oarchive oa(os);
        oa << ...;
    } // archive is destroyed here - stream remains open and available
    os.close();
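For illustration, here is a minimal self-contained sketch of that pattern, assuming boost::archive::text_oarchive and an ordinary std::ostream reference standing in for whatever long-lived stream is used; the names and types are placeholders, not a prescription:

    #include <boost/archive/text_oarchive.hpp>
    #include <ostream>

    void write_two_archives(std::ostream &os)
    {
        int first = 1;
        int second = 2;
        {
            boost::archive::text_oarchive oa(os); // first archive
            oa << first;
        } // archive destroyed here - stream remains open and available
        {
            boost::archive::text_oarchive oa(os); // second archive
            oa << second;
        } // archive destroyed here - stream remains open and available
    }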
The problem arises between two applications that want to use the serialization library for data exchange "on the fly", over network sockets. Small tests have shown that it is not enough for the sender to flush its output stream (although flushing does result in the archive's data arriving at the destination side). For the archive to be read correctly, the sender needs to close the connection entirely. This suggests that the end-of-stream condition is used to denote end-of-archive in the serialization sense.
I don't believe the conclusion follows. The archive has to be constructed and destroyed - but the stream doesn't have to be.
Given the interface of the serialization library, where readers are created from streams and the stream object is syntactically supposed to outlive the archive, treating end-of-stream as end-of-archive is counterintuitive. I really expect this to work for the receiver:
std::istream &is = ...; // some input stream, possibly long-lived
    while (...) {
        boost::archive::text_iarchive ar(is);
        // ...
    }
with similar structure on the sender-side.
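To make the expectation concrete, here is a hedged round-trip sketch, assuming text archives and a std::stringstream standing in for the long-lived connection; whether the same thing works over a socket without closing it is exactly the question at hand:

    #include <boost/archive/text_iarchive.hpp>
    #include <boost/archive/text_oarchive.hpp>
    #include <iostream>
    #include <sstream>

    int main()
    {
        std::stringstream ss; // stands in for a long-lived connection

        // sender side: one archive per message, stream stays open
        for (int i = 0; i != 3; ++i) {
            boost::archive::text_oarchive oa(ss);
            oa << i;
        }

        // receiver side: one archive per message, same open stream
        for (int i = 0; i != 3; ++i) {
            boost::archive::text_iarchive ia(ss);
            int value = 0;
            ia >> value;
            std::cout << value << '\n';
        }
        return 0;
    }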
Any thoughts?
I also expect this to work.

Robert Ramey