[Serialization]#3118 basic_binary_oarchive.hpp - save_override possible loss of data warnings

I've posted the following ticket in light of the current bug sprint.
-----------
The following save_override overloads cause possible loss of data warnings on MSVC8 and XCode3.1.2/gcc4.0.1:

void save_override(const version_type & t, int)
void save_override(const class_id_type & t, int)
void save_override(const class_id_reference_type & t, int)

with their respective assignments:

const unsigned char x = t.t;
const int_least16_t x = t.t;

While a possible fix would be:

const unsigned char x = static_cast<unsigned char>(t.t);
const int_least16_t x = static_cast<int_least16_t>(t.t);

there is still a possibility of a silent loss of data. We could be safer and use numeric_cast, but that would possibly impact code size and performance, and would introduce a library dependency.

Why are the xxx_type strong typedefs using int rather than the smaller types that are being serialized?

Any thoughts on the best solution?

Thanks, Jeff
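For illustration, here is a minimal, self-contained sketch of the numeric_cast alternative mentioned above. boost::numeric_cast and bad_numeric_cast are the real Boost.NumericConversion facilities; the surrounding program and the example value are illustrative only, not the archive code:

#include <boost/numeric/conversion/cast.hpp>  // boost::numeric_cast, bad_numeric_cast
#include <iostream>

int main()
{
    const int version = 70000;  // pretend class version that no longer fits in a byte
    try {
        // numeric_cast throws instead of silently truncating the value
        const unsigned char x = boost::numeric_cast<unsigned char>(version);
        std::cout << static_cast<int>(x) << '\n';
    } catch (const boost::numeric::bad_numeric_cast & e) {
        std::cerr << "version does not fit in the archive field: " << e.what() << '\n';
    }
    return 0;
}

The extra safety comes at the cost of the exception machinery and the Boost.NumericConversion dependency that the post mentions.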

there is still a possibility of a silent loss of data.
This would only be an issue in the following cases:
a) A class has more than 255 backward compatible versions.
b) An archive has more than 64K different classes in it.
It is currently inconceivable to me that such a program would ever be written. Setting the sizes of these types to larger values would make every single binary archive bigger and slower.
The only thing I might suggest is that one add an assertion so that if either of the above were to occur, the program would trap - at least in debug mode. So maybe one might use
assert(t.t < max_value<unsigned char>());
const unsigned char x = static_cast<unsigned char>(t.t);
assert(t.t < max_value<int_least16_t>());
const int_least16_t x = static_cast<int_least16_t>(t.t);
Robert Ramey

Jeff Flinn wrote:
I've posted the following ticket in light of the current bug sprint. ----------- The following save_override overloads cause possible loss of data warnings on MSVC8 and XCode3.1.2/gcc4.0.1
void save_override(const version_type & t, int)
void save_override(const class_id_type & t, int)
void save_override(const class_id_reference_type & t, int)
with their respective assignments:
const unsigned char x = t.t;
const int_least16_t x = t.t;
While a possible fix would be:
const unsigned char x = static_cast<unsigned char>(t.t);
const int_least16_t x = static_cast<int_least16_t>(t.t);
there is still a possibility of a silent loss of data.
We could be safer and use numeric_cast, but that would possibly impact code size and performance, and would introduce a library dependency.
Why are the xxx_type strong typedefs using int rather than the smaller types that are being serialized?
Any thoughts on the best solution?
Thanks, Jeff

This would only be an issue in the following cases:
a) A class has more than 255 backward compatible versions. b) An archive has more than 64K different classes in it.
It is currently inconceivable to me that such a program would ever be written. Setting the sizes of these types to larger values would make every single binary archive bigger and slower.
The only thing I might suggest is that one add an assertion so that if either of the above were to occur, the program would trap - at least in debug mode. So maybe one might use
assert(t.t < max_value<unsigned char>());
const unsigned char x = static_cast<unsigned char>(t.t);
assert(t.t < max_value<int_least16_t>());
const int_least16_t x = static_cast<int_least16_t>(t.t);
Robert Ramey
You could always do some kind of escaped encoding. For example, reserve a value of 255 to mean >= 255, and follow that byte with the actual value using a larger representation (may as well go right to a native int).

Cheers, Chris
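For illustration, a rough sketch of the escape scheme described above (the helper names and the 32-bit fallback width are assumptions chosen for the example, not anything from the library):

#include <cstdint>
#include <vector>

// Values below 255 take one byte; 0xFF is the escape marker, followed by the
// full value in four little-endian bytes.
void write_escaped(std::vector<unsigned char> & out, std::uint32_t v)
{
    if (v < 0xFF) {
        out.push_back(static_cast<unsigned char>(v));
    } else {
        out.push_back(0xFF);  // marker: a full-width value follows
        for (int shift = 0; shift < 32; shift += 8)
            out.push_back(static_cast<unsigned char>((v >> shift) & 0xFF));
    }
}

std::uint32_t read_escaped(const unsigned char * & p)
{
    if (*p != 0xFF)
        return *p++;          // common case: a single byte
    ++p;                      // skip the marker
    std::uint32_t v = 0;
    for (int shift = 0; shift < 32; shift += 8)
        v |= static_cast<std::uint32_t>(*p++) << shift;
    return v;
}

The common case stays as small and fast as the current one-byte encoding; only values of 255 and above pay for the extra bytes.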

Robert Ramey wrote:
there is still a possibility of a silent loss of data.
This would only be an issue in the following cases:
a) A class has more than 255 backward compatible versions. b) An archive has more than 64K different classes in it.
That would be 32K different classes, since the data type is int_least16_t, which is signed.
It is currently inconceivable to me that such a program would ever be written. Setting the sizes of these types to larger values would make every single binary archive bigger and slower.
I also remember paying $800 for 4MB of RAM. ;-)
The only thing I might suggest is that one add an assertion so that if either of the above were to occur, the program would trap - at least in debug mode. So maybe one might use
assert(t.t < max_value<unsigned char>());
I couldn't find a max_value<> template; I assume you mean
assert(t.t <= boost::integer_traits<unsigned char>::const_max);
const unsigned char x = static_cast<unsigned char>(t.t);
assert(t.t < max_value<int_least16_t>());
and
assert(t.t <= boost::integer_traits<int_least16_t>::const_max);

Thanks, Jeff
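Put together, the guarded narrowing being discussed might look roughly like this. This is a sketch only: the helper names are made up for illustration, and the actual save_override bodies in basic_binary_oarchive.hpp are not reproduced here.

#include <boost/assert.hpp>          // BOOST_ASSERT
#include <boost/cstdint.hpp>         // boost::int_least16_t
#include <boost/integer_traits.hpp>  // boost::integer_traits

// Trap on overflow in debug builds, then narrow explicitly so the compiler
// no longer warns about a possible loss of data.
inline unsigned char narrow_version(int v)
{
    BOOST_ASSERT(v <= boost::integer_traits<unsigned char>::const_max);
    return static_cast<unsigned char>(v);
}

inline boost::int_least16_t narrow_class_id(int v)
{
    BOOST_ASSERT(v <= boost::integer_traits<boost::int_least16_t>::const_max);
    return static_cast<boost::int_least16_t>(v);
}

This silences the warnings, keeps release builds as fast as the plain cast, and still traps in debug mode if a version or class id ever outgrows its on-disk field.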

on Mon Jun 01 2009, "Robert Ramey" <ramey-AT-rrsd.com> wrote:
there is still a possibility of a silent loss of data.
This would only be an issue in the following cases:
a) A class has more than 255 backward compatible versions. b) An archive has more than 64K different classes in it.
It is currently inconceivable to me that such a program would ever be written. Setting the sizes of these types to larger values would make every single binary archive bigger and slower.
a. Don't underestimate the potential of systems that generate code to... generate code ;-), including new versions of a class

b. There are variable-length number formats that would allow you to avoid a size increase while still accommodating arbitrarily large version numbers.

--
Dave Abrahams
BoostPro Computing
http://www.boostpro.com

David Abrahams wrote:
on Mon Jun 01 2009, "Robert Ramey" <ramey-AT-rrsd.com> wrote:
there is still a possibility of a silent loss of data.
This would only be an issue in the following cases:
a) A class has more than 255 backward compatible versions. b) An archive has more than 64K different classes in it.
It is currently inconceivable to me that such a program would ever be written. Setting the sizes of these types to larger values would make every single binary archive bigger and slower.
a. Don't underestimate the potential of systems that generate code to... generate code ;-), including new versions of a class
b. There are variable-length number formats that would allow you to avoid a size increase while still accommodating arbitrarily large version numbers.
This is only an issue for binary archives, which are designed for speed at the expense of portability. Text-based archives use strings for numbers and hence handle arbitrary-length numbers in a portable way. The portable binary archive also uses a variable length encoding for all integers.

Robert Ramey
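For illustration, here is a generic base-128 ("varint") encoding of the kind being discussed. This is a sketch of the general technique only, not the portable binary archive's actual on-disk format:

#include <cstdint>
#include <vector>

// Seven payload bits per byte; the high bit is set while more bytes follow.
// Small values stay one byte, and large values simply grow as needed, so no
// fixed upper bound is imposed on version numbers or class ids.
void write_varint(std::vector<unsigned char> & out, std::uint64_t v)
{
    while (v >= 0x80) {
        out.push_back(static_cast<unsigned char>((v & 0x7F) | 0x80));
        v >>= 7;
    }
    out.push_back(static_cast<unsigned char>(v));
}

std::uint64_t read_varint(const unsigned char * & p)
{
    std::uint64_t v = 0;
    int shift = 0;
    unsigned char byte;
    do {
        byte = *p++;
        v |= static_cast<std::uint64_t>(byte & 0x7F) << shift;
        shift += 7;
    } while (byte & 0x80);
    return v;
}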
participants (4)
- Chris Hamilton
- David Abrahams
- Jeff Flinn
- Robert Ramey