On 11/14/05 4:45 AM, "Paul Giaccone" wrote:
> I haven't yet read the later messages, but from my point of view (a typical user), the bottom line *must* be:
> 1. If you can legitimately write it to an archive, you *must* be able to read it back in. If not, the archive is useless, and the library is buggy.
> 2. Conversely, if you cannot legitimately read it, you must *not* allow it to be written out - throw an exception or handle it in some other way, but you *must* advise the user somehow. If not, the library is buggy.
(The following applies only to initialized objects.) But how do you implement this? We would have to add some sort of censoring hook to catch the "bad" values. What if you really wanted that value serialized anyway? How do you know that a value is unserializable? The main factor seems to be the quality of the I/O system, not the object's type. How do we deal with the fact that said quality may vary per environment, so each computer may censor different values? Every object serialized now gets an extra "if" branch to run the censor; how much will that check, for both approved and rejected values, slow down the streaming?
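Just to make the cost concrete, here is a minimal sketch of what such a censoring hook might look like. The names (value_censor, checked_save) are hypothetical, not part of any existing serialization library:

// Sketch of the "censoring hook" idea: every save pays one extra branch.
#include <cmath>
#include <iostream>
#include <sstream>
#include <stdexcept>
#include <type_traits>

// A censor decides, per environment, which values are safe to stream.
template <typename T>
bool value_censor(const T& v)
{
    if constexpr (std::is_floating_point_v<T>)
        return !std::isnan(v);   // reject NaN where text I/O cannot round-trip it
    return true;                 // other types pass unchecked
}

// Each save now runs the censor before streaming the value.
template <typename T>
void checked_save(std::ostream& os, const T& v)
{
    if (!value_censor(v))
        throw std::runtime_error("value cannot be serialized portably");
    os << v << ' ';
}

int main()
{
    std::ostringstream archive;
    checked_save(archive, 3.14);              // accepted
    try {
        checked_save(archive, std::nan(""));  // rejected by the censor
    } catch (const std::runtime_error& e) {
        std::cerr << e.what() << '\n';
    }
}

Note that even this toy version has to bake in a policy (reject NaN always), which is exactly the question above: who decides what counts as "bad", and does that decision change per platform?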
> This applies to NaNs as much as to uninitialised booleans or anything else. The weight of argument so far seems to be that allowing NaNs to be written and read is a good thing. I'm sure it wouldn't be too hard to get the deserialisation code to parse NaN as a legitimate value for a floating-point variable.
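For what it's worth, a reader along those lines might look something like the following; read_double is a hypothetical helper, not the library's actual parsing code:

// Sketch of a reader that treats "nan" and "inf" tokens as legitimate values.
#include <iostream>
#include <limits>
#include <sstream>
#include <string>

double read_double(std::istream& is)
{
    std::string token;
    is >> token;
    if (token == "nan" || token == "-nan")
        return std::numeric_limits<double>::quiet_NaN();
    if (token == "inf")
        return std::numeric_limits<double>::infinity();
    if (token == "-inf")
        return -std::numeric_limits<double>::infinity();
    return std::stod(token);     // ordinary values take the usual parse path
}

int main()
{
    std::istringstream archive("2.5 nan -inf");
    for (int i = 0; i < 3; ++i)
        std::cout << read_double(archive) << ' ';
    std::cout << '\n';           // typically prints: 2.5 nan -inf
}
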
The issues aren't really the same. Uninitialized objects are a red herring; doing anything with them besides initializing them is illegal. The fact that the objects are "bool" is another red herring; the type doesn't matter. This kind of error can't be portably checked, and the energy spent on this effort should be directed away from giving the technique any smell of legitimacy and toward improving the programmers who use bad techniques.

-- 
Daryle Walker
Mac, Internet, and Video Game Junkie
darylew AT hotmail DOT com