[boost-users][conversion] 1.34.1 upgrade to 1.39 lexical_cast
I have code that converts some numeric values to strings using lexical_cast. When upgrading from boost 1.34.1 to 1.39, some of my float and double test values changed. For example, the numeric value 1111.11 converts to the string "1111.11" in boost 1.34.1, but in boost 1.39 it converts to "1111.1099999999999". I find it odd that some values give this long string representation while others convert exactly (1222.22, for example). Can someone tell me what changed from 1.34.1 to 1.39? I'm pretty sure it has to be the underlying code in boost, since I used the Microsoft Visual Studio 2005 IDE to build my project with both boost versions.

The code I'm using is as follows:

    template <typename T>
    std::string convertValueToText(T const& value) const
    {
        return boost::lexical_cast<std::string>( value );
    }

Ryan
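A minimal stand-alone reproduction of the helper from the post, rewritten here as a free function (this program is not part of the original message); the exact strings it prints depend on which Boost version it is built against:

    #include <boost/lexical_cast.hpp>
    #include <iostream>
    #include <string>

    // Convert any streamable value to a string via lexical_cast,
    // mirroring the convertValueToText() helper from the post.
    template <typename T>
    std::string convertValueToText(T const& value)
    {
        return boost::lexical_cast<std::string>(value);
    }

    int main()
    {
        std::cout << convertValueToText(1111.11)  << "\n"   // double literal
                  << convertValueToText(1111.11f) << "\n";  // float literal
    }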
On Tue, Aug 18, 2009 at 9:12 PM, Ryan McConnehey wrote:
When upgrading from boost 1.34.1 to 1.39, some of my float and double test values changed. For example, the numeric value 1111.11 converts to "1111.11" in boost 1.34.1 but to "1111.1099999999999" in boost 1.39. [...]
I think it is just representing the value at full precision now, where it did not before. I think there is something to control that, though... I don't really recall; I use Boost.Spirit.Karma/Qi now instead of lexical_cast, and it is a lot faster.
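As an illustration of the kind of control being alluded to (this is not from the thread, and it is not how lexical_cast itself is configured internally): a plain std::ostringstream lets you choose the precision yourself, so six significant digits reproduces the shorter 1.34.1-style output. The helper name below is hypothetical.

    #include <iomanip>
    #include <sstream>
    #include <string>

    // Hypothetical helper: convert a floating-point value to text with an
    // explicit number of significant digits instead of relying on
    // lexical_cast's default formatting.
    template <typename T>
    std::string toStringWithPrecision(T value, int digits)
    {
        std::ostringstream oss;
        oss << std::setprecision(digits) << value;
        return oss.str();
    }

    // toStringWithPrecision(1111.11, 6)  -> "1111.11"
    // toStringWithPrecision(1111.11, 17) -> "1111.1099999999999"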
On Tue, Aug 18, 2009 at 11:41 PM, Ryan McConnehey wrote:
I think it is just representing the value at full precision now, where it did not before.
Isn't 1111.1099999999999 less accurate than 1111.11?
Ryan
1111.11 cannot be represented exactly in base-2 IEEE-754 floating point. A double gets you closer than a float, but neither stores it exactly. If you store 1111.11 in a float, it saves it as:

    Hex: 448AE385

Bit 31 is the sign, bits 30-23 are the exponent, and bits 22-0 are the significand. Decoding that gives a significand of 1.08506834506988525390625 and a scale factor of 2^10 = 1024, so the value actually stored is exactly 1111.1099853515625. Now look at the hex value you get from storing 1111.1099999999999 in a float:

    Hex: 448AE385

It is the same bit pattern. So what lexical_cast is doing is perfectly correct: it prints the value that is actually stored rather than the value you typed. 1111.11 cannot possibly exist in a float, and a double, although it has more digits and a greater range, still has the same properties of binary floating point (compare using epsilons, numbers that look different in source may be identical in memory, and so on).

Why do you think fixed-point numbers are so popular in the financial world? Using floating point in your money programs will start to bite you after a few million calculations; you could end up many cents, potentially dollars, off. If you want to see how floating-point numbers are actually stored and why their printed forms look odd, see the Wikipedia article at http://en.wikipedia.org/wiki/Single_precision .

P.S. The "1111.1099999999999" in the original post is what a double produces: the nearest double to 1111.11 is 1111.10999999999989..., and printing it at full precision gives that string. Regardless, 1111.11 is wrong as an output number even though it is fine as an input number. Floats are weird; I try not to use them when possible and tend to stick to fixed-point numbers.
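A small stand-alone sketch (not part of the thread, and assuming the usual IEEE-754 single-precision float) of the decoding described above: it copies the bits of a float into an integer, pulls out the sign, exponent, and significand fields, and rebuilds the exact stored value.

    #include <cmath>
    #include <cstdint>
    #include <cstdio>
    #include <cstring>

    int main()
    {
        float f = 1111.11f;

        // Reinterpret the float's bits as a 32-bit unsigned integer.
        std::uint32_t bits;
        std::memcpy(&bits, &f, sizeof bits);

        std::uint32_t sign     = bits >> 31;           // bit 31
        std::uint32_t exponent = (bits >> 23) & 0xFF;  // bits 30-23, biased by 127
        std::uint32_t fraction = bits & 0x7FFFFF;      // bits 22-0

        // Rebuild the exact stored value:
        // (-1)^sign * (1 + fraction/2^23) * 2^(exponent - 127)
        double value = (sign ? -1.0 : 1.0)
                     * (1.0 + fraction / 8388608.0)
                     * std::ldexp(1.0, static_cast<int>(exponent) - 127);

        std::printf("hex: %08X\n", static_cast<unsigned>(bits)); // 448AE385
        std::printf("stored value: %.13f\n", value);             // 1111.1099853515625
    }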
participants (2)
- OvermindDL1
- Ryan McConnehey