
On Mar 18, 2006, at 8:54 AM, John Maddock wrote:
This seems to be dependent on the type of generator you use. Try using lagged_fibonacci. The casting and dividing involved in uniform_real appears to lose a lot of accuracy, but lagged_fibonacci generates doubles between 0 and 1, so there's no cast and it's dividing by 1.
Just a quick update for those who are interested: using lagged_fibonacci does indeed solve the problem and allows the iostreams code I posted earlier to detect the VC8 streaming problem.
Unfortunately lagged_fibonacci isn't part of TR1: the only real number generator that is (subtract_with_carry_01) appears to still leave the final 4 bits as zero :-(
There is a simple reason why lagged_fibonacci and the linear congruential generators fill the low bits with apparently random values, while the Mersenne twister does not:

- The Mersenne twister produces integers modulo 2^32. Hence if you scale to a uniform real in the interval [0,1), the integer value is divided by 2^32, and the low bits remain 0.

- With the 32-bit linear congruential generators, such as minstd_rand, the numbers produced are modulo 2^31-1, a prime number, and not modulo a power of 2. Hence the low bits look random, although you actually get fewer distinct floating point values (only 2^31-1)!

- The lagged Fibonacci floating point generators are by default seeded with minstd_rand0, and hence have low bits that are nonzero. If instead you were to seed them with numbers created from a Mersenne twister, you would see the same problem.

Having said that, I think that for most applications it is no problem that the low 20 bits remain zero. I don't know of any application requiring these low bits to be random as well; usually 32 random bits are more than enough. If, however, you do need more bits, then I would recommend looking into 64-bit generators, such as the lcg64 in the SPRNG library, a wrapper for which exists in the sandbox.

Matthias