
At 02:26 PM 12/26/2007, I wrote:
I was implying (when I should have been saying) that at some point the error from generating a *normal* random variate with mean lambda and standard deviation sqrt(lambda), instead of a "for real" Poisson, becomes too small to have any practical significance. If large lambdas are causing problems, one approach is to check whether the trouble only occurs sufficiently far into that regime; if so, it might make sense to just generate a normal instead. Scaling and shifting a standard normal deviate by 750 is not likely to cause much rounding error, but many direct Poisson algorithms are likely to run into numerical problems at that extreme a value.
Topher
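
Concretely, the substitution I had in mind is roughly the following (a minimal Python sketch; rounding to the nearest integer and clamping at zero are one reasonable way to map the normal draw back onto counts, not something the suggestion itself pins down):

import math
import random

def poisson_approx(lam, rng=random):
    """Approximate a Poisson(lam) draw by a rounded normal, for large lam.

    Assumes lam is large enough (e.g. lam = 750) that N(lam, sqrt(lam))
    is an acceptable stand-in; the clamp at zero keeps the result inside
    the Poisson's support.
    """
    x = rng.gauss(lam, math.sqrt(lam))
    return max(0, int(round(x)))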
I checked this out. The difference between the two curves (Poisson[750] and Normal[750, Sqrt(750)]) is very small -- when plotted they are almost indistinguishable to the eye. Summed over all values, the absolute difference between the two is about 1%. The largest single error is that the normal generator produces 729 with probability 0.01086 instead of 0.01098. While that is unlikely to have much effect on most applications, a large simulation that is very sensitive to the precise probabilities might be thrown off, so I withdraw the suggestion.
Topher
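
If anyone wants to check the comparison themselves, something along these lines will do it (a sketch using scipy; I'm assuming the normal "generator" rounds its draw to the nearest integer to produce counts, which is one reasonable convention and an assumption on my part):

from scipy.stats import norm, poisson

lam = 750
sigma = lam ** 0.5

total_abs_diff = 0.0
worst = (0.0, None, None, None)    # (diff, k, p_normal, p_poisson)
for k in range(2 * lam):           # covers essentially all of the probability mass
    p_poisson = poisson.pmf(k, lam)
    # probability that a N(lam, sqrt(lam)) draw, rounded to the nearest
    # integer, comes out equal to k
    p_normal = norm.cdf(k + 0.5, lam, sigma) - norm.cdf(k - 0.5, lam, sigma)
    diff = abs(p_poisson - p_normal)
    total_abs_diff += diff
    if diff > worst[0]:
        worst = (diff, k, p_normal, p_poisson)

print("summed absolute difference:", total_abs_diff)
print("largest single error (diff, k, p_normal, p_poisson):", worst)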