
Sounds like you've got a number of good things brewing. I wrote up the beta distribution mostly out of necessity for a machine learning project I'm working on (I needed a fast NVIDIA GPU implementation, so I figured I might as well write the Boost C++ one too, since I was already using Boost for all the other pseudorandom numbers). I was wondering if you would be interested in taking the code and bundling it with the other distributions you've already got, since it sounds like your testing capabilities are better.

As far as testing goes, I've only done the basics, but things seem all right, with the possible concern that the range is 0 <= x <= 1 rather than 0 < x < 1, due to the limited precision of floating point, especially when, for example, a is near 0 and b is very large. It's not a problem for my application, but perhaps someone who knows the IEEE floating-point rules better than I do could suggest a fix.

At any rate, I'll email you a copy.

-Jeremy
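(For illustration only, a minimal sketch of the kind of fix being asked about, assuming the simplest option of nudging any value that rounds to exactly 0 or 1 back into the open interval; the helper name clamp_open01 is made up for the example.)

// Illustrative sketch only, not part of the submitted code: push results
// that land exactly on 0 or 1 to the nearest representable double inside
// the open interval (0, 1).
#include <cmath>  // std::nextafter (use nextafter() from <math.h> on pre-C++11 compilers)

inline double clamp_open01(double x)
{
    if (x <= 0.0)
        return std::nextafter(0.0, 1.0);  // smallest positive double
    if (x >= 1.0)
        return std::nextafter(1.0, 0.0);  // largest double below 1.0
    return x;
}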
On Dec 11, 2007 5:31 AM, Andrew Sutton <asutton@cs.kent.edu> wrote:

Jeremy Bruestle wrote:
Hello, I have implemented support for the beta distribution for boost::random. I was wondering who I should send it to for examination and possible acceptance into boost?
Interesting. I took a different approach using a couple of gamma distributions. Ironically, I'm also using a rejection-based technique from Cheng (1977). Go figure. I also got bored and wrote a bunch of other random number generators and fixed some of the ones that were a little broken in Boost (Gamma and Lognormal). The results are in my own little Boost testing ground:
http://warhol.sdml.cs.kent.edu/trac/miniboost/browser/trunk/boost/random
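(A rough sketch, not Andrew's actual code, of the gamma-based approach described above: a beta(a, b) variate is formed from two gamma variates as X / (X + Y), using only Boost.Random primitives that already exist.)

// Sketch: generate a beta(a, b) variate via the gamma-ratio identity,
// X ~ Gamma(a), Y ~ Gamma(b), X / (X + Y) ~ Beta(a, b).
#include <boost/random/mersenne_twister.hpp>
#include <boost/random/gamma_distribution.hpp>
#include <boost/random/variate_generator.hpp>

double beta_variate(boost::mt19937& rng, double a, double b)
{
    // Old-style Boost.Random: gamma_distribution takes only the shape
    // parameter (unit scale), which is all the ratio needs.
    boost::variate_generator<boost::mt19937&, boost::gamma_distribution<> >
        gen_x(rng, boost::gamma_distribution<>(a));
    boost::variate_generator<boost::mt19937&, boost::gamma_distribution<> >
        gen_y(rng, boost::gamma_distribution<>(b));

    double x = gen_x();
    double y = gen_y();
    return x / (x + y);
}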
I've also been slowly adding support for some of the missing Boost.Math libraries here:
http://warhol.sdml.cs.kent.edu/trac/miniboost/browser/trunk/boost/math/distributions
The gumbel_distribution is a renamed version of the extreme_value_distribution. I renamed it since (it looks like) there are three types of extreme value distributions (Weibull, Gumbel, and Fréchet), plus a generalized extreme value distribution.
I wanted to rename fisher_f to just fisher also, but didn't get around to it.
I also wrote a histogramming program to help generate plots of random numbers (using GNUPlot, for example) and compare some basic stats against their theoretical equivalents - you know, just to make sure the random numbers look right. It's a great way of catching errors.
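(A minimal, self-contained sketch of the kind of sanity check described above, not the actual histogramming tool: draw from Boost's existing gamma_distribution and compare the sample mean and variance against the theoretical values, both of which equal the shape parameter when the scale is 1.)

// Sketch: compare sample moments of gamma(alpha) draws against theory.
#include <cstdio>
#include <boost/random/mersenne_twister.hpp>
#include <boost/random/gamma_distribution.hpp>
#include <boost/random/variate_generator.hpp>

int main()
{
    const double alpha = 2.5;   // arbitrary shape parameter for the check
    const int n = 100000;

    boost::mt19937 rng(42);
    boost::variate_generator<boost::mt19937&, boost::gamma_distribution<> >
        gen(rng, boost::gamma_distribution<>(alpha));

    double sum = 0.0, sum_sq = 0.0;
    for (int i = 0; i < n; ++i) {
        double x = gen();
        sum += x;
        sum_sq += x * x;
    }
    const double mean = sum / n;
    const double var  = sum_sq / n - mean * mean;

    // For Gamma(shape = alpha, scale = 1): mean = alpha, variance = alpha.
    std::printf("mean: sample %f  theoretical %f\n", mean, alpha);
    std::printf("var:  sample %f  theoretical %f\n", var, alpha);
    return 0;
}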
If you have worked up docs and test cases that are all ready to be merged into the source, then you could also ask for a formal review (not sure if this qualifies for a "fast track" review, but it probably should for a small addition like this). If accepted, you would then be responsible for making all the necessary changes to the SVN codebase and for maintaining the beta random distribution.
I was waiting to finish out the Math distributions before dumping these wholesale into the sandbox. I think it would probably be a good idea to compare the implementations and figure out which is "better" - I'd probably lean towards the other one, since it (hopefully) doesn't depend on other variates.
Incidentally, the term "library" is meant to indicate the place where you go to get books as opposed to, say, Boost. They're not placing any real restrictions on implementations. The article needs to be cited in the code or in the documentation (preferably both).
Andrew Sutton
asutton@cs.kent.edu