Review Request: Statistical Distributions and Mathematical Special Functions

The "Math Toolkit" has now matured to the point where Paul Bristow and I would like to ask for a formal review. The toolkit contains: Statistical distributions: ~~~~~~~~~~~~~~~~~~~~~~~~~~ Bernoulli, Beta, Binomial, Cauchy-Lorentz, Chi Squared, Exponential, Extreme Value, F, Gamma (and Erlang), Log Normal, Negative Binomial, Normal (Gaussian), Poisson, Students t, Triangular, Weibull, Uniform. Operations on distributions: ~~~~~~~~~~~~~~~~~~~~~~~~~~~~ cdf, cdf complement, cumulative hazard, hazard, kurtosis, kurtosis_excess, mean, median, mode, pdf, range, quantile, quantile from the complement, skewness, standard_deviation, support, variance. Special Functions: ~~~~~~~~~~~~~~~~~~ The focus is twofold: functions required for the implementation of the statistical distributions, and functions that are part of TR1: Gamma Functions (Gamma, Log Gamma, Digamma, Ratios of Gamma Functions, Incomplete Gamma Functions, Incomplete Gamma Function Inverses, Derivative of the Incomplete Gamma Function). Factorials and Binomial Coefficients (Factorial, Double Factorial, Rising Factorial, Falling Factorial, Binomial Coefficients). Beta Functions (Beta, Incomplete Beta Functions, Incomplete Beta Function Inverses, Derivative of the Incomplete Beta Function). Error Functions (erf/erc, Error Function Inverses). Polynomials (Legendre (and Associated) Polynomials, Laguerre (and Associated) Polynomials, Hermite Polynomials, Spherical Harmonics). Elliptic Integrals(Carlson Form, Elliptic Integrals of the First Kind - Legendre Form, Elliptic Integrals of the Second Kind - Legendre Form, Elliptic Integrals of the Third Kind - Legendre Form). Logs, Powers, Roots and Exponentials (log1p, expm1, cbrt, sqrt1pm1, powm1, hypot). Sinus Cardinal and Hyperbolic Sinus Cardinal Functions (sinc_pi, sinhc_pi). Inverse Hyperbolic Functions (acosh, asinh, atanh). Floating Point Classification: Infinities and NaN's Unified Error Handling. Misc Tools: ~~~~~~~~~~~ Series Evaluation, Continued Fraction Evaluation, Root Finding With Derivatives, Root Finding Without Derivatives, Function Minimization. Availability: ~~~~~~~~~~~~~ Head to the Boost Vault (http://boost-consulting.com/vault) select the "Math - Numerics" directory, and you will find: math-toolkit-code.tar.bz2 Headers and tests: note only available in bz2 format due to size restrictions in the vault :-( If this causes undue problems let me know. math-toolkit-docs.zip HTML format docs. math_toolkit.pdf PDF format docs. Instructions: ~~~~~~~~~~~~~ Extract to a directory *separate* from your boost tree, then set the environment variable BOOST_ROOT to point to a copy of boost-1.34 (release branch cvs) or to 1.35 (cvs HEAD). Sorry but Boost-1.33.x or earlier won't work. The Jamfiles should then "just work" and enable testing of the library without having to integrate into your Boost tree. Please note that in order to catch regressions the tolerances for the tests are set quite low: when they are first run on a new platform many tests will very likely fail, a human eyeball then has to be cast over the results and judge whether the error rates are acceptable or whether they represent real issues. Currently the lib has been tested on Win32, Linux, HP-UX and FreeBSD with a variety of compilers (VC++ Intel, gcc, HP aCC). Review Manager: ~~~~~~~~~~~~~~~ Should some kind soul care to volunteer, we would be very grateful :-) Many thanks for your consideration, John Maddock.

Hi John,

I have received your request and added your library to the review queue.

Cheers,
ron

On Jan 5, 2007, at 8:05 AM, John Maddock wrote:
The "Math Toolkit" has now matured to the point where Paul Bristow and I would like to ask for a formal review.
The toolkit contains:
Statistical distributions: ~~~~~~~~~~~~~~~~~~~~~~~~~~
Bernoulli, Beta, Binomial, Cauchy-Lorentz, Chi Squared, Exponential, Extreme Value, F, Gamma (and Erlang), Log Normal, Negative Binomial, Normal (Gaussian), Poisson, Students t, Triangular, Weibull, Uniform.
Operations on distributions: ~~~~~~~~~~~~~~~~~~~~~~~~~~~~
cdf, cdf complement, cumulative hazard, hazard, kurtosis, kurtosis_excess, mean, median, mode, pdf, range, quantile, quantile from the complement, skewness, standard_deviation, support, variance.
Special Functions: ~~~~~~~~~~~~~~~~~~
The focus is twofold: functions required for the implementation of the statistical distributions, and functions that are part of TR1:
Gamma Functions (Gamma, Log Gamma, Digamma, Ratios of Gamma Functions, Incomplete Gamma Functions, Incomplete Gamma Function Inverses, Derivative of the Incomplete Gamma Function).
Factorials and Binomial Coefficients (Factorial, Double Factorial, Rising Factorial, Falling Factorial, Binomial Coefficients).
Beta Functions (Beta, Incomplete Beta Functions, Incomplete Beta Function Inverses, Derivative of the Incomplete Beta Function).
Error Functions (erf/erfc, Error Function Inverses).
Polynomials (Legendre (and Associated) Polynomials, Laguerre (and Associated) Polynomials, Hermite Polynomials, Spherical Harmonics).
Elliptic Integrals (Carlson Form, Elliptic Integrals of the First Kind - Legendre Form, Elliptic Integrals of the Second Kind - Legendre Form, Elliptic Integrals of the Third Kind - Legendre Form).
Logs, Powers, Roots and Exponentials (log1p, expm1, cbrt, sqrt1pm1, powm1, hypot).
Sinus Cardinal and Hyperbolic Sinus Cardinal Functions (sinc_pi, sinhc_pi).
Inverse Hyperbolic Functions (acosh, asinh, atanh).
Floating Point Classification: Infinities and NaN's
Unified Error Handling.
Misc Tools: ~~~~~~~~~~~
Series Evaluation, Continued Fraction Evaluation, Root Finding With Derivatives, Root Finding Without Derivatives, Function Minimization.
Availability: ~~~~~~~~~~~~~
Head to the Boost Vault (http://boost-consulting.com/vault), select the "Math - Numerics" directory, and you will find:
math-toolkit-code.tar.bz2 Headers and tests: note only available in bz2 format due to size restrictions in the vault :-( If this causes undue problems let me know.
math-toolkit-docs.zip HTML format docs.
math_toolkit.pdf PDF format docs.
Instructions: ~~~~~~~~~~~~~
Extract to a directory *separate* from your Boost tree, then set the environment variable BOOST_ROOT to point to a copy of Boost 1.34 (release branch CVS) or to 1.35 (CVS HEAD). Sorry, but Boost 1.33.x or earlier won't work. The Jamfiles should then "just work" and enable testing of the library without having to integrate it into your Boost tree. Please note that in order to catch regressions the tolerances for the tests are set quite low: when they are first run on a new platform many tests will very likely fail, and a human eyeball then has to be cast over the results to judge whether the error rates are acceptable or whether they represent real issues. Currently the library has been tested on Win32, Linux, HP-UX and FreeBSD with a variety of compilers (VC++, Intel, gcc, HP aCC).
Review Manager: ~~~~~~~~~~~~~~~
Should some kind soul care to volunteer, we would be very grateful :-)
Many thanks for your consideration,
John Maddock.

John Maddock <john <at> johnmaddock.co.uk> writes:
The "Math Toolkit" has now matured to the point where Paul Bristow and I would like to ask for a formal review.
Some comments before the review begins (If you allow):

In the docs sometimes you have formulations like e.g. "Returns the cubed root of x.". For native speakers this may be no problem, but sqrt(x)^3 may be easier to catch for the rest of us.

Also I dislike: "The definition used here is that used by Wolfram MathWorld" since by this the docs are neither self-contained nor robust against some idiot buying and shutting down the cited sites. I am sure, even wikipedia will vanish some day due to some more internet restrictions evolved from the pseudo-war against terror (alias war against freedom) or some idiot holding a software patent affecting the whole community. So rather include the full text from Wikipedia than hope it is there when my children read your docs. I learned it the hard way: data persistence is unavailable in the w^3.

Citing papers is OK, but it takes a few thousand dollars to get them all I guess. So adding an outline of the algorithm would be nice for all functions (though saying what you use is GoodStuff(TM), too).

Since I got really excited about gamma functions (I need them and had hard times evaluating exact solutions from continuum mechanics): The Definition section needs some rework and the warnings about the different definitions will not help much in this form (at least for me, the stupid one).

I see no connection between \Int R(t, s) dt and the definitions F, E and \Pi so here again the docs are a little bit confusing and the information about gamma function definitions and what Legendre found out will not enter my brain without further information from other sources which I find odd.

OTOH, given these functions on a silver tablet I'd like to say: Thank You!

What I also dislike is the existence of default typedefs for double (students_t et al.). This is unnecessary and makes double a special type which it is not. I'd vote for removing those from the boost version of this due to the asymmetry it produces.

regards, Markus

Markus Werle wrote:
John Maddock <john <at> johnmaddock.co.uk> writes:
The "Math Toolkit" has now matured to the point where Paul Bristow and I would like to ask for a formal review.
Some comments before the review begins (If you allow):
In the docs sometimes you have formulations like e.g. "Returns the cubed root of x.". For native speakers this may be no problem, but sqrt(x)^3 may be easier to catch for the rest of us.
From the example cbrt code on pg 207 it looks to me like this should have said "Returns the cube root of x." i.e. pow(x,1/3). -glenn

Glenn Schrader wrote:
From the example cbrt code on pg 207 it looks to me like this should have said "Returns the cube root of x." i.e. pow(x,1/3).
Right, see http://mathworld.wolfram.com/CubeRoot.html John.
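A minimal sketch of the corrected wording, i.e. cbrt(x) == x^(1/3); this assumes the current Boost.Math spelling boost::math::cbrt in <boost/math/special_functions/cbrt.hpp>, which may differ slightly from the version under review.

    #include <boost/math/special_functions/cbrt.hpp>
    #include <cmath>
    #include <iostream>

    int main()
    {
        double x = 27.0;
        double r1 = boost::math::cbrt(x);       // cube root via the toolkit
        double r2 = std::pow(x, 1.0 / 3.0);     // the same quantity via pow
        std::cout << r1 << " " << r2 << "\n";   // both print 3
    }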

Markus Werle wrote:
John Maddock <john <at> johnmaddock.co.uk> writes:
The "Math Toolkit" has now matured to the point where Paul Bristow and I would like to ask for a formal review.
Some comments before the review begins (If you allow):
No problem at all: the more feedback the better.
In the docs sometimes you have formulations like e.g. "Returns the cubed root of x.". For native speakers this may be no problem, but sqrt(x)^3 may be easier to catch for the rest of us.
That would be x^(1/3) :-) I've just added that.
Also I dislike: "The definition used here is that used by Wolfram MathWorld" since by this the docs are neither self-contained nor robust against
Hmmm, that section could be better phrased. Looks like we need a concise definition of "kertosis", but there are limits to how much we can rewrite what's already very well explained elsewhere.
Citing papers is OK, but it takes a few thousand dollars to get them all I guess. So adding an outline of the algorithm would be nice for all functions (though saying what you use is GoodStuff(TM), too).
The special functions all have an "implementation" section that should provide that (or at least a formula). However, there are some papers that are just too complex to summarise (the methods used to obtain the initial approximations for the inverse incomplete gamma and beta functions are good examples of this). The aim is that someone should be able to get the gist of the code by referring to the implementation section and the formulae. They may still need to do a bit of pencil and paper math to get their head round some of the formulae, but that's not necessarily a bad thing :-)
Since I got really excited about gamma functions (I need them and had hard times evaluating exact solutions from continuum mechanics): The Definition section needs some rework and the warnings about the different definitions will not help much in this form (at least for me, the stupid one)
I see no connection between \Int R(t, s) dt and the definitions F, E and \Pi so here again the docs are a little bit confusing and the information about gamma function definitions and what Legendre found out will not enter my brain without further information from other sources which I find odd.
Now I'm confused: the functions F, E and Pi relate to the elliptic integrals; the incomplete gamma functions are traditionally denoted P and Q (for the regularized forms) and with upper- and lower-case gammas for the full, non-regularized versions. They are fully defined by the definitions given (the integrals), and those definitions are completely standard, so giving information beyond that is the job of the textbooks: like A&S etc. Which functions were you looking for?
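For reference, a small sketch of the notation John describes, using the names in current Boost.Math (gamma_p/gamma_q for the regularized P and Q, tgamma_lower/tgamma for the non-regularized lower and upper incomplete gammas); the spellings in the reviewed toolkit may differ.

    #include <boost/math/special_functions/gamma.hpp>
    #include <iostream>

    int main()
    {
        double a = 2.5, x = 1.0;

        double P = boost::math::gamma_p(a, x);   // regularized lower incomplete gamma P(a, x)
        double Q = boost::math::gamma_q(a, x);   // regularized upper incomplete gamma Q(a, x)
        std::cout << "P + Q = " << P + Q << "\n";            // always 1

        double lower = boost::math::tgamma_lower(a, x);      // non-regularized lower incomplete gamma
        double upper = boost::math::tgamma(a, x);            // non-regularized upper incomplete gamma
        std::cout << "lower + upper = " << lower + upper
                  << ", tgamma(a) = " << boost::math::tgamma(a) << "\n";   // equal
    }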
OTOH, given these functions on a silver tablet I'd like to say: Thank You!
What I also dislike is the existence of default typedefs for double (students_t et al.). This is unnecessary and makes double a special type which it is not. I'd vote for removing those from the boost version of this due to the asymmetry it produces.
I'd be interested in other views on this: my gut feeling is that hardly any users of the statistics code will be using anything except double precision. The convenience of being able to write:

quantile(students_t(10), .95); // 95% quantile for 10 degrees freedom

is well worth it IMO. The alternative:

quantile(students_t_distribution<double>(10), .95);

is unnecessarily verbose in comparison. It's like writing:

std::string

rather than:

std::basic_string<char>

all the time.

Thanks for the feedback, John.
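For comparison, both spellings side by side; in current Boost.Math, students_t is simply a typedef for students_t_distribution<double>, so the two calls below are identical (the reviewed version may differ in detail).

    #include <boost/math/distributions/students_t.hpp>
    #include <iostream>

    int main()
    {
        using namespace boost::math;

        // Convenience typedef, double precision only:
        double q1 = quantile(students_t(10), 0.95);

        // Fully spelled-out template, works for any real type:
        double q2 = quantile(students_t_distribution<double>(10), 0.95);

        std::cout << q1 << " == " << q2 << "\n";
    }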

John Maddock <john <at> johnmaddock.co.uk> writes:
In the docs sometimes you have formulations like e.g. "Returns the cubed root of x.". For native speakers this may be no problem, but sqrt(x)^3 may be easier to catch for the rest of us.
That would be x^(1/3)
Unintenden "Quod erat demonstrantum"
I've just added that.
Fine.
Also I dislike: "The definition used here is that used by Wolfram MathWorld" since by this the docs are neither self-contained nor robust against
Hmmm, that section could be better phrased,
Looks like we need a concise definition of "kertosis",
The “kertosis” is a measure of skewness for distributions - right? So is this a game on words for "distribution"?
but there are limits to how much we can rewrite what's already very well explained elsewhere.
Could we ask Wolfram for permission to include the stuff? At the same time they could switch to boost license for the docs ... (yes, I can be optimistic, too ;-))
Citing papers is OK, but it takes a few thousand dollars to get them all I guess. So adding an outline of the algorithm would be nice for all functions (though saying what you use is GoodStuff(TM), too).
The special functions all have an "implementation" section that should provide that (or at least a formula). However, there are some papers that are just too complex to summarise (the methods used to obtain the initial approximations for the inverse incomplete gamma and beta functions are good examples of this).
I see.
I see no connection between \Int R(t, s) dt and the definitions F, E and \Pi so here again the docs are a little bit confusing and the information about gamma function definitions and what Legendre found out will not enter my brain without further information from other sources which I find odd.
Now I'm confused:
No, I am. This was an error due to removed text. I wanted to talk about elliptic integrals here. I had difficulties with the chapter "Definition" below "Elliptic Integrals". Looking at this part of the docs, my sentence above makes more sense.
What I also dislike is the existence of default typedefs for double I'd be interested in other views on this: my gut feeling is that hardly any users of the statistics code will be using anything except double precision.
... until 128 bit platforms are widely available at the supermarket in 2009?
The convenience of being able to write:
quantile(students_t(10), .95); // 95% quantile for 10 degrees freedom
is well worth it IMO. The alternative:
quantile(students_t_distribution<double>(10), .95);
I'd still vote for

quantile(students_t<double>(10), .95);

or

quantile(students_t_distr<double>(10), .95);

which is not much more pain or a typedef in the user's code (which makes it easy to adopt to future requirements, where I change the typedef, not all of my code):

typedef students_t_distribution<double> students_t;
...
quantile(students_t(10), .95);
is unnecessarily verbose in comparison. It's like writing:
std::string rather than: std::basic_string<char>
And yet std::string should have never made it into the standard. That beast is a counter example of cute design. We now have std::string and std::wstring and someday std::wwstring? This convention has no symmetry and I dislike it, too. It takes a lot of headache to get my code switched to unicode since all of it is polluted with the default 95% solution :-( Markus

Markus Werle wrote:
Looks like we need a concise definition of "kertosis",
The “kertosis” is a measure of skewness for distributions - right? So is this a game on words for "distribution"?
No game on anything intended. And "skewness" gives the skewness of the distribution; "kurtosis" relates to how sharp the peak of the distribution is. The problem is that there are two measures in common use: "kurtosis" and "kurtosis excess": the latter is the kurtosis less 3, and is defined so that the normal distribution has a kurtosis excess of 0. The problem is compounded by the two terms often being used interchangeably :-( In any event, I've found a mathematically precise definition, so I'll have a go at rewriting that.
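A small sketch of the two conventions, assuming the current Boost.Math names kurtosis and kurtosis_excess: kurtosis_excess(d) is kurtosis(d) - 3, so the normal distribution reports 3 and 0 respectively.

    #include <boost/math/distributions/normal.hpp>
    #include <iostream>

    int main()
    {
        boost::math::normal_distribution<double> n(0.0, 1.0);
        std::cout << "kurtosis        = " << boost::math::kurtosis(n) << "\n";        // 3
        std::cout << "kurtosis excess = " << boost::math::kurtosis_excess(n) << "\n"; // 0
    }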
but there are limits to how much we can rewrite what's already very well explained elsewhere.
Could we ask Wolfram for permission to include the stuff? At the same time they could switch to boost license for the docs ... (yes, I can be optimistic, too ;-))
Far too optimistic I fear, they have quite strict terms of use http://mathworld.wolfram.com/about/terms.html but also a bad publishing contract means that Wolfram and Eric Weisstein effectively no longer own their own work: http://mathworld.wolfram.com/about/erics_commentary.html
I see no connection between \Int R(t, s) dt and the definitions F, E and \Pi so here again the docs are a little bit confusing and the information about gamma function definitions and what Legendre found out will not enter my brain without further information from other sources which I find odd.
Now I'm confused:
No, I am. This was an error due to removed text. I wanted to talk about elliptic integrals here. I had difficulties with the chapter "Definition" below "Elliptic Integrals". Looking at this part of the docs, my sentence above makes more sense.
Basically: any elliptic integral can be reduced to a linear combination of three (linearly independent) standard forms. Various candidates for these forms have been proposed, but Legendre's E, F and Pi integrals were the first to be defined and are the most well known. The Carlson forms are an alternative set of standard forms, but there are others in the literature too. I see your point about the definitions given not looking much like an elliptic integral: but they are entirely standard. The process of converting an arbitrary elliptic integral into standard form is covered in some depth in A&S, I believe. You may also be able to find a symbolic maths package that will do it for you (though I admit I don't know of any).
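A sketch of the Legendre/Carlson relationship being discussed, using the standard identity F(phi, k) = sin(phi) * R_F(cos^2(phi), 1 - k^2 sin^2(phi), 1) and the current Boost.Math names ellint_1 and ellint_rf (the reviewed toolkit may have spelt these differently).

    #include <boost/math/special_functions/ellint_1.hpp>
    #include <boost/math/special_functions/ellint_rf.hpp>
    #include <cmath>
    #include <iostream>

    int main()
    {
        double k = 0.5, phi = 1.0;

        // Legendre form, incomplete elliptic integral of the first kind F(phi, k):
        double F1 = boost::math::ellint_1(k, phi);

        // The same value via Carlson's symmetric standard form R_F:
        double s = std::sin(phi), c = std::cos(phi);
        double F2 = s * boost::math::ellint_rf(c * c, 1.0 - k * k * s * s, 1.0);

        std::cout << F1 << " " << F2 << "\n";   // agree to machine precision
    }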
What I also dislike is the existence of default typedefs for double I'd be interested in other views on this: my gut feeling is that hardly any users of the statistics code will be using anything except double precision.
... until 128 bit platforms are widely available at the supermarket in 2009?
The convenience of being able to write:
quantile(students_t(10), .95); // 95% quantile for 10 degrees freedom
is well worth it IMO. The alternative:
quantile(students_t_distribution<double>(10), .95);
I'd still vote for
quantile(students_t<double>(10), .95);
or
quantile(students_t_distr<double>(10), .95);
which is not much more pain or a typedef in the user's code (which makes it easy to adopt to future requirements, where I change the typedef, not all of my code):
typedef students_t_distribution<double> students_t; ... quantile(students_t(10), .95);
is unnecessarily verbose in comparison. It's like writing:
std::string rather than: std::basic_string<char>
And yet std::string should have never made it into the standard. That beast is a counter example of cute design. We now have std::string and std::wstring and someday std::wwstring? This convention has no symmetry and I dislike it, too. It takes a lot of headache to get my code switched to unicode since all of it is polluted with the default 95% solution :-(
I hear you, but I still think you're wrong :-) I think at this stage we need more opinions and some real world usage before changing these names again - we're on about the third naming scheme already - after previous discussions here. John.

-----Original Message-----
From: boost-bounces@lists.boost.org [mailto:boost-bounces@lists.boost.org] On Behalf Of John Maddock
Sent: 10 January 2007 13:25
To: boost@lists.boost.org
Subject: Re: [boost] Review Request: Statistical Distributions and Mathematical Special Functions
Far too optimistic I fear, they have quite strict terms of use http://mathworld.wolfram.com/about/terms.html but also a bad publishing contract means that Wolfram and Eric Weisstein effectively no longer own their own work: http://mathworld.wolfram.com/about/erics_commentary.html
A horror story that left me gasping! I'll never buy a book from CRC again. (I'll take delight in borrowing it from the library and illegally photocopying it!) Authors - get a lawyer to read the small print.

This case suggests that we should try harder to ensure that ALL the Boost files contain the license terms. I see from the reports that there are thousands of Boost files that do not have this (.png, xml?). How can one embed copyright into .png files? How can Quickbook propagate the copyright into all derived files?

Paul

--- Paul A Bristow Prizet Farmhouse, Kendal, Cumbria UK LA8 8AB +44 1539561830 & SMS, Mobile +44 7714 330204 & SMS pbristow@hetp.u-net.com

Paul A Bristow wrote:
This case suggests that we should try harder to ensure that ALL the Boost files contain the license terms. I see from the reports that there are thousands of Boost files that do not have this (.png, xml?). How can one embed copyright into .png files?
It's certainly possible with the ImageMagick toolkit, if the file format supports embedded comments: http://www.imagemagick.org/script/index.php

There are probably other ways, but I know that ImageMagick is portable to Unix and Windows boxes, has a set of standard command line tools, and also has a C++ binding (although I've never used it myself).

--
Kevin Lynch                voice: (617) 353-6025
Physics Department         Fax: (617) 353-9393
Boston University          office: PRB-361
590 Commonwealth Ave.      e-mail: krlynch@bu.edu
Boston, MA 02215 USA       http://budoe.bu.edu/~krlynch

-----Original Message-----
From: boost-bounces@lists.boost.org [mailto:boost-bounces@lists.boost.org] On Behalf Of Kevin Lynch
Sent: 10 January 2007 16:56
To: boost@lists.boost.org
Paul A Bristow wrote:
This case suggests that we should try harder to ensure that ALL the Boost files contain the license terms. I see from the reports that there are thousands of Boost files that do not have this (.png, xml?). How can one embed copyright into .png files?
It's certainly possible with the ImageMagick toolkit, if the file format supports embedded comments: http://www.imagemagick.org/script/index.php
There are probably other ways, but I know that ImageMagick is portable to Unix and Windows boxes, has a set of standard command line tools, and also has a C++ binding (although I've never used it myself)
On further investigation, I find that there is provision in the specification:

Portable Network Graphics (PNG) Specification (Second Edition)
Information technology - Computer graphics and image processing - Portable Network Graphics (PNG): Functional specification. ISO/IEC 15948:2003 (E)
W3C Recommendation 10 November 2003
http://www.w3.org/TR/PNG/#14Ordering

11.3.4.1 Introduction
PNG provides the tEXt, iTXt, and zTXt chunks for storing text strings associated with the image, such as an image description or copyright notice. Keywords are used to indicate what each text string represents. Any number of such text chunks may appear, and more than one with the same keyword is permitted.

11.3.4.2 Keywords and text strings
The following keywords are predefined and should be used where appropriate.

Title - Short (one line) title or caption for image
Author - Name of image's creator
Description - Description of image (possibly long)
Copyright - Copyright notice
Creation Time - Time of original image creation
Software - Software used to create the image
Disclaimer - Legal disclaimer
Warning - Warning of nature of content
Source - Device used to create the image
Comment - Miscellaneous comment

It would be nice to complete many of these items for Boost documentation. In theory, Adobe Photoshop Elements 2.0 (used to convert from .ps) has boxes for this information (File info), and if I complete them it can be read back, a (c) symbol appears with the filename on the window banner, and if I save as a pdf the info can be saved and restored. BUT - bad news - if I save as .png and re-open, the info is lost :-(( (If anyone can confirm that this feature works on later versions, I'd like to know.) So it looks as though we need to use another tool to copyright our graphs :-(

Paul

--- Paul A Bristow Prizet Farmhouse, Kendal, Cumbria UK LA8 8AB +44 1539561830 & SMS, Mobile +44 7714 330204 & SMS pbristow@hetp.u-net.com
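As a hypothetical illustration (not Boost code) of how such metadata sits in the file, here is a minimal C++ sketch that walks a PNG's chunks and prints any tEXt entries, e.g. an embedded "Copyright" keyword; error handling is minimal and compressed iTXt/zTXt chunks are not decoded.

    #include <algorithm>
    #include <cstdint>
    #include <fstream>
    #include <iostream>
    #include <string>
    #include <vector>

    // Read a big-endian 32-bit integer, as used for PNG chunk lengths.
    static std::uint32_t read_be32(std::istream& in)
    {
        unsigned char b[4];
        in.read(reinterpret_cast<char*>(b), 4);
        return (std::uint32_t(b[0]) << 24) | (std::uint32_t(b[1]) << 16)
             | (std::uint32_t(b[2]) << 8)  |  std::uint32_t(b[3]);
    }

    int main(int argc, char* argv[])
    {
        if (argc < 2) { std::cerr << "usage: pngtext file.png\n"; return 1; }
        std::ifstream in(argv[1], std::ios::binary);

        char sig[8];
        in.read(sig, 8);                          // skip the 8-byte PNG signature
        if (!in) { std::cerr << "not a PNG?\n"; return 1; }

        while (in)
        {
            std::uint32_t len = read_be32(in);    // chunk data length
            char type[5] = {0};
            in.read(type, 4);                     // chunk type, e.g. "tEXt" or "IEND"
            if (!in) break;

            std::vector<char> data(len);
            if (len) in.read(data.data(), len);
            in.ignore(4);                         // skip the chunk CRC

            if (std::string(type) == "tEXt" && !data.empty())
            {
                // tEXt data: keyword, a single zero byte, then the text itself.
                auto nul = std::find(data.begin(), data.end(), '\0');
                std::string keyword(data.begin(), nul);
                std::string text(nul == data.end() ? nul : nul + 1, data.end());
                std::cout << keyword << ": " << text << "\n";
            }
            if (std::string(type) == "IEND") break;
        }
    }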

-----Original Message-----
From: boost-bounces@lists.boost.org [mailto:boost-bounces@lists.boost.org] On Behalf Of Markus Werle
Sent: 08 January 2007 16:31
To: boost@lists.boost.org
Subject: Re: [boost] Review Request: Statistical Distributions and Mathematical Special Functions
John Maddock <john <at> johnmaddock.co.uk> writes:
The "Math Toolkit" has now matured to the point where Paul Bristow and I would like to ask for a formal review.
Some comments before the review begins (If you allow):
Comments are very welcome now - many of the more minor matters can be resolved before they confuse the review.
Also I dislike: "The definition used here is that used by Wolfram MathWorld" since by this the docs are neither self-contained nor robust against some idiot buying and shutting down the cited sites. I am sure, even wikipedia will vanish some day due to some more internet restrictions evolved from the pseudo-war against terror (alias war against freedom) or some idiot holding a software patent affecting the whole community.
I think you are being rather pessimistic here - if these sites go, Boost could too :-(( However we have also included some references to books and academic journals as well, but the overall size (nearly 2 Mbyte & over 250 pages in pdf already, from 25 Mbyte of Quickbook sources!) is also an issue. Not to mention the work in producing the documentation ;-) So I hope we have struck a reasonable balance here.
What I also dislike is the existence of default typedefs for double (students_t et al.). This is unnecessary and makes double a special type which it is not.
Well I think that double IS special - because over 95% of users will be just using double throughout. So we have tried to make it easier and less verbose to use double by providing the typedefs. It seems a reasonable compromise to me.

Paul

PS I am sure John will also comment on other items.

--- Paul A Bristow Prizet Farmhouse, Kendal, Cumbria UK LA8 8AB +44 1539561830 & SMS, Mobile +44 7714 330204 & SMS pbristow@hetp.u-net.com

I just joined the Boost community in order to begin a discussion of the merits of creating a library for function optimization. As a result, I discovered this thread on statistical distributions and mathematical special functions.

This library has a single function minimization algorithm: Brent's one-dimensional parabolic interpolation. For my own work, I have implemented two other function optimization methods, neither of which is limited to one-dimensional problems and both of which are suitable for either minimization or maximization (or any other arbitrary ordering, for that matter). The existing Brent's algorithm could be similarly generalized to handle orderings other than minimization. In addition, there are many other algorithms that could be included within an optimization library.

This leads me to a general question about the library under discussion and the bit that I have been working on. Does it make sense to factor out the Brent's algorithm code and add it with mine into a new optimization library, or is it preferable to integrate my algorithms into the larger library? Given the early stage of development of optimization within the current library (compared, for example, with the diversity of statistical and mathematical functions), which is the best pathway to follow?

As a new member of the Boost community, I appreciate your guidance. Thanks.

Cheers, Brook

Brook Milligan wrote:
This leads me to a general question about the library under discussion and the bit that I have been working on. Does it make sense to factor out the Brent's algorithm code and add it with mine into a new optimization library, or is it preferable to integrate my algorithms into the larger library? Given the early stage of development of optimization within the current library (compared, for example, with the diversity of statistical and mathematical functions), which is the best pathway to follow?
Personally, I'm happy with either :-)

Re: maximising vs minimising, a lot of this can be accomplished by mixing Boost.Lambda with a simple minimiser. Another option is to submit an enhanced version of the existing code: to an extent that's what I'm doing already with Hubert Holin's existing special functions.

John.

PS: I'm changing the subject line: might attract more attention if folks know what's being discussed :-)
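A small sketch of the maximising-via-minimising point, using brent_find_minima as found in current Boost.Math's <boost/math/tools/minima.hpp> (the reviewed toolkit's spelling may differ) and a plain lambda standing in for the Boost.Lambda expression John mentions: to maximise g, minimise -g.

    #include <boost/math/tools/minima.hpp>
    #include <iostream>
    #include <utility>

    int main()
    {
        using boost::math::tools::brent_find_minima;

        // g has a maximum of 1 at x = 2.
        auto g = [](double x) { return 1.0 - (x - 2.0) * (x - 2.0); };

        // Minimise the negation on [0, 5] to about 20 bits of precision;
        // the result is the (abscissa, minimum value) pair of -g.
        std::pair<double, double> r =
            brent_find_minima([&](double x) { return -g(x); }, 0.0, 5.0, 20);

        std::cout << "argmax ~= " << r.first
                  << ", max ~= " << -r.second << "\n";   // ~2 and ~1
    }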
participants (7)
- Brook Milligan
- Glenn Schrader
- John Maddock
- Kevin Lynch
- Markus Werle
- Paul A Bristow
- Ronald Garcia