
Hello,

I have been using Boost for quite a while, and I was wondering if there is any interest (or ongoing development) in a numerical quadrature library. I have seen some old discussions in the mailing list archive, but couldn't tell whether something was still in the works.

I have coded the embryo of such a template library (for my personal use) which basically works by taking a unary_function and allows the user (me in that case) to choose between different integration algorithms (composite trapezoidal, Simpson and Romberg at the moment). Performance is good (faster than the kind of implementation you get from Numerical Recipes in C, thanks to good inlining).

I also extended this to work on discrete data sets (such as a vector<double> obtained from measurements etc.) rather than just functions with analytical expressions. 2D integration of binary_functions can reuse the previous 1D algorithms through <functional>'s binder1st, but I haven't done much in that direction to potentially improve things.

I implemented two backbone classes to do the work, but I am still wondering what the "consensus" calling convention for that kind of library could be (the choice of a class was simply to be able to go back to a previously integrated function and ask for a bit more precision without having to compute points twice).

Any comments welcome,

All the best,
Franck
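A minimal sketch of the kind of routine being described here - a composite trapezoidal rule templated on a functor. This is illustrative only, not the code Franck posted:

    // Composite trapezoidal rule over n equal panels of [a, b].
    // F can be any functor or function callable as f(double).
    template<class F>
    double trapezoid(const F& f, double a, double b, int n)
    {
        const double h = (b - a) / n;
        double sum = 0.5 * (f(a) + f(b));   // endpoints carry weight 1/2
        for (int i = 1; i < n; ++i)
            sum += f(a + i * h);            // interior points carry weight 1
        return h * sum;
    }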

I'm interested in it. Also check out the example in the MCS files in the vault.

On Dec 4, 2006, at 9:56 PM, Schrom, Brian T wrote:
I'm interested in it. Also check out the example in the MCS files in the vault.
Hi,

Which examples are you referring to?

Since there is some (minor) interest in such a library, I uploaded a small example of the implementation I use. It does not contain all the integration routines, notably Romberg and Monte-Carlo. Before including those, I'd like to get feedback on how people would like to be able to call such a library.

Right now it's tuned for my own usage; that's why I used template classes and not just template functions.

The main design objective was to be able to use it on any functor inheriting from std::unary_function, with any predicate to decide when the required quadrature precision is achieved. There is undoubtedly a lot of room for improvement, but it should perform slightly better than a typical Numerical Recipes in C algorithm while being a lot more generic (and not involving static local variables ;-) ).

Here is the URL: http://www.thp.uni-koeln.de/~franck/quadrature.tar.gz

Feedback is most welcome, as I'd really love to have a unified, coherent approach to quadrature in C++.

Best Regards,
Franck

-----Original Message----- From: boost-bounces@lists.boost.org [mailto:boost-bounces@lists.boost.org] On Behalf Of Franck Stauffer Sent: 05 December 2006 12:35 To: boost@lists.boost.org Subject: Re: [boost] Numerical Quadrature Library
Since there is some (minor) interest in such a library, I uploaded a small example of the implementation I use. It does not contain all the integration routines, notably Romberg and Monte-Carlo. Before including those, I'd like to get feedback on how people would like to be able to call such a library.
Right now it's tuned for my own usage; that's why I used template classes and not just template functions.
The main design objective was to be able to use it on any functor inheriting from std::unary_function, with any predicate to decide when the required quadrature precision is achieved. There is undoubtedly a lot of room for improvement, but it should perform slightly better than a typical Numerical Recipes in C algorithm while being a lot more generic (and not involving static local variables ;-) ).
http://www.thp.uni-koeln.de/~franck/quadrature.tar.gz
Feedback is most welcome, as I'd really love to have a unified, coherent approach to achieving quadrature in C++.
IMO quadrature would be a useful addition to Boost - and you should not be too discouraged by a modest response as the more numerically inclined people tend to be intermittent lurkers rather than hard-core posters ;-) Paul --- Paul A Bristow Prizet Farmhouse, Kendal, Cumbria UK LA8 8AB +44 1539561830 & SMS, Mobile +44 7714 330204 & SMS pbristow@hetp.u-net.com

Good to bring this up. Note that we've discussed this before (and I posted sample code). I haven't seen your code yet, but you mention functors must derive from some base class. That's not desirable - and I doubt it's required.

On Dec 5, 2006, at 4:43 PM, Neal Becker wrote:
Good to bring this up. Note that we've discussed this before (and I posted sample code).
I haven't seen your code yet, but you mention functors must derive from some base class. That's not desirable - and I doubt it's required.
Hi,

Thanks for your answer. To make that point clear, inheritance is only required for static type checking of argument and result types without having to use more than two template parameters. If you mean it is not desirable for performance, I think deriving from std::unary_function costs nothing (since unary_function only declares typedefs for the result and argument types, there is no impact on calling the functor). The problem is that somehow I want the user to be explicit about those types. I am of course open to any proposal that avoids deriving from std::unary_function without requiring a long template argument list.

All the best,
Franck

On 12/5/06, Franck Stauffer <franck@lowcoders.net> wrote:
To make that point clear, inheritance is only required for static type checking of argument and result types without having to use more than two template parameters. If you mean it is not desirable for performance, I think deriving from std::unary_function costs nothing (since unary_function only declares typedefs for the result and argument types, there is no impact on calling the functor). The problem is that somehow I want the user to be explicit about those types.
Why not just assume that the functor has typedefs for argument_type and result_type, and document the details? --Michael Fawcett

Michael Fawcett <michael.fawcett <at> gmail.com> writes:
On 12/5/06, Franck Stauffer <franck <at> lowcoders.net> wrote:
To make that point clear, inheritance is only required for static type checking of argument and result types without having to use more than two template parameters. If you mean it is not desirable for performance, I think deriving from std::unary_function costs nothing (since unary_function only declares typedefs for the result and argument types, there is no impact on calling the functor). The problem is that somehow I want the user to be explicit about those types.
Why not just assume that the functor has typedefs for argument_type and result_type, and document the details?
Why not define an extra traits class that defines those typedefs, instead of polluting time-critical classes with typedefs? I always find it odd that STL algorithms require a rather large set of typedefs to be defined _inside_ the container. Why not std::size_type<T>::type instead of T::size_type? Why not std::begin(std::vector<T>& v) instead of std::vector<T>::begin()?

A generic library is much more reusable if no typedefs inside the class are required and no special member function has to be provided (declaring war on all those wrappers). And this I say not only for reusability, but also for performance: real-world compilers tend to optimize away empty or nearly empty classes more easily than stuff that contains more than one element.

Markus
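A minimal sketch of such an external traits class, assuming the primary template falls back on nested typedefs while "foreign" functors get a specialization (all names here are illustrative):

    #include <cmath>

    // Primary template: defer to nested typedefs when the functor has them
    // (e.g. because it derives from std::unary_function).
    template<class F>
    struct function_traits
    {
        typedef typename F::argument_type argument_type;
        typedef typename F::result_type   result_type;
    };

    // Plain function pointers are covered once and for all:
    template<class R, class A>
    struct function_traits<R (*)(A)>
    {
        typedef A argument_type;
        typedef R result_type;
    };

    // A functor with no typedefs at all: the traits are specialized
    // externally, so the time-critical class itself stays untouched.
    struct gaussian
    {
        double operator()(double x) const { return std::exp(-x * x); }
    };

    template<>
    struct function_traits<gaussian>
    {
        typedef double argument_type;
        typedef double result_type;
    };

The quadrature code would then name its types as function_traits<F>::argument_type instead of F::argument_type.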

Franck Stauffer wrote:
On Dec 5, 2006, at 4:43 PM, Neal Becker wrote:
Good to bring this up. Note that we've discussed this before (and I posted sample code).
I haven't seen your code yet, but you mention functors must derive from some base class. That's not desirable - and I doubt it's required.
Hi,
Thanks for your answer...
To make that point clear, inheritance is only required for static type checking of argument and result types without having to use more than two template parameters.
If you only want to do static *checking*, the cost may be unacceptable: after all, if some type does not match, it'll trigger an error when instantiating your generic code -- admittedly the error message will be more cryptic than the one provided by a BOOST_STATIC_ASSERT, but even so, cluttering the interface for static type checking reasons seems too much.
If you mean it is not desirable for performance, I think deriving from std::unary_function costs nothing (since unary_function only declares typedefs for the result and argument types, there is no impact on calling the functor). The problem is that somehow I want the user to be explicit about those types.
I am of course open to any proposal that avoids deriving from std::unary_function without requiring a long template argument list.
You can use some traits-based technique. As for the result type, you can use the carefully thought out [boost|std::tr1]::result_of proposal: http://boost.org/libs/utility/utility.htm#result_of HTH, Joaquín M López Muñoz Telefónica, Investigación y Desarrollo
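For the result type, a sketch of what the result_of-based approach could look like (the argument type is fixed to double here just to keep the sketch short; the routine assumes the result type supports + and multiplication by a double):

    #include <boost/utility/result_of.hpp>

    // The result type is deduced from the call expression F(double); no
    // inheritance and no nested typedef are required beyond what
    // result_of itself understands.
    template<class F>
    typename boost::result_of<F(double)>::type
    trapezoid(F f, double a, double b, int n)
    {
        typedef typename boost::result_of<F(double)>::type result_type;
        const double h = (b - a) / n;
        result_type sum = 0.5 * (f(a) + f(b));
        for (int i = 1; i < n; ++i)
            sum = sum + f(a + i * h);
        return h * sum;
    }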

Franck Stauffer wrote:
On Dec 5, 2006, at 4:43 PM, Neal Becker wrote:
Good to bring this up. Note that we've discussed this before (and I posted sample code).
I haven't seen your code yet, but you mention functors must derive from some base class. That's not desirable - and I doubt it's required.
Just for comparison, here are the classes I use. They use different integration rules and are incomplete (I made no effort to split the interval of integration in the library; I do it naively in the code: a hierarchical splitting algorithm is available, but I did not have time to recode it in C++). The basic routines are converted from GSL, which is itself translated from QUADPACK. I give it here for comparison. Two points:
1) I also have a "Function class" (included but not needed), but in fact everything is templated on a Functor class and there is no need to derive from this base class. It is more a concept that states what methods and types a functor class must have. Actually, this is quite like your UnaryFunction 2) One specific need I had was to be able to integrate vectorial or matricial functions, which is why the function has a type associated to it which is the type of what it returns. This return type also should be contrained to offer some services (namely assignement, +, -, multiplication by a scalar, abs,... but not done in my code). To really define a quadrature library, we should start defining the concept of function (even if in the end there will not be a base class for that). I think we are quite in an agreement here, but others might have a different view. Then, elementary quadrature rules and their methods should be defined (this is basically what I provide with GaussKronod.H). Those for simpson or trapezoidal quadrature should be more simple (but I have not done them, so....). Then, there should be higher level routines specifying how the elementary integrators are used to integrate a function over an extended interval (this can be regular splitting, or hierarchical splitting based on error estimation). At this point, your implementation mixes this step and the previous (mine provides only the previous step). From the begining, one might also have provision for multiple dimension integration.... My two cent on this problem... Oh, I almost forgot.... there is a GSL++, I discovered it after having written the above classes, but it might be worth a check. Theo.

I think virtual dispatch for a numerical library will be frowned upon. Anyway, it's not really necessary, and if you want to reduce the number of template instantiations, you can use boost::function.
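For instance, a sketch (the routine itself is hypothetical; boost::function is the real library class):

    #include <boost/function.hpp>

    // One non-template routine usable with any callable convertible to
    // boost::function<double (double)>: a single instantiation no matter
    // how many integrand types the client has.
    double integrate_trapezoid(boost::function<double (double)> f,
                               double a, double b, int n)
    {
        const double h = (b - a) / n;
        double sum = 0.5 * (f(a) + f(b));
        for (int i = 1; i < n; ++i)
            sum += f(a + i * h);
        return h * sum;
    }

The trade-off is an indirect call per function evaluation -- essentially the same cost concern as the virtual dispatch mentioned above.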

Neal Becker wrote:
Good to bring this up. Note that we've discussed this before (and I posted sample code).
I haven't seen your code yet, but you mention functors must derive from some base class. That's not desirable - and I doubt it's required.
I would go so far as to say that it is unacceptable for a serious library. Every extra requirement on client code that isn't absolutely necessary will dramatically reduce the potential userbase of your code unless it brings with it a set of strong benefits. And I think that you'll be hard pressed to show that this inheritance requirement brings ANY benefits, much less compelling ones. That said, I would really love to see a library like this in boost, and I encourage Franck to push forward with development.
--
-------------------------------------------------------------------------------
Kevin Lynch                voice: (617) 353-6025
Physics Department         Fax: (617) 353-9393
Boston University          office: PRB-361
590 Commonwealth Ave.      e-mail: krlynch@bu.edu
Boston, MA 02215 USA       http://budoe.bu.edu/~krlynch
-------------------------------------------------------------------------------

On Dec 5, 2006, at 4:58 PM, Kevin Lynch wrote:
I would go so far as to say that it is unacceptable for a serious library. Every extra requirement on client code that isn't absolutely necessary will dramatically reduce the potential userbase of your code unless it brings with it a set of strong benefits. And I think that you'll be hard pressed to show that this inheritance requirement brings ANY benefits, much less compelling ones.
That said, I would really love to see a library like this in boost, and I encourage Franck to push forward with development.
Ok, this is precisely the reason why I posted this sample code, as I was quite sure the consensus on a quadrature library would differ from this implementation. While I do agree that explicitly asking for inheritance is not in general a good thing, I think this code is fairly independent of that point. This brings me to the point raised by Michael Fawcett:
Why not just assume that the functor has typedefs for argument_type and result_type, and document the details?
Yes, that is a solution, and the way I achieved it in my example was to derive from std::unary_function (maybe because I am too lazy to type two lines of code in my test functions ;-) ). Note that the quadrature code itself doesn't make explicit use of this, so the posted code works if you typedef result_type and argument_type in your functor by hand. I am a bit sorry to have presented things the way I did, so let me try again ;-). This code DOES NOT require deriving from std::unary_function, but it DOES require result_type and argument_type to be typedef'ed in the functor. I definitely appreciate the feedback, and I won't give up easily, as I really wish to have useful generic quadrature code ;-)

Best Regards,
Franck

I'm interested in it. Also check out the example in the MCS files in the vault.
Hi,

Which examples are you referring to?

It was the MCS files in the units directory in the vault (http://boost-consulting.com/vault/index.php?PHPSESSID=498eit1ms0ddep27ooolei5ab6&direction=0&order=&directory=Units), specifically the mcs/units/tests/measurement.hpp file. It is minimal, but useful. I've also implemented similar fragments that include the functions in math.h. I don't like the way that I did it, so I am looking forward to seeing others' implementations.

Brian

-----Original Message----- From: boost-bounces@lists.boost.org [mailto:boost-bounces@lists.boost.org] On Behalf Of Franck Stauffer Sent: 04 December 2006 18:14 To: boost@lists.boost.org Subject: [boost] Numerical Quadrature Library
Hello,
I have been using Boost for quite a while, and I was wondering if there is any interest (or ongoing development) in a numerical quadrature library. I have seen some old discussions in the mailing list archive, but couldn't tell whether something was still in the works.
I have coded the embryo of such a template library (for my personal use) which basically works by taking a unary_function and allows the user (me in that case) to choose between different integration algorithms (composite trapezoidal, Simpson and Romberg at the moment). Performance is good (faster than the kind of implementation you get from Numerical Recipes in C, thanks to good inlining).
I also extended this to work on discrete data sets (such as a vector<double> obtained from measurements etc.) rather than just functions with analytical expressions. 2D integration of binary_functions can reuse the previous 1D algorithms through <functional>'s binder1st, but I haven't done much in that direction to potentially improve things.
Sounds useful. Boost could certainly do with more tools like this. Paul

I think we should begin with requirements. Here is an outline of some questions:

1. Interface:
   1.1 Generic function arg type(s): specified as a template parameter to the integrator, or as a trait of the functor?
   1.2 Generic function result type: template parameter, or computed (e.g., result_of)?

2. One-dimensional integration: integration approximation type as a template parameter?

3. Multi-dimensional integration:
   3.1 Is this required?
   3.2 Independent fixed limits
   3.3 Limits on some dimensions are functions of other variables?

On Dec 6, 2006, at 1:27 PM, Neal Becker wrote:
I think we should begin with requirements. Here is an outline of some questions:
1. Interface:
   1.1 Generic function arg type(s): specified as a template parameter to the integrator, or as a trait of the functor?
   1.2 Generic function result type: template parameter, or computed (e.g., result_of)?
2. One-dimensional integration: Integration approx type as template parameter
3. Multi-dimensional integration:
   3.1 Is this required?
   3.2 Independent fixed limits
   3.3 Limits on some dimensions are functions of other variables?
_______________________________________________ Unsubscribe & other changes: http://lists.boost.org/mailman/ listinfo.cgi/boost
Good idea :-)

For 1.1 and 1.2: I already expressed my initial wish to minimize the number of template parameters, especially if multidimensional integrals are being considered (and I am only talking about 2 to, let's say, 6 dimensions... for larger dimensions the problem ought to be treated differently). But after sleeping on it last night, I am less certain on this point.

2. Yes... it would make sense to me. It goes along with the point raised by Theodore Papadopoulo about separating the "higher level" routines from the elementary integrators (which I did not do in this small example).

3. As numerical quadrature in fewer than, say, 6 "effective" dimensions (although one can argue that in some cases 6 might already be too much) can be treated as multiple one-dimensional integrals, the first requirement I would have is making sure that there is an easy mechanism to bind a 1D integration to a specific argument of a multidimensional function. Yet I suppose it could be worth providing straightforward high-level routines at least for 2D and 3D. Then I'd eventually like to see something for N-dimensional integrals with "N large", treated exclusively by Monte-Carlo or quasi-MC algorithms.

(Also, one should make sure that people can use boost::function or an equivalent mechanism in case they need to wrap old code that has been using function pointers.)

Franck Stauffer wrote:
On Dec 6, 2006, at 1:27 PM, Neal Becker wrote:
[...]
3.3 Limits on some dimensions are functions of other variables?
3. As numerical quadrature in fewer than, say, 6 "effective" dimensions (although one can argue that in some cases 6 might already be too much) can be treated as multiple one-dimensional integrals, the first requirement I would have is making sure that there is an easy mechanism to bind a 1D integration to a specific argument of a multidimensional function. Yet I suppose it could be worth providing straightforward high-level routines at least for 2D and 3D. Then I'd eventually like to see something for N-dimensional integrals with "N large", treated exclusively by Monte-Carlo or quasi-MC algorithms.
How about this question? This is perhaps the trickiest. It's tempting to ignore it - but it's also solvable.

On Dec 6, 2006, at 2:34 PM, Neal Becker wrote:
[...]
3.3 Limits on some dimensions are functions of other variables?
How about this question? This is perhaps the trickiest. It's tempting to ignore it - but it's also solvable.
I am sorry for having ignored that point ;-) It is indeed tricky and I have to confess I haven't thought about this...what is your call on this point and what do you think should be achieved?

Franck Stauffer wrote:
On Dec 6, 2006, at 2:34 PM, Neal Becker wrote:
[...]
3.3 Limits on some dimensions are functions of other variables?
How about this question? This is perhaps the trickiest. It's tempting to ignore it - but it's also solvable.
I am sorry for having ignored that point ;-) It is indeed tricky and I have to confess I haven't thought about this...what is your call on this point and what do you think should be achieved?
I wrote some code quite some time ago - perhaps not very sophisticated by modern standards - but it shows some example of possibilities:

#include <cassert>
#include <cmath>
#include <functional>
#include <vector>
using namespace std;

// trapzd (stepwise trapezoidal refinement) and polint (polynomial
// extrapolation) are assumed to be available, Numerical Recipes style.

template<class func>
double romberg (func f, double a, double b, double eps = 1e-6,
                int jmax = 20, int K = 5)
{
  std::vector<double> s (jmax+1);
  std::vector<double> h (jmax+2);
  h[1] = 1.0;
  trapzd T;
  for (int j = 1; j <= jmax; j++) {
    s[j] = T (f, a, b, j);
    if (j >= K) {
      double ss, dss;
      polint (&h[j-K], &s[j-K], K, 0.0, ss, dss);
      if (fabs (dss) <= eps * fabs (ss))
        return ss;
    }
    h[j+1] = 0.25 * h[j];
  }
  assert (1 == 0);   // failed to converge within jmax refinements
}

// Fixed x-limits: bind the outer variable y, integrate over x.
template<class func>
struct g {
  func& f;
  double ax;
  double bx;
  double eps;
  int jmax, K;
  g (func& _f, double _ax, double _bx, double _eps = 1e-6,
     int _jmax = 20, int _K = 5)
    : f (_f), ax (_ax), bx (_bx), eps (_eps), jmax (_jmax), K (_K) {}
  double operator() (double y) {
    return romberg (bind2nd (f, y), ax, bx, eps, jmax, K);
  }
};

// Variable x-limits: the limits are themselves functions of y.
template<class func, class fa, class fb>
struct g2 {
  const func& f;
  const fa& ax;
  const fb& bx;
  double eps;
  int jmax, K;
  g2 (const func& _f, const fa& _ax, const fb& _bx, double _eps = 1e-6,
      int _jmax = 20, int _K = 5)
    : f (_f), ax (_ax), bx (_bx), eps (_eps), jmax (_jmax), K (_K) {}
  double operator() (double y) {
    return romberg (bind2nd (f, y), ax (y), bx (y), eps, jmax, K);
  }
};

template<class func>
double romberg2d (func f, double ax, double bx, double ay, double by,
                  double eps = 1e-6, int jmax = 20, int K = 5)
{
  return romberg (g<func> (f, ax, bx, eps, jmax, K), ay, by, eps, jmax, K);
}

template<class func, class fa, class fb>
double romberg2d (const func& f, const fa& ax, const fb& bx,
                  double ay, double by,
                  double eps = 1e-6, int jmax = 20, int K = 5)
{
  return romberg (g2<func, fa, fb> (f, ax, bx, eps, jmax, K),
                  ay, by, eps, jmax, K);
}
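A hypothetical usage sketch of the second romberg2d overload above (assuming trapzd and polint are available), showing how function-valued limits let one integrate over a non-rectangular region:

    // f(x,y) = 1 over the triangle 0 <= y <= 1, 0 <= x <= y; the exact
    // area is 1/2. bind2nd needs the binary_function typedefs.
    struct unit_density : std::binary_function<double, double, double>
    {
        double operator()(double, double) const { return 1.0; }
    };
    struct zero_limit : std::unary_function<double, double>
    {
        double operator()(double) const { return 0.0; }  // lower x-limit
    };
    struct diagonal_limit : std::unary_function<double, double>
    {
        double operator()(double y) const { return y; }  // upper x-limit depends on y
    };

    double area = romberg2d(unit_density(), zero_limit(), diagonal_limit(),
                            0.0, 1.0);   // should come out close to 0.5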

Starting with basics. Here is my idea for a family of 1-d integrators:

template<typename func_t>
struct arg {
  typedef typename func_t::arg_t type;
};

template<typename func_t>
struct ret {
  typedef typename func_t::ret_t type;
};

template<typename func_t,
         typename arg_t = typename arg<func_t>::type,
         typename ret_t = typename ret<func_t>::type>
struct trap_1d {
  ret_t operator() (arg_t lower, arg_t upper);
};

I thought about having the integrator type (romberg, trap, etc.) be specified as the 1st template parameter, but offhand I don't see what value that adds. Any thoughts?

On Dec 7, 2006, at 4:22 PM, Neal Becker wrote:
Starting with basics. Here is my idea for a family of 1-d integrators:
template<typename func_t> struct arg { typedef typename func_t::arg_t type; };
template<typename func_t> struct ret { typedef typename func_t::ret_t type; };
template<typename func_t,
         typename arg_t = typename arg<func_t>::type,
         typename ret_t = typename ret<func_t>::type>
struct trap_1d {
  ret_t operator() (arg_t lower, arg_t upper);
};
I have no real objections to the template parameters, nor to making the integrator a functor. But I just have one simple question: what should trap_1d::operator() return? Successive refinements of the integral when called multiple times?

I should explain that the previous was a simple outline. I was thinking trap_1d::operator() does the complete integration. It will need more information to specify the desired precision; I'm thinking these additional arguments could be passed to trap_1d::operator() or could be constructor args. The reason trap_1d is a functor is just that I wanted arg_t and ret_t to be template parameters with default values. This gives the best of all worlds: you can override these types by specifying non-defaults, or you can specialize the arg or ret templates, or you can use the default arg and ret templates, which are designed to work with functors in the style of std::unary_function (sorry, I didn't check the spellings of the type names).
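For instance, the three styles would look like this (a sketch against the trap_1d outline above; the functor types are illustrative):

    // 1. Rely on the defaults: the functor provides the nested typedefs
    //    that the arg/ret templates look for.
    struct square
    {
        typedef double arg_t;
        typedef double ret_t;
        double operator()(double x) const { return x * x; }
    };
    trap_1d<square> t1;

    // 2. Override: pass the types explicitly as template arguments.
    trap_1d<square, double, double> t2;

    // 3. Specialize the arg/ret traits for a functor you cannot modify.
    struct legacy
    {
        double operator()(double x) const { return x + 1.0; }
    };
    template<> struct arg<legacy> { typedef double type; };
    template<> struct ret<legacy> { typedef double type; };
    trap_1d<legacy> t3;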

On Dec 7, 2006, at 5:02 PM, Neal Becker wrote:
I should explain that the previous was a simple outline.
I was thinking trap_1d::operator() does the complete integration. It will need more information to specify the desired precision. I'm thinking these additional arguments could be passed to trap_1d::operator() or could be constructor args.
Ok... then we ought to discuss those extra parameters as well:

1. Precision: what should its type be? The return type would be natural, but it is problematic for complex integrands.

2. How do we evaluate whether the desired precision is reached? In the code I uploaded I explicitly let the user supply his own predicate, which I would say is important, especially as some people would like to compare things in ULPs.
The reason trap_1d is a functor is just that I wanted arg_t and ret_t to be template parameters with default values. This gives the best of all worlds: you can override these types by specifying non-defaults, or you can specialize the arg or ret templates, or you can use the default arg and ret templates, which are designed to work with functors in the style of std::unary_function (sorry, I didn't check the spellings of the type names).
That's just fine with me :-) What do other people think about all this?

Franck Stauffer wrote:
Ok... then we ought to discuss those extra parameters as well: 1. Precision: what should its type be? The return type would be natural, but it is problematic for complex integrands.
Leave that to the user (while providing sensible predefined possibilities).
2. How do we evaluate whether the desired precision is reached? In the code I uploaded I explicitly let the user supply his own predicate, which I would say is important, especially as some people would like to compare things in ULPs.
This should be a template parameter. Decide that one parameter of the integration routine is an "Iterator-like" class that has a method:

bool stop(const Integrator&);

Then each user has the choice of the implementation of the routine with maximum flexibility. I use such a thing for gradient descent and it is very flexible. It even makes it possible to decide that only a certain amount of time is allowed for the integration, or some number of iterations, or some sophisticated error measurement!! With a little more care, we can even define hierarchical behaviours (multiscale).

This approach also allows easy wrapping of the "Iterator-like" class to output (debug) information during the computation.

Theo.
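A sketch of what such a controller might look like (the accessor names on the Integrator are assumptions, purely for illustration):

    #include <cmath>

    // One possible stopping policy: give up after max_iter refinements,
    // or stop as soon as the estimated error falls below a relative
    // tolerance.
    struct relative_error_control
    {
        double tol;
        int    max_iter;

        relative_error_control(double t, int m) : tol(t), max_iter(m) {}

        template<class Integrator>
        bool stop(const Integrator& I) const
        {
            // iterations(), error_estimate() and result() are hypothetical
            // accessors the integrator would be required to provide.
            return I.iterations() >= max_iter
                || I.error_estimate() <= tol * std::fabs(I.result());
        }
    };

A time-budget controller, or a debugging wrapper that logs every refinement, would present exactly the same stop() interface.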

On Dec 7, 2006, at 7:14 PM, Theodore Papadopoulo wrote:
2. How do we evaluate whether the desired precision is reached? In the code I uploaded I explicitly let the user supply his own predicate, which I would say is important, especially as some people would like to compare things in ULPs.
This should be a template parameter. Decide that one parameter of the integration routine is an "Iterator-like" class that has a method:
bool stop(const Integrator&).
Then each user has the choice of the implementation of the routine with maximum flexibility. I use such a thing for gradient descent and it is very flexible. It even makes it possible to decide that only a certain amount of time is allowed for the integration, or some number of iterations, or some sophisticated error measurement!! With a little more care, we can even define hierarchical behaviours (multiscale).
This approach also allows easy wrapping of the "Iterator-like" class to output (debug) information during the computation.
Thanks for the answer and excellent suggestion :-) I'll try to post a draft trapezoidal integrator implementing this idea tomorrow.

Ok...then we ought to discuss those extra parameters as well:
1. Precision: what should its type be? The return type would be natural, but it is problematic for complex integrands. 2. How do we evaluate whether the desired precision is reached? In the code I uploaded I explicitly let the user supply his own predicate, which I would say is important, especially as some people would like to compare things in ULPs.
If there is a push to begin thinking about numerical libraries to be incorporated into Boost, I would vote to think seriously about having a return type that includes error estimates (as GSL does with its result type or, perhaps, something like the measurement class I threw together as a demo for mcs::units, attached). If implicit conversion to the result type was allowed, it would be transparent to functions that don't care about errors and visible to those that do.

As far as I can see, precision needs to be the same type as the return type, since it represents uncertainty in the latter. That is, the range of possible return values would be [result - n*error, result + n*error]. Do you have a case in mind where this isn't true?

Also, though I haven't looked closely at it, it seems like integrators, in 1D at least, are general accumulators and might fit into the Boost.Accumulators framework nicely...

Matthias

----------------------------------------------------------------
Matthias Schabel, Ph.D.
Assistant Professor, Department of Radiology
Utah Center for Advanced Imaging Research
729 Arapeen Drive
Salt Lake City, UT 84108
801-587-9413 (work)
801-585-3592 (fax)
801-706-5760 (cell)
801-484-0811 (home)
matthias at stanfordalumni dot org
----------------------------------------------------------------
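A sketch of such a return type (in the spirit of GSL's value/error pairs; this is not the attached measurement class, and some_integrator is hypothetical):

    // Carries the estimate and its error; implicit conversion keeps it
    // transparent to error-agnostic callers.
    template<class T>
    struct quadrature_result
    {
        T value;
        T error;

        quadrature_result(const T& v, const T& e) : value(v), error(e) {}

        operator T() const { return value; }   // error-agnostic use
    };

    // Error-agnostic:  double I = some_integrator(f, a, b);
    // Error-aware:     quadrature_result<double> r = some_integrator(f, a, b);
    //                  ... use r.value and r.error ...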


On Dec 7, 2006, at 9:29 PM, Matthias Schabel wrote:
As far as I can see, precision needs to be the same type as the return type since it represents uncertainty in the latter. That is, the range of possible return values would be [result - n*error, result + n*error]. Do you have a case in mind where this isn't true?
This is always true when you can define such an interval, which is not the case for the integral of a complex-valued function (there is no "<" on complex).

Franck Stauffer wrote:
On Dec 7, 2006, at 9:29 PM, Matthias Schabel wrote:
As far as I can see, precision needs to be the same type as the return type since it represents uncertainty in the latter. That is, the range of possible return values would be [result - n*error, result + n*error]. Do you have a case in mind where this isn't true?
This is always true when you can define such an interval, which is not the case for the integral of a complex-valued function (there is no "<" on complex).
Isn't this Boost.Interval? http://www.boost.org/libs/numeric/interval/doc/interval.htm Oh, and this library is in the TR2 pipeline as well... Jeff
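As a sketch of that idea, an integrator could return its enclosure directly as a boost::numeric::interval (the packaging function here is hypothetical; the interval calls are the library's real interface):

    #include <boost/numeric/interval.hpp>

    typedef boost::numeric::interval<double> enclosure;

    // Package a result and its error estimate as [result - err, result + err].
    enclosure as_enclosure(double result, double err)
    {
        return enclosure(result - err, result + err);
    }

    // Callers recover a point estimate and the uncertainty with
    // boost::numeric::median(x) and boost::numeric::width(x).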

As far as I can see, precision needs to be the same type as the return type since it represents uncertainty in the latter. That is, the range of possible return values would be [result - n*error, result + n*error]. Do you have a case in mind where this isn't true?
This is always true when you can define such an interval, which is not the case for the integral of a complex-valued function (there is no "<" on complex).
I'm certainly not an expert in the numerical analysis involved in convergence of complex functions, but I imagine that this is essentially a subset of the more general case of vector-valued functions. For the latter, the first-order assumption is that error variances are uncorrelated, leading to a diagonal covariance matrix between components. In this way, if a complex error were returned, it would indicate a Gaussian-distributed set of errors in the real and imaginary components, forming an ellipse oriented with the real and imaginary axes. Naturally, covariances are important in many cases, but are also more heavyweight. I suppose that an N-dimensional result class could return an N-dimensional value and an NxN covariance matrix that reduces to diagonal in the case where covariances are negligible...

Matthias

----------------------------------------------------------------
Matthias Schabel, Ph.D.
Assistant Professor, Department of Radiology
Utah Center for Advanced Imaging Research
729 Arapeen Drive
Salt Lake City, UT 84108
801-587-9413 (work)
801-585-3592 (fax)
801-706-5760 (cell)
801-484-0811 (home)
matthias at stanfordalumni dot org
----------------------------------------------------------------

-----Original Message----- From: boost-bounces@lists.boost.org [mailto:boost-bounces@lists.boost.org] On Behalf Of Matthias Schabel Sent: 07 December 2006 20:29 To: boost@lists.boost.org Subject: Re: [boost] Numerical Quadrature Library/Boost.Accumulators
If there is a push to begin thinking about numerical libraries to be incorporated into Boost, I would vote to think seriously about having a return type that includes error estimates (as GSL does with its result type or, perhaps, something like the measurement class I threw together as a demo for mcs::units, attached). If implicit conversion to the result type was allowed, it would be transparent to functions that didn't care about errors and visible to those that do.
As far as I can see, precision needs to be the same type as the return type since it represents uncertainty in the latter. That is, the range of possible return values would be [result - n*error, result + n*error]. Do you have a case in mind where this isn't true?
I strongly support thinking about (computational) uncertainty estimates. (Uncertainty seems to be the 'modern' term for what was called error.) But I am not clear exactly how to achieve this. We've got Interval, but it's doubtful if this is the right weapon to use. This is a potentially big strength for C++ over other languages. Paul --- Paul A Bristow Prizet Farmhouse, Kendal, Cumbria UK LA8 8AB +44 1539561830 & SMS, Mobile +44 7714 330204 & SMS pbristow@hetp.u-net.com

On Friday, 8 December 2006 at 09:25 +0000, Paul A Bristow wrote:
I strongly support thinking about (computational) uncertainty estimates.
(Uncertainty seems to be the 'modern' term for what was called error).
But I am not clear exactly how to achieve this. We've got Interval but it's doubtful if this is the right weapon to use.
Interval as a generic notion is the right tool for bounding errors and representing uncertain inputs. Most papers on "reliable computing" are related to intervals (the name may change for higher dimensions, like ellipsoids, but they are still convex sets). Then there is the matter of representing intervals. With lower-upper bounds as in Boost.Interval, the representation is fine as long as the precision of the intervals is "small". For precisions of thousands of bits, a midpoint-radius representation is more efficient, as the radius can be stored in reduced form to improve performance and memory footprint. Best regards, Guillaume

Guillaume Melquiond wrote:
On Friday, 8 December 2006 at 09:25 +0000, Paul A Bristow wrote:
I strongly support thinking about (computational) uncertainty estimates.
(Uncertainty seems to be the 'modern' term for what was called error).
But I am not clear exactly how to achieve this. We've got Interval but it's doubtful if this is the right weapon to use.
Interval as a generic notion is the right tool for bounding errors and representing uncertain inputs. Most papers on "reliable computing" are related to intervals (the name may change for higher dimensions, like ellipsoids, but they are still convex sets).
Then there is the matter of representing intervals. With lower-upper bounds as in Boost.Interval, the representation is fine as long as the precision of the intervals is "small". For precisions of thousands of bits, a midpoint-radius representation is more efficient, as the radius can be stored in reduced form to improve performance and memory footprint.
Isn't the Boost.Interval interface agnostic of the underlying representation? Of course one would probably have to specialise boost::interval<T> for each type T you want to work with :-( Unless someone has a generic implementation? John.

On Friday, 8 December 2006 at 14:04 +0000, John Maddock wrote:
Then there is the matter of representing intervals. With lower-upper bounds as in Boost.Interval, the representation is fine as long as the precision of the intervals is "small". For precisions of thousands of bits, a midpoint-radius representation is more efficient, as the radius can be stored in reduced form to improve performance and memory footprint.
Isn't the Boost.Interval interface agnostic of the underlying representation?
Boost interval<T> uses a representation as a pair of T values: the lower and upper bounds of the connected set [1]. Whatever the uncertainty on the real results, it is enclosed between these two bounds.
Of course one would probably have to specialise boost::interval<T> for each type T you want to work with :-( Unless someone has a generic implementation?
The implementation is generic; you just have to provide a mathematical kernel as a policy (what the documentation calls the "rounding policy") in order to describe how to perform operations on values of type T [2]. That is, once you have said how to divide values of type T, you can divide values of type interval<T>. Best regards, Guillaume [1] For comparison, n2137 does not mandate a specific representation; an implementation relying on a midpoint-radius one would still be compliant. [2] n2137 does not need a policy, as it is not generic: its behavior is not defined for user-defined types, only for native floating-point types.

Guillaume Melquiond wrote:
The implementation is generic; you just have to provide a mathematical kernel as a policy (what the documentation calls the "rounding policy") in order to describe how to perform operations on values of type T [2]. That is, once you said how to divide values of type T, you can divide values of type interval<T>.
I hadn't realised that, thanks for the correction!
[1] For comparison, n2137 does not mandate a specific representation; an implementation relying on a midpoint-radius one would still be compliant.
Right, that was what I was getting at really: presumably one could write a specialisation for boost::interval<real_big_type> that used value+radius internally? Regards, John.

Neal Becker <ndbecker2@gmail.com> wrote:
I thought about having the integrator type (romberg, trap, etc) be specified as the 1st template parameter, but offhand I don't see what value that adds. Any thoughts?
It should be easy to specify the integrator type at runtime. So unless you can point to a performance improvement, I would keep it out of the template arguments. Cheers, Walter Landry wlandry@ucsd.edu
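A sketch of what runtime selection could look like in front of compile-time rules (trapezoid, simpson and romberg here stand for hypothetical routines sharing one signature):

    enum quadrature_method { use_trapezoid, use_simpson, use_romberg };

    template<class F>
    double integrate(quadrature_method m, F f, double a, double b, double eps)
    {
        switch (m)
        {
            case use_trapezoid: return trapezoid(f, a, b, eps);
            case use_simpson:   return simpson(f, a, b, eps);
            default:            return romberg(f, a, b, eps);
        }
    }

Note the flip side: every rule still gets instantiated for every integrand type F, which is part of what the boost::function suggestion earlier in the thread was meant to avoid.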
participants (15)
- Franck Stauffer
- Guillaume Melquiond
- Jeff Garland
- Joaquín Mª López Muñoz
- John Maddock
- Kevin Lynch
- Markus Werle
- Matthias Schabel
- Michael Fawcett
- Neal Becker
- Paul A Bristow
- Schrom, Brian T
- Sohail Somani
- Theodore Papadopoulo
- Walter Landry