Math functions - requirements 'spec'

Following the view of the C and C++ Working Groups at Redmond that a working implementation of my proposal for math functions was a necessary condition for consideration for a TR-2 standard, I have been skirmishing with the problems of converting Stephen Moshier's Cephes code into something that works for both C++ and C.

Several issues have emerged, mainly revealing my ignorance of C - a state of bliss in which I would have preferred to remain ;-)

1 Should I no longer cater for non-compliant compilers (usually old-style function specifications)? I only have MSVC 8.0 available.

2 How do I check that my code is Standard C compatible (as well as C++)?

3 How do I tell whether the compiler is a C compiler or a C++ compiler (for #if-ing)? #if __cplusplus ... #if __STDC__ ... ?

4 I have compiled a module #including <cmath> OK with the explicit project property "compile with C++", but when I change this to "compile with C" and no extensions, so the /Za option is on the command line, it does not define __STDC__ == 1 as I had expected. What am I doing wrong?

5 Do I have to use exclusively C /* */ style comments :-((? (Or can I assume that C compilers will understand // comments?)

I have also immediately come up against the problems of IEEE 754 compliance, argument checks, NaNs, infs and exception throwing.

6 Should I assume IEEE 754 compliance and signal #error "Only works with IEEE compliant compilers"? How do I check with C - is there a numeric_limits::is_iec559 equivalent? Or would it be foolish to rule out some older DEC machines?

7 Do you recommend making NaN and inf checks optional?

8 How do I find out if they are available, automatically but portably?

9 Can I assume isnan(float, double and long double) with C and with C++?

10 How do I detect isinf? Do I use FPclass for detecting isinf? (pos and neg?)

11 Do you recommend making throwing exceptions optional with C++?
12 Should I scrap all the hexadecimal (mainly polynomial) constants on the grounds that conforming compilers should read decimal digit strings 'correctly' - getting the nearest representable value? Is this true for both C and C++?

13 Do you also recommend making argument checking optional (with #ifdefs), so that those who want the ultimate in speed at any risk can switch checking off?

Comments welcome.

Paul

PS Slightly updated versions of my TR-2 proposal are at
http://www.hetp.u-net.com/public/math_stats_functions_tr2_v2.doc
http://www.hetp.u-net.com/public/math_stats_functions_tr2_v2.pdf

Paul A Bristow Prizet Farmhouse, Kendal, Cumbria UK LA8 8AB +44 1539 561830 +44 7714 330204 mailto: pbristow@hetp.u-net.com

"Paul A Bristow" <pbristow@hetp.u-net.com> wrote in message news:E1CPhm8-0004eI-89@he203war.uk.vianw.net...

| 1 Should I no longer cater for non-compliant compilers (usually old-style
| function specifications)?

yes.

| I only have MSVC 8.0 available.
|
| 2 How do I check that my code is Standard C compatible (as well as C++)?

compile with
/TC compile all files as .c
/TP compile all files as .cpp

| 3 How do I tell whether the compiler is a C compiler or a C++ compiler
| (for #if-ing)? #if __cplusplus ... #if __STDC__ ... ?

#if !defined(__cplusplus)
... we have C

| 5 Do I have to use exclusively C /* */ style comments :-((? (Or can I
| assume that C compilers will understand // comments?)

why do you want to implement it for C also?

| 6 Should I assume IEEE 754 compliance and signal #error "Only works with
| IEEE compliant compilers"?

but this is not guaranteed by the standard, is it?

| 13 Do you also recommend making argument checking optional (with #ifdefs),
| so that those who want the ultimate in speed at any risk can switch
| checking off?

first make it work...

| Comments welcome.

good to see you working on this :-)

-Thorsten

| -----Original Message-----
| From: boost-bounces@lists.boost.org
| [mailto:boost-bounces@lists.boost.org] On Behalf Of Thorsten Ottosen
| Sent: 04 November 2004 17:47
| To: boost@lists.boost.org
| Subject: [boost] Re: Math functions - requirements 'spec'
|
| | 5 Do I have to use exclusively C /* */ style comments :-((? (Or can I
| | assume that C compilers will understand // comments?)
|
| why do you want to implement it for C also?

Because I have been recommended to follow the example of the C99 math functions, which are applicable to both C and C++. And I have therefore made a proposal to both the C and C++ WGs.

| | 6 Should I assume IEEE 754 compliance and signal #error
| | "Only works with IEEE compliant compilers"?
|
| but this is not guaranteed by the standard, is it?

No, but tons of code assumes this, and portable ways of checking for isnan and isfinite are essential.

So, new question: is it OK if I assume the C99 additions (which include these)?

| | 13 Do you also recommend making argument checking optional (with #ifdefs),
| | so that those who want the ultimate in speed at any risk can switch
| | checking off?
|
| first make it work...

It already does, as Cephes C code, so now seems the right time to #if.

| good to see you working on this :-)
| -Thorsten

I only said I was thinking about it - it looks VERY tedious and messy, even assuming the underlying code is fine. There are dozens of functions ... Never mind proper testing ...

You will understand that I don't want to find that reviewers suddenly have other new ideas.

Paul

"Paul A Bristow" <pbristow@hetp.u-net.com> wrote in message news:E1CPpK5-00034X-Vt@he204war.uk.vianw.net...

| | why do you want to implement it for C also?
|
| because I have been recommended to follow the example of the C99 math
| functions, which are applicable to both C and C++. And I have therefore
| made a proposal to both C and C++ WGs

that seems reasonable. However, would it not be possible to implement the C interface in terms of the templated C++ implementation?

| | | 6 Should I assume IEEE 754 compliance and signal #error
| | | "Only works with IEEE compliant compilers"?
| |
| | but this is not guaranteed by the standard, is it?
|
| No, but tons of code assumes this, and portable ways of checking for isnan
| and isfinite are essential.
|
| So, new question: is it OK if I assume the C99 additions (which include these)?

I don't have a good answer to this. Do all reasonable desktop compilers actually go for IEEE 754?

Why can't you use std::numeric_limits<F>::is_nan() etc. for C++?

| | good to see you working on this :-)
| | -Thorsten
|
| I only said I was thinking about it - it looks VERY tedious and messy,
| even assuming the underlying code is fine. There are dozens of functions ...
| Never mind proper testing ...
|
| You will understand that I don't want to find that reviewers suddenly have
| other new ideas.

yeah, I understand. I would like to say that even if we don't get the functions into the standard, I think a good Boost version will become a significant benefit to the community.

-Thorsten

On Sun, 7 Nov 2004 12:13:27 +0100, Thorsten Ottosen <nesotto@cs.auc.dk> wrote:
"Paul A Bristow" <pbristow@hetp.u-net.com> wrote in message
I don't have a good answer to this. Do all reasonable desktop compilers actually go for IEEE 754?
No. (But the issue isn't so much whether a compiler is IEEE-754 compliant as whether the architecture itself is.) 'float' and 'double' are usually IEEE-754 compliant, but it's very common for 'long double' not to be. There are some popular platforms where the long double format is just plain weird. --Matt

| -----Original Message-----
| From: boost-bounces@lists.boost.org
| [mailto:boost-bounces@lists.boost.org] On Behalf Of Thorsten Ottosen
| Sent: 07 November 2004 11:13
| To: boost@lists.boost.org
| Subject: [boost] Re: Re: Math functions - requirements 'spec'
|
| Do all reasonable desktop compilers actually go for IEEE 754?

Well, the Dinkumware code used by Microsoft and many others only handles IEEE 754.

| Why can't you use std::numeric_limits<F>::is_nan() etc for C++ ?

My understanding is that only std::numeric_limits<F>::has_quiet_NaN and a value, std::numeric_limits<F>::quiet_NaN(), are provided - interesting, but you can't test for a NaN by comparing with this! There are tons of possible NaNs :-( What you really want, is_nan(), is NOT provided.

But as noted by Hubert Holin, and his reference - this is a mess.

Paul

Paul A Bristow Prizet Farmhouse, Kendal, Cumbria UK LA8 8AB +44 1539 561830 +44 7714 330204 mailto: pbristow@hetp.u-net.com

PS to Standards groups - where is this on your agendas?

Somewhere in the E.U., le 05/11/2004

Bonjour

In article <E1CPhm8-0004eI-89@he203war.uk.vianw.net>, "Paul A Bristow" <pbristow@hetp.u-net.com> wrote:
Following the view of C and C++ Working groups at Redmond that a working implementation of my proposal for math functions was a necessary condition for consideration for a TR-2 standard,
I have been skirmishing with the problems of converting Stephen Moshier's Cephes code into something that works for both C++ and C.
Several issues have emerged, mainly revealing my ignorance with C - a state of bliss in which I would have preferred to remain ;-)
1 Should I no longer cater for non-compliant compilers (usually old-style function specifications)?
I only have MSVC 8.0 available.
Yes, please stick to compliant compilers (or at least, do not feel you have to bend backwards too much for the non-conforming ones).
2 How do I check that my code is Standard C compatible (as well as C++)?
[SNIP]
5 Do I have to use exclusively C /* */ style comments :-((? (Or can I assume that C compilers will understand // comments?)
I must say I *strongly* disagree with having code which is C compatible, mainly because this will greatly hamper genericity (or at least convenient and safe parametrisation of code). Even if the code turns out to be feasible only for, say, float and double, I strongly believe it should be templated upon the floating type, with specializations if need be. The C-library-in-C++-clothing approach is just plain wrong, IMHO.
I have also immediately come up against the problems of IEEE 754 compliance, argument checks, NaN, infs and exception throwing.
6 Should I assume IEEE 754 compliance and signal #error "Only works with IEEE compliant compilers"? How do I check with C - is there a numeric_limits::is_iec559 equivalent? Or would it be foolish to rule out some older DEC machines?
7 Do you recommend making NaN and inf checks optional?
8 How do I find out if they are available, automatically but portably?
9 Can I assume isnan(float, double and long double) with C and with C++?
10 How do I detect isinf? Do I use FPclass for detecting isinf? (pos and neg?)
I started a thread on comp.std.c++ a while back about NaN and Inf (http://groups.google.com/groups?hl=en&lr=&threadm=b172eb2f.0106180508.490a6401%40posting.google.com&rnum=5&prev=/groups%3Fhl%3Den%26lr%3D%26q%3DHolin%26btnG%3DSearch%26meta%3Dgroup%253Dcomp.std.c%25252B%25252B). Basically, we are out of luck using only the provisions of C++. If we assume that in addition we abide by the relevant IEEE/IEC standards, we can roll our own isinf and isnan (with good suggestions in the thread). I still believe we should have language support for this!
11 Do you recommend making throwing exceptions optional with C++?
Computations *should* throw if necessary. This *should not* be an option. However, Inf and NaN should be reported as such, and not throw. Using "#error" should be deprecated (its main use should be for the maintenance of legacy code, not for the development of new code).
12 Should I scrap all the hexadecimal (mainly polynomial) constants on the grounds that conforming compilers should read decimal digit strings 'correctly' - getting the nearest representable value? Is this true for both C and C++?
Keep them, they are hardly a nuisance... You can put decimal equivalents in comments, though...
13 Do you also recommend making argument checking optional (with #ifdefs), so that those who want the ultimate in speed at any risk can switch checking off?
First make it do the right thing; only then, perhaps, worry about making it fast... (this comment of course does not preclude using asymptotic methods for which the error term can be estimated). In other words, use algorithms that are fast, but do not remove the safety nets!
Comments welcome.
Paul
PS
Slightly updated versions of my TR2 proposal are at
http://www.hetp.u-net.com/public/math_stats_functions_tr2_v2.doc
http://www.hetp.u-net.com/public/math_stats_functions_tr2_v2.pdf
I'll try to take a look next Monday.
Paul A Bristow Prizet Farmhouse, Kendal, Cumbria UK LA8 8AB +44 1539 561830 +44 7714 330204 mailto: pbristow@hetp.u-net.com
Thanks for taking on this essential task! Hubert Holin

Hubert Holin wrote:
I must say I *strongly* disagree with having code which is C compatible, mainly because this will greatly hamper genericity (or at least convenient and safe parametrisation of code).
Even if the code turns out to be feasible only for, say, float and double, I strongly believe it should be templated upon the floating type, with specializations if need be. The C-library-in-C++-clothing approach is just plain wrong, IMHO.
I completely agree with this. It makes no sense to have a C++ library that does not use the full strength of the language.

| -----Original Message-----
| From: boost-bounces@lists.boost.org
| [mailto:boost-bounces@lists.boost.org] On Behalf Of Deane Yang
| Sent: 05 November 2004 16:02
| To: boost@lists.boost.org
| Subject: [boost] Re: Math functions - requirements 'spec'
|
| Hubert Holin wrote:
| >
| > I must say I *strongly* disagree with having code which is C
| > compatible, mainly because this will greatly hamper genericity
| > (or at least convenient and safe parametrisation of code).
| >
| > Even if the code turns out to be feasible only for, say, float and
| > double, I strongly believe it should be templated upon the floating
| > type, with specializations if need be. The C library in C++ clothing
| > approach is just plain wrong, IMHO.
|
| I completely agree with this. It makes no sense to have a C++ library
| that does not use the full strength of the language.

This view has already been expressed several times - but we have to face the fact that the C99 and Walter Brown's functions are already in TR-1 to achieve C compatibility. I consider it essential to follow their example. (Perhaps you should check PJP's reasoning on this.)

So, despite the fact that I agree with you, I feel we must be pragmatic and face the facts.

If I don't get agreement on this before I start, there is no point in continuing, as the code will be rejected on review.

Paul

At Saturday 2004-11-06 10:11, you wrote:
| -----Original Message-----
| From: boost-bounces@lists.boost.org
| [mailto:boost-bounces@lists.boost.org] On Behalf Of Deane Yang
| Sent: 05 November 2004 16:02
| To: boost@lists.boost.org
| Subject: [boost] Re: Math functions - requirements 'spec'
|
| Hubert Holin wrote:
| >
| > Even if the code turns out to be feasible only for, say, float and
| > double, I strongly believe it should be templated upon the floating
| > type, with specializations if need be. The C library in C++ clothing
| > approach is just plain wrong, IMHO.
|
| I completely agree with this. It makes no sense to have a C++ library
| that does not use the full strength of the language.
This view has already been expressed several times
- but we have to face the fact that C99 and Walter Brown's functions are already in TR-1 to achieve C compatibility. I consider it essential to follow their example.
F*** C compatibility!!! (I gotta go make that bumper sticker suCks (with the C in a different color)) The language should have died a decade ago. In case anyone else doesn't get it, C++ is simply a better language. Staying tied to a dinosaur is foolish.
(Perhaps you should check PJP's reasoning on this).
So, despite the fact that I agree with you, I feel we must be pragmatic and face the facts.
If I don't get agreement on this before I start, there is no point in continuing as the code will be rejected on review.
Paul
_______________________________________________ Unsubscribe & other changes: http://lists.boost.org/mailman/listinfo.cgi/boost
Victor A. Wagner Jr. http://rudbek.com The five most dangerous words in the English language: "There oughta be a law"

"Victor A. Wagner Jr." <vawjr@rudbek.com> wrote in message news:6.1.2.0.2.20041106152208.048c5c70@mail.rudbek.com...

| >This view has already been expressed several times
| >- but we have to face the fact that C99 and Walter Brown's functions
| >are already in TR-1 to achieve C compatibility.
| >I consider it essential to follow their example.
|
| F*** C compatibility!!! (I gotta go make that bumper sticker suCks
| (with the C in a different color))
| The language should have died a decade ago.
| In case anyone else doesn't get it, C++ is simply a better
| language. Staying tied to a dinosaur is foolish.

It might be worth looking at what benefits we can get out of a templated version. Would it, for example, be possible to use the code with a big_floats class?

At any rate, wouldn't it be possible to have a genuine C++ version with exceptions and all, and then provide simple wrappers for C compatibility:

extern "C" { void foo( ... ); }

?

-Thorsten

| -----Original Message-----
| From: boost-bounces@lists.boost.org
| [mailto:boost-bounces@lists.boost.org] On Behalf Of Thorsten Ottosen
| Sent: 07 November 2004 11:08
| To: boost@lists.boost.org
| Subject: [boost] Re: Re: Math functions - requirements 'spec'
|
| | The language should have died a decade ago.

IMO it should never have been invented!

| | In case anyone else doesn't get it, C++ is simply a better
| | language. Staying tied to a dinosaur is foolish.

But we already are :-((

| It might be worth looking at what benefits we can get out of
| a templated version.
|
| Would it, for example, be possible to use the code with a
| big_floats class?

Don't forget that many functions are simply evaluations of polynomial functions - whose coefficients depend on the floating-point (mainly significand) bit count. So there already exist coefficients for Cephes 128-bit reals, and for arbitrary precision of 100 decimal digits, for example, for which the C code might call

big_double normal_quantile(big_double p); // Quantile of probability p.

but C++ would be able to choose this using a templated version.

Suggest you look at Walter Brown's and PJP's TR-1?

Paul

Paul A Bristow Prizet Farmhouse, Kendal, Cumbria UK LA8 8AB +44 1539 561830 +44 7714 330204 mailto: pbristow@hetp.u-net.com

Somewhere in the E.U., le 08/11/2004

Bonjour

In article <E1CQU5V-0007U7-00@he201war.uk.vianw.net>, "Paul A Bristow" <pbristow@hetp.u-net.com> wrote:
| -----Original Message-----
| From: boost-bounces@lists.boost.org
| [mailto:boost-bounces@lists.boost.org] On Behalf Of Deane Yang
| Sent: 05 November 2004 16:02
| To: boost@lists.boost.org
| Subject: [boost] Re: Math functions - requirements 'spec'
|
| I completely agree with this. It makes no sense to have a C++ library
| that does not use the full strength of the language.
This view has already been expressed several times
- but we have to face the fact that C99 and Walter Brown's functions are already in TR-1 to achieve C compatibility.
Compatibility is a nice thing, but I fear in this case it brings more woe than weal. I would even go so far as to say that in this case we would have been better served with formal provisions to make linking with FORTRAN do the right thing... And let's not forget language support for Inf and NaN (isnan, isinf, ...) :-) .
I consider it essential to follow their example.
It is never necessary to follow other people's mistakes...
(Perhaps you should check PJP's reasoning on this).
So, despite the fact that I agree with you, I feel we must be pragmatic and face the facts.
If I don't get agreement on this before I start, there is no point in continuing as the code will be rejected on review.
Paul
In this case, I believe that, perhaps, normalization is but a secondary goal. We *absolutely* need a proper C++ stat library, one which is as cross-platform as can be. We do not need a C library (it already exists!); we could, platform permitting, even use a FORTRAN one, if we were so desperate to use any (good) stat library.

Let's build it, and they will come! And if they do not come, then at least we can actually work, and work in peace.

As others (Thorsten Ottosen is the only one I can find now) in this thread have suggested, you may want to add an additional header and wrappers for C compatibility, but this is by no means a necessity.

Merci

Hubert Holin

On Thu, 4 Nov 2004 13:35:50 -0000, Paul A Bristow <pbristow@hetp.u-net.com> wrote:
Following the view of C and C++ Working groups at Redmond that a working implementation of my proposal for math functions was a necessary condition for consideration for a TR-2 standard,
I wasn't there for the discussion at the C standardization committee meeting, but I was there for the discussion in the C++ library working group. My interpretation of the discussion wasn't that a working implementation was necessary, but that the working group was skeptical about this proposal, period.

Yes, the main reason for skepticism was that the implementation effort/user benefit ratio was perceived to be too large. So yes, an implementation might be one of the things that would help to change some people's minds. But if that's the goal you're setting yourself for an implementation, then it dictates the kind of work you'll need to do: you'll need to show that it's possible to produce a high-quality implementation (let's say: maximum error of at most a few ulps over the entire range of every function) without heroic effort.

I know that may seem like a tall order, but if your goal is to change the committees' minds then you need to understand the reasons people thought what they did. --Matt

| -----Original Message-----
| From: boost-bounces@lists.boost.org
| [mailto:boost-bounces@lists.boost.org] On Behalf Of Matt Austern
| Sent: 05 November 2004 18:22
| To: boost@lists.boost.org
| Subject: Re: [boost] Math functions - requirements 'spec'
|
| Yes, the main reason for skepticism was that the implementation
| effort/user benefit ratio was perceived to be too large. <snip>
| you'll need to show that it's possible to produce a high
| quality implementation (let's say: maximum error of at most a few ulps
| for the entire range of every function) without heroic effort. I know
| that may seem like a tall order, but if your goal is to change the
| committees' minds then you need to understand the reasons people
| thought what they did.

I'm sorry - and completely baffled - that you are so negative about this proposal.

Are you saying that statistics is not something that normal people do? When I presented it to the UK C++ Panel discussion, explaining that you had to leave C++ to use the statistics functions, one person said, "Oh yes - we do just that." Another said, "Well, I did my own - and it was tricky" - implying he was less than certain of the accuracy of his result. For something everyone agrees is not simple, the last thing that we should be encouraging is for every Tom, Dick and Harry to roll his own!

If you expect a result within a few ulps for all functions, then for some you are certain to be disappointed - even WITH heroic effort! It is impossible without using a system with much higher precision than the native one (say, 100-ish decimal digits to achieve a 53-bit significand result), and that is going to be far too big and far too slow for the users.

But I believe strongly that "at most a few ulps" is entirely the wrong objective. When the existing Standards make ABSOLUTELY NO accuracy requirements, I find this a surprising target.

If Quality is Fitness-for-Purpose, then a much lower accuracy is entirely acceptable in the Real World. The loss of even 3 decimal digits of precision in the incomplete beta still makes a negligible difference to the probability calculated (which has a much greater uncertainty because of the sensitivity to physical measurement and degrees of freedom).

Perhaps you can elaborate on why you believe such a high accuracy is even desirable? Perhaps other potential users can give their views?

Paul

PS If it is so difficult and unimportant, why do all the other math packages (even VB) provide a _useful_ implementation?

Paul A Bristow Prizet Farmhouse, Kendal, Cumbria UK LA8 8AB +44 1539 561830 +44 7714 330204 mailto: pbristow@hetp.u-net.com

"Paul A Bristow" <pbristow@hetp.u-net.com> wrote in message news:E1CRF0A-00014Y-00@he201war.uk.vianw.net...

| But I believe strongly that "at most a few ulps" is entirely the wrong
| objective. When the existing Standards make ABSOLUTELY NO accuracy
| requirements, I find this a surprising target.
|
| If Quality is Fitness-for-Purpose, then a much lower accuracy is
| entirely acceptable in the Real World.

I agree with this. Probabilities are often accompanied by a significant second-order uncertainty. For example, in many real applications it is hard to justify a probability with a precision like 10.3%.

| The loss of even 3 decimal digits of precision in the incomplete beta
| still makes a negligible difference to the probability calculated
| (which has a much greater uncertainty because of the sensitivity
| to physical measurement and degrees of freedom).

not to mention when probabilities are expert judgments based on a sample size of 50. I'm with you here, Paul.

-Thorsten

On Mon, 8 Nov 2004 19:16:49 -0000, Paul A Bristow <pbristow@hetp.u-net.com> wrote:
| -----Original Message-----
| From: boost-bounces@lists.boost.org
| [mailto:boost-bounces@lists.boost.org] On Behalf Of Matt Austern
| Sent: 05 November 2004 18:22
| To: boost@lists.boost.org
| Subject: Re: [boost] Math functions - requirements 'spec'
|
| My interpretation of the discussion wasn't that a working
| implementation was necessary, but that the working group was skeptical
| about this proposal, period.
| <snip>
I'm sorry - and completely baffled - that you are so negative about this proposal.
Are you saying that statistics is not something that normal people do?
To some extent it's important to be careful with the use of the word "you". For the most part I'm not reporting my own views; mostly I'm reporting what happened at the Redmond C++ meeting, because I think it's important to understand what the discussion was like. (I don't know what, if anything, happened at the Redmond C committee meeting.)

Your earlier message suggested that you thought the tone of that discussion was "we want there to be an implementation before we accept this proposal", but I think a more accurate characterization would be "in light of experience implementing the TR1 special functions, we think that doing a good job on a proposal for still more special functions would be extremely hard, and we're not convinced that it's a good cost/benefit tradeoff." You don't have to try to change people's minds about the potential benefits of standardizing these functions, of course, but if changing people's minds is your goal then you need to understand what the participants in this discussion really did say. (And when I say "people", by the way, that's not a euphemism for "me". On this particular subject, I listened more than I talked.)

Some possible alternatives that might change people's minds:
- Show that doing a good job, i.e. implementing these functions with the same sort of accuracy as a high-quality implementation achieves for sqrt or cos or lgamma or cyl_bessel_j, is easier than people thought.
- Convince people that, in the case of these particular functions you're proposing, they should lower their standards for what a good job is.
- Convince people that these functions are more important than they thought, so that even if folks think this is an extremely difficult implementation task they'll still think that the cost/benefit ratio is worth it.

If you're going for point 3 then you might want to consider broadening your scope instead of narrowing it.
One point that some people made at the Redmond meeting is that most people who want to do statistics use a statistical package, i.e. something that operates on data sets, rather than just using bare transcendental functions and writing all the data set manipulation themselves. It's possible that people would see a different cost/benefit tradeoff in a statistical package than in a collection of more special functions. --Matt

| -----Original Message-----
| From: boost-bounces@lists.boost.org
| [mailto:boost-bounces@lists.boost.org] On Behalf Of Matt Austern
| Sent: 08 November 2004 20:17
| To: boost@lists.boost.org
| Subject: Re: [boost] Math functions - requirements 'spec'
|
| On Mon, 8 Nov 2004 19:16:49 -0000, Paul A Bristow
| <pbristow@hetp.u-net.com> wrote:
| >
| > <snip>
|
| Some possible alternatives that might change people's minds:
| - Show that doing a good job, i.e. implementing these functions with
| the same sort of accuracy as a high-quality implementation achieves
| for sqrt or cos or lgamma or cyl_bessel_j, is easier than people
| thought.
| - Convince people that, in the case of these particular functions
| you're proposing, they should lower their standards for what a good
| job is.
| - Convince people that these functions are more important than they
| thought, so that even if folks think this is an extremely difficult
| implementation task they'll still think that the cost/benefit ratio is
| worth it.
|
| If you're going for point 3 then you might want to consider broadening
| your scope instead of narrowing it. One point that some people made
| at the Redmond meeting is that most people who want to do statistics
| use a statistical package, i.e. something that operates on data sets,
| rather than just using bare transcendental functions and writing all
| the data set manipulation themselves. It's possible that people would
| see a different cost/benefit tradeoff in a statistical package than in
| a collection of more special functions.

Well, I agree people might like a 'package', but I find this a surprising suggestion considering the sorts of things currently standardised. The functions are not far from what one would actually use, whereas getting the data in place is what the STL containers are 'about'; and yet there are many different containers for data, from C arrays upwards.
For a specific simple example, cribbing a bit from NR in C++: if we get two sets of measurements into two arrays data1 and data2, their sizes into size1 and size2, their means and variances into mean1 & mean2 and variance1 & variance2, and the 'degrees of freedom' df using published formulae (and probably std::algorithm?) (other 'Standard' functions perhaps, but pretty trivial):

pooled_variance = ((size1-1)*variance1 + (size2-1)*variance2)/df;
double t = (mean1 - mean2)/sqrt(pooled_variance*(1./size1 + 1./size2));

probability_of_significantly_different = incomplete_beta(df, 0.5, df/(df + t * t));

if(probability_of_significantly_different > 0.95)
{
  cout << "Batches differ!" << endl; // and/or Action!
}

(In contrast, stats packages report a lot of clutter, intelligible mainly to statisticians, but largely uninformative to users.)

Nonetheless, many thanks for your thoughts and helpful suggestions. I'll see what I can do.

Paul

Paul A Bristow Prizet Farmhouse, Kendal, Cumbria UK LA8 8AB +44 1539 561830 +44 7714 330204 mailto: pbristow@hetp.u-net.com

"Paul A Bristow" <pbristow@hetp.u-net.com> wrote in message news:E1CRYy5-00066d-DY@he203war.uk.vianw.net...
[snip]
| if we get two sets of measurements in two arrays data1 and data2,
| and get their sizes into size1 and size2
| and their means and variances into mean1 & mean2 and variance1 & variance2,
| and the 'degrees of freedom' using published formulae (and probably
| std::algorithm?)
| (other 'Standard' functions perhaps - but pretty trivial)
|
| pooled_variance = ((size1-1)*variance1 + (size2-1)*variance2)/df;
| double t = (mean1 - mean2)/sqrt(pooled_variance*(1./size1 + 1./size2));
|
| probability_of_significantly_different = incomplete_beta(df, 0.5, df/(df + t * t));
|
| if(probability_of_significantly_different > 0.95)
| {
|   cout << "Batches differ!" << endl; // and/or Action!
| }
|
| (In contrast, stats packages report a lot of clutter,
| intelligible mainly to statisticians, but largely uninformative to users).
|
| Nonetheless, many thanks for your thoughts and helpful suggestions.

I'm already working on the STL "package", but of course can't get anywhere without your functions :-) So don't worry about the package for now. There are some sample statistics in the sandbox in the /stat/ directory.

-Thorsten

On Tue, 9 Nov 2004 16:36:02 -0000, Paul A Bristow <pbristow@hetp.u-net.com> wrote:
Nonetheless, many thanks for your thoughts and helpful suggestions.
Actually, I was thinking that I should really tell you what would be my most helpful suggestion: *if* your goal is to get this standardized, as opposed to producing a library for Boost users, then what you should probably do first is get in touch with P. J. Plauger and convince him of one of two things: either (a) this won't be as hard to implement as he thinks it is; or (b) even if it is that hard, it'll be worth it.

What we heard from him in Redmond was (paraphrased): Dinkumware is one of the few companies in the world with the expertise to do high-quality implementations of mathematical special functions, it has finally finished implementing the special functions from TR1, and it took about a solid year of work. Having looked at your proposal, Dinkumware thinks that implementing it would take at least another year. Numbers like that scare people.

If you can convince people that Dinkumware's numbers are off by a factor of ten, then people might be more enthusiastic. Ultimately that will involve a conversation between you and P. J. Plauger, so it's probably better for you to talk to him directly instead of having me try to be a messenger.

I have for some time been in direct contact with PJP and fully understand his state of 'function fatigue', which makes him reluctant to do another lot with an apparently uncertain commercial future when someone else might do a bit of spadework at least. His standards are notoriously high ;-) So I can fully understand that he would like someone else to have a stab at 'the rest of the functions'. I don't expect to match the quality he has produced, but it might still be 'Fit-for-Purpose' and prove that the need exists and can be filled. (I'd also note that PJP has made the job significantly more time-consuming for everyone, at least by demanding C and C++ versions!)

So I hope to produce a sample function for Boosters' delectation in a while - but don't hold your breath.

Paul

Paul A Bristow Prizet Farmhouse, Kendal, Cumbria UK LA8 8AB +44 1539 561830 +44 7714 330204 mailto: pbristow@hetp.u-net.com

| -----Original Message-----
| From: boost-bounces@lists.boost.org
| [mailto:boost-bounces@lists.boost.org] On Behalf Of Matt Austern
| Sent: 09 November 2004 18:48
| To: boost@lists.boost.org
| Subject: Re: [boost] Math functions - requirements 'spec'
|
| On Tue, 9 Nov 2004 16:36:02 -0000, Paul A Bristow
| <pbristow@hetp.u-net.com> wrote:
|
| > Nonetheless, many thanks for your thoughts and helpful suggestions.
|
| Actually, I was thinking that I should really tell you what would be
| my most helpful suggestion: *if* your goal is to get this
| standardized, as opposed to producing a library for Boost users, then
| what you should probably do first is get in touch with P. J. Plauger
| and convince him of one of two things: either (a) this won't be as
| hard to implement as he thinks it is; or (b) even if it is that hard,
| it'll be worth it.
|
| What we heard from him in Redmond was (paraphrased): Dinkumware is one
| of the few companies in the world with the expertise to do
| high-quality implementations of mathematical special functions, it has
| finally finished implementing the special functions from TR1, and it
| took about a solid year of work. Having looked at your proposal,
| Dinkumware thinks that implementing it would take at least another
| year. Numbers like that scare people.
|
| If you can convince people that Dinkumware's numbers are off by a
| factor of ten, then people might be more enthusiastic. Ultimately
| that will involve a conversation between you and P. J. Plauger, so
| it's probably better for you to talk to him directly instead of having
| me trying to be a messenger.
participants (6)
- Deane Yang
- Hubert Holin
- Matt Austern
- Paul A Bristow
- Thorsten Ottosen
- Victor A. Wagner Jr.