
________________________________
From: "Simonson, Lucanus J" <lucanus.j.simonson@intel.com>
To: "boost@lists.boost.org" <boost@lists.boost.org>
Sent: Wednesday, August 31, 2011 7:46 PM
Subject: Re: [boost] Name and Namespace for Potential Boost Extended Floating-Point

<snip>

Thank you, Luke. I really appreciate your clear and detailed response.

> Please don't call it real. An mp float type is not at all a real number
> any more than a regular float type is a real number. With an
> extended-precision floating type you cannot represent irrationals, and
> you cannot represent most rationals exactly either. What it is is a
> floating-point type with a variable, but still finite, number of bits.
> All computers have finite-sized memory and are required to run programs
> in a finite amount of time. That is why these numerical types are
> properly called multiple precision and not infinite precision. I'd like
> to see the name be mp_float. I prefer:
>
>   mp_float
>   mp_int
>   mp_rational

Yes, I see your point here. In fact, it's quite illuminating. The use of "float" in the name is probably better than "real". Please remember, though, that float, int and rational are enough to keep us all busy for now and later and on and on... We are only doing the floating-point type now.

> Obviously we need the mp_ prefix if we use float and int, since these
> are keywords without it. Also, if the user puts
> using namespace boost::multiprecision then they will benefit from the
> prefix. I prefer multiprecision to mp, since the library name, I'm
> guessing, would be multiprecision and it is conventional to have the
> namespace be the same as the library name.

Affirmative.

> For me the rational data type is very important and should wrap the gmp
> mpq_type. It would be nice if the library could use mp_int with
> Boost.Rational to implement its own standalone mp_rational datatype,
> but I would prefer to use the gmp type whenever possible.

Again, yes. I personally have not programmed the rational type --- just ints and floats. The long-term idea is to understand the algorithms *and* to program portable code, thereby avoiding restrictive GNU licensing issues (-lgmp,... barf).

> By the way, I have put extensive thought into how to build a good
> expression template system for dealing with multi-precision arithmetic
> and would like to be involved in some of the discussion of the design
> of the system.

My stuff is in the sandbox under "e_float". John's stuff is in "big_number". I don't know much and I'm just doing the back-end. But I modestly welcome any assistance, particularly of an architectural nature.

> In particular, the most important thing about providing expression
> templates for multi-precision is eliminating allocations and
> deallocations. The allocations can easily dominate the runtime of a
> multiprecision algorithm. There are many options and considerations for
> how to optimize the allocations, such as object pools, thread safety,
> etc. Anyone who uses multiprecision numerical data types in the same
> way they would use built-in numerical data types is writing code that
> runs significantly slower than the equivalent C code that doesn't
> abstract away allocation and deallocation and forces the programmer to
> choose where and when to allocate so that they choose the proper place.
> I saw large speedups in my Polygon library by recycling gmpq_type
> variables rather than declaring them in the innermost scope in which
> they were used.

I hear you.
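Just to make the recycling point concrete, here is a minimal sketch of the pattern (written against GMP's C interface; the function names and the little scratch struct are mine for illustration, not the actual Polygon code):

  #include <gmp.h>

  // Naive: init/clear inside the innermost scope, so every call pays
  // for initialization, limb (re)allocation and deallocation.
  void accumulate_slow(mpq_t sum, long num, unsigned long den)
  {
      mpq_t term;
      mpq_init(term);                // per-call init (and allocation)
      mpq_set_si(term, num, den);
      mpq_canonicalize(term);
      mpq_add(sum, sum, term);
      mpq_clear(term);               // per-call free
  }

  // Recycled: the caller owns a scratch variable that is initialized
  // once; its limb storage grows once and is then reused, so the hot
  // loop does no allocation at all.
  struct mpq_scratch
  {
      mpq_t term;
      mpq_scratch()  { mpq_init(term); }
      ~mpq_scratch() { mpq_clear(term); }
  };

  void accumulate_fast(mpq_t sum, mpq_scratch& s, long num, unsigned long den)
  {
      mpq_set_si(s.term, num, den);  // reuses existing storage
      mpq_canonicalize(s.term);
      mpq_add(sum, sum, s.term);
  }

The same idea scales up to a small object pool when the scratch variables have to be shared across call sites, with the thread-safety caveats Luke mentions.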
But I beg to augment this critical point.

Multiple-precision mathematical algorithms spend roughly 80-90% of their computational time within the multiplication routine. If you can get a fast multiplication algorithm and subsequently apply it to Newton iteration for the inverse and for roots, then you've got a good package. If you make it portable and adherent to C++ semantics, well, then you're finally finished with this mess.

If you investigate the MFLOPS of MP math, even a modest 100 digits is maybe 100-200 times slower than double precision. The rat race lies within the multiplication routine, not the allocation mechanisms. Fixed-size arrays and a basic custom allocator above size = N can solve the allocation performance bottleneck (to within certain limits). But fast multiply and C++ adherence ultimately elevate the package from the realm of a hack to a real, specifiable, high-performance type.

Thanks, Luke
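P.S. To make the Newton remark concrete: for the inverse I mean the multiply-only recurrence x <- x * (2 - d * x), which doubles the number of correct digits per pass. A schematic sketch (templated so it compiles with double, or with any mp type that offers *, - and a conversion to double; the function and parameter names are my own, not e_float's API):

  // Newton-Raphson reciprocal of d: uses only multiplication and
  // subtraction, so its cost is dominated by the multiply routine.
  template <typename T>
  T reciprocal(const T& d, int target_digits)
  {
      // Seed with a double-precision estimate: ~16 correct digits.
      T x = T(1.0 / static_cast<double>(d));

      // Quadratic convergence: the correct digit count doubles each
      // pass, so only about log2(target_digits / 16) passes are needed.
      for (int digits = 16; digits < target_digits; digits *= 2)
      {
          x = x * (T(2) - d * x);
      }
      return x;
  }

Roots go the same way, e.g. x <- x * (3 - d * x * x) / 2 converges to 1/sqrt(d), with only a halving rather than a general division.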
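P.P.S. By "fixed-size arrays and a basic custom allocator above size = N" I mean storage roughly along these lines (just a sketch with made-up names, and with a plain std::vector standing in for the custom allocator):

  #include <cstddef>
  #include <cstdint>
  #include <vector>

  // Digit buffer: the first N limbs live inside the object itself, so
  // moderate precisions never touch the heap; only precisions beyond N
  // fall back to dynamic allocation.
  template <std::size_t N>
  class limb_storage
  {
  public:
      explicit limb_storage(std::size_t limbs) : size_(limbs)
      {
          if (limbs > N)
              overflow_.resize(limbs - N);   // heap only above size = N
      }

      std::uint32_t& operator[](std::size_t i)
      {
          return (i < N) ? fixed_[i] : overflow_[i - N];
      }

      std::size_t size() const { return size_; }

  private:
      std::uint32_t              fixed_[N];  // no allocation up to N limbs
      std::vector<std::uint32_t> overflow_;  // fallback beyond N
      std::size_t                size_;
  };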