[multiprecision] General/design query about which conversions should be implicit/explicit

Folks, I have an open bug report https://svn.boost.org/trac/boost/ticket/10082 that requests that conversions from floating point to rational multiprecision types be made implicit (currently they're explicit).

Now on the one hand the bug report is correct: these are non-lossy conversions, so there's no harm in them being implicit. However, it still feels somewhat wrong to me. The only arguments against that I can come up with are:

1) An implicit conversion lets you assign values such as 0.1 to a rational (which actually yields 3602879701896397/36028797018963968, not 1/10), whereas making the conversion explicit at least forces you to use a cast (or an explicit construction).

2) Floating point values can result in arbitrarily large integer parts in the rational, effectively running the machine out of memory. Arguably the converting constructor should guard against that, though frankly exactly how is less clear :-(

Does anyone else have any strong views or insights into this?

Thanks in advance, John.
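[A minimal sketch of the 0.1 pitfall John describes, using the library's cpp_rational type; the static_cast reflects the current explicit conversion, and the expected output is shown in comments:]

    #include <boost/multiprecision/cpp_int.hpp>
    #include <iostream>

    int main()
    {
        using boost::multiprecision::cpp_rational;

        // The conversion from double is currently explicit, so a cast is needed:
        cpp_rational r = static_cast<cpp_rational>(0.1);
        // 0.1 rounds to the double 3602879701896397 / 2^55, hence:
        std::cout << r << '\n'; // 3602879701896397/36028797018963968

        cpp_rational exact = cpp_rational(1) / 10; // the value the user probably meant
        std::cout << exact << '\n';                // 1/10
    }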

2014-05-31 13:56 GMT+02:00 John Maddock <jz.maddock@googlemail.com>:
I remember when discussing the interface of std::optional on https://groups.google.com/a/isocpp.org/forum/?fromgroups#!forum/std-proposal... people were against having the conversion from U to optional<T> (where U is convertible to T) because it incurred a run-time cost not immediately visible to the person using the code:

    fun(u); // may last longer than you think

Instead a more verbose/explicit construct was preferred.

Regards, &rzej
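[Roughly the situation Andrzej describes, reduced to a hypothetical Heavy type (not std::optional itself): an implicit converting constructor hides a potentially expensive conversion at the call site:]

    #include <string>

    struct Heavy
    {
        std::string data;
        Heavy(const char* s) : data(s) {} // implicit: allocates and copies
    };

    void fun(const Heavy&) {}

    int main()
    {
        // Looks like a cheap call, but a Heavy temporary is built here,
        // allocating memory on the way: it may last longer than you think.
        fun("quite a long string that will not fit in any small-string buffer");
    }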

On Sat, 31 May 2014, John Maddock wrote:
But the doc says they are implicit.
The problem is with people writing 0.1. If they mean the exact value, they have already lost. What Boost.Multiprecision does later is not that relevant, and it seems wrong to me to penalize a library because some users don't understand the basics (and their program is likely broken for a number of other reasons).
Er, that might be true if you include mpfr numbers in "floating point", but if you only consider double, the maximum size of the numerator is extremely limited. Even for a binary128 it can't be very big (about 2 KB). There could be good reasons for not making it implicit, but I am not convinced by these two. -- Marc Glisse
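[For concreteness, a quick check of Marc's figures, assuming IEEE formats: a finite binary float is m * 2^e with a bounded exponent, so the numerator of its exact rational value is correspondingly bounded:]

    #include <iostream>
    #include <limits>

    int main()
    {
        // The largest finite double is just below 2^1024, so the numerator of
        // its exact rational value fits in roughly 1024 bits = 128 bytes:
        std::cout << std::numeric_limits<double>::max_exponent << '\n'; // 1024

        // For IEEE binary128 the maximum exponent is 16384, i.e. roughly
        // 16384 bits = 2048 bytes: Marc's "about 2 KB".
    }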

If we consider the case of cpp_dec_float and cpp_bin_float, I believe we allow the implicit conversion from built-in floating-point types to the multiple-precision floating-point types. This means that the user must be astute and keenly aware of what is going on in the class. If the floating-point approximation of 0.1 is desired, then the argument (0.1) is used. If the exact value of 1/10 is required, then the quoted argument ("0.1") is required, or creation from 1 and subsequent division by 10.

We discussed this point a few times during the development of Boost.Multiprecision. As far as I recall, we opted for implicit conversions from the built-in floating-point types to the multiple-precision floating-point types, and I think we did this in a conscious fashion.

Wouldn't it be consistent to have the same behavior for built-in floating point to multiple-precision rational? If so, then one would opt for implicit conversions?

Cheers, Chris.
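[Chris's (0.1) versus ("0.1") distinction, illustrated with cpp_dec_float_50; the outputs in the comments are what the two constructions produce:]

    #include <boost/multiprecision/cpp_dec_float.hpp>
    #include <iomanip>
    #include <iostream>
    #include <limits>

    int main()
    {
        using boost::multiprecision::cpp_dec_float_50;

        cpp_dec_float_50 a = 0.1;  // implicit: stores the double approximation of 0.1
        cpp_dec_float_50 b("0.1"); // string: decimal 0.1 carried to 50 digits

        std::cout << std::setprecision(std::numeric_limits<cpp_dec_float_50>::digits10)
                  << a << '\n'  // 0.10000000000000000555111512312578270211815834...
                  << b << '\n'; // 0.1
    }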

On May 31, 2014 7:56:00 AM EDT, John Maddock <jz.maddock@googlemail.com> wrote:
Apparently the problem isn't as bad as you first thought, but implicitly allocating significant amounts of memory is problematic. You could punt and use the preprocessor to control whether the conversion is explicit.

___
Rob
(Sent from my portable computation engine)

Killer argument against implicit. But I can see that a not-at-all-dumb user might prefer otherwise, so as a compromise, could allowing implicit conversions be macro-controlled? (We hit this sort of problem back when using NTL, and patched it to permit implicit conversions, so I can understand both views.)

Paul

---
Paul A. Bristow
Prizet Farmhouse
Kendal UK LA8 8AB
+44 01539 561830
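[A sketch of the preprocessor compromise Rob and Paul suggest; the macro and type names here are hypothetical, not actual Boost.Multiprecision configuration options:]

    #include <iostream>

    // Hypothetical configuration macro: define MY_MP_IMPLICIT_FLOAT_TO_RATIONAL
    // to make the converting constructor implicit.
    #ifdef MY_MP_IMPLICIT_FLOAT_TO_RATIONAL
    #  define MY_MP_MAYBE_EXPLICIT           // expands to nothing: implicit
    #else
    #  define MY_MP_MAYBE_EXPLICIT explicit  // default: conversion stays explicit
    #endif

    struct my_rational
    {
        double value = 0; // stand-in for a real numerator/denominator pair
        my_rational() = default;
        MY_MP_MAYBE_EXPLICIT my_rational(double d) : value(d) {}
    };

    int main()
    {
        my_rational a(0.5);  // direct initialization: always compiles
    #ifdef MY_MP_IMPLICIT_FLOAT_TO_RATIONAL
        my_rational b = 0.5; // copy initialization: only when implicit is enabled
    #endif
        std::cout << a.value << '\n';
    }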
participants (8)

- Andrzej Krzemienski
- Christopher Kormanyos
- John Maddock
- John Maddock
- Marc Glisse
- Paul A. Bristow
- Rob Stewart
- Vicente J. Botet Escriba