Is there interest in Computable Calculus?
Hi, I’m interested in developing basic support for “real numbers” for computable calculus applications.

In computable calculus, reals have infinite precision (certain restrictions may apply). The usual approach is to represent numbers as functions producing digits. An arithmetic operation produces a new function from the operand functions and some simple expression manipulations.

When a number needs to be evaluated (e.g., for comparisons where expression rules are unknown), enough digits to reach an answer are generated using those functions. For example, deciding 3.14 < pi requires generating three fractional digits of pi to answer “true”.

The algorithms for arithmetic operations and evaluation are well known, and multiple implementations were developed (mostly in the 90s). However, I couldn’t find an open source implementation and had to write my own while working on a project over the last two years. My preferred reference for the required algorithms is the book: Aberth, Oliver. Computable Calculus. Academic Press, 2001.

In particular, my initial scope would be the 4 arithmetic operations (+, -, *, /) and the comparison operators (<, =) for reals.

Is anyone interested in such a library in Boost?
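(As a rough illustration of the representation, here is a minimal sketch, assuming decimal digits and a single-digit integer part. The names digit_fn, pi_digits and less_than are placeholders of mine, not a proposed interface, and it ignores carry subtleties such as 0.999... = 1.0.)

    #include <functional>

    // A computable real, sketched as a function: position -> decimal digit.
    // Position 0 is the integer part, position k >= 1 is the k-th
    // fractional digit (simplified: one integer digit, no sign handling).
    using digit_fn = std::function<int(int)>;

    // Hypothetical digit source for pi, reading a small precomputed
    // table; a real implementation would compute digits on demand.
    int pi_digits(int k) {
        static const int table[] = {3, 1, 4, 1, 5, 9, 2, 6, 5, 3};
        return k < 10 ? table[k] : 0;
    }

    // a < b: expand digits until the operands differ.  This terminates
    // only when a != b; equality of computable reals is undecidable in
    // general, so a practical implementation bounds the expansion.
    bool less_than(const digit_fn& a, const digit_fn& b,
                   int max_digits = 1000) {
        for (int k = 0; k < max_digits; ++k) {
            int da = a(k), db = b(k);
            if (da != db) return da < db;
        }
        return false;  // undecided within the budget
    }

For the 3.14 < pi example:

    digit_fn approx = [](int k) {            // 3.140000...
        static const int table[] = {3, 1, 4};
        return k < 3 ? table[k] : 0;
    };
    bool r = less_than(approx, pi_digits);   // 3,1,4 agree; then 0 < 1 -> true

The point is that the comparison, not the construction of the numbers, is what pays for the digits.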
On Jan 12, 2016, at 7:40 PM, Damian Vicino wrote:
I’m interested in developing basic support for “real numbers” for computable calculus applications. In particular, my initial scope would be the 4 arithmetic operations (+, -, *, /) and the comparison operators (<, =) for reals.
Is anyone interested in such a library in Boost?
Would this be an appropriate step toward more mathematically correct real number types for general use, or would it be limited to computable calculus applications? For example, are there underlying algorithms for common mathematical functions? I assume abs() is easy, but what about things like exp(), log() or pow()? In scientific computing I commonly run into the need for types that correspond more closely to mathematical reality, and I feel we need much better support along those lines. If this will lead to that, I’m all for it. Cheers, Brook
On Jan 12, 2016 18:40, "Damian Vicino" wrote:
Hi, I’m interested in developing basic support for “real numbers” for computable calculus applications.
In computable calculus, reals have infinite precision (certain restrictions may apply). The usual approach is to represent numbers as functions producing digits. An arithmetic operation produces a new function from the operand functions and some simple expression manipulations.
When a number needs to be evaluated (e.g., for comparisons where expression rules are unknown), enough digits to reach an answer are generated using those functions. For example, deciding 3.14 < pi requires generating three fractional digits of pi to answer “true”.
The algorithms for arithmetic operations and evaluation are well known, and multiple implementations were developed (mostly in the 90s). However, I couldn’t find an open source implementation and had to write my own while working on a project over the last two years. My preferred reference for the required algorithms is the book: Aberth, Oliver. Computable Calculus. Academic Press, 2001.
In particular, my initial scope would be the 4 arithmetic operations (+, -, *, /), and the comparison operators (<, =) for reals.
Is anyone interested in such a library in Boost?
Yes! I really want this.
Damian Vicino wrote:
I'm interested in developing basic support for "real numbers" for computable calculus applications.
In computable calculus, reals have infinite precision (certain restrictions may apply). The usual approach is to represent numbers as functions producing digits.
Considering, for example, the Delaunay condition:

    template <typename T>
    bool delaunay(T ax, T ay, T bx, T by, T cx, T cy, T dx, T dy) {
        T cos_abc = (ax-bx)*(cx-bx) + (ay-by)*(cy-by);
        T cos_cda = (cx-dx)*(ax-dx) + (cy-dy)*(ay-dy);
        if (cos_abc >= 0 && cos_cda >= 0) return false;
        if (cos_abc < 0 && cos_cda < 0) return true;
        T sin_abc = (ax-bx)*(cy-by) - (cx-bx)*(ay-by);
        T sin_cda = (cx-dx)*(ay-dy) - (ax-dx)*(cy-dy);
        T sin_sum = sin_abc * cos_cda + cos_abc * sin_cda;
        return sin_sum < 0;
    }

I'd be interested to know how this approach of effectively "lazy evaluation" compares to using eagerly-evaluated arbitrary-precision types (e.g. Boost.Multiprecision's floating-point types), i.e. in terms of performance and any complexity introduced to the code. For what sorts of problems is it better? How much work is actually saved by laziness in practice?

Regards, Phil.
Considering, for example, the Delaunay condition:

    template <typename T>
    bool delaunay(T ax, T ay, T bx, T by, T cx, T cy, T dx, T dy) {
        T cos_abc = (ax-bx)*(cx-bx) + (ay-by)*(cy-by);
        T cos_cda = (cx-dx)*(ax-dx) + (cy-dy)*(ay-dy);
        if (cos_abc >= 0 && cos_cda >= 0) return false;
        if (cos_abc < 0 && cos_cda < 0) return true;
        T sin_abc = (ax-bx)*(cy-by) - (cx-bx)*(ay-by);
        T sin_cda = (cx-dx)*(ay-dy) - (ax-dx)*(cy-dy);
        T sin_sum = sin_abc * cos_cda + cos_abc * sin_cda;
        return sin_sum < 0;
    }

I'd be interested to know how this approach of effectively "lazy evaluation" compares to using eagerly-evaluated arbitrary-precision types (e.g. Boost.Multiprecision's floating-point types), i.e. in terms of performance and any complexity introduced to the code. For what sorts of problems is it better? How much work is actually saved by laziness in practice?
Short answer: the complexity depends on the definitions of ax, ay, bx, by, cx, cy, dx and dy. A longer answer follows (with a lot of oversimplification of the algorithms).

Suppose a function pi(int digit) provides the individual digits of pi: pi(0) = 3, pi(1) = 1, pi(2) = 4, ... and e(digit) provides the digits of e: e(0) = 2, e(1) = 7, ... For simplicity in the example, assume every number has its decimal point after the first digit, so we do not need to track it. Then the number pi is sum(pi(k) * 10^(-k)) and e is sum(e(k) * 10^(-k)). Rational numbers can be expanded easily from their numerator and denominator. Truncating the sum at digit n gives an error of at most 10^(-n), one unit in the last computed digit.

Now we want to evaluate the expression pi < 3/1.

Iteration 0: pi(0) = 3, (3/1)(0) = 3. Is 3 - 3 = 0? Yes, so we need to expand another digit.
Iteration 1: pi(1) = 1, (3/1)(1) = 0. 1 - 0 > 0, so we can answer false and stop expanding digits.

Arithmetic operations are thought of as interval operations defined to produce digits based on their operands. For example, (e + e)(0) = 5 is computed using the first two digits of e, not just the first one.

Iteration 0: e(0) = 2, meaning e is in [2, 3). [2, 3) + [2, 3) = [4, 6), so the result could start with 4 or 5; we require a second iteration.
Iteration 1: e(1) = 7, meaning e is in [2.7, 2.8). [2.7, 2.8) + [2.7, 2.8) = [5.4, 5.6), so we are certain about the first digit (though not yet the second).

Space complexity: addition only keeps pointers to the functions received as operands. Comparison requires expanding the operands until it can be decided; if an operand is a sum, the cost is linear in the size of the sum.

Time complexity: addition is the assignment of two pointers, one per operand. Comparison costs whatever it takes to expand enough digits to decide.

Disclaimer 1: in practice it is not necessary to compute every digit of a number from scratch. Numbers do not change, large databases of interesting numbers with millions of digits exist, and where they are not available, computing once is enough if some kind of cache or internal database is used.

Going back to the comparison with multiprecision: multiprecision will definitely be faster, assuming you know how many digits you need. However, failing that estimation has a cost. If you overestimate, you pay in memory and time. If you underestimate, you get wrong results. That is what usually happens with floating point: you have 23 bits, so pi is 23 bits, and if the values you are evaluating differ at bit 50 (not represented) you believe they are equal when they are not. As for space complexity, one approach depends on the quantity of variables, the other on how they are combined by operators.

Disclaimer 2: for arbitrary numbers (something you really want to avoid) it is proven that equality comparison is undecidable, so pi_version_1 == pi_version_2 will never finish executing. For this reason it is common to set a max_expansion constant; for most jobs a few thousand digits will do (which is orders of magnitude beyond what you can usually get with other approaches). If an operation reaches this limit without deciding, other approaches would most likely have produced an inexact result anyway.
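(To make the interval reading of addition concrete, here is a minimal sketch with the same simplifications as the earlier one: decimal digits, single-digit integer part. digit_fn, prefix and add are placeholder names of mine, and e_digits reads a small table as per Disclaimer 1. Note that this naive decimal scheme can fail to settle a digit when the sum sits exactly on a digit boundary, e.g. 1/3 + 2/3 = 1.000...; real implementations avoid that with signed-digit or interval representations.)

    #include <functional>

    using digit_fn = std::function<int(int)>;

    // Hypothetical digit source for e, from a table (see Disclaimer 1).
    int e_digits(int k) {
        static const int table[] = {2, 7, 1, 8, 2, 8, 1, 8, 2, 8};
        return k < 10 ? table[k] : 0;
    }

    // First n digits packed into an integer, scaled by 10^(n-1):
    // two digits of e give 27, meaning e lies in [2.7, 2.8).
    long long prefix(const digit_fn& x, int n) {
        long long v = 0;
        for (int k = 0; k < n; ++k) v = v * 10 + x(k);
        return v;
    }

    // Digit k of (a + b).  Knowing n digits of each operand confines
    // a + b to a half-open interval of width 2 * 10^-(n-1); we expand
    // n until the whole interval agrees on digit k, then emit it.
    digit_fn add(digit_fn a, digit_fn b) {   // just stores two "pointers"
        return [a, b](int k) {
            for (int n = k + 1; ; ++n) {
                long long lo = prefix(a, n) + prefix(b, n);  // x 10^(n-1)
                long long scale = 1;                         // 10^(n-1-k)
                for (int i = k + 1; i < n; ++i) scale *= 10;
                if (lo / scale == (lo + 1) / scale)          // digit settled
                    return static_cast<int>((lo / scale) % 10);
            }
        };
    }

Reproducing the (e + e) example:

    digit_fn two_e = add(e_digits, e_digits);
    // two_e(0) == 5, settled only after two digits of e ([5.4, 5.6));
    // two_e(1) == 4, since e + e = 5.436...

Note how add itself only stores its two operands, matching the space/time complexity point above; all digit expansion happens lazily inside the returned closure when someone asks for a digit.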
participants (4)
- Brook Milligan
- Damian Vicino
- Matt Calabrese
- Phil Endecott