... snip ...
Three properties are provably undecidable when the numbers are generated by arbitrary algorithms: irrationality, equality comparison, and comparison against 0.
However, if you restrict the set of numbers participating in an operation to well-known sets, you can answer interesting questions.
What do you mean by restricting the set of numbers? Restricting the expressivity of the digit-generating functions, e.g., providing some basic reals (pi, e, ...) where the only other way to generate new ones is by applying arithmetic operations? In that setting, equality might indeed be decidable, but not by the algorithm you proposed. I think you would have to restrict yourself to symbolic manipulation, most likely by finding some normal form for arithmetic expressions and comparing those. (Btw, that's what I meant by Computer Algebra System in my earlier message.) For clarity, is the goal infinite-precision equality?
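As a deliberately tiny illustration of the normal-form idea (everything below is hypothetical, not anyone's actual implementation): if the only base real is pi and the only operation shown is addition, every value has the shape a*pi + b with rational a and b, and equality reduces to comparing coefficients, which is decidable because pi is irrational.

    #include <numeric>  // std::gcd (C++17)

    // Toy normal form for numbers of the shape a*pi + b with rational a, b.
    // Since pi is irrational, a1*pi + b1 == a2*pi + b2 iff a1 == a2 and
    // b1 == b2, so equality is exact. All names are illustrative.
    struct Rational {
        long num = 0, den = 1;  // after norm(): den > 0, fraction reduced

        Rational norm() const {
            long g = std::gcd(num, den);
            long s = den < 0 ? -1 : 1;
            return {s * num / g, s * den / g};
        }
        friend Rational operator+(Rational x, Rational y) {
            return Rational{x.num * y.den + y.num * x.den, x.den * y.den}.norm();
        }
        friend bool operator==(Rational x, Rational y) {
            x = x.norm(); y = y.norm();
            return x.num == y.num && x.den == y.den;
        }
    };

    struct PiLinear {  // represents the real number a*pi + b
        Rational a, b;
        friend PiLinear operator+(PiLinear x, PiLinear y) {
            return {x.a + y.a, x.b + y.b};
        }
        friend bool operator==(PiLinear x, PiLinear y) {
            return x.a == y.a && x.b == y.b;  // decidable coefficient test
        }
    };

Subtraction and rational scaling preserve this form; multiplying two such numbers already breaks it (pi*pi is not of the shape a*pi + b), which is why the restriction of the expressible set matters.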
Boost.Multiprecision also has infinite precision (in theory). The difference with your suggestion would be that, since you do lazy evaluation, you don't have to specify the precision a priori.
Here, multiprecision needs the whole representation to be available for every operation. When comparing with algorithmically generated numbers, you only need the representation of the digits the current step depends on. For example, if you have compared the first few digits in an operation and they are equal, then for comparing the following thousands you don't need them (unless the generating algorithm itself requires them to produce the following digits, but that's a problem with the algorithm used for the number, not with the approach). Take the binary number whose digits are all zeros except for ones at positions p where p is a power of 2: deciding each digit requires only a counter. The limit on how many digits can be expanded is imposed by time, with almost no space usage at all. It is a time/space tradeoff.
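A minimal C++ sketch of that example (the names are mine, not an existing implementation): the digit at position p depends only on p itself, so a digit-by-digit comparison of two such streams runs in constant space, limited only by the time budget you grant it.

    #include <cstdint>

    // Binary digit stream for the number whose digit at position p (1-based)
    // is 1 exactly when p is a power of 2 and 0 otherwise. Each digit is
    // decided from the position alone, so the state is just a counter.
    inline int power_of_two_digit(std::uint64_t p) {
        return (p != 0 && (p & (p - 1)) == 0) ? 1 : 0;
    }

    // Compare two lazy digit streams digit by digit, up to a time budget.
    // Only the current position is held; no digits are stored.
    template <class DigitA, class DigitB>
    bool equal_up_to(DigitA a, DigitB b, std::uint64_t max_digits) {
        for (std::uint64_t p = 1; p <= max_digits; ++p)
            if (a(p) != b(p)) return false;
        return true;  // indistinguishable within the budget
    }

Note the budget parameter: by the undecidability point above, exact equality of arbitrary streams can never be confirmed, only refuted at the first differing digit.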
I don't really understand. I see that for certain numbers the approach might work -- if, together with the number, you also provide implementations for equality and all the other relations and operations.
I have a good use case for this in the area of simulation, but I'm sure there are many more places where you can use these numbers, e.g. physics computations, geometric computations, etc.
The performance of the arithmetic operations would heavily depend on the "history" of the numbers (i.e., the sequence of "axiom reals" and arithmetic operations you used to generate them) involved, right? For example, in simulation the numbers come from iterative solvers and often propagate over several time steps. The "history" of the numbers would thus rapidly be huge. (You could eager-evaluate up to a certain precision intermittently to start with new "axiom reals", but then you're back to Boost.Multiprecision.)
Yes, they always depend on the generators. Some speedup can be obtained by caching known digits, since the numbers never change. However, we expand only the digits we need, while with multiprecision you would have to generate all the digits of the predefined precision to obtain the same results.
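A hedged sketch of the caching idea (CachedDigits is a hypothetical name, not part of any library): since a generated number never changes, each digit can be memoized the first time it is expanded, and later comparisons reuse it for free.

    #include <cstddef>
    #include <functional>
    #include <vector>

    // Hypothetical memoizing wrapper around a lazy digit generator.
    // Each digit is computed at most once; later queries hit the cache.
    class CachedDigits {
        std::function<int(std::size_t)> gen_;  // underlying generator
        std::vector<int> cache_;               // digits expanded so far
    public:
        explicit CachedDigits(std::function<int(std::size_t)> gen)
            : gen_(std::move(gen)) {}

        int digit(std::size_t p) {             // 1-based position
            while (cache_.size() < p)
                cache_.push_back(gen_(cache_.size() + 1));
            return cache_[p - 1];
        }
    };

For example, given some hypothetical pi digit generator, repeated calls to CachedDigits::digit(n) across several comparisons expand each digit only once, trading space for time as described above.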
Let's make it more specific: You can generate pi, for example, by different series, some converging faster than others. Do you mean that by generator? Then, of course, the speed of the generators is the basis for the performance, but that's not my point. Maybe I don't understand, so let's consider an example: take any natural i and put y = x_i, where x_0 = pi and x_i = (x_{i-1} + 1) - 1 for natural i > 0. How do you implement deciding y == pi?

Alex