
On 10/11/2012 2:18 AM, Paul Mensonides wrote:
And that is the flaw of these algorithms. That behavior doesn't scale. What they need to do is take a /relative/ number of steps which they can use (not a number of steps left over at the end). The exponential algorithms can get enormous numbers of steps extremely quickly. The number is actually:
f(s, delta, p, b, omega) = p * (1 - b^(omega - s - delta - 1)) / (1 - b) - 1
where /p/ is the number of wrapping bootstraps, /b/ is the exponential base used by the algorithm, /omega/ is LIMIT_EXPR which is currently 512, and /delta/ is the passed buffer size. The /omega/ is global, so we have:
g(s, delta, p, b) = f(s, delta, p, b, 512)
The value of /b/ is algorithm-specific. I believe it is always 2 (which can be increased fairly easily), but for REPEAT_X, I /know/ it is 2. So, for REPEAT_X we have:
h(s, delta, p) = g(s, delta, p, 2)
So, in the case above, we have /h/(500, 5, 1) = 62. However, at the beginning with no buffer, we'd have /h/(1, 0, 1) which is
3351951982485649274893506249551461531869841455148098344430890360930441007518386744200468574541725856922507964546621512713438470702986642486608412251521022
The number type that REPEAT_X uses cannot count that high, but the arbitrary precision arithmetic mechanism could /in theory/.
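These figures are easy to check with Python's arbitrary-precision integers. A minimal sketch of the three functions above (names f, g, and h as in the formulas; floor division is exact here because 1 - b^k is always divisible by 1 - b):

```python
# Step-count formulas from the text.
#   s:     starting depth
#   delta: passed buffer size
#   p:     number of wrapping bootstraps
#   b:     exponential base of the algorithm
#   omega: LIMIT_EXPR (currently 512)

def f(s, delta, p, b, omega):
    # p * (1 - b^(omega - s - delta - 1)) / (1 - b) - 1
    return p * (1 - b ** (omega - s - delta - 1)) // (1 - b) - 1

def g(s, delta, p, b):
    return f(s, delta, p, b, 512)  # omega fixed at LIMIT_EXPR == 512

def h(s, delta, p):
    return g(s, delta, p, 2)  # REPEAT_X uses base b == 2

print(h(500, 5, 1))           # 62, matching the case above
print(len(str(h(1, 0, 1))))   # 154 digits: the enormous number quoted
```

Note that h(1, 0, 1) is exactly 2^510 - 2, which is why the digit count explodes as soon as the buffer and starting depth shrink.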
Of course, the above is ridiculously high. Assuming infinite memory, if you generated one repetition per nanosecond, it would take on the order of 10^137 years to complete.
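That wall-clock figure can be sanity-checked the same way (a sketch assuming one repetition per nanosecond and a 365.25-day year):

```python
# Rough time to generate h(1, 0, 1) = 2^510 - 2 repetitions
# at an assumed rate of one repetition per nanosecond.
reps = 2 ** 510 - 2
ns_per_year = 10 ** 9 * 60 * 60 * 24 * 365.25   # ~3.156e16 ns/year
years = reps / ns_per_year
print(f"{years:.2e} years")                      # ~1.06e+137 years
```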
So, obviously you don't need that many, but what the algorithm should do is use a relative depth so that the number of available steps is not dependent on /s/. Instead, it would be something like:
f(delta, p, b) = p * (1 - b^(delta - 1)) / (1 - b) - 1
where /delta/ is now the number of backend steps that the algorithm is allowed to use. So EXPR(REPEAT_X(10, n, M)) could generate something like 510 steps before trampolining without going past s + 10 - 1 in the backend. Even something as low as 20 yields unrealistic numbers of steps (about 524286).
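A sketch of this relative-depth variant reproduces both step counts quoted above (the name f_rel is only to avoid clashing with the earlier f):

```python
# Relative-depth step count: delta is the number of backend steps
# the algorithm may use, so the count no longer depends on s.
def f_rel(delta, p, b):
    # p * (1 - b^(delta - 1)) / (1 - b) - 1, exact integer division
    return p * (1 - b ** (delta - 1)) // (1 - b) - 1

print(f_rel(10, 1, 2))   # 510 steps before trampolining
print(f_rel(20, 1, 2))   # 524286 steps
```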
So, what these algorithms /should/ be is algorithms which allow parametric resumption but use either linear multipliers or exponential multipliers with relative upper bounds on the backend. The linear version would be faster, but the exponential version would yield a lot more steps. The current _PARAMETRIC and _X algorithms don't do that.
Regards, Paul Mensonides
_______________________________________________ Unsubscribe & other changes: http://lists.boost.org/mailman/listinfo.cgi/boost