
Swagat Konchada wrote:
I had a look at the BigInt code from the sandbox. It's entirely wrapped around GMP (linked to gmplib). [snip] That is incompatible with the Boost license, which hinders the distribution of a Boost.BigInt library that is wholly a wrapper around GMP.
Keeping the above issues in view, what should our priority be: improving the wrapping around GMP for the time being, or starting Boost's own version of BigInt? I see mixed reactions on this subject. My personal opinion is the one suggested by Paul A. Bristow, i.e. Boost's own BigInt, which I think is in line with Boost's long-term aspirations.
Rob Stewart wrote:
The usual idea is that BigInt would provide expression templates and other optimizations to avoid calling the back end engine inefficiently. That may require an adapter layer between the user-visible BigInt API and the computation engine, like GMP, at the back end, to insulate the API from differences in the back end engines. (Then again, there may be nothing left to a layer above the adapter layer; I'm not saying there must be three layers.)
If GMP is an optional back end supported by BigInt, along with other back ends, controlled by conditional compilation, then a BigInt user can choose to use GMP if they like, but the licensing issue will be theirs and not Boost's.
HTH,
_____
Rob Stewart                           robert.stewart@sig.com
Software Engineer, Core Software      using std::disclaimer;
Susquehanna International Group, LLP  http://www.sig.com
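If I follow the suggestion correctly, the adapter layer and the optional GMP back end would amount to something like the sketch below. Every name in it (Engine, gmp_engine, boost_engine, BOOST_BIGINT_USE_GMP) is invented here purely for illustration; none of this is existing code.

    // User-visible type, parameterised on a back-end engine that does the
    // actual arithmetic.  Only the engine knows about GMP (or anything else).
    template <class Engine>
    class bigint
    {
        Engine e_;
    public:
        bigint() {}
        explicit bigint(long v) : e_(v) {}

        bigint& operator+=(bigint const& rhs)
        {
            e_.add(rhs.e_);          // adapter call, not a direct GMP call
            return *this;
        }
        friend bigint operator+(bigint lhs, bigint const& rhs)
        {
            return lhs += rhs;
        }
    };

    class boost_engine;   // Boost's own implementation
    class gmp_engine;     // thin wrapper over GMP

    // Conditional compilation picks the engine; choosing GMP (and its
    // licence terms) is then the user's decision, not Boost's.
    #if defined(BOOST_BIGINT_USE_GMP)
    typedef bigint<gmp_engine>   bigint_t;
    #else
    typedef bigint<boost_engine> bigint_t;
    #endif

An expression-template front end, as suggested above, would then sit on top of this and decide when to actually call into the engine.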
If we consider performance at the design stage, the BigInt architecture could be planned in two ways:

1. Three layers, allowing the user to bind at compile time to Boost's own bigint, GMP, or any other bigint library.
2. Two layers, if we limit Boost.BigInt's options to only itself and GMP.

In a three-layered architecture, three function calls take place for each overloaded operator, whichever option the user chooses. In a two-layered architecture, where GMP is the only extra option, and assuming a user is equally likely to pick Boost's own bigint or GMP, two function calls are made half of the time and only one the other half, i.e. 1.5 calls per overloaded operator on average. The three-layer design therefore roughly doubles the number of function calls. Though I realize most of the time is spent inside the function rather than in calling it, does this result in an overhead we should worry about?

--
Problem is a phenomenon indicating lack of thought.
Swagat Konchada
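P.S. To make the counting concrete, this is roughly the call chain I have in mind for one overloaded operator in the three-layer case; again, every name below is made up for illustration, not taken from any actual code:

    #include <vector>

    typedef std::vector<unsigned int> rep_t;    // stand-in for the digit representation

    namespace backend {                         // layer 3: the engine (Boost's own, GMP, ...)
        inline rep_t add(rep_t const& x, rep_t const& /*y*/)
        {
            return x;                           // placeholder; the real work happens here
        }
    }

    namespace detail {                          // layer 2: adapter over whichever engine was chosen
        inline rep_t add(rep_t const& x, rep_t const& y)
        {
            return backend::add(x, y);          // call 2
        }
    }

    class bigint                                // layer 1: the user-visible type
    {
        rep_t rep_;
    public:
        bigint() {}
        explicit bigint(rep_t const& r) : rep_(r) {}

        friend bigint operator+(bigint const& a, bigint const& b)
        {
            return bigint(detail::add(a.rep_, b.rep_));   // call 1
        }
    };

In the two-layer case the detail:: forwarding step drops out and operator+ calls the engine directly, which is where the 3 vs. 1.5 calls on average comes from. (A compiler can often inline such thin forwarders away, but this is the shape of the calls being counted.)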