
Dear All,

I'd like to encourage people who are looking at the proposed Multiprecision library to remind themselves of the xint review last year. See e.g.

http://thread.gmane.org/gmane.comp.lib.boost.devel/218624 (result)
http://thread.gmane.org/gmane.comp.lib.boost.devel/215968 (my review)

etc. etc. To what extent do we believe that this new library addresses the weaknesses of the last proposal? It would be great if Vladimir Prus, review manager for xint, could write a review of Multiprecision.

Here's my take on it: Multiprecision addresses many of the problems of xint, and also extends the scope into areas that xint did not address at all. However, it still suffers some of the same problems: if I want an int128_t that is just 128 bits of data, it doesn't help me. Furthermore, it doesn't help me to build the int128_t that I want. Although Multiprecision is divided into "front" and "back"-ends, this division is not in the right place to let me substitute my own backend that just provides the 128 bits of storage. In the xint reviews, we suggested that the algorithms should be decoupled from the data structures, and the same applies here. Similarly, this implementation doesn't provide hooks that would allow me to substitute e.g. assembler implementations of the algorithms.

Regards, Phil.

Multiprecision addresses many of the problems of xint, and also extends the scope into areas that xint did not address at all. However it still suffers some of the same problems: if I want an int128_t that is just 128 bits of data, it doesn't help me.
Sigh. There was such a type (it's still in the sandbox somewhere as fixed_int.hpp, and uses a 2's complement representation), but I was encouraged to move to the current form pre-review - rightly IMO, as it's both more flexible and faster *most* of the time (though in some specific cases the old fixed_int code wins 'cos it's simpler).
Furthermore, it doesn't help me to build the int128_t that I want. Although Multiprecision is divided into "front" and "back"-ends, this division is not in the right place to let me substitute my own backend that just provides the 128 bits of storage. In the xint reviews, we suggested that the algorithms should be decoupled from the data structures and the same applies here. Similarly, this implementation doesn't provide hooks that would allow me to substitute e.g. assembler implementations of the algorithms.
Point taken. However, there is a difficulty here. Let's say we separate out some of the low-level code from cpp_int and define a routine such as:

    template <class Limb>
    void some_operator(Limb* presult, const Limb* a, const Limb* b,
                       unsigned a_len, unsigned b_len);

This raises a number of questions:

1) What order are the limbs in? There are two choices, and it's not clear to me that there is a "right" answer for all situations. Low order first makes the logic easier and is pretty much required for arbitrary-precision types, but high order first works well for fixed-precision types - and is a lot easier for the user to debug - set the debugger display to hexadecimal and the value is just right there.

2) We can noticeably simplify the code if a_len and b_len are always the same.

3) Further, if a_len and b_len are actually constants, then the compiler can unroll/optimize the loop more easily (not sure if constexpr can help there).

4) 2's complement or sign-magnitude? It makes a difference for many operations.

5) How large is presult? It's tempting to require that it be large enough to hold the result, but that would prohibit use in fixed-width types (which may overflow). Or we could add the logic to handle both, as long as you're prepared to take the hit in performance when dealing with trivially small values.

In other words, I'm suggesting that there is an inherent coupling between the data structure and the "optimal" algorithm. So... I would be more than happy to separate out a low-level customization point for cpp_int that would allow substitution of assembly routines - but don't imagine that those interfaces would be optimal for your use case - I don't believe it's possible to achieve an interface that does so in all situations. Of course, I'd be happy to be proved wrong ;-)

I'm sorry if this sounds rather negative, but once you get down to the specifics of what such an interface should look like, IMO it actually just gets harder to please everyone (unless there should be a sudden outbreak of unanimity!).

Regards, John.
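For concreteness, here is a minimal sketch of one possible set of answers to questions 1)-5): low-order limb first, equal lengths, 2's complement, and a result buffer the same width as the operands so that overflow wraps, as a fixed-width type would want. The name and signature are hypothetical, not part of the proposed library:

    #include <cstddef>

    // One possible limb-level hook: unsigned Limb type, low-order limb
    // first, both operands 'len' limbs long, result wraps on overflow.
    template <class Limb>
    Limb add_limbs(Limb* result, const Limb* a, const Limb* b, std::size_t len)
    {
        Limb carry = 0;
        for (std::size_t i = 0; i < len; ++i)
        {
            const Limb s = static_cast<Limb>(a[i] + b[i] + carry);
            // Carry out if the addition wrapped past a[i].
            carry = static_cast<Limb>((s < a[i]) || (carry && s == a[i]));
            result[i] = s;
        }
        return carry; // non-zero if the fixed width overflowed
    }

Every one of the questions above is visible even in this tiny sketch: change any one answer (limb order, unequal lengths, sign-magnitude) and the loop changes shape.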

On 6/15/2012 10:46 AM, John Maddock wrote:
In other words, I'm suggesting that there is an inherent coupling between the data structure and the "optimal" algorithm.
I haven't been following this discussion and I know nothing about multi-precision arithmetic, so this may be a wildly inappropriate observation, but... in the STL, when there is a coupling between data structure and optimal algorithm, the coupling is broken by defining a base concept and a refinement of that concept, and dispatching to the optimal algorithm based on the concept modeled by the data structure.

Back in my hole now,

-- Eric Niebler
BoostPro Computing
http://www.boostpro.com
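For anyone who hasn't seen the idiom Eric describes, this is exactly how std::advance works: a base concept (input iterator) with a refinement (random access), with tag dispatch picking the best algorithm. A minimal restatement, nothing multiprecision-specific:

    #include <iterator>

    namespace sketch {

    template <class It, class Dist>
    void advance_impl(It& it, Dist n, std::input_iterator_tag)
    {
        while (n--) ++it;   // O(n): all the base concept permits
    }

    template <class It, class Dist>
    void advance_impl(It& it, Dist n, std::random_access_iterator_tag)
    {
        it += n;            // O(1): the refinement allows a better algorithm
    }

    template <class It, class Dist>
    void advance(It& it, Dist n)
    {
        advance_impl(it, n,
            typename std::iterator_traits<It>::iterator_category());
    }

    } // namespace sketch

The open question for Multiprecision would be what the analogous concept hierarchy for number back-ends looks like.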

Dear All,
I'd like to encourage people who are looking at the proposed Multiprecision library to remind themselves of the xint review last year. See e.g.
<snip> Thank you for your words, Phil. Your post is of central importance to the concept of Boost.Multiprecision. These matters constitute a rich and deep topic in the area of high-performance computing. Even though I may not be an experienced booster, I would like to express my detailed opinion on them. I suppose I will get in trouble for this on the weekend.
Multiprecision addresses many of the problems of xint, and also extends the scope into areas that xint did not address at all.
At the present time, neither Boost nor C++ supports extended integers, big floats or fixed-point types. C++ only supports a handful of built-in integral and floating-point types. Those interested in high-performance computing or ultra-fast fixed-point (e.g. for embedded systems) must, therefore, unavoidably write specialized types. In my opinion, Boost.Multiprecision does the following:

1. Provide a uniform template abstraction for extended numeric types.
2. Provide default implementations of integer, rational and float.
3. Allow for a future default implementation of fixed-point.
4. Allow for use with a future extended complex class.

Although the proposed Boost.Multiprecision provides default implementations of int, rational and float, its main goal is not to compete with the world's best performing implementations thereof. I would prefer spending review time working on the long game of Boost and C++. How will Boost or C++ potentially specify extended number types?

    template<typename back_end_numeric_type,
             typename numeric_traits,
             typename allocator_type,
             typename expression_method>
    class mp_number { };

I would prefer to work on the interface, not the performance of the default classes that John and I have provided. If we bicker about these, we will never get to the real matter at hand, which is specifying the abstraction for non-built-in types. John's concept takes the first step toward establishing an architecture for extended numeric types. In this way, the previous discussions about default styles of expression templates and the proposed name of the abstraction (mp_number or simply number) are more relevant than detailed performance analyses of the types made available in the first release of a proposed Boost.Multiprecision---of course, assuming that they are not embarrassingly slow.
However it still suffers some of the same problems: if I want an int128_t that is just 128 bits of data, it doesn't help me.
The proposed Boost.Multiprecision says, "You have to write it yourself!" Just like a compiler supplier would. That's the concept of the proposed Boost.Multiprecision. I do, however, have UINT128 and INT128. I wrote them sometime around 2002. If you *really* would like, I could approach John and ask how we could make these Boost-ready. I also have UINT24 for embedded systems.
Furthermore, it doesn't help me to build the int128_t that I want.
Again, the proposed Boost.Multiprecision does not aim to solve this problem. We can, however, talk about generating a sensible back-end for a binary fixed-size INT128 in the future, as mentioned above.
Although Multiprecision is divided into "front" and "back"-ends, this division is not in the right place to let me substitute my own backend that just provides the 128 bits of storage.
I disagree. Perhaps I am wrong. If you write INT128 to meet the back-end requirements for an integer type, then it will successfully be abstracted by mp_number.
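To illustrate the claim, here is a rough skeleton of what such an INT128 back-end could look like, based on my reading of the review documentation (the front end invokes free functions such as eval_add on the back-end type; the real requirements list is longer, including typelists of convertible types and conversion hooks, so treat this as a sketch only):

    #include <cstdint>

    // Hypothetical 128-bit back-end: 2's complement, low word first.
    struct uint128_backend
    {
        std::uint64_t lo = 0, hi = 0;

        uint128_backend& operator=(std::uint64_t v) { lo = v; hi = 0; return *this; }

        int compare(const uint128_backend& o) const
        {
            if (hi != o.hi) return hi < o.hi ? -1 : 1;
            if (lo != o.lo) return lo < o.lo ? -1 : 1;
            return 0;
        }
    };

    // The arithmetic lives in free functions found by the front end -
    // and this is where assembler or compiler intrinsics could go:
    inline void eval_add(uint128_backend& result, const uint128_backend& o)
    {
        const std::uint64_t old_lo = result.lo;
        result.lo += o.lo;
        result.hi += o.hi + (result.lo < old_lo ? 1 : 0); // propagate carry
    }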
In the xint reviews, we suggested that the algorithms should be decoupled from the data structures and the same applies here.
(Please interpret this as my opinion. Others will have different opinions.) In my opinion, the suggestion was not correct, but understandable. Just because the proposed Boost.Multiprecision provides a template abstraction for arithmetic operations and C99 elementary transcendental functions does not mean we would like the template itself to be modifiable. We don't expect std::basic_string to provide a hook for making a better implementation of signed char. Rather, we expect basic_string to be used with an existing character implementation that has traits and possibly a special allocator. Similarly, it is not the role of the proposed Boost.Multiprecision to allow customization of its limbs, add, multiply, divide, etc. Boost.Multiprecision is a template, not the implementation. The reference implementations for integer, rational and float that we provide are only defaults for the reference application. The duty of achieving optimal performance lies with the back-end author.
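The parallel with basic_string can be made concrete. std::string is just a specialization of a template parameterized over the character type, its traits and an allocator; the template offers no hook to re-implement char's own operations:

    #include <memory>
    #include <string>

    // std::string is nothing more than this specialization; char's
    // behaviour comes from the traits class supplied alongside it.
    using str = std::basic_string<char, std::char_traits<char>,
                                  std::allocator<char>>;

On this view, a Multiprecision back-end plays the role of the character type, and the arithmetic belongs to the back-end, not to the template.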
Similarly, this implementation doesn't provide hooks that would allow me to substitute e.g. assembler implementations of the algorithms. Regards, Phil.
(Please interpret this as my opinion. Others will have different opinions.) This is correct for Boost.Multiprecision, and it is as it should be. Please don't expect to put hand-written assembler optimizations in the template. If you can do it better than GMP and MPFR, then you must do it in the back-end---an entire back-end. I hope I have clarified at least my opinion with this long response. Best regards, Chris.

Christopher Kormanyos wrote:
In my opinion, Boost.Multiprecision does the following:
1. Provide a uniform template abstraction for extended numeric types.
2. Provide default implementations of integer, rational and float.
3. Allow for a future default implementation of fixed-point.
4. Allow for use with a future extended complex class.
Reasonable goals.
Although the proposed Boost.Multiprecision provides default implementations of int, rational and float, its main goal is not to compete with the world's best performing implementations thereof.
Of course it needn't compete with the world-class implementations, at least not initially. However, it must be fast enough to be usable in enough use cases to get sufficient experience to determine whether the interface is right and whether the customization points are appropriate, in order to have a solid foundation for a standard proposal.
I would prefer spending review time working on the long-game of boost and C++.
Yes, given the minimal performance caveat noted above.
How will boost or C++ potentially specify extended number types?
    template<typename back_end_numeric_type,
             typename numeric_traits,
             typename allocator_type,
             typename expression_method>
    class mp_number { };
That looks ugly, for a numeric type, but typedefs and template aliases will be very useful in making nice, user-defined types and class templates.
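A sketch of how that clean-up might look, with every name here invented for illustration (following the template quoted above):

    #include <memory>

    // Stand-ins for the names in Chris's sketch:
    struct cpp_int_backend_128 {};
    template <class Backend> struct default_traits {};
    struct et_on {};

    template <class Backend, class Traits, class Alloc, class ExprMethod>
    class mp_number { /* front end as sketched above */ };

    // A template alias gives users a name as tidy as the built-ins:
    template <class Backend>
    using number = mp_number<Backend, default_traits<Backend>,
                             std::allocator<void>, et_on>;

    using int128_t = number<cpp_int_backend_128>;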
I would prefer to work on the interface, not the performance of the default classes that John and I have provided. If we bicker about these, we will never get to the real matter at hand which is specifying the abstraction for non-built-in types.
While I agree with your sentiment, note that Phil's concern about being able to create a fast, slightly-larger-than-built-in type is important. Showing how such a type can be created is an important exercise because it will show whether the abstraction and points of customization have been properly conceived to permit creating such types. Indeed, given the likelihood of folks wanting to do what Phil did, the library could provide a template-based backend implementation that does most of the heavy lifting.
John's concept takes the first step toward establishing an architecture for extended numeric types.
It is reasonable to view this as "the first step" and leave the fulfillment of some of these other requirements for later. However, if there is no proof of concept for the various use cases, then you can't be sure the abstraction and points of customizations are correct.
In this way, the previous discussions about default styles of expression templates and the proposed name of the abstraction (mp_number or simply number) are more relevant than
"number" is better. The namespace can always be used, including with a namespace alias, to make for a more specific name. If "mp_" is in the name, it cannot be removed.
However it still suffers some of the same problems: if I want an int128_t that is just 128 bits of data, it doesn't help me.
The proposed Boost.Multiprecision says, "You have to write it yourself!" Just like a compiler supplier would. That's the concept of the proposed Boost.Multiprecision.
That's fine for esoteric things, but not for something that would be extremely common.
I do, however, have UINT128 and INT128. I wrote them sometime around 2002. If you *really* would like, I could approach John and ask how we could make these boost.ready. I also have UINT24 for embedded systems.
One of those might be an excellent tutorial for defining a backend, though actually including the code, in some form, with the library would be ideal.
In the xint reviews, we suggested that the algorithms should be decoupled from the data structures and the same applies here.
(Please interpret this as my opinion. Others will have different opinion.)
This goes without saying since you aren't quoting anyone else. OTOH, if you wish to be clear that you are not speaking for John, you can say that explicitly.
In my opinion, the suggestion was not correct, but understandable. Just because the proposed Boost.Multiprecision provides a template abstraction for arithmetic operations and C99 elementary transcendental functions does not mean we would like the template itself to be modifiable.
If the operations are broken down to calls to specific algorithms, why not permit customizing those algorithms? If your concern is that doing so exposes implementation details that would thwart future improvements, I can certainly understand your reticence. Furthermore, it may be reasonable to see that some of what was being requested for xint is what your library provides in its backend.
Similarly, this implementation doesn't provide hooks that would allow me to substitute e.g. assembler implementations of the algorithms.
This is correct for Boost.Multiprecision, and it is as it should be. Please don't expect to put hand-written assembler optimizations in the template. If you can do it better than GMP and MPFR, then you must do it in the back-end---an entire back-end.
Perhaps the documentation needs to be clearer about the role of the frontend and backend parts of the library. Specifically, there should be information that notes that performance optimizations are, mostly, applied to the backend.

_____
Rob Stewart
robert.stewart@sig.com
Software Engineer using std::disclaimer;
Dev Tools & Components
Susquehanna International Group, LLP
http://www.sig.com

Thanks for your comments Robert.
Although the proposed Boost.Multiprecision provides default implementations of int, rational and float, its main goal is not to compete with the world's best performing implementations thereof.
Of course it needn't compete with the world-class implementations, at least not initially. However, it must be fast enough to be usable in enough use cases to get sufficient experience to determine whether the interface is right and whether the customization points are appropriate, in order to have a solid foundation for a standard proposal.
Agreed. In my opinion, though, the back-ends already provided in the review candidate offer sufficient performance to make the library absolutely usable while we simultaneously gain experience with the interface.
I would prefer spending review time working on the long-game of boost and C++.
Yes, given the minimal performance caveat noted above.
How will boost or C++ potentially specify extended number types?
    template<typename back_end_numeric_type,
             typename numeric_traits,
             typename allocator_type,
             typename expression_method>
    class mp_number { };
That looks ugly, for a numeric type, but typedefs and template aliases will be very useful in making nice, user-defined types and class templates.
Well, it's just an idea for potential standardization in my dream of the future. To me it doesn't really seem much uglier than std::basic_string. But honestly, we are far away from standardizing number representations. It's just one of my long-term goals.
I would prefer to work on the interface, not the performance of the default classes that John and I have provided. If we bicker about these, we will never get to the real matter at hand which is specifying the abstraction for non-built-in types.
While I agree with your sentiment, note that Phil's concern about being able to create a fast, slightly-larger-than-built-in type is important. Showing how such a type can be created is an important exercise because it will show whether the abstraction and points of customization have been properly conceived to permit creating such types.
I also agree. In the past months, I have had the opportunity to use Multiprecision with a variety of other types in my catalog. So I am convinced that it works well for the domain of number abstraction. But you are right: a real example with such a popular type might be beneficial. John, in your opinion, what might be the best way to include fixed-size integers of 128 and 256 bits? Would you recommend template specializations of one of the existing back-ends or dedicated high-performance back-ends? I could add a few of these in the fall, if desired (or if it makes sense). I've got the code and it's in pretty good shape.
Indeed, given the likelihood of folks wanting to do what Phil did, the library could provide a template-based backend implementation that does most of the heavy lifting.
John's concept takes the first step toward establishing an architecture for extended numeric types.
It is reasonable to view this as "the first step" and leave the fulfillment of some of these other requirements for later. However, if there is no proof of concept for the various use cases, then you can't be sure the abstraction and points of customizations are correct.
Well remember, we actually do provide integer, rational and floating-point back-ends, right off-the-rack and ready to use.
In this way, the previous discussions about default styles of expression templates and the proposed name of the abstraction (mp_number or simply number) are more relevant than
"number" is better. The namespace can always be used, including with a namespace alias, to make for a more specific name. If "mp_" is in the name, it cannot be removed.
However it still suffers some of the same problems: if I want an int128_t that is just 128 bits of data, it doesn't help me.
The proposed Boost.Multiprecision says, "You have to write it yourself!" Just like a compiler supplier would. That's the concept of the proposed Boost.Multiprecision.
That's fine for esoteric things, but not for something that would be extremely common.
I guess we need a list of *common* things. I do more with floating-point and less with integers. Others are focused on cryptographic hash algos using integers, while others still may use a lot of rational numbers. Perhaps fixed-size signed and unsigned 128 and 256 bit integers are so common that we need to support them explicitly.
I do, however, have UINT128 and INT128. I wrote them sometime around 2002. If you *really* would like, I could approach John and ask how we could make these boost.ready. I also have UINT24 for embedded systems.
One of those might be an excellent tutorial for defining a backend, though actually including the code, in some form, with the library would be ideal.
I can do it, but can't start until sometime in the fall. <snip>
In my opinion, the suggestion was not correct, but understandable. Just because the proposed Boost.Multiprecision provides a template abstraction for arithmetic operations and C99 elementary transcendental functions, does not mean we would like the template itself to be modifiable.
If the operations are broken down to calls to specific algorithms, why not permit customizing those algorithms? If your concern is that doing so exposes implementation details that would thwart future improvements, I can certainly understand your reticence.
I personally do not like that level of granularity. Over the years, I have written quite a few specialized numeric types, both large and small. In my first designs, I used custom hooks for ugly assembler stuff, only to notice that my algorithm was less than optimal in the first place. My preference is to keep the numeric type in and of itself whole. I do, however, understand user customization.
Furthermore, it may be reasonable to see that some of what was being requested for xint is what your library provides in its backend.
I am a bit new and came to the board just during the review of xint. But as far as I remember, it was just the integer type. Wouldn't xint simply be a big-integer back-end for the proposed Multiprecision? Here's the deal, in my opinion. The community needs big integers, rationals and floats. But there are only a few sources. The proposed Multiprecision provides both an abstraction for these numeric types and reference implementations for integers, rationals and floats. Getting near world-class types would take a coalition of programmers, because it's quite hard to write very good big-number types. Boost is full of previous attempts to write individual number types, but no one ever completed any of them. Perhaps it would be good to use the proposed Multiprecision as the abstraction and find out who would like to contribute to writing the improved back-ends for it. But this should be done later, because it is a long-term project requiring more development time, planning and coordination.
Similarly, this implementation doesn't provide hooks that would allow me to substitute e.g. assembler implementations of the algorithms.
This is correct for Boost.Multiprecision, and it is as it should be. Please don't expect to put hand-written assembler optimizations in the template. If you can do it better than GMP and MPFR, then you must do it in the back-end---an entire back-end.
Perhaps the documentation needs to be clearer about the role of the frontend and backend parts of the library. Specifically, there should be information that notes that performance optimizations are, mostly, applied to the backend.
Good idea. You are right. There have been other comments along these lines. In fact, the performance is predominantly determined by that of the back-end. Thanks again for your comments. Best regards, Chris.

Although the proposed Boost.Multiprecision provides default implementations of int, rational and float, its main goal is not to compete with the world's best performing implementations thereof.
Of course it needn't compete with the world-class implementations, at least not initially. However, it must be fast enough to be usable in enough use cases to get sufficient experience to determine whether the interface is right and whether the customization points are appropriate, in order to have a solid foundation for a standard proposal.
Point taken; however, it does wrap many world-class implementations such as GMP and MPFR. Frankly, in the general case it's foolish to believe that Boost can compete with those guys. However, basic C++ implementations for integer and floating-point types are provided which are much better than toys, much less than world class, but perfectly usable for most cases (or where a non-GNU license is necessary).
I would prefer to work on the interface, not the performance of the default classes that John and I have provided. If we bicker about these, we will never get to the real matter at hand which is specifying the abstraction for non-built-in types.
While I agree with your sentiment, note that Phil's concern about being able to create a fast, slightly-larger-than-built-in type is important. Showing how such a type can be created is an important exercise because it will show whether the abstraction and points of customization have been properly conceived to permit creating such types.
Indeed, given the likelihood of folks wanting to do what Phil did, the library could provide a template-based backend implementation that does most of the heavy lifting.
There is a problem here, in that there is no "one true way" for a slightly larger integer type - it all depends on how much larger, and the particular size of the integers it encounters in practice. For example:

* For int128, simplicity will probably always win out, making the "naive" implementation the best.
* By the time you get to, say, int1024, computational complexity wins out, so something like the fixed-size integers already provided are likely to be best (modulo the fact that more profiling and fine tuning is still required). That's why I was persuaded to switch to those prior to the review from a more traditional fixed int (like Phil's).
* To complicate matters more, for say int1024, the "naive" version wins if most of the values use all the bits, whereas the version that maintains a runtime length wins when most values are small.

Having said all that, I think it is true that a weakness of the library is inherent overhead when dealing with "trivial" types that are extremely close to the metal to begin with. Probably that reflects the original focus of the library, where this is a non-issue.
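A toy illustration of the last trade-off (not library code, and the layout is invented): store a fixed 1024-bit buffer plus a count of limbs actually in use, so that arithmetic on small values touches only a few limbs:

    #include <cstdint>
    #include <algorithm>

    // Invariant: limbs at index >= used are zero.
    struct int1024_rt
    {
        std::uint64_t limb[16] = {};
        unsigned used = 0;          // number of significant limbs
    };

    // r assumed zero-initialized and distinct from a and b.
    inline void add(int1024_rt& r, const int1024_rt& a, const int1024_rt& b)
    {
        const unsigned n = std::max(a.used, b.used);
        std::uint64_t carry = 0;
        for (unsigned i = 0; i < n; ++i)   // only the used limbs, not all 16
        {
            const std::uint64_t s = a.limb[i] + b.limb[i] + carry;
            carry = (s < a.limb[i]) || (carry && s == a.limb[i]);
            r.limb[i] = s;
        }
        r.used = n;
        if (carry && n < 16) { r.limb[n] = 1; r.used = n + 1; }
    }

When most values occupy a limb or two, the loop runs once or twice; when values use all the bits, maintaining 'used' is pure overhead compared with the naive fixed-length loop.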
John's concept takes the first step toward establishing an architecture for extended numeric types.
It is reasonable to view this as "the first step" and leave the fulfillment of some of these other requirements for later. However, if there is no proof of concept for the various use cases, then you can't be sure the abstraction and points of customizations are correct.
True. However, you can't carry on forever, you have to ask for a review at some point. At present the focus is more on "breadth" than "depth".
I do, however, have UINT128 and INT128. I wrote them sometime around 2002. If you *really* would like, I could approach John and ask how we could make these boost.ready. I also have UINT24 for embedded systems.
One of those might be an excellent tutorial for defining a backend, though actually including the code, in some form, with the library would be ideal.
Adding a couple of tutorials for backend writing is an excellent idea. Chris - I think we all have such beasts as an int128 - there's one (not part of this review) under boost/multiprecision/depreciated/fixed_int.hpp. Unfortunately, I suspect that int128 is so close to the metal that the fastest implementation wouldn't use mp_number - however, it might make a good use case to try to find out where the bottlenecks are, etc.

Regards, John.

John Maddock wrote:
Rob Stewart wrote:
Christopher Kormanyos wrote:
Of course it needn't compete with the world-class implementations, at least not initially. However, it must be fast enough to be usable in enough use cases to get sufficient experience to determine whether the interface is right and whether the customization points are appropriate, in order to have a solid foundation for a standard proposal.
Point taken, however it does wrap many world class implementations such as GMP and MPFR. Frankly in the general case it's foolish to believe that Boost can compete with those guys.
No argument.
However, basic C++ implementations for integer and floating point types are provided which are much better than toys, much less than world class, but perfectly usable for most cases (or where a non-GNU license is necessary).
I wasn't trying to disparage what you've done. Rather, I was arguing that if a class of potential users, exemplified by Phil, are not well satisfied, there might be issues with the library's abstractions. Please note the conditional and vague phrasing in that last sentence: if, potential, well, might.
I would prefer to work on the interface, not the performance of the default classes that John and I have provided. If we bicker about these, we will never get to the real matter at hand which is specifying the abstraction for non-built-in types.
While I agree with your sentiment, note that Phil's concern about being able to create a fast, slightly-larger-than-built-in type is important. Showing how such a type can be created is an important exercise because it will show whether the abstraction and points of customization have been properly conceived to permit creating such types.
Indeed, given the likelihood of folks wanting to do what Phil did, the library could provide a template-based backend implementation that does most of the heavy lifting.
There is a problem here - in that there is no "one true way" for a slightly larger integer type - it all depends on how much larger, and the particular size of the integers it encounters in practice.
What about int_fast32_t and friends as models? That is, might there be different types for different purposes that can be selected based upon how the main class template is parameterized? IOW, allow the user to indicate which backend (or intermediate bridge layer) to use based upon the user's own preference or performance analysis.
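To make the analogy concrete: something like the following (all names invented for illustration) would let the user state a preference and have a trait map (bits, strategy) onto a concrete back-end, much as <cstdint> maps "fast"/"least" requests onto real widths:

    // Stand-ins for real back-ends:
    struct naive_backend_128 {};
    struct fixed_backend_1024 {};
    struct runtime_backend_1024 {};

    // Strategy tags chosen by the user:
    struct prefer_simple {};
    struct prefer_fixed {};
    struct prefer_runtime_length {};

    template <unsigned Bits, class Strategy> struct select_backend;

    template <> struct select_backend<128, prefer_simple>
    { typedef naive_backend_128 type; };
    template <> struct select_backend<1024, prefer_fixed>
    { typedef fixed_backend_1024 type; };
    template <> struct select_backend<1024, prefer_runtime_length>
    { typedef runtime_backend_1024 type; };

    // Usage: number<select_backend<1024, prefer_runtime_length>::type>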
For example:
* For int128, simplicity will probably always win out, making the "naive" implementation the best.
* By the time you get to, say, int1024, computational complexity wins out, so something like the fixed-size integers already provided are likely to be best (modulo the fact that more profiling and fine tuning is still required). That's why I was persuaded to switch to those prior to the review from a more traditional fixed int (like Phil's).
* To complicate matters more, for say int1024, the "naive" version wins if most of the values use all the bits, whereas the version that maintains a runtime length wins when most values are small.
Cannot those options be selected by the user?
Having said all that, I think it is true that a weakness of the library is inherent overhead when dealing with "trivial" types that are extremely close to the metal to begin with. Probably that reflects the original focus of the library where this is a non-issue.
Surely templates can select an optimal backend and erase all overhead in using it. For example, if the user selects the naïve, but optimal, backend for a 128 bit integer type, must there necessarily be overhead that wouldn't exist in a customized 128 bit integer type? If your current design imposes such overhead, might there be a way to refactor the design such that one interface (the user-facing class template) can select the right IL to bridge that interface and the optimal backend?
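The zero-overhead claim is at least plausible in principle: a front end that only forwards to free functions on the back-end leaves nothing for the compiler to fail to inline. A deliberately over-simplified sketch (the real front end is far richer than this):

    // Hypothetical minimal front end; eval_add is found by ADL on
    // whatever Backend the user selected.
    template <class Backend>
    class number
    {
        Backend b_;
    public:
        friend number operator+(number a, const number& c)
        {
            eval_add(a.b_, c.b_);   // resolves at compile time per Backend
            return a;
        }
    };

Whether the current design achieves this in the trivial cases is exactly the profiling question John raises above.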
John's concept takes the first step toward establishing an architecture for extended numeric types.
It is reasonable to view this as "the first step" and leave the fulfillment of some of these other requirements for later. However, if there is no proof of concept for the various use cases, then you can't be sure the abstraction and points of customizations are correct.
True. However, you can't carry on forever, you have to ask for a review at some point. At present the focus is more on "breadth" than "depth".
I cannot argue your point beyond saying I'd hate to think that your library was accepted and then six months later you realized a fundamental oversight (and I don't assert there is one). OTOH, there is no requirement for backward compatibility in Boost, and if you did discover such an oversight, you could choose to break compatibility -- for a fledgling library -- for the sake of significant improvement and to avoid having to create BMP2.

_____
Rob Stewart
robert.stewart@sig.com
Software Engineer using std::disclaimer;
Dev Tools & Components
Susquehanna International Group, LLP
http://www.sig.com

There is a problem here - in that there is no "one true way" for a slightly larger integer type - it all depends on how much larger, and the particular size of the integers it encounters in practice.
What about int_fast32_t and friends as models? That is, might there be different types for different purposes that can be selected based upon how the main class template is parameterized? IOW, allow the user to indicate which backend (or intermediate bridge layer) to use based upon the user's own preference or performance analysis.
Nod.
For example:
* For int128, simplicity will probably always win out, making the "naive" implementation the best.
* By the time you get to, say, int1024, computational complexity wins out, so something like the fixed-size integers already provided are likely to be best (modulo the fact that more profiling and fine tuning is still required). That's why I was persuaded to switch to those prior to the review from a more traditional fixed int (like Phil's).
* To complicate matters more, for say int1024, the "naive" version wins if most of the values use all the bits, whereas the version that maintains a runtime length wins when most values are small.
Cannot those options be selected by the user?
Not yet - it gets kind of complicated real fast if there are too many options - but yes, I realise there's a lot of detailed analysis to do to figure out what can be improved where.
Having said all that, I think it is true that a weakness of the library is inherent overhead when dealing with "trivial" types that are extremely close to the metal to begin with. Probably that reflects the original focus of the library where this is a non-issue.
Surely templates can select an optimal backend and erase all overhead in using it. For example, if the user selects the naïve, but optimal, backend for a 128 bit integer type, must there necessarily be overhead that wouldn't exist in a customized 128 bit integer type? If your current design imposes such overhead, might there be a way to refactor the design such that one interface (the user-facing class template) can select the right IL to bridge that interface and the optimal backend?
I don't know - it's a question of getting stuck in, analysing specific use cases, and seeing why they do or don't perform as you expect. The problem is that if the operation being performed is sufficiently "trivial" (algorithmically speaking), then quite simple front-end changes can have a surprising (and quite capricious) effect on performance. So typically you improve one thing, and something else suffers an inexplicable slowdown :-( All quite frustrating, but like I said, there's still more to do for sure. Also, if folks have more real-world use cases I'd really like to see them - it helps a lot. I already have a few ideas I need to explore from looking at Phil's case.
John's concept takes the first step toward establishing an architecture for extended numeric types.
It is reasonable to view this as "the first step" and leave the fulfillment of some of these other requirements for later. However, if there is no proof of concept for the various use cases, then you can't be sure the abstraction and points of customizations are correct.
True. However, you can't carry on forever, you have to ask for a review at some point. At present the focus is more on "breadth" than "depth".
I cannot argue your point beyond saying I'd hate to think that your library was accepted and then six months later you realized a fundamental oversight (and I don't assert there is one). OTOH, there is no requirement for backward compatibility in Boost, and if you did discover such an oversight, you could choose to break compatibility -- for a fledgling library -- for the sake of significant improvement and to avoid having to create BMP2.
I think we'd all like to avoid capricious breaking changes in the future. However, if someone down the road wants to introduce a number kind we haven't even thought of yet, it's sort of inevitable that some changes (or at least extensions) may be required. It's the old chicken & egg issue, and I can't tell you at what point one hatches into the other ;-) Thanks, John.

I cannot argue your point beyond saying I'd hate to think that your library was accepted and then six months later you realized a fundamental oversight (and I don't assert there is one). OTOH, there is no requirement for backward compatibility in Boost, and if you did discover such an oversight, you could choose to break compatibility -- for a fledgling library -- for the sake of significant improvement and to avoid having to create BMP2.
Thank you for your wise advice. (Or maybe even three months later...)
I think we'd all like to avoid capricious breaking changes in the future. However, if someone down the road wants to introduce a number kind we haven't even thought of yet, it's sort of inevitable that some changes (or at least extensions) may be required. It's the old chicken & egg issue, and I can't tell you at what point one hatches into the other ;-)
Thanks, John.
I agree. In my opinion, we have a solid start. We will, however, definitely learn as the library gets more exposure via review and potential inclusion in Boost. We may well end up saying, yeah that was, like, obvious. We do, however, have to start sometime. There are many calculations that can be done with this thing right off-the-rack in its current state. Many developers can make use of the power of this library even if we must refactor parts of it later. Thanks again for your comments. Best regards, Chris.
participants (5)

- Christopher Kormanyos
- Eric Niebler
- John Maddock
- Phil Endecott
- Stewart, Robert