Herb Sutter's assignment operator can be optimized + proposal for a very small Boost utility library

Hi,

Do you know that Herb Sutter's well-known assignment operator does unnecessary copying in most cases? No? Then read on to learn how to eliminate it.

class Matrix
{
    //...
    Matrix& operator=(const Matrix& that)
    {
        Matrix temp(that);
        swap(temp);
        return *this;
    }
    //...
};

Consider the first example:

Matrix a, b;
//...
a = b;

Here Herb Sutter's operator= works just fine. But consider the second example:

Matrix func();
Matrix a;
//...
a = func();

The result of func() is copied into a temporary object, and then operator= copies it again. So assigning the result of a function creates one useless object. Eliminating a temporary int object costs nothing, but eliminating a temporary std::vector of 10000 elements costs a lot. But how can we tell whether the copying is necessary? Every function returns not an object of type T but a temporary object of type T. So let's code something:

template<typename T>
struct temporary : T
{
    temporary() {}
    temporary(const T & t) : T(t) {}
    // some operators and constructors to create the base object in-place not listed here
};

Now let's change our functions; consider operator+:

Matrix operator + (const Matrix & a, const Matrix & b)
{
    Matrix temp(a);
    temp += b;
    return temp;
}

To optimize it we do the following:

temporary<Matrix> operator + (const Matrix & a, const Matrix & b)
{
    Matrix temp(a);
    temp += b;
    return temp;
}

This version can hardly be optimized using NRVO (temp and the return type no longer match), so let's help the compiler by rewriting it as follows:

temporary<Matrix> operator + (const Matrix & a, const Matrix & b)
{
    temporary<Matrix> temp(a);
    temp += b;
    return temp;
}

So far, even though we return temporary<Matrix> instead of Matrix, no copy has been eliminated. To enable the optimization we need to change class Matrix by adding one more assignment operator:

Matrix & operator = (const temporary<Matrix> & that)
{
    swap(const_cast<Matrix &>(static_cast<const Matrix &>(that)));
    return *this;
}

or, using "deprecated" casts:

Matrix & operator = (const temporary<Matrix> & that)
{
    swap((Matrix &) that);
    return *this;
}

Now we have eliminated the unnecessary copying. Is this worth being called Pavel Chikulaev's temporary assignment operator? Is this worth being included in Boost? By including temporary.hpp, adding one operator to the class, and changing each function's return type to an explicitly marked temporary, you gain a lot. My temporary assignment operator is as safe as Herb Sutter's. Returning temporary<T> may create some obstacles for template programming, so I'm going to add temporary_traits: is_temporary, add_temporary, remove_temporary. Some changes to BOOST_TYPEOF will also be needed. So, what do you think?

Copyright 2005 Pavel Chikulaev. Use, modification, and distribution are subject to the Boost Software License, Version 1.0
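A minimal compilable sketch of the proposal above; the toy Matrix, the copy counter, and the 10000-element payload are illustrative additions of this write-up, not part of the proposal itself:

#include <cstddef>
#include <iostream>
#include <vector>

template <typename T> struct temporary;   // forward declaration

static int copies = 0;                    // counts Matrix copy constructions

class Matrix
{
    std::vector<double> data_;
public:
    explicit Matrix(std::size_t n = 0) : data_(n) {}
    Matrix(const Matrix& other) : data_(other.data_) { ++copies; }

    void swap(Matrix& other) { data_.swap(other.data_); }

    // Herb Sutter's copy-and-swap assignment: always makes one copy.
    Matrix& operator=(const Matrix& that) { Matrix temp(that); swap(temp); return *this; }

    // The additional operator from the post: a marked temporary may be plundered.
    Matrix& operator=(const temporary<Matrix>& that);

    Matrix& operator+=(const Matrix&) { return *this; }   // element-wise add omitted
};

template <typename T>
struct temporary : T
{
    temporary() {}
    temporary(const T& t) : T(t) {}
};

Matrix& Matrix::operator=(const temporary<Matrix>& that)
{
    swap(const_cast<Matrix&>(static_cast<const Matrix&>(that)));
    return *this;
}

temporary<Matrix> operator+(const Matrix& a, const Matrix& b)
{
    temporary<Matrix> temp(a);
    temp += b;
    return temp;
}

int main()
{
    Matrix a(10000), b(10000), c;
    c = a + b;   // operator+ copies a once; operator=(temporary<>) only swaps
    std::cout << "copies made: " << copies << '\n';
}

Compared with the plain copy-and-swap operator=, the temporary<> overload replaces the copy inside the assignment with a swap; the only copy left is the one operator+ itself makes.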

Pavel Chikulaev wrote:
Do you know that Herb Sutter's well-known assignment operator does unnecessary copying in most cases?
I don't know that it's "Herb's" assignment operator. He certainly popularized the idiom with his book, but it may have existed before that.
[...] Matrix func(); Matrix a; //... a = func();
The result of func() is copied into a temporary object, and then operator= copies it again. So assigning the result of a function creates one useless object. [...]
I would like to see compiler tests that show that neither RVO nor NRVO work in these cases first. Dave

"David B. Held" <dheld@codelogicconsulting.com> writes:
Pavel Chikulaev wrote:
Do you know that Herb Sutter's well-known assignment operator does unnecessary copying in most cases?
I don't know that it's "Herb's" assignment operator. He certainly popularized the idiom with his book, but it may have existed before that.
Umm, yeah, it did. -- Dave Abrahams Boost Consulting www.boost-consulting.com

Does boost provide a big (unbounded) integer type? I was looking for one and I can't find it. -Jason

From: "Jason Hise" <chaos@ezequal.com>
Does boost provide a big (unbounded) integer type? I was looking for one and I can't find it.
-Jason
I have been working on a big integer library, and it is almost ready for review. However, at the moment I haven't got much time to work on the library, because I'm graduating in three months. After those three months, I will try to get the library really ready for review. In the meantime, you can try out the library which is available from http://groups.yahoo.com/group/boost/files/big_integer/. I use it myself in my crypto library, and it works quite well IMHO, although every now and then a bug shows up. best regards, Richard

Richard Peters wrote:
I have been working on a big integer library, and it is almost ready for review. However, at the moment I haven't got much time to work on the library, because I'm graduating in three months. After those three months, I will try to get the library really ready for review. In the meantime, you can try out the library which is available from http://groups.yahoo.com/group/boost/files/big_integer/. I use it myself in my crypto library, and it works quite well IMHO, although every now and then a bug shows up.
Excellent, I'll download it and try it out. Thanks! -Jason

Richard Peters wrote:
From: "Jason Hise" <chaos@ezequal.com>
Does boost provide a big (unbounded) integer type? I was looking for one and I can't find it.
-Jason
I have been working on a big integer library, and it is almost ready for review.
Is this the expression-template based library that someone mentioned last year? Jonathan

"Jonathan Turkanis" <technews@kangaroologic.com> wrote in message news:d0agkj$fr5$1@sea.gmane.org...
Richard Peters wrote:
I have been working on a big integer library, and it is almost ready for review.
Is this the expression-template based library that someone mentioned last year?
Jonathan
Yes, this is the one. It needs some really good examination, though, because quite a few of the optimizations based on the expression templates that I had imagined would work nicely either didn't improve speed or didn't work at all. Richard

On Fri, Mar 04, 2005 at 08:53:15AM +0100, Richard Peters wrote:
I have been working on a big integer library, and it is almost ready for review. However, at the moment I haven't got much time to work on the library, because I'm graduating in three months. After those three months, I will try to get the library really ready for review. In the meantime, you can try out the library which is available from http://groups.yahoo.com/group/boost/files/big_integer/. I use it myself in my crypto library, and it works quite well IMHO, although every now and then a bug shows up.
I only superficially looked through the documentation and the headers of your library. And as much as I would like to, I cannot invest much time in the evaluation of your work. Nevertheless, here are some random notes in no particular order:

* The documentation is obviously unfinished. What I'd like to see besides the missing parts of the reference are some statements that clarify
  - what sets your library apart from other bigint libraries (The maintained ones that I am aware of are GnuMP / cln, NTL, Piologie, MIRACL. I am sure there are more.)
  - how your library relates to the proposal for a bigint type for the C++ standard library (papers N1692 and N1744).

* Specification of div_and_mod(): The formal input parameters are named x and y, but the effect section refers to e and f.

* You always refer to the parameter types as "IntegerType", but some parameters need obviously to be references. What can IntegerType actually be? Only big_integer and types resulting from the expression template machinery built around big_integer? Or also int, long etc.? If the latter, what happens if there is an overflow?

* I am glad you defined conversions to bit-strings in little / big endian format. That's something I missed from time to time in other libraries.

* Did I miss the functions that convert big_integer to long, double etc.? And the other way around, how can I convert a double to a big_integer? (Such conversion functions are essential in my area of interest, lattice basis reduction.)

* The function names group_divide and group_inverse are too generic, IMO. They don't convey that you are referring to the prime residue group mod m. As soon as you start implementing, e.g., EC arithmetic or the arithmetic in GF(p^n), the names become confusing because there are several groups involved. I'd prefer something along the lines of divide_mod, inverse_mod.

* There are some functions in NTL I found convenient and that I didn't see in your library: E.g., bit-wise operators, Hamming-weight, CRT, logarithm with result type double. Do you plan to support these? (There are even more functions in NTL, like Jacobi symbol, primality tests etc. that are required in any library for computational number theory. Do you have a clear policy what belongs in your big_integer library and what you consider to be too specialized?)

Again, this is after a superficial look only, so it's likely I missed something.

Regards
Christoph
-- http://www.informatik.tu-darmstadt.de/TI/Mitarbeiter/cludwig.html

From: "Christoph Ludwig" <cludwig@cdc.informatik.tu-darmstadt.de>
I only superficially looked through the documentation and the headers of your library. And as much as I would like to, I cannot invest much time in the evaluation of your work. Nevertheless, here are some random notes in no particular order:
Thank you for going through my library :)
* The documentation is obviously unfinished. What I'd like to see besides the missing parts of the reference are some statements that clarify - what sets your library apart from other bigint libraries (The maintained ones that I am aware of are GnuMP / cln, NTL, Piologie, MIRACL. I am sure there are more.) - how your library relates to the proposal for a bigint type for the C++ standard library (papers N1692 and N1744).
Yes, the documentation is not finished. I would like to use boostbook for documentation, but I haven't been able to set it up right. This library is written purely in portable C++, and is therefore platform independent. An assembly implementation is probably a lot faster, but my library has the option to plug in other libraries. At the moment, cln and GnuMP can be used as back-ends. My library will be released under the Boost license. The bigint type proposed for the C++ standard library does not permit the use of expression templates. Using expression templates has a positive and a negative aspect: A significant speedup can be gained using expression templates. The downside is that templates cannot deduce the correct type of expressions, for example: template<class T> void f(T a, T b) will not work when invoked as f(x + y, x - y), because x + y returns an object of a type different from x - y. Because the C++ standard library proposal specifies the result type of the operators to be of type const integer, a library implementing that proposal cannot use expression templates.
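A small self-contained illustration of the deduction problem just described; big_integer, add_expr, and sub_expr below are hypothetical stand-ins for this illustration, not the library's actual types:

#include <iostream>

struct big_integer { int v; explicit big_integer(int x = 0) : v(x) {} };

// Each operator returns its own expression type, evaluated lazily on conversion.
struct add_expr {
    const big_integer &l, &r;
    add_expr(const big_integer& a, const big_integer& b) : l(a), r(b) {}
    operator big_integer() const { return big_integer(l.v + r.v); }
};
struct sub_expr {
    const big_integer &l, &r;
    sub_expr(const big_integer& a, const big_integer& b) : l(a), r(b) {}
    operator big_integer() const { return big_integer(l.v - r.v); }
};

add_expr operator+(const big_integer& a, const big_integer& b) { return add_expr(a, b); }
sub_expr operator-(const big_integer& a, const big_integer& b) { return sub_expr(a, b); }

template <class T>
void f(T a, T b) { std::cout << big_integer(a).v << ' ' << big_integer(b).v << '\n'; }

int main()
{
    big_integer x(10), y(3);
    // f(x + y, x - y);           // error: T cannot be both add_expr and sub_expr
    f<big_integer>(x + y, x - y); // works once both expressions are forced back to big_integer
}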
* Specification of div_and_mod(): The formal input parameters are named x and y, but the effect section refers to e and f.
* You always refer to the parameter types as "IntegerType", but some parameters need obviously to be references. What can IntegerType actually be? Only big_integer and types resulting from the expression template machinery built around big_integer? Or also int, long etc.? If the latter, what happens if there is an overflow?
The idea is to allow big_integer, types resulting from expression templates and int, long, etc. There's still some work in specifying all of this.
* I am glad you defined conversions to bit-strings in little / big endian format. That's something I missed from time to time in other libraries.
I use these functions very much in my cryptography-related software. I wouldn't know what to do without them :)
* Did I miss the functions that convert big_integer to long, double etc.? And the other way around, how can I convert a double to a big_integer? (Such conversion functions are essential in my area of interest, lattice basis reduction.)
I don't have functions converting to/from floating point types. Actually, I have never had the need for floating point values, and I wouldn't know how to implement them. If you'd like, I can contact you when I resume work on this library, so that we can work something out.
* The function names group_divide and group_inverse are too generic, IMO. They don't convey that you are referring to the prime residue group mod m. As soon as you start implementing, e.g., EC arithmetic or the arithmetic in GF(p^n), the names become confusing because there are several groups involved. I'd prefer something along the lines of divide_mod, inverse_mod.
Good point.
* There are some functions in NTL I found convenient and that I didn't see in your library: E.g., bit-wise operators, Hamming-weight, CRT, logarithm with result type double. Do you plan to support these? (There are even more functions in NTL, like Jacobi symbol, primality tests etc. that are required in any library for computational number theory. Do you have a clear policy what belongs in your big_integer library and what you consider to be too specialized?)
Well... I try to aim at core operations only. At the moment, I support jacobi symbol and primality testing, but I will almost certainly remove them. Bit-wise operators should probably be available. The other operations that you mentioned are without doubt useful, but I think they belong in a separate library.
Again, this is after a superficial look only, so it's likely I missed something.
Regards
Christoph
best regards, Richard Peters

From: "Richard Peters" <r.a.peters@student.tue.nl>
The bigint type proposed for the C++ standard library does not permit the use of expression templates. Using expression templates has a positive and a negative aspect: A significant speedup can be gained using expression templates. The downside is that templates cannot deduce the correct type of expressions, for example: template<class T> void f(T a, T b) will not work when invoked as f(x + y, x - y), because x + y returns an object of type different from x - y. Because the C++ standard library proposal specifies the result type of the operators to be of type const integer, a library implementing that proposal cannot use expression templates.
PMFJI, but I wonder whether this would work:

template <typename T, typename U>
void f(T a, U b)
{
    typedef typename magical_incantation<T, U>::type V;
    f_impl<V>(a, b);
}

a and b can have different types, but then magical_incantation computes a common type to which both can be converted and the implementation function can be invoked with that type. I may be off base, and you may have already tried this approach, but I just wanted to mention it if it could help. -- Rob Stewart stewart@sig.com Software Engineer http://www.sig.com Susquehanna International Group, LLP using std::disclaimer;
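For what it's worth, here is a compilable sketch of that suggestion; big_integer and add_expr are toy stand-ins, and the magical_incantation trait simply collapses every argument type to big_integer:

#include <iostream>

struct big_integer { int v; explicit big_integer(int x = 0) : v(x) {} };

// Toy expression type standing in for whatever the operators really return.
struct add_expr {
    int value;
    operator big_integer() const { return big_integer(value); }
};

// The "magical incantation": map any pair of argument types to one common type.
template <typename T, typename U>
struct magical_incantation { typedef big_integer type; };

template <typename V>
void f_impl(const V& a, const V& b) { std::cout << a.v << ' ' << b.v << '\n'; }

template <typename T, typename U>
void f(const T& a, const U& b)
{
    typedef typename magical_incantation<T, U>::type V;
    f_impl<V>(V(a), V(b));   // convert both arguments to the common type, then dispatch
}

int main()
{
    big_integer x(4);
    add_expr e = { 7 };
    f(x, e);                 // different argument types, one shared implementation
}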

From: "Richard Peters" <r.a.peters@student.tue.nl>
The bigint type proposed for the C++ standard library does not permit
use of expression templates. Using expression templates has a positive and a negative aspect: A significant speedup can be gained using expression templates. The downside is that templates cannot deduce the correct type of expressions, for example: template<class T> void f(T a, T b) will not work when invoked as f(x + y, x - y), because x + y returns an object of type different from x - y. Because the C++ standard library proposal specifies the result type of
operators to be of type const integer, a library implementing that
From: "Rob Stewart" <stewart@sig.com> the the proposal
cannot use expression templates.
PMFJI, but I wonder whether this would work:
PMFJI... I have no clue what it means, sorry.
template <typename T, typename U> void f(T a, U b) { typedef typename magical_incantation<T, U>::type V; f_impl<V>(a, b); }
a and b can have different types, but then magical_incantation computes a common type to which both can be converted and the implementation function can be invoked with that type.
I may be off base, and you may have already tried this approach, but I just wanted to mention it if it could help.
Oh, there are a few ways in which you can perform computations with the expression templates, but that was not what I meant. My point was: If you specify the result of operators + and - to be big_integer, then expression templates cannot be made to work, because then those operators return types other than big_integer. User code may depend on the specification that those operators do return big_integers. Therefore, any implementation satisfying the requirements of the C++ standard library proposal will miss the possible speed increase gained by the use of expression templates. best regards, Richard Peters

From: "Richard Peters" <r.a.peters@student.tue.nl>
From: "Rob Stewart" <stewart@sig.com>
PMFJI, but I wonder whether this would work:
PMFJI... I have no clue what it means, sorry.
Pardon Me For Jumping In
template <typename T, typename U> void f(T a, U b) { typedef typename magical_incantation<T, U>::type V; f_impl<V>(a, b); }
a and b can have different types, but then magical_incantation computes a common type to which both can be converted and the implementation function can be invoked with that type.
I may be off base, and you may have already tried this approach, but I just wanted to mention it if it could help.
Oh, there are a few ways in which you can perform computations with the expression templates, but that was not what I meant. My point was: If you specify the result of operators + and - to be big_integer, then expression templates cannot be made to work, because then those operators return types other than big_integer. User code may depend on the specification that those operators do return big_integers. Therefore, any implementation satisfying the requirements of the C++ standard library proposal will miss the possible speed increase gained by the use of expression templates.
Oh, sorry. I missed your point. -- Rob Stewart stewart@sig.com Software Engineer http://www.sig.com Susquehanna International Group, LLP using std::disclaimer;

On Sat, Mar 05, 2005 at 07:34:22PM +0100, Richard Peters wrote:
Oh, there are a few ways in which you can perform computations with the expression templates, but that was not what I meant. My point was: If you specify the result of operators + and - to be big_integer, then expression templates cannot be made to work, because then those operators return types other than big_integer. User code may depend on the specification that those operators do return big_integers. Therefore, any implementation satisfying the requirements of the C++ standard library proposal will miss the possible speed increase gained by the use of expression templates.
Can you point me to any data about the speedup of bigint operations due to ET? How complex do the expressions need to be, and what is the bit-length threshold at which programs profit from ET? (I am aware that ET can have a huge effect on the speed of matrix operations, but there you typically need to allocate / copy / deallocate much more data if you have temporary objects.) Does anyone know what speedup one can expect if the move-semantic proposal is applied to bigint types? (Say, for simple expressions like x = y + z and x = y * z.) Regards Christoph -- http://www.informatik.tu-darmstadt.de/TI/Mitarbeiter/cludwig.html LiDIA: http://www.informatik.tu-darmstadt.de/TI/LiDIA/Welcome.html

On Saturday, 05 March 2005 at 21:13 +0100, Christoph Ludwig wrote:
Can you point me to any data about the speedup of bigint operations due to ET? How complex do the expressions need to be, and what is the bit-length threshold at which programs profit from ET? (I am aware that ET can have a huge effect on the speed of matrix operations, but there you typically need to allocate / copy / deallocate much more data if you have temporary objects.)
Does anyone know what speedup one can expect if the move-semantic proposal is applied to bigint types? (Say, for simple expressions like x = y + z and x = y * z.)
I can't tell you what it would be for this particular implementation, but I can give you some results I got when using GMP some time ago. I hope everybody will agree that GMP is a high quality implementation of big integers, and hence the results are relevant. The tests compared the cost of a superfluous temporary when using basic arithmetic operations on 256-bit-wide integers. So it applies to your question, since we can suppose that a correct ET implementation would not use a temporary for a single operation. When using a temporary, a multiplication was executed around two times slower, and an addition was executed around six times slower. Since GMP is heavily optimized, the cost of a temporary (allocating memory, copying data, freeing memory) for each operation is no longer negligible. Quite the contrary in fact, since it accounts for more than 80% of the cost of an addition. And the cost of a temporary becomes even worse when the additional memory requirements kick you out of the L1 cache, especially since a big expression with temporaries can quickly require twice the memory needed for the same expression without temporaries.

Best regards,
Guillaume

The body of the test for "c = a * b;" was

mpz_t d;
mpz_init(d);
mpz_mul(d, a, b);
mpz_set(c, d);
mpz_clear(d);

when a temporary d was used to store the result of the multiplication. Without the temporary, it was simply

mpz_mul(c, a, b);

On Sun, 06 Mar 2005 01:33:50 +0100 Guillaume Melquiond <guillaume.melquiond@ens-lyon.fr> wrote:
I can't tell you what it would be for this particular implementation, but I can give you some results I got when using GMP some time ago. I hope everybody will agree that GMP is a high quality implementation of big integers, and hence the results are relevant.
Hmmm. Did you try the C++ wrappers, which are already implemented with expression templates?

On Saturday, 05 March 2005 at 22:24 -0500, Jody Hagins wrote:
On Sun, 06 Mar 2005 01:33:50 +0100 Guillaume Melquiond <guillaume.melquiond@ens-lyon.fr> wrote:
I can't tell you what it would be for this particular implementation, but I can give you some results I got when using GMP some time ago. I hope everybody will agree that GMP is a high quality implementation of big integers, and hence the results are relevant.
Hmmm. Did you try the C++ wrappers, which are already implemented with expression templates?
The tests were purely C tests, so they weren't using the C++ wrappers. But I have also used the GMP C++ wrappers on other occasions, and I know they are able to remove the last temporary before assignment. So the results in C++ would have been similar for single arithmetic operations: the ET addition would have been 6 times faster than the dumb addition, and the ET multiplication would have been 2 times faster than the dumb multiplication. The GMP C++ wrappers do not reuse the temporaries though, so more complex expressions would benefit from hand-written C. For example, in the expression "a*b + c*d + e*f", the temporary needed to compute "c*d" won't be reused later to compute "e*f". I'm not even sure such optimizations can be done strictly using C++ meta-programming, though. It seems to me some kind of language support would be required. But maybe I'm overly pessimistic and a complex ET implementation would be able to reuse temporaries. Best regards, Guillaume
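For reference, here is the single-operation case written against the real gmpxx C++ wrappers (mpz_class; link with -lgmpxx -lgmp). The first assignment lets the expression-template operators compute the product directly into c, which is the behaviour described above; the second introduces the explicit temporary by hand:

#include <gmpxx.h>
#include <iostream>

int main()
{
    mpz_class a("123456789012345678901234567890");
    mpz_class b("987654321098765432109876543210");
    mpz_class c;

    c = a * b;            // ET form: one multiplication straight into c, no extra mpz_class

    mpz_class d(a * b);   // explicit temporary...
    c = d;                // ...then a copy, mirroring the mpz_init/mpz_mul/mpz_set/mpz_clear test above

    std::cout << c << '\n';
    return 0;
}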

Somewhere in the E.U., le 07/03/2005 Bonjour In article <05b401c52187$b668a4e0$7a469b83@campus.tue.nl>, "Richard Peters" <r.a.peters@student.tue.nl> wrote:
Well... I try to aim at core operations only. At the moment, I support jacobi symbol and primality testing, but I will almost certainly remove them.
Huh? Why? Hubert Holin

"Hubert Holin" <Hubert.Holin@meteo.fr> wrote in message news:Hubert.Holin-D2560E.18082507032005@sea.gmane.org...
Well... I try to aim at core operations only. At the moment, I support jacobi symbol and primality testing, but I will almost certainly remove them.
Huh? Why?
Hubert Holin
Because I think they do not belong in a big_integer library. They belong in a number theory library. If functions like jacobi symbol make it into a big_integer library, then so should the legendre symbol, lucas function, rabin-miller, lucas probable prime, etc, etc. best regards, Richard Peters

"Richard Peters" <r.a.peters@student.tue.nl> writes:
"Hubert Holin" <Hubert.Holin@meteo.fr> wrote in message news:Hubert.Holin-D2560E.18082507032005@sea.gmane.org...
Well... I try to aim at core operations only. At the moment, I support jacobi symbol and primality testing, but I will almost certainly remove them.
Huh? Why?
Hubert Holin
Because I think they do not belong in a big_integer library. They belong in a number theory library. If functions like jacobi symbol make it into a big_integer library, then so should the legendre symbol, lucas function, rabin-miller, lucas probable prime, etc, etc.
You might want to peruse the discussion between me and Peter Dimov in boost-users: http://news.gmane.org/find-root.php?message_id=%3cd053gs%249km%241%40sea.gma... Whether these things "belong to a library" at all is open for debate. -- Dave Abrahams Boost Consulting www.boost-consulting.com

On Tue, Mar 08, 2005 at 10:46:21AM -0500, David Abrahams wrote:
"Richard Peters" <r.a.peters@student.tue.nl> writes:
"Hubert Holin" <Hubert.Holin@meteo.fr> wrote in message news:Hubert.Holin-D2560E.18082507032005@sea.gmane.org...
Well... I try to aim at core operations only. At the moment, I support jacobi symbol and primality testing, but I will almost certainly remove them.
Huh? Why?
Hubert Holin
Because I think they do not belong in a big_integer library. They belong in a number theory library. If functions like jacobi symbol make it into a big_integer library, then so should the legendre symbol, lucas function, rabin-miller, lucas probable prime, etc, etc.
You might want to peruse the discussion between me and Peter Dimov in boost-users:
http://news.gmane.org/find-root.php?message_id=%3cd053gs%249km%241%40sea.gma...
Whether these things "belong to a library" at all is open for debate.
I do not get your point. In the thread you referred to, the only discussion between you and Peter Dimov that I see is about points of customization, ADL, etc. What's the connection with the Jacobi symbol and similar number-theoretic functions? I do not recall anyone proposing an interface to such functions that you are supposed to customize for your_favorite_integer_type. Confused... Christoph -- http://www.informatik.tu-darmstadt.de/TI/Mitarbeiter/cludwig.html LiDIA: http://www.informatik.tu-darmstadt.de/TI/LiDIA/Welcome.html

Christoph Ludwig <cludwig@cdc.informatik.tu-darmstadt.de> writes:
On Tue, Mar 08, 2005 at 10:46:21AM -0500, David Abrahams wrote:
"Richard Peters" <r.a.peters@student.tue.nl> writes:
"Hubert Holin" <Hubert.Holin@meteo.fr> wrote in message news:Hubert.Holin-D2560E.18082507032005@sea.gmane.org...
Well... I try to aim at core operations only. At the moment, I support jacobi symbol and primality testing, but I will almost certainly remove them.
Huh? Why?
Hubert Holin
Because I think they do not belong in a big_integer library. They belong in a number theory library. If functions like jacobi symbol make it into a big_integer library, then so should the legendre symbol, lucas function, rabin-miller, lucas probable prime, etc, etc.
You might want to peruse the discussion between me and Peter Dimov in boost-users:
http://news.gmane.org/find-root.php?message_id=%3cd053gs%249km%241%40sea.gma...
Whether these things "belong to a library" at all is open for debate.
I do not get your point. In the thread you referred to, the only discussion between you and Peter Dimov that I see is about points of customization, ADL, etc. What's the connection with the Jacobi symbol and similar number-theoretic functions? I do not recall anyone proposing an interface to such functions that you are supposed to customize for your_favorite_integer_type.
Confused...
Look for Peter's arguments about which library "owns" a customization point. These functions will apply to a variety of numeric types, and therefore will be used as customization points. Someone will define a concept that depends on them. -- Dave Abrahams Boost Consulting www.boost-consulting.com

Somewhere in the E.U., le 09/03/2005 Bonjour In article <d0kavn$klk$1@sea.gmane.org>, "Richard Peters" <r.a.peters@student.tue.nl> wrote:
"Hubert Holin" <Hubert.Holin@meteo.fr> wrote in message news:Hubert.Holin-D2560E.18082507032005@sea.gmane.org...
Well... I try to aim at core operations only. At the moment, I support jacobi symbol and primality testing, but I will almost certainly remove them.
Huh? Why?
Hubert Holin
Because I think they do not belong in a big_integer library. They belong in a number theory library. If functions like jacobi symbol make it into a big_integer library, then so should the legendre symbol, lucas function, rabin-miller, lucas probable prime, etc, etc.
best regards,
Richard Peters
I guess I should have added a smiley... Still, I find it a waste to take away something which both already works and is eminently useful. Full-blown libraries are marvelous, of course, but even small bits of functionality are welcome. May I suggest that you either present two libraries (lots of work), or that you just present the Big Int library, with an additional "illustrative" package of functionality which may form the nucleus of a future library (to be reviewed separately, perhaps). Merci Hubert Holin

From: "Hubert Holin" <Hubert.Holin@meteo.fr>
Well... I try to aim at core operations only. At the moment, I support jacobi symbol and primality testing, but I will almost certainly remove them.
Huh? Why?
Hubert Holin
Because I think they do not belong in a big_integer library. They belong in a number theory library. If functions like jacobi symbol make it into a big_integer library, then so should the legendre symbol, lucas function, rabin-miller, lucas probable prime, etc, etc.
best regards,
Richard Peters
I guess I should have added a smiley... Still, I find it a waste to take away something which both already works and is eminently useful. Full-blown libraries are marvelous, of course, but even small bits of functionality are welcome. May I suggest that you either present two libraries (lots of work), or that you just present the Big Int library, with an additional "illustrative" package of functionality which may form the nucleus of a future library (to be reviewed separately, perhaps).
I'll take it into consideration, and I'll just wait to see what the general response here at boost is when I submit it for review. best regards, Richard Peters

On Sat, Mar 05, 2005 at 02:31:59PM +0100, Richard Peters wrote:
From: "Christoph Ludwig" <cludwig@cdc.informatik.tu-darmstadt.de>
* Did I miss the functions that convert big_integer to long, double etc.? And the other way around, how can I convert a double to a big_integer? (Such conversion functions are essential in my area of interest, lattice basis reduction.)
I don't have functions converting to/from floating point types. Actually, I have never had the need for floating point values,
In lattice reduction, you work on Gram-Schmidt decompositions of (big_)integer matrices where the rank is several hundred. The Gram-Schmidt coefficients are rational, but in high dimensions it is way too expensive to keep them in exact arithmetic. Therefore, the Schnorr-Euchner variant of the Lenstra-Lenstra-Lovasz algorithm stores the Gram-Schmidt decomposition in (preferably hardware) floating point arithmetic. But eventually all transformations applied to the floating point approximations need to be applied to the integer matrix as well. That is why you need integer / floating point conversions for this type of application.
and I wouldn't know how to implement them. If you'd like, I can contact you when I resume work on this library, so that we can work something out.
Right now I use NTL <url:http://www.shoup.net/ntl/index.html> for my programs. I didn't check how it performs the bigint / double conversions but it also supports gmp as core library. So it should be possible for your library as well. Regards Christoph -- http://www.informatik.tu-darmstadt.de/TI/Mitarbeiter/cludwig.html LiDIA: http://www.informatik.tu-darmstadt.de/TI/LiDIA/Welcome.html

"Pavel Chikulaev" <pavel_chikulaev@yahoo.com> writes:
Hi,
Do you know that Herb Sutter's well-known assignment operator does unnecessary copying in most cases?
That issue was discussed exhaustively in this thread: http://aspn.activestate.com/ASPN/Mail/Message/boost/1394489 I suggest looking at posts by Andrei Alexandrescu and Howard Hinnant. -- Dave Abrahams Boost Consulting www.boost-consulting.com

"David Abrahams" <dave@boost-consulting.com> wrote in message news:ur7iw2fe0.fsf@boost-consulting.com...
"Pavel Chikulaev" <pavel_chikulaev@yahoo.com> writes:
Hi,
Do you know that Herb Sutter's well-known assignment operator does unnecessary copying in most cases?
That issue was discussed exhaustively in this thread: http://aspn.activestate.com/ASPN/Mail/Message/boost/1394489
I suggest looking at posts by Andrei Alexandrescu and Howard Hinnant.
Ok, so I'm not the first. But now, two years later, who uses this optimization? Almost no one. Nobody knows about it. Couldn't Boost popularize this old idea, for example by adding it to the Boost utility library? Or is C++0x going to have this optimization built in?

Pavel Chikulaev writes:
Do you know that Herb Sutter's well-known assignment operator does unnecessary copying in most cases? No? Then read on to learn how to eliminate it.
[...]
But how can we tell whether the copying is necessary? Every function returns not an object of type T but a temporary object of type T. So let's code something:
template<typename T> struct temporary : T { temporary() {} temporary(const T & t) : T(t) {}
//some operators and constructors to create base object in-place not listed here };
Now let's change our functions; consider operator+:
Matrix operator + (const Matrix & a, const Matrix & b) { Matrix temp(a); temp += b; return temp; }
To optimize it we do the following:
temporary<Matrix> operator + (const Matrix & a, const Matrix & b) { Matrix temp(a); temp += b; return temp; }
[...] That's called "move semantics", a library-based approach to which was thought of as far back as 2001 -- http://tinyurl.com/4nzcq (http://groups-beta.google.com/group/comp.lang.c++.moderated/browse_thread/th...). -- Aleksey Gurtovoy MetaCommunications Engineering

Aleksey Gurtovoy <agurtovoy@meta-comm.com> writes:
That's called "move semantics", a library-based approach to which was thought of as far back as 2001 -- http://tinyurl.com/4nzcq (http://groups-beta.google.com/group/comp.lang.c++.moderated/browse_thread/th...).
And I believe the most advanced work on library-based move semantics to date is in the sandbox http://cvs.sourceforge.net/viewcvs.py/boost-sandbox/boost-sandbox/boost/move... http://cvs.sourceforge.net/viewcvs.py/boost-sandbox/boost-sandbox/libs/move/... And look for N1770 and N1771 in the upcoming committee mailing (due out any day now) for a complete language solution to move semantics and a description of associated standard library changes. -- Dave Abrahams Boost Consulting www.boost-consulting.com
participants (12)
- Aleksey Gurtovoy
- Christoph Ludwig
- David Abrahams
- David B. Held
- Guillaume Melquiond
- Hubert Holin
- Jason Hise
- Jody Hagins
- Jonathan Turkanis
- Pavel Chikulaev
- Richard Peters
- Rob Stewart