
Thank you very much for the suggestions on how to reduce the need for recompilation. I will definitely look into that. We are currently using ccache and precompiled headers to avoid unnecessary recompilation.
From what I understand, there is no other way to improve the time of a first compilation other than changing the compiler? GCC really has trouble instantiating all the templates behind those 98000 symbols in the object file. Unfortunately the MS VC compiler is not an option for us, since most of our developers work on Linux.
It is also a problem for us that when the compilation takes 2 GB of RAM, people are unlikely to be able to compile our tool on an older machine or laptop, and we have to support those environments as well (but we could of course make the serialization optional, so it is not a big problem).
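For illustration, a minimal sketch of making the serialization support compile-time optional. The macro name ROSE_ENABLE_SERIALIZATION and the tiny Ast stand-in type are made-up names for the example, not actual ROSE identifiers:

    // Compile-time switch: the boost::serialization code is only compiled
    // when ROSE_ENABLE_SERIALIZATION is defined, so low-memory machines can
    // still build the tool without it.
    #include <fstream>
    #include <string>

    struct Ast {                       // stand-in for the real AST root
        int nodeCount;

    #ifdef ROSE_ENABLE_SERIALIZATION
        template <class Archive>
        void serialize(Archive& ar, const unsigned int /*version*/)
        {
            ar & nodeCount;
        }
    #endif
    };

    #ifdef ROSE_ENABLE_SERIALIZATION
    #include <boost/archive/text_oarchive.hpp>

    void saveAst(const Ast& ast, const std::string& filename)
    {
        std::ofstream os(filename.c_str());
        boost::archive::text_oarchive ar(os);
        ar << ast;                     // only compiled when the flag is set
    }
    #else
    void saveAst(const Ast&, const std::string&)
    {
        // Serialization was disabled at configure time; nothing to do.
    }
    #endif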
thanks,
Andreas
On Wed, Jul 9, 2008 at 10:48 PM, Robert Ramey wrote:
a) Try building for release. This should drop the binary down to a reasonable size. If it doesn't, that would be interesting to know.
b) Don't use inline code for serialization definitions.
c) Separate the serialization code into separate modules so they only need to be compiled when the class definition changes (see the first sketch below).
d) Consider going as far as one module per class.
e) Consider making a library of all the modules containing your serialization code. In this way, even if you make a new application, the serialization code won't have to be recompiled.
f) Consider using a polymorphic archive (in a library, as above). In this way, one compilation will serve for ALL archive classes (see the second sketch below).
g) Review demo_pimpl.cpp for an example of how to do this.
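To make (b) through (e) concrete, here is a minimal sketch. The class and file names (AstNode, ast_node.hpp, ast_node_serialize.cpp) are illustrative, not ROSE's real ones: the serialize() definition stays out of the header and is explicitly instantiated only for the archive types actually used.

    // ast_node.hpp -- the header only declares the serialize template, so
    // no Boost headers leak into client translation units.
    #ifndef AST_NODE_HPP
    #define AST_NODE_HPP
    #include <string>

    class AstNode {
    public:
        AstNode(int id = 0, const std::string& name = "")
            : id_(id), name_(name) {}

        // Declared here, defined (non-inline) in ast_node_serialize.cpp.
        template <class Archive>
        void serialize(Archive& ar, const unsigned int version);

    private:
        int id_;
        std::string name_;
    };
    #endif

    // ast_node_serialize.cpp -- one serialization module per class (c/d).
    // Collect all such modules into a library (e) so new applications
    // simply link against it instead of recompiling the templates.
    #include "ast_node.hpp"
    #include <boost/serialization/string.hpp>
    #include <boost/archive/text_iarchive.hpp>
    #include <boost/archive/text_oarchive.hpp>

    template <class Archive>
    void AstNode::serialize(Archive& ar, const unsigned int /*version*/)
    {
        ar & id_;
        ar & name_;
    }

    // Explicit instantiations for the archives actually used; client code
    // never sees the template body, so it is never re-instantiated there.
    template void AstNode::serialize<boost::archive::text_iarchive>(
        boost::archive::text_iarchive&, const unsigned int);
    template void AstNode::serialize<boost::archive::text_oarchive>(
        boost::archive::text_oarchive&, const unsigned int);

Only this one .cpp is recompiled when ast_node.hpp changes; everything else just relinks against the library.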
The result of the above will be that only those modules whose headers have changed will need to be recompiled.
That is, the compilations will take just as long, but should be necessary far less frequently.
This approach is useful for all large applications.
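And a rough illustration of (f), again with made-up names: instead of instantiating the serialize template for every concrete archive, instantiate it once for the two polymorphic interfaces and let callers hand in whatever concrete polymorphic archive they like.

    // ast_node_serialize.cpp (polymorphic variant) -- compiled once into
    // the serialization library; works for every archive class because
    // only the polymorphic interface is visible here.
    #include "ast_node.hpp"
    #include <boost/serialization/string.hpp>
    #include <boost/archive/polymorphic_iarchive.hpp>
    #include <boost/archive/polymorphic_oarchive.hpp>

    template <class Archive>
    void AstNode::serialize(Archive& ar, const unsigned int /*version*/)
    {
        ar & id_;
        ar & name_;
    }

    // One instantiation per direction covers all concrete archives.
    template void AstNode::serialize<boost::archive::polymorphic_iarchive>(
        boost::archive::polymorphic_iarchive&, const unsigned int);
    template void AstNode::serialize<boost::archive::polymorphic_oarchive>(
        boost::archive::polymorphic_oarchive&, const unsigned int);

    // client.cpp -- picks a concrete polymorphic archive; nothing here
    // re-instantiates the per-class serialization code.
    #include "ast_node.hpp"
    #include <fstream>
    #include <boost/archive/polymorphic_text_oarchive.hpp>

    void saveNode(const AstNode& node, const char* filename)
    {
        std::ofstream os(filename);
        boost::archive::polymorphic_text_oarchive ar(os);
        boost::archive::polymorphic_oarchive& par = ar;   // base interface
        par << node;
    }

The library's demo_pimpl.cpp, mentioned in (g), demonstrates this kind of split in full.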
You might not like this idea - but you might want to consider using MS VC as your compiler. It's much faster than GCC, and making your code conformant with both is very little effort.
Robert Ramey
Andreas Sæbjørnsen wrote:
We are a team that develops a C/C++/Fortran/binary compiler project called ROSE, and we want to serialize its Abstract Syntax Tree (AST). The AST has 436 different custom class types and uses maps, sets, vectors, lists and hash maps; these classes have about 1229 different member variables. For convenience we wanted to serialize the AST using boost::serialization, but we have some compile-time performance problems. With GCC 4.1.2:
- The compilation takes 48 minutes to compile and link a program that takes seconds to compile without serialization.
- The resulting binary goes from <1 MB to 122 MB in size.
- The compilation uses 2 GB of RAM.
- The object file contains 98075 symbols with serialization and 41 without.
If the templates for loading the AST are not instantiated and only the templates for saving the AST are, then:
- Compiling and linking takes about 19 minutes.
- The resulting binary goes from <1 MB to 77 MB.
- The compilation uses 1.1 GB of RAM.
We have our own custom serialization mechanism that is not that easy to support, but it does not visibly increase compilation time.
My machine is a 2.66 GHz quad-core Xeon 5355 with 16 GB of RAM and 2 TB of striped storage.
Is there any trick I can use to reduce the compilation time? When compilation takes 48 minutes, boost::serialization is unfortunately not an option for us, although it is a great tool at runtime.
thanks, Andreas