Hey guys,
One of the requirements for a continuous integration process is that the build for a specific target be quick; something under 10 minutes is ideal. I find that when not using precompiled headers, Boost can significantly impact the compilation time of the project. I've experienced 20-minute compiles when using Boost fairly extensively throughout the code base.
So far I see only two obvious solutions to this problem. First, use precompiled headers. I really don't want to do this because it causes issues with include dependencies and, as a result, makes the code not reusable. The second option is to beef up the machines doing the compiles; however, this can have diminishing returns.
I am sure that a lot of people in the community have had this specific issue with Boost's compile times, as well as in many other areas. What would you guys recommend? Is there any other solution beyond the obvious?
Thanks for your time.
Hi!
One possible solution would be to use the pImpl idiom and use Boost in the cpp files only, via forward declarations. This might work well in conjunction with splitting one big project into multiple libs or shared objects (DLLs) and linking against those binaries through slim headers.
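For illustration, a minimal sketch of the idea (the Report class and its members are invented just for this example): the public header forward-declares an implementation class and includes no Boost headers at all.

// report.hpp -- note: no Boost headers are included here
#include <string>

class Report
{
public:
    Report();
    ~Report();

    void add(const std::string& name, double value);
    std::string text() const;

private:
    class Impl;   // forward declaration only
    Impl* impl_;  // everything Boost-related lives behind this pointer
};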
Greetings,
Ovanes
The problem is that such an approach doesn't work if a project contains a lot of class templates, which in turn use Boost. MSVC9 has the /MP switch, which enables "build with multiple processes", but it's very limited, as it conflicts with /Gm (enable minimal rebuilds) and some other options...
A portable solution would be ideal, but it's not a requirement. At this point even MSVC-specific solutions would be acceptable.
Sure it works.
It depends on how you design your interfaces. If you consistently use Boost in the cpp files only and declare in the header just a pointer to a pImpl object, it works very well (with Visual Studio as well). Robert, I would suggest you take a look at the book Large-Scale C++ Software Design by John Lakos. There he explains all of these idioms and how to organize projects for better compilation speed, and much more besides.
http://www.amazon.com/Large-Scale-Software-Addison-Wesley-Professional-Computing/dp/0201633620/ref=pd_bbs_sr_1?ie=UTF8&s=books&qid=1240821340&sr=8-1
I did it many times in my projects and it always worked well.
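Continuing the invented Report example sketched earlier in the thread (again purely illustrative, not anyone's actual code), the matching source file is then the only translation unit that pays for the Boost includes:

// report.cpp -- the only file that sees the Boost headers
#include "report.hpp"

#include <boost/format.hpp>
#include <vector>

class Report::Impl
{
public:
    std::vector<std::string> lines;
};

Report::Report() : impl_(new Impl) {}
Report::~Report() { delete impl_; }

void Report::add(const std::string& name, double value)
{
    impl_->lines.push_back(boost::str(boost::format("%1% = %2%") % name % value));
}

std::string Report::text() const
{
    std::string result;
    for (std::vector<std::string>::const_iterator it = impl_->lines.begin();
         it != impl_->lines.end(); ++it)
    {
        result += *it;
        result += '\n';
    }
    return result;
}

Clients of Report recompile only when report.hpp changes, not when any of the Boost headers do.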
Additional articles by Herb Sutter can be of help:
http://www.gotw.ca/gotw/024.htm
http://www.gotw.ca/gotw/007.htm
Regards,
Ovanes
We use Incredibuild from http://www.xoreax.com/; it's worth every penny. It distributes our builds over 20 processors.
Jeff
I think most of the solutions have been covered already:
* Precompiled headers (I know the dependencies are a killer).
* Use the pImpl idiom to hide Boost dependencies from class interfaces, and therefore remove them from the build (but then you lose inline expansion, which may be an issue).
The big question, though, is what *specifically* is causing the long compile times? Once you know that, you can try to isolate the problem code to a single translation unit.
One other thing that may help, if you have headers that make extensive use of metaprogramming (via MPL, for example): if, say, 90% of the code is instantiating the same template instances, then providing full specializations of those templates *without* the metaprogramming logic can be a big win.
HTH, John.
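To illustrate that last point with a made-up trait (none of the names below come from an actual code base): when one instantiation dominates, a hand-written full specialization lets the compiler skip the metaprogramming entirely for it.

#include <string>
#include <boost/mpl/if.hpp>
#include <boost/type_traits/is_arithmetic.hpp>

struct by_value_tag {};
struct by_reference_tag {};

// Primary template: the answer is computed through the metaprogramming machinery.
template <class T>
struct passing_style
{
    typedef typename boost::mpl::if_<
        boost::is_arithmetic<T>,
        by_value_tag,
        by_reference_tag
    >::type type;
};

// Full specialization for a type the code base instantiates constantly:
// neither mpl::if_ nor is_arithmetic is ever instantiated for it.
template <>
struct passing_style<std::string>
{
    typedef by_reference_tag type;
};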
One thing I've found that helps keep build times reasonable for continuous integration (besides the obvious) is hiding the use of Boost or other header-heavy libraries behind my own abstractions and lots of forward declarations. However, if you are using MPL, then good luck to you!
The right choice of build system also helps. SCons is notorious for being slow on incremental builds, especially with lots of header files. For me, the trade-off (the features available) is acceptable.
-- Sohail Somani http://uint32t.blogspot.com
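As a small, purely hypothetical sketch of that kind of abstraction (the file and function names are invented): the header the rest of the code includes knows nothing about Boost, and a single source file pays for the heavy include.

// text_util.hpp -- the abstraction header: no Boost in sight
#include <string>

std::string upper_copy(const std::string& text);

// text_util.cpp -- the only translation unit that includes the Boost header
#include "text_util.hpp"
#include <boost/algorithm/string/case_conv.hpp>

std::string upper_copy(const std::string& text)
{
    return boost::algorithm::to_upper_copy(text);
}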
You have to remember that CI comes from an area of research into project development that also tells us to split our projects into numerous small libraries. Check out RCM's (Robert C. Martin's) articles on "Granularity" and "Stability". If you have extraordinarily long build times, you may be breaking one or more of the packaging principles.
You forgot a third option: split your project into numerous small libraries, so that any change requires the smallest amount of rebuilding possible.
This is actually a good idea that I overlooked, thank you. I mean, I already modularize my code fairly well; I just never thought about only testing components that have changed. I think such a build system might be complex to make. I'm using CMake primarily for my unit testing system; I wonder if CMake alone is enough for this. Usually the CI testing is done on the client machine in my case, but maybe the server could do it as a post-commit step?
These are all details that get a bit off-topic, but hopefully I can figure something out. The important thing is that I don't have to change the way I am using Boost (mostly), and I can focus on doing a bit more than the naive approach of testing everything regardless of whether it changed.
We were using CMake for a while; I had more trouble deciding how to integrate the whole large project than dealing with the smaller libs. I just told the CI server (CC.Net) to run make tests as part of the build process, so if the unit tests for a module fail, the build is flagged red that way. Full integration tests take too long for us to get a quick turnaround; we have to run them nightly. That is still quick enough, I think: unit tests turn up most problems before code reaches integration testing, and integration testing mostly just turns up more opportunities for unit testing.
[it would be nice if you could post using plain text]
CI should definitely always be performed at the server, in combination with running unit tests at the client. Use a decent CI server (CC, ...) to support running builds whenever the repo is updated.
Run only an incremental build on the client (including incremental tests). If you are in a heavily collaborative project/environment, update from the repo and run the (incremental) tests again before actually submitting/publishing. HTH / Johan
To address this problem, I organize my program into separately compiled modules and these modules into a library, and then I link against this application-specific library. That way code isn't re-instantiated all over again every time. It does take a little time to reorganize code this way, and sometimes I just have to explicitly instantiate stuff, but the net result is that compile time isn't an issue for me regardless of the size of the application. BTW, 10 minutes is way too long for me. I need more instant gratification: under a minute.
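A rough sketch of that organization, with an invented join() function standing in for the real code: the declaration lives in a cheap header, the template body and its Boost dependency live in one library source file, and explicit instantiations are provided for the types the application actually uses.

// join.hpp -- declaration only; cheap for clients to include
#include <string>
#include <vector>

template <class T>
std::string join(const std::vector<T>& values, const std::string& separator);

// join.cpp -- template body, heavy include, and explicit instantiations;
// compiled once into the application-specific library
#include "join.hpp"
#include <boost/lexical_cast.hpp>

template <class T>
std::string join(const std::vector<T>& values, const std::string& separator)
{
    std::string result;
    for (typename std::vector<T>::const_iterator it = values.begin();
         it != values.end(); ++it)
    {
        if (it != values.begin())
            result += separator;
        result += boost::lexical_cast<std::string>(*it);
    }
    return result;
}

// Explicit instantiations: client translation units link against these
// instead of re-instantiating the template (and re-parsing Boost) themselves.
template std::string join<int>(const std::vector<int>&, const std::string&);
template std::string join<double>(const std::vector<double>&, const std::string&);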
Robert Ramey
participants (9)
- Igor R
- Jeff Flinn
- Johan Nilsson
- John Maddock
- Noah Roberts
- Ovanes Markarian
- Robert Dailey
- Robert Ramey
- Sohail Somani