Distributed Boost with CMake: proposal and volunteering

Hi all,

I would like to propose my approach to a modularized build of the Boost C++ Libraries with CMake. The project is found at http://github.com/purpleKarrot/Boost.CMake

The following features are currently implemented:
* Aggregate modules from different sources (CVS, SVN, GIT, tarballs, ...).
* Build, Test, Install
* Create a binary installer with selectable components for Windows.
* Create a source package (with the modules included) that can do everything in this list (except the first entry).
* Create a Debian source package that can be uploaded to a Launchpad PPA where it is built and packaged into many binary Debian packages.
* Build Documentation (the usual quickbook-doxygen-boostbook-chain).
* Tested on Windows (Visual Studio 10) and Ubuntu (GCC).
* Precompiled headers (currently MSVC only).
* Build two Boost.MPI libraries on Debian: boost_openmpi and boost_mpich2.
* Tests actually make use of Boost's autolinking feature.

This tool would allow the following Boost development process:
* Each Boost library uses its own repository (no matter which VCS)
* Boost.CMake has a list of modules (or multiple lists, e.g. 'boost' and 'incubating')
* A module's definition consists of the information about where to get the source code from.
* Boost.CMake can be used to aggregate all modules, run tests, build release packages...
* Incubating libraries can be tested before they become an official part of Boost.

Please note that I am not proposing a development process; I am just proposing a tool. I am volunteering to extend and maintain the tool so that it can (help to) drive the development process. If you decide that all libraries should use git, I can drop the support for all other VCSs, no problem. Also, I am happy to implement other missing features.

In the next couple of days I was going to write some documentation and a tutorial on how to migrate the libraries to CMake.

Cheers, Daniel
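To picture the aggregation step, stock CMake's ExternalProject module already lets one describe where a module's sources come from; the snippet below is only an illustration with hypothetical repository and tarball locations, not the actual Boost.CMake module syntax:

include(ExternalProject)

# fetch-only steps: configure/build/install are disabled, so these commands
# merely aggregate the sources into the build tree
ExternalProject_Add(accumulators
  GIT_REPOSITORY git://github.com/boost-lib/accumulators.git  # hypothetical location
  GIT_TAG master
  CONFIGURE_COMMAND "" BUILD_COMMAND "" INSTALL_COMMAND ""
  )
ExternalProject_Add(date_time
  URL http://example.org/boost-date-time.tar.gz               # hypothetical tarball
  CONFIGURE_COMMAND "" BUILD_COMMAND "" INSTALL_COMMAND ""
  )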

On Sat, Jan 29, 2011 at 7:21 PM, Daniel Pfeifer <daniel@pfeifer-mail.de> wrote:
Hi all,
I would like to propose my approach to a modularized build of the Boost C++ Libraries with CMake. The project is found on http://github.com/purpleKarrot/Boost.CMake
[snip awesomeness] You sir, are *awesome*. How about contributing this work to the ryppl project? :)
From the bottom of my heart, thank you sir. This is very much appreciated.
-- Dean Michael Berris about.me/deanberris

On 1/29/2011 6:50 PM, Dean Michael Berris wrote:
On Sat, Jan 29, 2011 at 7:21 PM, Daniel Pfeifer <daniel@pfeifer-mail.de> wrote:
Hi all,
I would like to propose my approach to a modularized build of the Boost C++ Libraries with CMake. The project is found on http://github.com/purpleKarrot/Boost.CMake
[snip awesomeness]
Uh, YEAH. If even half of that stuff works, it's really, really cool. CC'ing ryppl-dev. Here's what was snipped:
The following features are currently implemented:
* Aggregate modules from different sources (CVS, SVN, GIT, tarballs, ...).
* Build, Test, Install
* Create a binary installer with selectable components for Windows.
* Create a source package (with the modules included) that can do everything in this list (except the first entry).
* Create a Debian source package that can be uploaded to a Launchpad PPA where it is built and packaged into many binary Debian packages.
* Build Documentation (the usual quickbook-doxygen-boostbook-chain).
* Tested on Windows (Visual Studio 10) and Ubuntu (GCC).
* Precompiled headers (currently MSVC only).
* Build two Boost.MPI libraries on Debian: boost_openmpi and boost_mpich2.
* Tests actually make use of Boost's autolinking feature.
This tool would allow the following Boost development process:
* Each Boost library uses its own repository (no matter which VCS)
* Boost.CMake has a list of modules (or multiple lists, e.g. 'boost' and 'incubating')
* A module's definition consists of the information about where to get the source code from.
* Boost.CMake can be used to aggregate all modules, run tests, build release packages...
* Incubating libraries can be tested before they become an official part of Boost.
You sir, are *awesome*. How about contributing this work to the ryppl project? :)
+1
From the bottom of my heart, thank you sir. This is very much appreciated.
+1 -- Eric Niebler BoostPro Computing http://www.boostpro.com

On Saturday, 29.01.2011, at 19:50 +0800, Dean Michael Berris wrote:
On Sat, Jan 29, 2011 at 7:21 PM, Daniel Pfeifer <daniel@pfeifer-mail.de> wrote:
Hi all,
I would like to propose my approach to a modularized build of the Boost C++ Libraries with CMake. The project is found on http://github.com/purpleKarrot/Boost.CMake
[snip awesomeness]
You sir, are *awesome*.
Thank you, that feels really good.
How about contributing this work to the ryppl project? :)
I have no objections. However, I fear that the goals of ryppl are set a little too high. I will explain my opinion in my reply to Dave's message. cheers, Daniel
From the bottom of my heart, thank you sir. This is very much appreciated.

At Sat, 29 Jan 2011 12:21:54 +0100, Daniel Pfeifer wrote:
Hi all,
I would like to propose my approach to a modularized build of the Boost C++ Libraries with CMake. The project is found on http://github.com/purpleKarrot/Boost.CMake
Ummm... wow! This is certainly a welcome surprise.
* Did you do this all by yourself?
* Is it based on prior Boost/CMake efforts?
* Did you know about the overlapping effort at Ryppl?
The following features are currently implemented:
* Aggregate modules from different sources (CVS, SVN, GIT, tarballs, ...).
* Build, Test, Install
* Have you verified that it builds, tests, and installs equivalent targets to what is being done by bjam? And if so, how?
* Create a binary installer with selectable components for Windows.
* Create a source package (with the modules included) that can do everything in this list (except the first entry).
* Create a Debian source package that can be uploaded to a Launchpad PPA where it is built and packaged into many binary Debian packages.
* Build Documentation (the usual quickbook-doxygen-boostbook-chain).
* Tested on Windows (Visual Studio 10) and Ubuntu (GCC).
* What was the test methodology?
* Precompiled headers (currently MSVC only).
I know what precompiled headers are, but I'm not sure specifically what it means that you're listing them here.
* Build two Boost.MPI libraries on Debian: boost_openmpi and boost_mpich2.
* Tests actually make use of Boost's autolinking feature.
Nice.
This tool would allow the following Boost development process:
* Each Boost library uses its own repository (no matter which VCS)
* Boost.CMake has a list of modules (or multiple lists, e.g. 'boost' and 'incubating')
* A module's definition consists of the information about where to get the source code from.
* Boost.CMake can be used to aggregate all modules, run tests, build release packages...
* Incubating libraries can be tested before they become an official part of boost.
Very nice!
Please note that I am not proposing a development process. I am just proposing a tool. I am volunteering to extend and maintain the tool so that it can (help to) drive the development process.
Whoa, seriously? Gratefully accepted! However, what I'd really like is to get you and the other stakeholders who are investing in the same thing (e.g. Kitware) on the same page, so that it's not just up to you alone. One of the main goals of the Ryppl project is to use tools that are maintained by more than one or two individual developers.
If you decide that all libraries should use git, I can drop the support for all other VCSs, no problem.
Also, I am happy to implement other missing features.
Let's talk!
In the next couple of days I was going to write some documentation and a tutorial on how to migrate the libraries to CMake.
I'm a little confused. From the results you've reported, it sounds like you've already done that migration. So what's left to do? Thanks, -- Dave Abrahams BoostPro Computing http://www.boostpro.com

On Saturday, 29.01.2011, at 18:00 -0500, Dave Abrahams wrote:
At Sat, 29 Jan 2011 12:21:54 +0100, Daniel Pfeifer wrote:
* Did you do this all by yourself?
* Is it based on prior Boost/CMake efforts?
* Did you know about the overlapping effort at Ryppl?
It is based on the work of Douglas Gregor and Troy Straszheim. I borrowed some ideas from the "MinGW cross compiling environment" project, from ExternalProject.cmake and from Ryppl. So: yes, I know about Ryppl.
The following features are currently implemented:
* Aggregate modules from different sources (CVS, SVN, GIT, tarballs, ...).
* Build, Test, Install
* Have you verified that it builds, tests, and installs equivalent targets to what is being done by bjam? And if so, how?
No. But for the 'how' part of the question: I plan to make it possible to test against the installed Boost libraries. So we can easily compare the following three cases:
A. built and tested with bjam
B. built and installed with bjam, tested with CTest
C. built with CMake, tested with CTest
Do you think this is sufficient?
* Create a binary installer with selectable components for Windows.
* Create a source package (with the modules included) that can do everything in this list (except the first entry).
* Create a Debian source package that can be uploaded to a Launchpad PPA where it is built and packaged into many binary Debian packages.
* Build Documentation (the usual quickbook-doxygen-boostbook-chain).
* Tested on Windows (Visual Studio 10) and Ubuntu (GCC).
* What was the test methodology?
Maybe I should have written 'runs' instead of 'tested', since it does not include unit tests. Boost.CMake can successfully aggregate the modules, compile the source code, install the libraries and headers, generate packages with dependencies, and run tests. Whether the results are correct mainly depends on whether the individual CMakeLists.txt files are correct.
* Precompiled headers (currently MSVC only).
I know what precompiled headers are, but I'm not sure specifically what it means that you're listing them here.
There is no built-in support for PCHs in CMake (yet). What Boost.CMake does with MSVC is simply:
* create a header file that includes all headers that you want to precompile for your library or executable
* create a source file that includes this header file
* compile this source file with the /Yc flag
* compile all other source files with /Yu and /FI

In the case of Boost.Math (which is the only library that makes use of PCH), this looks as follows:

boost_add_library(math_tr1
  PRECOMPILE <boost/math/special_functions.hpp>
  SOURCES ${sources}
  )
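For illustration, here is a minimal sketch of the /Yc, /Yu, /FI wiring described above; the function name and argument handling are invented for this example and do not reproduce the actual boost_add_library implementation:

# assumes MSVC; pch_header is the path as it appears in #include directives
function(add_library_with_msvc_pch name pch_header)
  set(sources ${ARGN})
  if(MSVC)
    # generate a source file that does nothing but include the header to precompile
    set(pch_source ${CMAKE_CURRENT_BINARY_DIR}/${name}_pch.cpp)
    file(WRITE ${pch_source} "#include <${pch_header}>\n")
    # compile that one file with /Yc to create the precompiled header
    set_source_files_properties(${pch_source} PROPERTIES
      COMPILE_FLAGS "/Yc\"${pch_header}\"")
    # compile all other sources with /Yu (use the PCH) and /FI (force-include the header)
    set_source_files_properties(${sources} PROPERTIES
      COMPILE_FLAGS "/Yu\"${pch_header}\" /FI\"${pch_header}\"")
    list(APPEND sources ${pch_source})
  endif()
  add_library(${name} ${sources})
endfunction()

# usage, mirroring the Boost.Math example above:
add_library_with_msvc_pch(math_tr1 boost/math/special_functions.hpp ${sources})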
Please note that I am not proposing a development process. I am just proposing a tool. I am volunteering to extend and maintain the tool so that it can (help to) drive the development process.
Whoa, seriously?
Seriously.
Gratefully accepted! However, what I'd really like is to get you and the other stakeholders who are investing in the same thing (e.g. Kitware) on the same page, so that it's not just up to you alone. One of the main goals of the Ryppl project is to use tools that are maintained by more than one or two individual developers.
This is reasonable. Concerning reusing tools, I guess my approach goes even further than Ryppl. Example: in the prior Boost/CMake effort there was a function to sort all modules depending on their dependencies. I completely removed that. Build dependencies are handled by the build tool (Visual Studio, GNU Make), while install dependencies are handled by the installer (NSIS, APT, ...).

Ryppl unites [...] package management [...]. These are awesome features, no question. However, I am happy with APT on Debian, Pacman on Arch, RPM on Fedora, Ports on Mac... OK, I am not that happy on Windows, but that is a different story. Even if Ryppl becomes the best distributed cross-platform software management system you could imagine, it will not replace any of the package management tools in place. All you could expect is that someone creates a Ryppl-based Linux distro, or that Ryppl causes confusion for end users (Should I install that Python module with aptitude or with easy_install? Should I install that Ruby module with yum or with gem? Should I install that Boost library with pacman or with ryppl?)

So my approach would be a little bit different: I want a system that is able to serve the currently existing package management systems. I started with DEB, because I am an Ubuntu user. I also made it work on Windows and Visual Studio because I think this is still the most important platform for most end users. I want to add support for RPM etc. in the future. This sounds like much more work (supporting many package management systems versus just one), but this work has to be done anyway. There are already package maintainers who create Boost packages for the different package management systems. All we need is to bring them on board, like: "Help us help you to automate your work."
In the next couple of days I was going to write some documentation and a tutorial on how to migrate the libraries to CMake.
I'm a little confused. From the results you've reported, it sounds like you've already done that migration. So what's left to do?
You should see Boost.CMake as four parts:

1. A set of CMake functions to simplify writing libraries for Boost. These include functions to generate documentation, define test cases, and some thin wrappers around corresponding functions already in CMake (e.g. add_library -> boost_add_library). The goal of these functions is to provide naming consistency and so on. These functions are not intended to be reused for projects outside of Boost. All features that might be of interest for other projects too should rather be pushed upstream to CMake than implemented here.
2. A set of CMake functions to aggregate modules from different sources.
3. A database of Boost modules and where to get the sources.
4. All those CMakeLists.txt files for projects that do not provide their own.

I volunteer to take responsibility for 1 and 2. In my vision, 3 will be maintained either by the individual authors of the modules or by the release managers. Currently this part is quite simple, since all modules are just copied from a Boost release. Once all modules provide their own listfiles, there will be no 4 any more.

What belongs into a project's listfile (a rough example is sketched after this message):
a) The name of the project must be set: 'boost_project("Date Time")'
b) Header files should be declared: 'boost_add_headers(<list>)'
c) boost_add_library / boost_add_executable
d) boost_add_documentation
e) boost_add_test

I did a), b) and c) for most of the projects. I did d) and e) for at least one project (which is Accumulators, since I started at the top). I cannot migrate all projects completely because
* there are some headers where I do not know which project they belong to,
* I don't know all dependencies of each project (talking about the headers here; compiling deps are easy), and
* it is a huge amount of work.

The interface of the CMake functions I talked about in 1. of the "four parts of Boost.CMake" is now more or less stable. I thought I would document them and then maybe call for help from the Guild. I will do what I can in this process, but I require some help from others. Maybe I could participate in this year's Summer of Code, but I fear there will be not enough work left for a whole summer :-).

cheers, Daniel
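Put together, a single module's listfile built from the functions listed under a) to e) might look roughly like this; the argument spellings are assumptions based on this message, not the documented interface:

boost_project("Date Time")
boost_add_headers(boost/date_time.hpp boost/date_time)
boost_add_library(date_time SOURCES ${sources})
boost_add_documentation(doc/date_time.qbk)
boost_add_test(test/testtime.cpp)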

At Mon, 31 Jan 2011 10:27:58 +0100, Daniel Pfeifer wrote:
On Saturday, 29.01.2011, at 18:00 -0500, Dave Abrahams wrote:
At Sat, 29 Jan 2011 12:21:54 +0100, Daniel Pfeifer wrote:
* Did you do this all by yourself?
* Is it based on prior Boost/CMake efforts?
* Did you know about the overlapping effort at Ryppl?
It is based on the work of Douglas Gregor and Troy Straszheim. I borrowed some ideas from the "MinGW cross compiling environment" project, from ExternalProject.cmake and from Ryppl. So: yes, I know about Ryppl.
Great. At least we probably are not duplicating work.
The following features are currently implemented:
* Aggregate modules from different sources (CVS, SVN, GIT, tarballs, ...).
* Build, Test, Install
* Have you verified that it builds, tests, and installs equivalent targets to what is being done by bjam? And if so, how?
No. But for the 'how' part of the question: I plan to make it possible to test against the installed Boost libraries. So we can easily compare the following three cases:
A. built and tested with bjam
B. built and installed with bjam, tested with CTest
C. built with CMake, tested with CTest
Do you think this is sufficient?
Sounds like it's a great start. I keep thinking about a tool that compares command-lines and does some kind of heuristically-informed fuzzy matching to find likely problems, but I'm not sure we need to go that far. I *would* like to see something that verifies all the same tests get run.
* Create a binary installer with selectable components for Windows.
* Create a source package (with the modules included) that can do everything in this list (except the first entry).
* Create a Debian source package that can be uploaded to a Launchpad PPA where it is built and packaged into many binary Debian packages.
* Build Documentation (the usual quickbook-doxygen-boostbook-chain).
* Tested on Windows (Visual Studio 10) and Ubuntu (GCC).
* What was the test methodology?
Maybe I should have written 'runs' instead of 'tested', since it does not include unit tests. Boost.CMake can successfully aggregate the modules, compile the source code, install the libraries and headers, generate packages with dependencies, and run tests. Whether the results are correct mainly depends on whether the individual CMakeLists.txt files are correct.
Naturally. We need some strategy that can give us reasonable confidence in that correctness.
Please note that I am not proposing a development process. I am just proposing a tool. I am volunteering to extend and maintain the tool so that it can (help to) drive the development process.
Whoa, seriously?
Seriously.
Gratefully accepted! However, what I'd really like is to get you and the other stakeholders who are investing in the same thing (e.g. Kitware) on the same page, so that it's not just up to you alone. One of the main goals of the Ryppl project is to use tools that are maintained by more than one or two individual developers.
This is reasonable. Concerning reusing tools, I guess my approach goes even further than Ryppl. Example: in the prior Boost/CMake effort there was a function to sort all modules depending on their dependencies. I completely removed that.
I don't think we are doing that anymore. At least, I hope not!
Build dependencies are handled by the build tool (Visual Studio, GNU Make), while install dependencies are handled by the installer (NSIS, APT, ...).
Awesome.
Ryppl unites [...] package management [...]. These are awesome features, no question. However, I am happy with APT on Debian, Pacman on Arch, RPM on Fedora, Ports on Mac... OK, I am not that happy on Windows, but that is a different story. Even if Ryppl becomes the best distributed cross-platform software management system you could imagine, it will not replace any of the package management tools in place.
Correct. However, I believe it could become a de-facto standard for people who are interested in developing and building from source.
All you could expect is that someone creates a Ryppl-based Linux distro, or that Ryppl causes confusion for end users (Should I install that Python module with aptitude or with easy_install? Should I install that Ruby module with yum or with gem? Should I install that Boost library with pacman or with ryppl?)
Yeah, that's annoying. Most of what's installed by ryppl I expect to be done in a virtual environment that's set up just for development and testing, but when you get to the point of installation on a system with an existing package manager, I agree it would be much better to let that handle the install.
So my approach would be a little bit different: I want a system that is able to serve the currently existing package management systems. I started with DEB, because I am an Ubuntu user. I also made it work on Windows and Visual Studio because I think this is still the most important platform for most end users. I want to add support for RPM etc. in the future. This sounds like much more work (supporting many package management systems versus just one), but this work has to be done anyway. There are already package maintainers who create Boost packages for the different package management systems. All we need is to bring them on board, like: "Help us help you to automate your work."
That's fine; you don't need to buy into that part of Ryppl in order to join forces with the Ryppl developers working on boost modularization. In fact, if I understand what you're saying, your vision really doesn't conflict with Ryppl's at all; it's just a (relatively simple) extra layer. So, if what you're saying is that by default "ryppl install" on a debian system should just build and install .debs, I buy it.
In the next couple of days I was going to write some documentation and a tutorial on how to migrate the libraries to CMake.
I'm a little confused. From the results you've reported, it sounds like you've already done that migration. So what's left to do?
You should see Boost.CMake as four parts:
,---- | Aside: should we call this something else, that includes a reference | to the word "modularized" somehow, just to distinguish it from the | several unmodularized boost-cmakes? We'll have to deal with these | naming issues eventually. `----
1. A set of CMake functions to simplify writing libraries for Boost. These include functions to generate documentation, define test cases, and some thin wrappers around corresponding functions already in CMake (e.g. add_library -> boost_add_library). The goal of these functions is to provide naming consistency and so on. These functions are not intended to be reused for projects outside of Boost. All features that might be of interest for other projects too should rather be pushed upstream to CMake than implemented here.
KK. But they'll probably have to be implemented here before we can propose them as part of CMake. Fortunately, the Kitware guys are keen to help us out here.
2. A set of CMake functions to aggregate modules from different sources.
3. A database of Boost modules and where to get the sources.
4. All those CMakeLists.txt files for projects that do not provide their own.
I volunteer to take responsibility for 1 and 2. In my vision, 3 will be maintained either by the individual authors of the modules or by the release managers. Currently this part is quite simple, since all modules are just copied from a Boost release. Once all modules provide their own listfiles, there will be no 4 any more.
What belongs into a project's listfile:
a) The name of the project must be set: 'boost_project("Date Time")'
b) Header files should be declared: 'boost_add_headers(<list>)'
c) boost_add_library / boost_add_executable
d) boost_add_documentation
e) boost_add_test
I did a), b) and c) for most of the projects. I did d) and e) for at least one project (which is Accumulators, since I started at the top). I cannot migrate all projects completely because
* there are some headers where I do not know which project they belong to,
Take a look at Eric's modularization work at https://github.com/ryppl/ryppl/tree/master/boost. That should give you pretty good guidance.
* I don't know all dependencies of each project (talking about the headers here; compiling deps are easy), and
Ditto.
* it is a huge amount of work.
The interface of the CMake functions I talked about in 1. of the "four parts of Boost.CMake" is now more or less stable. I thought I would document them and then maybe call for help from the Guild. I will do what I can in this process, but I require some help from others. Maybe I could participate in this year's Summer of Code, but I fear there will be not enough work left for a whole summer :-).
It's my aim to get things in decent shape before BoostCon, but if you sign up for GSoC I will gladly work to find ways to expand this project for you :-) -- Dave Abrahams BoostPro Computing http://www.boostpro.com

On Mon, Jan 31, 2011 at 11:34 AM, Dave Abrahams <dave@boostpro.com> wrote:
... Most of what's installed by ryppl I expect to be done in a virtual environment that's set up just for development and testing...
Whoa! You mentioned before that you were expecting testers to run in a virtual environment. I'm really worried this would seriously limit the number of folks willing to participate as testers.

As a developer, I already use a virtual Linux machine on a Windows host for occasional local Linux development work. I'd be willing to do the same for Mac OS X, but apparently there is no license-compliant way I can do so on my Windows host.

But for my main development work, I want to continue to use native Windows, not run on a virtual machine. AFAIK, Microsoft's Windows XP virtual machine is the only one I can legally use without buying additional copies of Windows, which I'm not willing to do. But regardless, the extra hassle of running in a virtual machine just isn't worth it for a primary development machine. Although easier than it was five years ago, it looks to me that it will be another five years or more before virtual machines are up to that task. --Beman

On Sat, Feb 19, 2011 at 1:15 PM, Beman Dawes <bdawes@acm.org> wrote:
On Mon, Jan 31, 2011 at 11:34 AM, Dave Abrahams <dave@boostpro.com> wrote:
... Most of what's installed by ryppl I expect to be done in a virtual environment that's set up just for development and testing...
Whoa! You mentioned before that you were expecting testers to run in a virtual environment. I'm really worried this would seriously limit the number of folks willing to participate as testers.
As a developer, I already use a virtual Linux machine....
Nonono. I don't mean anything about VMs. I just mean something like what Python's virtualenv does: create a parallel filesystem hierarchy on which you have write permission, where ryppl can do its installations, and then update PATH and other things in the environment before running the tests to simulate the conditions of a normal installation. -- Dave Abrahams BoostPro Computing http://www.boostpro.com
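In CMake/CTest terms, the sandboxing idea described above might be approximated like this; the install prefix, test name, and command are made up for illustration, and this is not how ryppl itself is implemented:

# install everything into a throwaway prefix instead of the system directories
set(SANDBOX "${CMAKE_BINARY_DIR}/sandbox")
set(CMAKE_INSTALL_PREFIX "${SANDBOX}" CACHE PATH "sandbox install prefix" FORCE)

enable_testing()
add_test(NAME run_example COMMAND example_app)
# point the test at the sandbox via the environment (Unix-style path separators)
set_tests_properties(run_example PROPERTIES
  ENVIRONMENT "PATH=${SANDBOX}/bin:$ENV{PATH};LD_LIBRARY_PATH=${SANDBOX}/lib")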

On 02/19/2011 11:46 AM, Dave Abrahams wrote:
On Sat, Feb 19, 2011 at 1:15 PM, Beman Dawes<bdawes@acm.org> wrote:
On Mon, Jan 31, 2011 at 11:34 AM, Dave Abrahams<dave@boostpro.com> wrote:
... Most of what's installed by ryppl I expect to be done in a virtual environment that's set up just for development and testing...
Whoa! You mentioned before that you were expecting testers to run in a virtual environment. I'm really worried this would seriously limit the number of folks willing to participate as testers.
As a developer, I already use a virtual Linux machine....
Nonono. I don't mean anything about VMs. I just mean something like what Python's virtualenv does: create a parallel filesystem hierarchy on which you have write permission, where ryppl can do its installations, and then update PATH and other things in the environment before running the tests to simulate the conditions of a normal installation.
So something along the lines of Fedora's Mock system? http://fedoraproject.org/wiki/Projects/Mock
This is a useful tool for teasing out build dependencies.

On Mon, Feb 21, 2011 at 9:55 AM, Rob Riggs <rob@pangalactic.org> wrote:
... I don't mean anything about VMs. I just mean something like what Python's virtualenv does: create a parallel filesystem hierarchy on which you have write permission, where ryppl can do its installations, and then update PATH and other things in the environment before running the tests to simulate the conditions of a normal installation.
Well, that level of isolation would be ideal, and I'd love to use a chroot jail on systems where it's available. However, I think that for some platforms we'll have to settle for something much less ambitious, and I plan to put my attention on the common denominator first, i.e. something that can easily be coded up in portable C++ or Python. -- Dave Abrahams BoostPro Computing http://www.boostpro.com

"Daniel Pfeifer" <daniel@pfeifer-mail.de> wrote in message news:9258a13ec6727bac1f0dc8baa849fc35-EhVcX1dJTQZXRw0RBgwNVDBXdh9XVlpBXkJHHF5ZWC9fUkYLQVx+H1dWXjBeQ0wEXFpZQVlR-webmailer2@server04.webmailer.hosteurope.de... A side note:
* Precompiled headers (currently MSVC only).
Until CMake gets proper precompiled headers support you can easily add support for Xcode (Mac OS X) with:

set( CMAKE_XCODE_ATTRIBUTE_GCC_PREFIX_HEADER "<path to header>" CACHE STRING "" FORCE )
set( CMAKE_XCODE_ATTRIBUTE_GCC_PRECOMPILE_PREFIX_HEADER "YES" CACHE STRING "" FORCE )

;) -- "What Huxley teaches is that in the age of advanced technology, spiritual devastation is more likely to come from an enemy with a smiling face than from one whose countenance exudes suspicion and hate." Neil Postman

On Sunday, 30.01.2011, at 18:13 +0100, Domagoj Saric wrote:
"Daniel Pfeifer" <daniel@pfeifer-mail.de> wrote in message news:9258a13ec6727bac1f0dc8baa849fc35-EhVcX1dJTQZXRw0RBgwNVDBXdh9XVlpBXkJHHF5ZWC9fUkYLQVx+H1dWXjBeQ0wEXFpZQVlR-webmailer2@server04.webmailer.hosteurope.de...
A side note:
* Precompiled headers (currently MSVC only).
Until CMake gets proper precompiled headers support you can easily add support for Xcode (Mac OS X) with:
set( CMAKE_XCODE_ATTRIBUTE_GCC_PREFIX_HEADER "<path to header>" CACHE STRING "" FORCE )
set( CMAKE_XCODE_ATTRIBUTE_GCC_PRECOMPILE_PREFIX_HEADER "YES" CACHE STRING "" FORCE )
Thank you for the input! I hope it works like this too:

set_target_properties(<targets> PROPERTIES
  XCODE_ATTRIBUTE_GCC_PREFIX_HEADER "<path to header>"
  XCODE_ATTRIBUTE_GCC_PRECOMPILE_PREFIX_HEADER "YES"
  )

Otherwise it would mean that all targets have to use the same PCH. Is the header then included automatically in all source files or do you still have to include it manually?

cheers, Daniel

"Daniel Pfeifer" <daniel@pfeifer-mail.de> wrote in message news:1296457482.2147.7.camel@daniel-desktop...
Until CMake gets proper precompiled headers support you can easily add support for Xcode (Mac OS X) with:
set( CMAKE_XCODE_ATTRIBUTE_GCC_PREFIX_HEADER "<path to header>" CACHE STRING "" FORCE )
set( CMAKE_XCODE_ATTRIBUTE_GCC_PRECOMPILE_PREFIX_HEADER "YES" CACHE STRING "" FORCE )
Thank you for the input! I hope it works like this too:
set_target_properties(<targets> PROPERTIES
  XCODE_ATTRIBUTE_GCC_PREFIX_HEADER "<path to header>"
  XCODE_ATTRIBUTE_GCC_PRECOMPILE_PREFIX_HEADER "YES"
  )
Otherwise it would mean that all targets have to use the same PCH.
Hi, sorry for the delay... I would expect it to work this way too but cannot confirm as I haven't tried it...
Is the header then included automatically in all source files or do you still have to include it manually?
It is included automatically... -- "What Huxley teaches is that in the age of advanced technology, spiritual devastation is more likely to come from an enemy with a smiling face than from one whose countenance exudes suspicion and hate." Neil Postman
participants (7)
- Beman Dawes
- Daniel Pfeifer
- Dave Abrahams
- Dean Michael Berris
- Domagoj Saric
- Eric Niebler
- Rob Riggs