Boost development process / scope

Here is a point I'd like to bring up in the context of the current discussion concerning the boost development process. I expect this issue to be highly controversial; hopefully it will result in a constructive discussion nonetheless...

Boost is all about modern C++ libraries. However, in the (very admirable) drive for perfection, developers not only strive for the ultimate C++ libraries, but also for perfect development tools. Thus, an integral part of boost is

* a build tool / infrastructure
* a documentation harness
* a testing harness

This, IMO, creates a number of problems:

* it heightens the barrier to entry for users as well as contributors, as they need to learn new languages and new tools (which, in general, are really understood only by a handful of people)
* it creates considerable instability, as these tools have their own development lifecycle (I remember someone mentioning that his documentation in the 1.34 branch depends on features of quickbook in trunk, etc.)
* it dilutes the focus of boost itself

So, in the spirit of 'lessons learned', I'd like to invite readers to imagine how life would be if boost development used existing (i.e. external) tools where possible. (Example: docbook, rst, etc., instead of qbk; make instead of bjam, etc.)

Please don't get me wrong: I'm not criticising any of these tools per se. They may be wonderful, and have clear advantages over external tools. But please don't underestimate the advantage of a large user base, or the experience that has gone into external tools over the years.

At least, I'd like to suggest that the development of these tools should be decoupled as much as possible from the development of the 'boost C++ libraries'. That should benefit everybody: boost development would draw on a much more stable development environment, and these tools would hopefully gain recognition in domains outside of boost, helping them to evolve and stabilize more quickly than if their only deployment were inside boost.

Thanks,
Stefan

-- ...ich hab' noch einen Koffer in Berlin...

Stefan Seefeld wrote:
Here is a point I'd like to bring up in the context of the current discussion concerning the boost development process. I expect this issue to be highly controversial; hopefully it will result in a constructive discussion nonetheless...
Boost is all about modern C++ libraries. However, in the (very admirable) drive for perfection, developers not only strive for the ultimate C++ libraries, but also for perfect development tools. Thus, an integral part of boost is
* a build tool / infrastructure
* a documentation harness
* a testing harness
This, IMO, creates a number of problems:
* it heightens the barrier to entry for users as well as contributors, as they need to learn new languages and new tools (which, in general, are really understood only by a handful of people)
* it creates considerable instability, as these tools have their own development lifecycle. (I remember someone mentioning that his documentation in the 1.34 branch depends on features of quickbook in trunk, etc.)
* it dilutes the focus of boost itself
So, in the spirit of 'lessons learned', I'd like to invite readers to imagine how life would be if boost development used existing (i.e. external) tools where possible. (Example: docbook, rst, etc., instead of qbk; make instead of bjam, etc.)
Please don't get me wrong: I'm not criticising any of these tools per se. They may be wonderful, and have clear advantages over external tools. But please don't underestimate the advantage of a large user base, or the experience that has gone into external tools over the years.
At least, I'd like to suggest that the development of these tools should be decoupled as much as possible from the development of the 'boost C++ libraries'. That should benefit everybody: boost development would draw on a much more stable development environment, and these tools would hopefully gain recognition in domains outside of boost, helping them to evolve and stabilize more quickly than if their only deployment were inside boost.
I doubt Boost.Build can be decoupled much more. We have our own mailing list, our own issue tracker, our own webpage and our own releases. The only coupling is that Boost.Build lives in Boost CVS, and I'm not sure if that's a bad thing or a good thing.

- Volodya

Vladimir Prus wrote:
I doubt Boost.Build can be decoupled much more. We have our own mailing list, our own issue tracker, our own webpage and our own releases.
The only coupling is that Boost.Build lives in Boost CVS, and I'm not sure if that's a bad thing or a good thing.
Technically that may be correct. However, as a matter of fact, boost.build does get developed as part of boost. How long did it take to fix bbv2 bugs *after* boost was converted to use it? And how much of the slip of the release is due to problems in the infrastructure, as opposed to actual "C++ library code"?

As a boost developer I'd rather not worry about infrastructure. At least not more than absolutely necessary.

Regards,
Stefan

-- ...ich hab' noch einen Koffer in Berlin...

Stefan Seefeld wrote:
Vladimir Prus wrote:
I doubt Boost.Build can be decoupled much more. We have our own mailing list, our own issue tracker, our own webpage and our own releases.
The only coupling is that Boost.Build lives in Boost CVS, and I'm not sure if that's a bad thing or a good thing.
Technically that may be correct. However, as a matter of fact, boost.build does get developed as part of boost. How long did it take to fix bbv2 bugs *after* boost was converted to use it? And how much of the slip of the release is due to problems in the infrastructure, as opposed to actual "C++ library code"?
Nobody has the hard numbers. My gut feeling is that most of the slip is not due to infrastructure problems, but other factors. You should also keep in mind that of all Boost.Build work, only a fraction is Boost.Build core work. For example, the funny naming rules for built libraries are not core functionality of *any* build system. Likewise, Boost.Python support is not something you get for free when using make.
As a boost developer I'd rather not worry about infrastructure. At least not more than absolutely necessary.
That's a good abstract goal. Would you clarify when and how you were forced to "worry about infrastructure", and what concrete steps you suggest be taken? "Be decoupled as much as possible" is not a concrete step as far as I'm concerned.

- Volodya

Vladimir Prus wrote:
Stefan Seefeld wrote:
Vladimir Prus wrote:
I doubt Boost.Build can be decoupled much more. We have our own mailing list, our own issue tracker, our own webpage and our own releases.
The only coupling is that Boost.Build lives in Boost CVS, and I'm not sure if that's a bad thing or a good thing.

Technically that may be correct. However, as a matter of fact, boost.build does get developed as part of boost. How long did it take to fix bbv2 bugs *after* boost was converted to use it? And how much of the slip of the release is due to problems in the infrastructure, as opposed to actual "C++ library code"?
Nobody has the hard numbers. My gut feeling is that most of the slip is not due to infrastructure problems, but other factors.
I don't think it was necessarily all problems, although there certainly were some. Some of it is that making a change to something as core as the build system is like driving an aircraft carrier -- after someone says 'turn now', you have to wait a long time for the effect. As I recall, it took a long time (months) to get all the regression testers transitioned to build v2. There may have been a couple of problems in there, but much of it was just the inertia of turning the big ship. You know, people are traveling so they can't switch this week, then there's some pilot error, it doesn't work, and they run out of time, then something comes up and it drags into another week. Your plans get blown apart by these little delays that add up.

And this is precisely why I'm suggesting 1.35 as new libs on the current 1.34 base with some invitation-only critical patches. The only way I know to avoid these sorts of unanticipated delays is to cut out the possibility of them cropping up by minimizing change.

BTW, this is why I very much favor removing support for the 'legacy compilers' as well. If we eliminate them it removes another distraction that delays the release. Each little individual patch or update to the expected results is a small thing, but in aggregate it adds up to substantial time. Effectively we are currently punishing the users of modern compilers by delaying functionality to wait for back-porting to legacy compilers. I'd rather ship 1.35 right away and then let the legacy compiler changes catch up as needed. I think we did reach some agreement a while back that we are going to eliminate some of the old compilers as a 1.35 requirement.

Jeff

Vladimir Prus wrote:
Stefan Seefeld wrote:
Vladimir Prus wrote:
I doubt Boost.Build can be decoupled much more. We have our own mailing list, our own issue tracker, our own webpage and our own releases.
The only coupling is that Boost.Build lives in Boost CVS, and I'm not sure if that's a bad thing or a good thing.

Technically that may be correct. However, as a matter of fact, boost.build does get developed as part of boost. How long did it take to fix bbv2 bugs *after* boost was converted to use it? And how much of the slip of the release is due to problems in the infrastructure, as opposed to actual "C++ library code"?
Nobody has the hard numbers. My gut feeling is that most of the slip is not due to infrastructure problems, but other factors.
You should also keep in mind that of all Boost.Build work, only a fraction is Boost.Build core work. For example, the funny naming rules for built libraries are not core functionality of *any* build system. Likewise, Boost.Python support is not something you get for free when using make.
Yes, I agree. Again, I didn't mean to criticise the tools themselves. But even if the ultimate problems stem from incorrect use of these tools, that incorrect use is caused by those tools being moving targets. There were fixes to the boost.python build system even after the freeze, and they were ultimately caused by bbv2 being relatively new and not yet well understood and tested by a broad community. In contrast, 'make' is a very well known and understood tool, and people know its limitations and how to navigate around them. (For the avoidance of doubt: this is *not* an argument to use 'make' instead. My point merely is that using new and experimental tools makes for a very fragile development platform.)
As a boost developer I'd rather not worry about infrastructure. At least not more than absolutely necessary.
That's a good abstract goal. Would you clarify when and how you were forced to "worry about infrastructure", and what concrete steps you suggest be taken? "Be decoupled as much as possible" is not a concrete step as far as I'm concerned.
Sure. As you know, there have been some problems very late in the release cycle with boost.python tests, where a wrong python interpreter was accidentally dragged in. Only very few people (including a core bbv2 developer) understood the problem well enough to be able to come up with a fix.

Regards,
Stefan

-- ...ich hab' noch einen Koffer in Berlin...

Stefan Seefeld wrote:
As a boost developer I'd rather not worry about infrastructure. At least not more than absolutely necessary.
That's a good abstract goal. Would you clarify when and how you were forced to "worry about infrastructure", and what concrete steps you suggest be taken? "Be decoupled as much as possible" is not a concrete step as far as I'm concerned.
Sure. As you know, there have been some problems very late in the release cycle with boost.python tests, where a wrong python interpreter was accidentally dragged in. Only very few people (including a core bbv2 developer) understood the problem well enough to be able to come up with a fix.
Ok, and what are the concrete steps you propose?

- Volodya

Vladimir Prus wrote: [...]
I doubt Boost.Build can be decoupled much more. We have our own mailing list, our own issue tracker, our own webpage and our own releases.
The only coupling is that Boost.Build lives in Boost CVS, and I'm not sure if that's a bad thing or a good thing.
The problem is that from the outside it is often unclear which problems concern BB itself and which concern its use. If you'll allow me a parallel: it is hard to tell whether you're fixing make or fixing the makefiles.

Cheers,
Nicola Musatti

Nicola Musatti wrote:
Vladimir Prus wrote: [...]
I doubt Boost.Build can be decoupled much more. We have our own mailing list, our own issue tracker, our own webpage and our own releases.
The only coupling is that Boost.Build lives in Boost CVS, and I'm not sure if that's a bad thing or a good thing.
The problem is that from the outside it is often unclear which problems concern BB itself and which concern its use. If you'll allow me a parallel: it is hard to tell whether you're fixing make or fixing the makefiles.
So, you suggest it would be better to freeze the Boost.Build version used by a particular Boost release, which would mean that in case of any issues you get to fix Jamfiles?

- Volodya

Vladimir Prus wrote:
Nicola Musatti wrote: [...]
The problem is that from the outside it is often unclear which problems concern BB itself and which concern its use. If you'll allow me a parallel: it is hard to tell whether you're fixing make or fixing the makefiles.
So, you suggest it would be better to freeze the Boost.Build version used by a particular Boost release, which would mean that in case of any issues you get to fix Jamfiles?
Yes. The switch to a new BB release should only take place when that release is considered complete and stable, after which only bug fixes should be allowed. This would force the switch to take place when everything is ready and avoid impacting all libraries with last-minute changes.

Cheers,
Nicola Musatti

Stefan Seefeld wrote:
So, in the spirit of 'lessons learned', I'd like to invite readers to imagine how life would be if boost development used existing (i.e. external) tools where possible. (Example: docbook, rst, etc., instead of qbk; make instead of bjam, etc.)
If I had nothing but free time, I'd investigate using CMake instead of, or in addition to, Boost.Build. From the website:
CMake generates native makefiles and workspaces that can be used in the compiler environment of your choice. CMake is quite sophisticated: it is possible to support complex environments requiring system configuration, pre-processor generation, code generation, and template instantiation.
With CMake, we could deliver makefiles and vc project files, so people can use their own build environments instead of having to learn ours. I think this would remove a barrier to Boost's adoption. This article describes the experience of the KDE team switching to CMake:

http://dot.kde.org/1172083974/

Caveat: I haven't actually used CMake. I'd need to investigate it.

FWIW, quickbook is merely a front end for existing external tools (docbook, doxygen, fop). Nobody actually has to use quickbook -- they could program directly to these lower level tools, and BBv2 supports that. But the flakiness of these tools, and the length of the toolchain, has been a constant source of trouble for us.

-- Eric Niebler
Boost Consulting
www.boost-consulting.com
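(For illustration, a minimal CMakeLists.txt along the lines Eric describes might look like the sketch below. The library and file names are hypothetical, not Boost's actual layout; the point is that a single description drives whichever generator -- makefiles, VC projects -- a user picks at configure time.)

  # Hypothetical top-level CMakeLists.txt for a single boost library.
  PROJECT(boost_example CXX)

  # Headers are resolved relative to the boost root.
  INCLUDE_DIRECTORIES(${boost_example_SOURCE_DIR})

  # One target; CMake emits a makefile rule or a VC project for it,
  # depending on the generator chosen at configure time.
  ADD_LIBRARY(boost_example STATIC libs/example/src/example.cpp)

  # A test program linked against the library and registered with CTest.
  ENABLE_TESTING()
  ADD_EXECUTABLE(example_test libs/example/test/example_test.cpp)
  TARGET_LINK_LIBRARIES(example_test boost_example)
  ADD_TEST(example_test example_test)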

If I had nothing but free time, I'd investigate using CMake instead of, or in addition to, Boost.Build. From the website:
CMake generates native makefiles and workspaces that can be used in the compiler environment of your choice. CMake is quite sophisticated: it is possible to support complex environments requiring system configuration, pre-processor generation, code generation, and template instantiation.
With CMake, we could deliver makefiles and vc project files, so people can use their own build environments instead of having to learn ours. I think this would remove a barrier to Boost's adoption. This article describes the experience of the KDE team switching to CMake:
This would be great. A significant number of OSS projects already use CMake (in addition to kde; ITK/VTK, in particular, are close to my heart). It also supports Xcode projects, which would simplify the generation of Boost frameworks for OSX developers...
FWIW, quickbook is merely a front end for existing external tools (docbook, doxygen, fop). Nobody actually has to use quickbook -- they could program directly to these lower level tools, and BBv2 supports that. But the flakiness of these tools, and the length of the toolchain, has been a constant source of trouble for us.
Fortunately, there is no mandate on the specific format of documentation for Boost libraries. However, I agree that it is desirable to have some level of consistency of documentation format. I know that Steven spent a significant amount of time getting quickbook to work for the Boost.Units documentation, so I would agree that flakiness is a problem. Given that writing documentation is generally the least favorite activity in any given software project, adding even small hurdles is clearly undesirable...

Matthias

Matthias Schabel wrote:
Fortunately, there is no mandate on the specific format of documentation for Boost libraries. However, I agree that it is desirable to have some level of consistency of documentation format.
This is a much bigger problem than at first it appears. I think some projects (like PHP and Java) get a big 'boost' from users because all the documentation looks the same and has the same conventions and look and feel. Josh Bloch (one of the major forces in Java standard library development) credits JavaDoc as one of the critical elements in adoption of Java libraries.

I personally don't like the uneven look of current Boost docs. The more recent docs (like math toolkit, bimap, all the Niebler libs) are really nice -- they look pleasing and have consistent structure. I'd like to see all the library docs look this way...
I know that Steven spent a significant amount of time getting quickbook to work for the Boost.Units documentation, so I would agree that flakiness is a problem. Given that writing documentation is generally the least favorite activity in any given software project, adding even small hurdles is clearly undesirable...
The current toolchain is hard to set up, no doubt. Overall, though, I think it's worth it. Lots of developers have managed it, so it's not that hard in the end. Not saying we shouldn't look at ways of making it better; we should. We really need volunteers to help ;-)

The current system can clearly produce both nice html and pdf. Even though John and Paul struggled, they were able to get both for the math toolkit, which is chock full of references, graphs, etc. I've had date-time PDF docs for several releases. This is a very useful feature that the straight html docs can't serve. The date-time docs are all in BoostBook xml -- I really wish they were in QuickBook because it would be so much nicer to maintain them...

Jeff

Jeff Garland wrote:
Matthias Schabel wrote:
Fortunately, there is no mandate on the specific format of documentation for Boost libraries. However, I agree that it is desirable to have some level of consistency of documentation format.
This is a much bigger problem than at first it appears. I think some projects (like PHP and Java) get a big 'boost' from users because all the documentation looks the same and has the same conventions and look and feel. Josh Bloch (one of the major forces in Java standard library development) credits JavaDoc as one of the critical elements in adoption of Java libraries.
I personally don't like the uneven look of current Boost docs. The more recent docs (like math toolkit, bimap, all the Niebler libs) are really nice -- they look pleasing and have consistent structure. I'd like to see all the library docs look this way...
Agreed 100%. Unfortunately converting old html docs is a lot of work :-(
I know that Steven spent a significant amount of time getting quickbook to work for the Boost.Units documentation, so I would agree that flakiness is a problem. Given that writing documentation is generally the least favorite activity in any given software project, adding even small hurdles is clearly undesirable...
The current toolchain is hard to set up, no doubt. Overall, though, I think it's worth it. Lots of developers have managed it, so it's not that hard in the end. Not saying we shouldn't look at ways of making it better; we should. We really need volunteers to help ;-)
The current system can clearly produce both nice html and pdf. Even though John and Paul struggled, they were able to get both for the math toolkit, which is chock full of references, graphs, etc. I've had date-time PDF docs for several releases. This is a very useful feature that the straight html docs can't serve. The date-time docs are all in BoostBook xml -- I really wish they were in QuickBook because it would be so much nicer to maintain them...
I think the toolchain is getting there: we just need better documentation for setting it up. Actually quickbook is trivial to get going; it's the Docbook XSL and particularly the FO to PDF translations that are hard: in other words, it's the third party stuff that gets you!

I have some improved Docbook->FO stylesheets I've been hacking away at: I think we can do much better with PDF generation (syntax highlighted code for starters), which just leaves the toolchain setup issues. Using XEP rather than Apache FOP for PDF generation makes a huge difference too.

If someone would like to roadtest some improved setup instructions, I'll try and put something together... I'm just not sure when!

John.

John Maddock wrote:
I think the toolchain is getting there: we just need better documentation for setting it up. Actually quickbook is trivial to get going; it's the
As I recall, some of our SoC students last year submitted some improvements.
Docbook XSL and particularly the FO to PDF translations that are hard: in other words, it's the third party stuff that gets you!
Yep.
I have some improved Docbook->FO stylesheets I've been hacking away at: I think we can do much better with PDF generation (syntax highlighted code for starters),
That would be nice :-)
which just leaves the toolchain setup issues. Using XEP rather than Apache FOP for PDF generation makes a huge difference too.
Yeah, Apache FOP has been a big problem. So did you buy an XEP license or is it just the trial? Unfortunately it's quite expensive...

Jeff

On Fri May 04 2007, Eric Niebler wrote:
Stefan Seefeld wrote:
So, in the spirit of 'lessons learned', I'd like to invite readers to imagine how life would be if boost development used existing (i.e. external) tools where possible. (Example: docbook, rst, etc., instead of qbk; make instead of bjam, etc.)
If I had nothing but free time, I'd investigate using CMake instead of, or in addition to, Boost.Build. From the website:
CMake generates native makefiles and workspaces that can be used in the compiler environment of your choice. CMake is quite sophisticated: it is possible to support complex environments requiring system configuration, pre-processor generation, code generation, and template instantiation.
With CMake, we could deliver makefiles and vc project files, so people can use their own build environments instead of having to learn ours. I think this would remove a barrier to Boost's adoption. This article describes the experience of the KDE team switching to CMake:
http://dot.kde.org/1172083974/
Caveat: I haven't actually used CMake. I'd need to investigate it.
Me neither, but I think it's an attractive idea. Boost shouldn't be in the build tool business, really. We only got into it because no 3rd party tools could do the job we needed. The questions are:

* what are our needs, really?
* can CMake fulfill them?
* if not, can we give up a few of those needs? ;-)

If we wanted to conduct such an inquiry, http://www.boost.org/tools/build/v1/build_system.htm#design_criteria might be a good place to start.

-- Dave Abrahams
Boost Consulting
www.boost-consulting.com

Don't Miss BoostCon 2007! ==> http://www.boostcon.com

Eric Niebler wrote:
Stefan Seefeld wrote:
So, in the spirit of 'lessons learned', I'd like to invite readers to imagine how life would be if boost development used existing (i.e. external) tools where possible. (Example: docbook, rst, etc., instead of qbk; make instead of bjam, etc.)
If I had nothing but free time, I'd investigate using CMake instead of, or in addition to, Boost.Build. From the website:
For what it's worth - and just to keep the pot boiling - I use the VC IDE for building and testing the serialization library. I only use boost build for generating the table of results for all the compilers. I did have to set up a very large VC solution with a project for each test and variations for archives, etc., which was a huge pain. But it is very convenient now that I have it set up.

Robert Ramey

On 5/4/07, Robert Ramey <ramey@rrsd.com> wrote:
Eric Niebler wrote:
Stefan Seefeld wrote:
So, in the spirit of 'lessons learned', I'd like to invite readers to imagine how life would be if boost development used existing (i.e. external) tools where possible. (Example: docbook, rst, etc., instead of qbk; make instead of bjam, etc.)
If I had nothing but free time, I'd investigate using CMake instead of, or in addition to, Boost.Build. From the website:
For what it's worth - and just to keep the pot boiling - I use the VC IDE for building and testing the serialization library. I only use boost build for generating the table of results for all the compilers.
I did have to set up a very large VC solution with a project for each test and variations for archives, etc., which was a huge pain. But it is very convenient now that I have it set up.
Also, FWIW: all my projects are using bbv2. But since I use the VC IDE, I have a solution just to group projects and files (but no building). I believe that bbv2 is a much more robust build system. Having been bitten a few times by different macros and compiler options, I now only use bbv2. Besides, bbv2 allows much more flexibility for self-configuration of the project build, which is only possible because bbv2 is built on jam itself. We can go as low-level or high-level as we need. If only there were more documentation.
Robert Ramey
Sorry for the noise,

Best regards,
-- Felipe Magno de Almeida

On May 4, 2007, at 1:24 PM, Eric Niebler wrote:
With CMake, we could deliver makefiles and vc project files, so people can use their own build environments instead of having to learn ours. I think this would remove a barrier to Boost's adoption. This article describes the experience of the KDE team switching to CMake:
http://dot.kde.org/1172083974/
Caveat: I haven't actually used CMake. I'd need to investigate it.
It's a grand system. A long time ago (about 4 years, I think) in a land far away (New York), Brad King and I managed to get Boost building with CMake. It was actually quite trivial at the time, and CMake has improved significantly since then. It helped that Brad was (and still is) one of the CMake developers.

In addition to building and installing Boost, we had nightly regression tests running using CMake [1] and DART [2], hosted at Kitware. In many ways, the regression testing system we had running back then was even more advanced than what we have today with BBv2, and they're still actively improving CMake and DART. BBv1/BBv2 and our regression-testing infrastructure are quite good for the volunteer effort that's gone into them, but CMake and DART have had the benefit of full-time, funded developers working on them.
FWIW, quickbook is merely a front end for existing external tools (docbook, doxygen, fop). Nobody actually has to use quickbook -- they could program directly to these lower level tools, and BBv2 supports that. But the flakiness of these tools, and the length of the toolchain, has been a constant source of trouble for us.
Yeah, it's an interesting case... we went with the "standard" documentation tools (DocBook with XSL, Doxygen), and the tool chain we needed to build to integrate those tools is, well, horrendous. It's impressive that BBv2 handles that tool chain so well.

The question, as always, is: if we "buy" into a different system---say, CMake---will we end up saving ourselves and our users more time overall? I suspect that with CMake and DART, the answer is "yes". However, like Eric, I just don't have the time to make this happen... and I'm a little reluctant given my previous failed attempt.

- Doug

[1] http://www.cmake.org/HTML/Index.html
[2] http://public.kitware.com/Dart/HTML/Index.shtml

Doug Gregor wrote: <snip>
The question, as always, is: if we "buy" into a different system---say, CMake---will we end up saving ourselves and our users more time overall? I suspect that with CMake and DART, the answer is "yes". However, like Eric, I just don't have the time to make this happen... and I'm a little reluctant given my previous failed attempt.
Why did your previous attempt fail?

-- Eric Niebler
Boost Consulting
www.boost-consulting.com

On May 7, 2007, at 10:46 AM, Eric Niebler wrote:
Doug Gregor wrote:
<snip>
The question, as always, is: if we "buy" into a different system---say, CMake---will we end up saving ourselves and our users more time overall? I suspect that with CMake and DART, the answer is "yes". However, like Eric, I just don't have the time to make this happen... and I'm a little reluctant given my previous failed attempt.
Why did your previous attempt fail?
Dart failed to gain traction because the client involves Tcl scripts, and there was significant resistance to requiring regression testers to have Tcl installed on their systems. With CMake... I don't recall what happened.

- Doug

Doug Gregor wrote:
On May 7, 2007, at 10:46 AM, Eric Niebler wrote:
Doug Gregor wrote:
<snip>
The question, as always, is: if we "buy" into a different system---say, CMake---will we end up saving ourselves and our users more time overall? I suspect that with CMake and DART, the answer is "yes". However, like Eric, I just don't have the time to make this happen... and I'm a little reluctant given my previous failed attempt.

Why did your previous attempt fail?
Dart failed to gain traction because the client involves Tcl scripts, and there was significant resistance to requiring regression testers to have Tcl installed on their systems.
FYI, CMake now comes with CTest, which is a full DART client, so there is no need to install Tcl on every system anymore. There is now also a DART "2" that provides much more flexible display of testing results. IIRC, the new DART requires only java and a web server on the server side. CTest supports submission to both DART versions.

Here is the DART "2" dashboard for CMake itself:

http://dart.na-mic.org/CMake/Dashboard/

and for a much larger project, more on boost's scale:

http://dart.na-mic.org/Insight/Dashboard/

CMake now also comes with CPack, which creates configuration files for native packaging tools. The 2.4 CPack version is beta but is already good enough to package the CMake release itself. Together CMake, CTest and CPack provide a full development, testing, and distribution tool suite. They all come in a single installer.

-Brad
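(For the curious, a sketch of how CPack is typically driven from a project's own CMakeLists.txt; the variable names below are CPack's standard ones, but the values are hypothetical:)

  # At the end of CMakeLists.txt, after the targets and install rules:
  SET(CPACK_PACKAGE_NAME "boost")
  SET(CPACK_PACKAGE_VERSION "1.34.0")
  INCLUDE(CPack)  # creates the configuration files for the packagers

Running the cpack program (or building the generated PACKAGE target in an IDE) then invokes the chosen native packager -- a tarball, NSIS installer, etc., depending on the generator.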

On Mon May 07 2007, Brad King wrote:
FYI, CMake now comes with CTest, which is a full DART client, so there is no need to install Tcl on every system anymore. There is now also a DART "2" that provides much more flexible display of testing results. IIRC, the new DART requires only java and a web server on the server side. CTest supports submission to both DART versions.
Here is the DART "2" dashboard for CMake itself:
http://dart.na-mic.org/CMake/Dashboard/
and for a much larger project more on boost's scale:
Hmm, well, I hope the presentation is tunable, because at first glance what I see there doesn't look like it would be very useful for Boost in the form it is presented.
CMake now also comes with CPack, which creates configuration files for native packaging tools. The 2.4 CPack version is beta but is already good enough to package the CMake release itself.
Together CMake, CTest and CPack provide a full development, testing, and distribution tool suite. They all come in a single installer.
Wow. Any chance you can make it to our testing sprint at BoostCon?

-- Dave Abrahams
Boost Consulting
http://www.boost-consulting.com

Don't Miss BoostCon 2007! ==> http://www.boostcon.com

Eric Niebler wrote:
CMake generates native makefiles and workspaces that can be used in the compiler environment of your choice. CMake is quite sophisticated: it is possible to support complex environments requiring system configuration, pre-processor generation, code generation, and template instantiation.
With CMake, we could deliver makefiles and vc project files, so people can use their own build environments instead of having to learn ours. I think this would remove a barrier to Boost's adoption. This article describes the experience of the KDE team switching to CMake:
http://dot.kde.org/1172083974/
Caveat: I haven't actually used CMake. I'd need to investigate it.
I'd like to note that it's not like KDE took CMake and magically started building. Quite some effort went into that, and I believe CMake changes were also required, so at some point you could not build KDE with any released CMake version.

Therefore, he who wants to "investigate" CMake is probably in for some serious work, not just a weekend project.

- Volodya

Hi!

On Monday 07 May 2007, Vladimir Prus wrote:
Eric Niebler wrote:
CMake generates native makefiles and workspaces that can be used in the compiler environment of your choice. CMake is quite sophisticated: it is possible to support complex environments requiring system configuration, pre-processor generation, code generation, and template instantiation.
With some effort ;-))
With CMake, we could deliver makefiles and vc project files, so people can use their own build environments instead of having to learn ours. I think this would remove a barrier to Boost's adoption. This article describes the experience of the KDE team switching to CMake:
http://dot.kde.org/1172083974/
Caveat: I haven't actually used CMake. I'd need to investigate it.
Well, especially supporting vc project files turns out to be quite hard ;-)) Makefiles are quite easy (on Unix)...
I'd like to note that it's not like KDE took CMake and magically started building. Quite some effort went into that, and I believe CMake changes were also required, so at some point you could not build KDE with any released CMake version.
Therefore, he who wants to "investigate" CMake is probably in for some serious work, not just a weekend project.
Yes, you get quick results very fast. My points:

- CTest does not support "failed" tests (aka compile-fail, link-fail, run-fail)
- CTest does not support "compile"- and "link"-only tests.
- CMake has only a default set of four (4) build variants.

And quite a few other points I can assemble on demand ;-)) On the other hand, CMake's package detection support is quite sophisticated, which is one of the reasons KDE adopted CMake.

Just my .02€,

Yours,
Jürgen

--
Dipl.-Math. Jürgen Hunold
Ingenieurgesellschaft für Verkehrs- und Eisenbahnwesen mbH
Lister Straße 15, Hannover
voice: ++49 511 262926 57 / fax: ++49 511 262926 99
juergen.hunold@ivembh.de / www.ivembh.de
Geschäftsführer: Prof. Dr.-Ing. Thomas Siefer, PD Dr.-Ing. Alfons Radtke
Sitz des Unternehmens: Hannover / Amtsgericht Hannover, HRB 56965

Jürgen Hunold wrote:
- CTest does not support "failed" tests (aka compile-fail, link-fail, run-fail)
Sure it does. See the SET_TESTS_PROPERTIES command: http://www.cmake.org/HTML/Documentation.html It has a WILL_FAIL property, and others to specify regular expressions to match in the test output to indicate failure or passing.
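(Concretely, a sketch with a made-up test name; the property names are CMake's own:)

  ADD_TEST(expected_failure my_failing_test)
  SET_TESTS_PROPERTIES(expected_failure PROPERTIES WILL_FAIL TRUE)

  # Or judge the test by its output instead of its return code:
  SET_TESTS_PROPERTIES(expected_failure PROPERTIES
    FAIL_REGULAR_EXPRESSION "assertion failed")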
- CTest does not support "compile"- and "link"-only tests.
For a compile-only test, just add the objects to a static library. This will just archive them without really linking:

  ADD_LIBRARY(mycompiletest STATIC compile_only.cpp)

How does a "link only" test work? Where does it get the object files without compiling? If you do have them somewhere, you can list them as sources and CMake will just link them:

  ADD_EXECUTABLE(mylinktest /path/to/linkable.obj)

Note that in the above examples the tests are actually added with ADD_TEST commands in the main project, and the above code appears in CMakeLists.txt files in test directories. This can be packaged up in a macro that writes everything needed to disk at CMake configuration time.

-Brad
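(A minimal sketch of how such a macro might look, using the compile-only trick Brad describes; the macro name is illustrative, not an existing CMake or Boost command:)

  MACRO(BOOST_COMPILE_TEST name source)
    # Building this target compiles the source and archives the object,
    # so a successful build means the source compiled.
    ADD_LIBRARY(${name} STATIC ${source})
  ENDMACRO(BOOST_COMPILE_TEST)

  BOOST_COMPILE_TEST(mycompiletest compile_only.cpp)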
participants (13)
- Brad King
- David Abrahams
- Doug Gregor
- Eric Niebler
- Felipe Magno de Almeida
- Jeff Garland
- John Maddock
- Jürgen Hunold
- Matthias Schabel
- Nicola Musatti
- Robert Ramey
- Stefan Seefeld
- Vladimir Prus