CMake - one more time

Inspired by the recent discussion regarding CMake, I've spent some more time looking at this. As a result, I've done the following.

a) I experimented with using CMake in the more customary manner in my canonical project Safe Numerics (see the Boost Library Incubator). The current version does distribute CMakeLists.txt around the subdirectories of the library root. My original objections to doing this were assuaged when I realized that boost build does the same thing. That is, I typically have Jamfiles inside test, example and sometimes other directories.

b) One of the features that I particularly like about Boost Build is the possibility of arbitrary nesting of Jamfiles. This permits one to build/test a portion of one's library (e.g. test, example) as well as the whole library. For a large library this is indispensable. In spite of what many people think, this is not easily supported with CMake. However, I have found that with some extra effort, it is possible to support this to the extent we need it. So in this project ONE CAN BUILD (actually, create an IDE project or makefile that builds) ANY OF THE "SUBPROJECTS" (TEST, EXAMPLE) AS WELL AS THE LIBRARY "SUPERPROJECT" (see the PS at the end of this post for a sketch of the pattern).

c) CMake is indispensable to me:
* it creates IDE projects for "any?" platform. I use an IDE a lot.
* everyone else uses it - so it makes it easier to promote my work to others.
* it's easier to make work - still a pain though.
* There is lots of information around the net about how to use CMake, how easy it is, etc. Although these help when you're looking for an answer (which is all the time), they really betray how complex and arbitrary the system is.
* It has an almost idiot-proof GUI version which I use a lot. I really like this.
* CMake is well maintained and supported by its promoters.

d) Boost Build
* either just works (great!) or doesn't. If it doesn't, it's almost impossible to fix without getting help.
* I've never run across anyone outside of boost who uses it. It makes it harder to promote my work.
* It's natural to compose projects into "super projects".
* it's almost impossible to integrate with an IDE. At one time, I had things set up so I could debug executables created with boost build with the Visual Studio IDE and debugger. But it was just too fragile and time consuming to keep everything in sync.
* it has a lot of "automatic" behavior which can be really, really confusing. A small example: you've got multiple compilers on your system. When it discovers this, it just picks the "best" one and you don't know which one you got until the project builds (or not). I'm sure this was implemented this way to make usage of boost build "simple" but it has the opposite effect. Much better would be to fail immediately with a message "multiple compilers found: ... use toolset=<compiler name> to select the desired one."

Some Conclusions - I'm trying to not make this a rant

a) The ideal, platform-independent build system does not yet exist. I'm guessing it never will. I'm sure it won't happen in my lifetime - but then I'm 68 - maybe you'll get lucky.

b) Both systems are much more fragile, complicated and opaque than their promoters try to make you believe. It's not that they're lying; they truly believe in their own stuff. There is much re-invention of the wheel - they each created their own (half-assed) little language, for goodness sake!!!

c) Neither has really resolved the issue of nested projects in a clear way. Boost Build probably does or can do this. CMake has a system of "packages" and a whole 'nuther layer about "finding" them. Assuming it can be made to work - given the amount of time I've invested in CMake, I should know how to do this by now.

d) I think it's time for Boost to be a little more open about tolerating/encouraging alternative build systems. I think our documentation approach is a model. Yeah, it's a hodgepodge. But the various ways of doing it pretty much work and generally don't stop working, and we don't have to constantly spend effort bringing things up to the latest and greatest system (which we couldn't agree upon anyway). We have libraries which have been going strong for 15 years - and people can still read the docs.

e) We should find some way to recognize those who have made the system work as well as it has: Doug Gregor (boostbook), Eric Niebler and Joel Guzman (quickbook), Vladimir Prus, Rene Rivera, Steve Watanabe. I know there are others but these come to mind immediately.

Note that I have only addressed the issue of library development, which is my main interest. I'm really not addressing the issues related to users of libraries. In particular, CMake has the whole "find" thing, which I'm still not even seeing the need for. If I want to use a library, I can build the library and move it to a common place with the headers, specify the include directory, and I'm on my way. I'm sure someone will step up to enlighten me on this.

Robert Ramey
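PS: The essential CMake trick behind b) is small. A minimal sketch (the names and paths here are hypothetical, not the actual Safe Numerics files):

    # test/CMakeLists.txt
    cmake_minimum_required(VERSION 3.0)

    # If CMake was pointed directly at this directory, declare a
    # standalone project; when pulled in via add_subdirectory() from
    # the library root, the superproject has already done this.
    if(CMAKE_CURRENT_SOURCE_DIR STREQUAL CMAKE_SOURCE_DIR)
      project(my_library_test CXX)
      include_directories(${CMAKE_CURRENT_SOURCE_DIR}/../include)
    endif()

    enable_testing()
    add_executable(test_basic test_basic.cpp)
    add_test(NAME test_basic COMMAND test_basic)

Pointing cmake (or the GUI) at test/ then generates an IDE project or makefile for just the tests; pointing it at the library root generates the whole library "superproject".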

On Wed, Apr 20, 2016 at 11:49 AM, Robert Ramey <ramey@rrsd.com> wrote:
d) Boost Build ... * it's almost impossible to integrate with an IDE. At one time, I had things set up so I could debug executables created with boost build with the Visual Studio IDE and debugger. But it was just too fragile and time consuming to keep everything in sync.
Not sure about your specific situation, but I use Visual Studio 2015 Update 2 and I have no trouble at all debugging executables built with Boost.Build. Just choose File->Open->Project/Solution and then select the .exe file from the bin/ directory or wherever it is. It will create a new temporary solution and project with nothing but the executable in it; you have to manually open files (I drag them from a separate Explorer window). But you can set breakpoints and step, and have full source code debugging.

On 4/20/16 9:10 AM, Vinnie Falco wrote:
Not sure about your specific situation, but I use Visual Studio 2015 Update 2 and I have no trouble at all debugging executables built with Boost.Build. Just choose File->Open->Project/Solution and then select the .exe file from the bin/ directory or wherever it is.
LOL - used VS for years and never knew I could do that! I would never have believed it could find the original source code in the library from the executable. Robert Ramey

On 21/04/2016 05:54, Robert Ramey wrote:
On 4/20/16 9:10 AM, Vinnie Falco wrote:
Not sure about your specific situation, but I use Visual Studio 2015 Update 2 and I have no trouble at all debugging executables built with Boost.Build. Just choose File->Open->Project/Solution and then select the .exe file from the bin/ directory or wherever it is.
LOL - used VS for years and never knew I could do that! I would never have believed it could find the original source code in the library from the executable.
It doesn't -- it finds them from the .pdb files. But as long as you have these next to the .exes (or in your symbol path, though this is less common for things you've built yourself) then it works. This is also how you can step into code compiled in one solution while debugging another, which can be handy to keep the number of projects in a solution manageable, particularly for libraries that don't change often.

On 20/04/16 17:49, Robert Ramey wrote:
Inspired by the recent discussion regarding CMake, I've spent some more time looking at this. As a result, I've done the following.
Hi, Thanks for summarizing this very very (very) long thread.
a) I experimented with using CMake in the more customary manner in my canonical project Safe Numerics (see the Boost Library Incubator). The current version does distribute CMakeLists.txt around the subdirectories of the library root. My original objections to doing this were assuaged when I realized that boost build does the same thing. That is, I typically have Jamfiles inside test, example and sometimes other directories.
b) One of the features that I particularly like about Boost Build is the possibility of arbitrary nesting of Jamfiles. This permits one to build/test a portion of one's library (e.g. test, example) as well as the whole library. For a large library this is indispensable. In spite of what many people think, this is not easily supported with CMake. However, I have found that with some extra effort, it is possible to support this to the extent we need it. So in this project ONE CAN BUILD (actually, create an IDE project or makefile that builds) ANY OF THE "SUBPROJECTS" (TEST, EXAMPLE) AS WELL AS THE LIBRARY "SUPERPROJECT"
If you think about nesting as being able to run something like "make doc" from any library, then yes, cmake is definitely lacking that. OTOH, I have the feeling that:

1- Everything in the current boost is coupled to the build system. I read people wanting to be modular, but that is predicated on the fact that there is an adequate build system.

2- b2 is a bit cheating: it knows the full "namespace" (it flattens it) and can then e.g. sort out the target dependencies and build order. So even if you think that by typing "make doc" in your library you are hitting only the build system (boostbook/doxygen toolchain, etc.) plus your library, this is wrong: every target in b2 has knowledge of the full DAG of dependencies, which makes it highly coupled to every other library, or at least to the superproject. In CMake this nesting of CMakeLists is more compartmented: one CMakeLists is supposed to be more or less independent from its siblings (but not from the parent). This is also what would make the transition of the superproject to CMake very difficult: for instance, dependencies have to be explicitly stated in a main orchestrating CMakeLists.txt (in b2 this is, I believe, done implicitly, certainly in several parsing passes).

3- b2 imposes a structure of directories as well: for instance, if I do """ using quickbook ; using doxygen ; using boostbook ; """ those features should be in files relative to some paths of the b2 location wrt. the superproject (please correct me if I am wrong). Also, when I "make" a library, it goes magically to the bin.v2 folder of the root of the superproject. I have the feeling that some behaviours of b2 in terms of relative paths are hard-coded somewhere. That is to say: "cd doc; b2" is not at all independent of the superproject folder structure, and the apparent modularity is not a real one. But again, I agree that it just works for the purpose of boost; it is just tightly coupled to hidden things.

4- It is in fact - I believe - possible to do a "cd doc; make" with CMake, not from the source tree but from a subfolder of the build tree. You have to generate the superproject (or part of it) first, though. OTOH, this first generation of the superproject is done intrinsically by b2 anyway: b2 does it in memory right before executing your command (look at the time it takes before it starts processing the "cd doc; b2").

So if we think about nesting: CMake is better in the sense that it has a hierarchy of projects, so the dependencies are forced to be a tree, while b2 fakes (in my opinion) the nesting (apparent hierarchy) and flattens everything to extract a DAG.
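To illustrate point 4: with the Makefile generators, every source directory gets a matching directory (with its own Makefile) in the build tree, so a per-directory target like the following sketch (the Doxygen wiring is hypothetical) can be driven from <build>/doc alone, once the superproject has been generated:

    # doc/CMakeLists.txt
    find_package(Doxygen)
    if(DOXYGEN_FOUND)
      # deliberately not part of "all": built only on request,
      # e.g. "cd <build>/doc && make doc"
      add_custom_target(doc
        COMMAND ${DOXYGEN_EXECUTABLE} ${CMAKE_CURRENT_SOURCE_DIR}/Doxyfile
        WORKING_DIRECTORY ${CMAKE_CURRENT_SOURCE_DIR}
        COMMENT "Generating the library documentation")
    endif()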
c) CMake is indispensable to me:
* it creates IDE projects for "any?" platform. I use an IDE a lot.
* everyone else uses it - so it makes it easier to promote my work to others.
* it's easier to make work - still a pain though.
* There is lots of information around the net about how to use CMake, how easy it is, etc. Although these help when you're looking for an answer (which is all the time), they really betray how complex and arbitrary the system is.
* It has an almost idiot-proof GUI version which I use a lot. I really like this.
* CMake is well maintained and supported by its promoters.
So what I do now is also this: I am maintaining my CMakeLists.txt for the purpose of having a proper development environment, but it has no other purpose at all. Also, I have to say that it does not work well with this horrible "b2 headers": in my cmake, I am hitting the headers of the library (in libX/include), and not those of the main superproject. My IDE shows me 2 different files because of that. But apart from that, it is good enough (and yes, I have one CMakeLists.txt for build+doc+test; I do not see any good reason for splitting that).
d) Boost Build
* either just works (great!) or doesn't. If it doesn't, it's almost impossible to fix without getting help.
* I've never run across anyone outside of boost who uses it. It makes it harder to promote my work.
* It's natural to compose projects into "super projects".
* it's almost impossible to integrate with an IDE. At one time, I had things set up so I could debug executables created with boost build with the Visual Studio IDE and debugger. But it was just too fragile and time consuming to keep everything in sync.
* it has a lot of "automatic" behavior which can be really, really confusing. A small example: you've got multiple compilers on your system. When it discovers this, it just picks the "best" one and you don't know which one you got until the project builds (or not). I'm sure this was implemented this way to make usage of boost build "simple" but it has the opposite effect. Much better would be to fail immediately with a message "multiple compilers found: ... use toolset=<compiler name> to select the desired one."
I do not like the current state of b2 for many reasons (even though I think it could be really a good build system), but CMake is not covering many features that are currently required by the boost superproject. Until the point where we can consistently build the lib (including the possibly many flavors of the same library - STATIC/SHARED at the same time, etc), run the tests, and generate the documentation (including the possibility to have the boostdoc/quickbook/doxygen toolchain), I do not see any *good* reason to move to cmake.
Some Conclusions - I'm trying to not make this a rant
a) The ideal, platform-independent build system does not yet exist. I'm guessing it never will. I'm sure it won't happen in my lifetime - but then I'm 68 - maybe you'll get lucky.
b) Both systems are much more fragile, complicated and opaque than their promoters try to make you believe. It's not that they're lying; they truly believe in their own stuff. There is much re-invention of the wheel - they each created their own (half-assed) little language, for goodness sake!!!
c) Neither has really resolved the issue of nested projects in a clear way. Boost Build probably does or can do this. CMake has a system of "packages" and a whole 'nuther layer about "finding" them. Assuming it can be made to work - given the amount of time I've invested in CMake, I should know how to do this by now.
At least, one is more or less working right now, and the other has not proven to work in all varieties of cases.
d) I think it's time for Boost to be a little more open about tolerating/encouraging alternative build systems. I think our documentation approach is a model. Yeah, it's a hodgepodge. But the various ways of doing it pretty much work and generally don't stop working, and we don't have to constantly spend effort bringing things up to the latest and greatest system (which we couldn't agree upon anyway). We have libraries which have been going strong for 15 years - and people can still read the docs.
I have to say this does not scale at all, especially wrt. the resources (your point e/ below). I do not know if the documentation is a good example either: we have many different systems; I like quickbook, and I believe it properly serves the purpose of "documentation", where Doxygen fails. What is the cost of maintaining compatibility with many tools? I would promote the opposite in fact: some methods have proved to be good for boost; if someone wants to integrate with a currently unsupported tool, it should not impact so much the people that are e.g. maintaining the travis.yml (or they should do the work). Also, the tools are at the core of boost; we should not neglect them but rather promote them as we do boost libraries, and we should not integrate technologies that are weak "just because" it makes boost more appealing for a few developers.
e) We should find some way to recognize those who have made the system work as well as it has: Doug Gregor (boostbook), Eric Niebler and Joel Guzman (quickbook), Vladimir Prus, Rene Rivera, Steve Watanabe. I know there are others but these come to mind immediately.
Note that I have only addressed the issue of library development, which is my main interest. I'm really not addressing the issues related to users of libraries. In particular, CMake has the whole "find" thing, which I'm still not even seeing the need for. If I want to use a library, I can build the library and move it to a common place with the headers, specify the include directory, and I'm on my way. I'm sure someone will step up to enlighten me on this.
Robert Ramey

On Wed, Apr 20, 2016 at 4:39 PM, Raffi Enficiaud <raffi.enficiaud@mines-paris.org> wrote:
On 20/04/16 17:49, Robert Ramey wrote:
Inspired by the recent discussion regarding CMake, I've spent some more time looking at this. As a result, I've done the following.
Hi, Thanks for summarizing this very very (very) long thread.
a) I experimented with using CMake in the more customary manner in my canonical project Safe Numerics (see the Boost Library Incubator). The current version does distribute CMakeLists.txt around the subdirectories of the library root. My original objections to doing this were assuaged when I realized that boost build does the same thing. That is, I typically have Jamfiles inside test, example and sometimes other directories.
b) One of the features that I particularly like about Boost Build is the possibility of arbitrary nesting of Jamfiles. This permits one to build/test a portion of one's library (e.g. test, example) as well as the whole library. For a large library this is indispensable. In spite of what many people think, this is not easily supported with CMake. However, I have found that with some extra effort, it is possible to support this to the extent we need it. So in this project ONE CAN BUILD (actually, create an IDE project or makefile that builds) ANY OF THE "SUBPROJECTS" (TEST, EXAMPLE) AS WELL AS THE LIBRARY "SUPERPROJECT"
If you think about nesting as being able to run something like "make doc" from any library, then yes, cmake is definitely lacking that. OTOH, I have the feeling that: 1- everything in the current boost is coupled to the build system. I read people wanting to be modular, but that is predicated on the fact that there is an adequate build system.
I don't know what you mean by everything. Technically, you don't need BB when *using* Boost. Most libraries work header-only, which means just adding to your include path. Others should be buildable by adding their source to your project. And some libraries already support modular use.

2- b2 is a bit cheating: it knows the full "namespace" (it flattens it) and can then e.g. sort out the target dependencies and build order. So even if you think that by typing "make doc" in your library you are hitting only the build system (boostbook/doxygen toolchain, etc.) plus your library, this is wrong: every target in b2 has knowledge of the full DAG of dependencies, which makes it highly coupled to every other library, or at least to the superproject.
Ah, I don't think so. BB ingests as many targets as it's told to. It so happens that the integrated Boost distribution reads all the sublibraries. And hence reads all the build-able targets. But this is *only* an aspect of the integrated Boost.
In CMake this nesting of CMakeLists is more compartmented: one CMakeLists is supposed to be more or less independent from its siblings (but not from the parent). This is also what would make the transition of the superproject to CMake very difficult: for instance, dependencies have to be explicitly stated in a main orchestrating CMakeLists.txt (in b2 this is, I believe, done implicitly, certainly in several parsing passes).
Which is actually the same as b2, with the exception that we've put in extra logic to make managing all those dependencies as easy as possible for library authors in the integrated Boost.
3- b2 imposes a structure of directories as well: for instance, if I do """ using quickbook ; using doxygen ; using boostbook ; """ those features should be in files relative to some paths of the b2 location wrt. the superproject (please correct me if I am wrong).
You are wrong :-) Those modules are searched for in a predetermined and user-configurable set of directories. One of those directories happens to be automatically configured to include the sources of b2 as shipped in the integrated Boost. Again, this is to make it easier for library authors and end users.
Also, when I "make" a library, it goes magically to the bin.v2 folder of the root of the superproject.
Yes, this is a directory specified by the integrated Boost build files (<https://github.com/boostorg/boost/blob/master/Jamroot#L173>).
I have the feeling that some behaviours of b2 in terms of relative paths are hard-coded somewhere.
Many behaviors in b2, and in cmake, and in just about all build systems, are "hard-coded". For b2 they should already be documented. But you probably mean something other than what I'm understanding by "hard-coded" :-)
That is to say: "cd doc; b2" is not at all independent of the superproject folder structure, and the apparent modularity is not a real one. But again, I agree that it just works for the purpose of boost; it is just tightly coupled to hidden things.
It's actual modularity, and it is independent of the superproject. If you want an example, check the Predef library. I build the documentation outside of a Boost superproject. Obviously I need some form of Boost distribution to build the documentation tools, but it doesn't need to be a full distribution (and yes, I've done it with a minimal distro). And it's also possible to build the Predef docs without any Boost distribution (which I've also done), as long as you set up b2 to inform it of pre-built doc tools.
I do not like the current state of b2 for many reasons (even though I think it could be really a good build system), but CMake is not covering many features that are currently required by the boost superproject. Until the point where we can consistently build the lib (including the possibly many flavor of the same library - STATIC/SHARED at the same time, etc), run the tests, and generate the documentation (including the possibility to have the boostdoc/quickbook/doxygen toolchain), I do not see any *good* reason to move to cmake.
That's essentially the conclusion that the community ended up with years ago on the first attempt to implement cmake support. And I would like to know what the reasons for not liking b2 are. But on the b2 list ;-)
Some Conclusions - I'm trying to not make this a rant
a) The ideal, platform-independent build system does not yet exist. I'm guessing it never will. I'm sure it won't happen in my lifetime - but then I'm 68 - maybe you'll get lucky.
b) Both systems are much more fragile, complicated and opaque than their promoters try to make you believe. It's not that they're lying; they truly believe in their own stuff. There is much re-invention of the wheel - they each created their own (half-assed) little language, for goodness sake!!!
c) Neither has really resolved the issue of nested projects in a clear way. Boost Build probably does or can do this. CMake has a system of "packages" and a whole 'nuther layer about "finding" them. Assuming it can be made to work - given the amount of time I've invested in CMake, I should know how to do this by now.
At least, one is more or less working right now, and the other has not proven to work in all varieties of cases.
d) I think it's time for Boost to be a little more open about tolerating/encouraging alternative build systems. I think our documentation approach is a model. Yeah, it's a hodgepodge. But the various ways of doing it pretty much work and generally don't stop working, and we don't have to constantly spend effort bringing things up to the latest and greatest system (which we couldn't agree upon anyway). We have libraries which have been going strong for 15 years - and people can still read the docs.
I should point out that we do "tolerate" other build systems. After all, we've had libraries support make and Visual Studio in the present and past. We just require that libraries support b2, since supporting differing build systems at the same time would be insanity, and likely impossible, for infrastructure support.

I have to say this does not scale at all, especially wrt. the resources (your point e/ below). I do not know if the documentation is a good example either: we have many different systems; I like quickbook, and I believe it properly serves the purpose of "documentation", where Doxygen fails. What is the cost of maintaining compatibility with many tools? I would promote the opposite in fact: some methods have proved to be good for boost; if someone wants to integrate with a currently unsupported tool, it should not impact so much the people that are e.g. maintaining the travis.yml (or they should do the work). Also, the tools are at the core of boost; we should not neglect them but rather promote them as we do boost libraries, and we should not integrate technologies that are weak "just because" it makes boost more appealing for a few developers.
+1 -- -- Rene Rivera -- Grafik - Don't Assume Anything -- Robot Dreams - http://robot-dreams.net -- rrivera/acm.org (msn) - grafikrobot/aim,yahoo,skype,efnet,gmail

-----Original Message----- From: Boost [mailto:boost-bounces@lists.boost.org] On Behalf Of Rene Rivera Sent: 20 April 2016 23:26 To: boost@lists.boost.org Subject: Re: [boost] CMake - one more time
On Wed, Apr 20, 2016 at 4:39 PM, Raffi Enficiaud <raffi.enficiaud@mines-paris.org> wrote:
On 20/04/16 17:49, Robert Ramey wrote:
Inspired by the recent discussion regarding CMake, I've spent some more time looking at this. As a result, I've done the following.
Thanks for summarizing this very very (very) long thread.
+1
I do not like the current state of b2 for many reasons (even though I think it could be really a good build system), but CMake is not covering many features that are currently required by the boost superproject. Until the point where we can consistently build the lib (including the possibly many flavors of the same library - STATIC/SHARED at the same time, etc), run the tests, and generate the documentation (including the possibility to have the boostdoc/quickbook/doxygen toolchain), I do not see any *good* reason to move to cmake.
+1

Despite its unpopularity, I see bjam/b2 as having the power to do things that other build tools are much less good at. There have been many years of very helpful support from too few experts. It's the devil many of us know and it *can be made to work powerfully*. I've gone from hating b2 to loving it (well - a grudging respect).
That's essentially the conclusion that the community ended up with years ago on the first attempt to implement Cmake support.
So still +1 for that conclusion. However, we haven't explained this well enough, especially to would-be authors. I've provided some antenatal help to authors who are surprised at the implications of portability, especially if they come from a Visual Studio IDE or Linux environment. Being required to support multiple platforms and multiple compilers and multiple versions, and static and dynamic linking, and multiple libraries, comes as a bit of a shock. We need to explain why b2 is still the best tool for *this task*.
And I would like to know what the reasons for not liking b2 are. But on the b2 list ;-)
I've aired my views on the b2 list before, so I'd like to do it here as well.

1 The syntax - As Paul Dirac might have said, "It isn't even daft!". But given enough examples and warnings about where spaces are needed (and where they go wrong), we can cope with that.

2 It tries to be too clever. Fine when it succeeds, and baffling when it doesn't. But given more *examples* of what works, and more important, what commonly doesn't work, showing how to decode the inscrutable messages, again we can cope.

3 But most of all, I struggle with the documentation.
* It doesn't say enough on 'why' Boost is using b2.
* What there is looks nice, but assumes far, far too much and misses far too much.
* It doesn't give *worked examples* of the main things one commonly does.
* It hasn't got an index or other aids to *find* what one wants to know.

I see the same questions appearing again and again - and taking the time of the long-suffering and ever-helpful experts. We need to minimize wasting their really valuable time (and hair-tearing by users).

I'd like to see a complete rewrite of the documentation. Paradoxically, I want it written by someone who is *not* an author or expert. (Obviously it will have to be edited by several people who *are* experts.)

For future library authors, a template dummy project with all the folders and files assembled to copy and modify for their library would mean that the learning curve is less steep - at present, it's an overhang!

Paul

--- Paul A. Bristow Prizet Farmhouse Kendal UK LA8 8AB +44 (0) 1539 561830

AMDG On 04/22/2016 07:20 AM, Paul A. Bristow wrote:
* It hasn't got an index or other aids to *find* what one wants to know.
http://www.boost.org/build/doc/html/ix01.html In Christ, Steven Watanabe

On Apr 22, 2016, at 8:20 AM, Paul A. Bristow <pbristow@hetp.u-net.com> wrote:
-----Original Message----- From: Boost [mailto:boost-bounces@lists.boost.org] On Behalf Of Rene Rivera Sent: 20 April 2016 23:26 To: boost@lists.boost.org Subject: Re: [boost] CMake - one more time
On Wed, Apr 20, 2016 at 4:39 PM, Raffi Enficiaud <raffi.enficiaud@mines-paris.org> wrote:
On 20/04/16 17:49, Robert Ramey wrote:
Inspired by the recent discussion regarding CMake, I've spent some more time looking at this. As a result, I've done the following.
Thanks for summarizing this very very (very) long thread.
+1
I do not like the current state of b2 for many reasons (even though I think it could be really a good build system), but CMake is not covering many features that are currently required by the boost superproject. Until the point where we can consistently build the lib (including the possibly many flavors of the same library - STATIC/SHARED at the same time, etc), run the tests, and generate the documentation (including the possibility to have the boostdoc/quickbook/doxygen toolchain), I do not see any *good* reason to move to cmake.
+1
Despite its unpopularity, I see bjam/b2 as having the power to do things that other build tools are much less good at.
There have been many years of very helpful support from too few experts.
It's the devil many of us know and it *can be made to work powerfully*.
I've gone from hating b2 to loving it (well - a grudging respect).
That's essentially the conclusion that the community ended up with years ago on the first attempt to implement Cmake support.
So still +1 for that conclusion.
However, we haven't explained this well enough, especially to would-be authors. I've provided some antenatal help to authors who are surprised at the implications of portability, especially if they come from a Visual Studio IDE or Linux environment. Being required to support multiple platforms and multiple compilers and multiple versions, and static and dynamic linking, and multiple libraries, comes as a bit of a shock.
We need to explain why b2 is still the best tool for *this task*.
The problem with BB is not entirely technical. It is very capable (as is cmake). However, cmake has much larger community support. If I need to figure out how to do something in cmake, I can google it and quickly find an answer. This is because cmake is used by a much larger community. Outside of boost, Boost.Build is rarely used, and those who have used it end up abandoning it because of issues like this: https://github.com/boostorg/build/issues/106
And I would like to know what the reasons for not liking b2 are. But on the b2 list ;-)
I've aired my views on the b2 list before, so I'd like to do it here as well.
1 The syntax - As Paul Dirac might have said, "It isn't even daft!". But given enough examples and warnings about where spaces are needed (and where they go wrong), we can cope with that.
2 It tries to be too clever. Fine when it succeeds, and baffling when it doesn’t. But given more *examples* of what works, and more important, what commonly doesn't work, showing how to decode the inscrutable messages, again we can cope.
3 But most of all, I struggle with the documentation.
* It doesn't say enough on 'why' Boost is using b2.
* What there is looks nice, but assumes far, far too much and misses far too much.
* It doesn't give *worked examples* of the main things one commonly does.
* It hasn't got an index or other aids to *find* what one wants to know.
I see the same questions appearing again and again - and taking the time of the long-suffering and ever-helpful experts.
We need to minimize wasting their really valuable time (and hair-tearing by users).
I'd like to see a complete rewrite of documentation. Paradoxically, I want it written by someone who is *not* an author or expert. (Obviously it will have to be edited by several people who *are* experts).
For future library authors, a template dummy project with all the folders and files assembled to copy and modify for their library would mean that the learning curve is less steep - at present, it's an overhang!
Paul
--- Paul A. Bristow Prizet Farmhouse Kendal UK LA8 8AB +44 (0) 1539 561830

"Paul A. Bristow" wrote in message news:003701d19c99$ce09e7a0$6a1db6e0$@hetp.u-net.com...
[...] I'd like to see a complete rewrite of documentation. Paradoxically, I want it written by someone who is *not* an author or expert. (Obviously it will have to be edited by several people who *are* experts).
Does https://github.com/boostcon/2011_presentations/raw/master/mon/Boost.Build.pd... somewhat help? When I worked on the presentation, I wasn't an expert (I exchanged countless emails with Vladimir back then in 2011, who gave me a lot of background information). I tried to present all the information in a way that made sense to me and hopefully to others who find the build system to be a mystery. While the presentation has been out there on GitHub for years, I feel hardly anybody knows about it (or maybe people know but the presentation isn't that helpful?). Boris

-----Original Message----- From: Boost [mailto:boost-bounces@lists.boost.org] On Behalf Of Boris Schäling Sent: 23 April 2016 11:45 To: boost@lists.boost.org Subject: Re: [boost] CMake - one more time
"Paul A. Bristow" wrote in message news:003701d19c99$ce09e7a0$6a1db6e0$@hetp.u-net.com...
[...] I'd like to see a complete rewrite of documentation. Paradoxically, I want it written by someone who is *not* an author or expert. (Obviously it will have to be edited by several people who *are* experts).
Does https://github.com/boostcon/2011_presentations/raw/master/mon/Boost.Build.pd... somewhat help? When I worked on the presentation, I wasn't an expert (I exchanged countless emails with Vladimir back then in 2011, who gave me a lot of background information). I tried to present all the information in a way that made sense to me and hopefully to others who find the build system to be a mystery. While the presentation has been out there on GitHub for years, I feel hardly anybody knows about it (or maybe people know but the presentation isn't that helpful?).
It certainly does help a lot - but it isn't what users and authors will get 'by default'. As often with computing, the cognoscenti just can't see what those at the shallower end of the gene pool find so difficult. And *why* Boost uses an unfamiliar build tool. Paul --- Paul A. Bristow Prizet Farmhouse Kendal UK LA8 8AB +44 (0) 1539 561830

On 4/22/2016 4:20 PM, Paul A. Bristow wrote:
I'd like to see a complete rewrite of documentation. Paradoxically, I want it written by someone who is *not* an author or expert. (Obviously it will have to be edited by several people who *are* experts).
There sadly does not appear to be a line of people willing to help with documentation (in any open-source project I have ever worked with). Boris was the only one recently, having authored all of http://www.boost.org/build/tutorial.html

It would seem the only practical approach at this point would be for each interested party to send me a list of things they want changed in the docs, or to spend some time over a Skype call literally walking through the documentation as if for the first time, pointing out issues.
For future library authors, a template dummy project with all the folders and files assembled to copy and modify for their library would mean that the learning curve is less steep - at present, it's an overhang!
That's a good idea, and will be easy to do for me. -- Vladimir Prus http://vladimirprus.com

On 4/23/16 12:52 PM, Vladimir Prus wrote:
On 4/22/2016 4:20 PM, Paul A. Bristow wrote:
I'd like to see a complete rewrite of documentation. Paradoxically, I want it written by someone who is *not* an author or expert. (Obviously it will have to be edited by several people who *are* experts).
There sadly does not appear to be a line of people willing to help with documentation (in any open-source project I have ever worked with). Boris was the only one recently, having authored all of
http://www.boost.org/build/tutorial.html
It would seem the only practical approach at this point would be for each interested party to send me a list of things they want changed in the docs, or to spend some time over a Skype call literally walking through the documentation as if for the first time, pointing out issues.
For future library authors, a template dummy project with all the folders and files assembled to copy and modify for their library would mean that the learning curve is less steep - at present, it's an overhang!
That's a good idea, and will be easy to do for me.
One idea that might be worth experimenting with would be user-updateable documentation, like php has. Robert Ramey

-----Original Message----- From: Boost [mailto:boost-bounces@lists.boost.org] On Behalf Of Vladimir Prus Sent: 23 April 2016 20:53 To: boost@lists.boost.org Subject: [boost] Boost.Build documentation (Was:: CMake - one more time)
On 4/22/2016 4:20 PM, Paul A. Bristow wrote:
I'd like to see a complete rewrite of documentation. Paradoxically, I want it written by someone who is *not* an author or expert. (Obviously it will have to be edited by several people who *are* experts).
There sadly does not appear to be a line of people willing to help with documentation (in any open-source project I have ever worked with). Boris was the only one recently, having authored all of
http://www.boost.org/build/tutorial.html
It would seem the only practical approach at this point would be for each interested party to send me a list of things they want changed in the docs, or to spend some time over a Skype call literally walking through the documentation as if for the first time, pointing out issues.
I might find time to make some concrete suggestions. (As everyone knows, there are always more exciting projects ;-)
For future library authors, a template dummy project with all the folders and files assembled to copy and modify for their library would mean that the learning curve is less steep - at present, it's an overhang!
That's a good idea, and will be easy to do for me.
That would be a real help to wannabe authors. Over time, I have looked at the Jamfiles of lots of libraries. Everyone seems to do it differently, often for reasons that are not clear to me. A clear recommendation (preferably with some comments on what and why) would reduce entropy here. Thanks Paul --- Paul A. Bristow Prizet Farmhouse Kendal UK LA8 8AB +44 (0) 1539 561830

4- It is in fact - I believe - possible to do a "cd doc; make" with CMake, but not from the source tree,
<snip> I believe that that is what I've done with the safe numerics library.
So what I do now is also this: I am maintaining my CMakeLists.txt for the purpose of having a proper development environment, but it has no other purpose at all. Also, I have to say that it does not work well with this horrible "b2 headers": in my cmake, I am hitting the headers of the library (in libX/include), and not those of the main superproject. My IDE shows me 2 different files because of that.
I don't have this problem - I just use the cmake include-directory command to refer to the include files of the library I'm working on. As far as CMake is concerned, the superproject doesn't exist. When I switch to boost build to run comprehensive tests while I take 10 hours off to catch up on my sleep, it creates the headers it needs - I think automatically. Or maybe I run b2 headers before I launch it. Now I don't remember.
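Concretely, it amounts to one line in the library's own CMakeLists.txt (a sketch; the path is generic, not copied from Safe Numerics):

    # point the IDE and the compiler at this library's own headers only;
    # the superproject and its b2-generated header links never enter into it
    include_directories(${CMAKE_CURRENT_SOURCE_DIR}/include)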
I do not like the current state of b2 for many reasons (even though I think it could be really a good build system), but CMake is not covering many features that are currently required by the boost superproject. Until the point where we can consistently build the lib (including the possibly many flavors of the same library - STATIC/SHARED at the same time, etc), run the tests, and generate the documentation (including the possibility to have the boostdoc/quickbook/doxygen toolchain),
I should say that my boost build does all those things. My complaint is that:
a) It's harder to set up than I would hope it to be.
b) Once set up, it's pretty reliable. But when it breaks, it's an easter egg hunt to make it work again.
I do not see any *good* reason to move to cmake.
I'm not proposing abandoning support for CMake. I'm proposing that we officially tolerate the existence of CMake files in boost projects - perhaps with some restrictions.
I have to say this does not scale at all, especially wrt. <snip>
I think that boost works better as a "federation" rather than a "republic". Our loose rules have permitted things like quickbook to be born in the first place. We'll never agree on certain things: a) this topic b) a documentation system c) other stuff. We just keep moving on. Robert Ramey

On Apr 20, 2016, at 4:39 PM, Raffi Enficiaud <raffi.enficiaud@mines-paris.org> wrote:
On 20/04/16 17:49, Robert Ramey wrote:
Inspired by the recent discussion regarding CMake, I've spent some more time looking at this. As a result, I've done the following.
Hi, Thanks for summarizing this very very (very) long thread.
a) I experimented with using CMake in the more customary manner in my canonical project Safe Numerics (see the Boost Library Incubator). The current version does distribute CMakeLists.txt around the subdirectories of the library root. My original objections to doing this were assuaged when I realized that boost build does the same thing. That is, I typically have Jamfiles inside test, example and sometimes other directories.
b) One of the features that I particularly like about Boost Build is the possibility of arbitrary nesting of Jamfiles. This permits one to build/test a portion of one's library (e.g. test, example) as well as the whole library. For a large library this is indispensable. In spite of what many people think, this is not easily supported with CMake. However, I have found that with some extra effort, it is possible to support this to the extent we need it. So in this project ONE CAN BUILD (actually, create an IDE project or makefile that builds) ANY OF THE "SUBPROJECTS" (TEST, EXAMPLE) AS WELL AS THE LIBRARY "SUPERPROJECT"
If you think about nesting as being able to run something like "make doc" from any library, then yes, cmake is definitely lacking that. OTOH, I have the feeling that: 1- everything in the current boost is coupled to the build system. I read people wanting to be modular, but that is predicated on the fact that there is an adequate build system.
2- b2 is a bit cheating: it knows the full "namespace" (it flattens it) and can then e.g. sort out the target dependencies and build order. So even if you think that by typing "make doc" in your library you are hitting only the build system (boostbook/doxygen toolchain, etc.) plus your library, this is wrong: every target in b2 has knowledge of the full DAG of dependencies, which makes it highly coupled to every other library, or at least to the superproject. In CMake this nesting of CMakeLists is more compartmented: one CMakeLists is supposed to be more or less independent from its siblings (but not from the parent). This is also what would make the transition of the superproject to CMake very difficult: for instance, dependencies have to be explicitly stated in a main orchestrating CMakeLists.txt (in b2 this is, I believe, done implicitly, certainly in several parsing passes).
3- b2 imposes a structure of directories as well: for instance, if I do """ using quickbook ; using doxygen ; using boostbook ; """ those features should be in files relative to some paths of the b2 location wrt. the superproject (please correct me if I am wrong). Also, when I "make" a library, it goes magically to the bin.v2 folder of the root of the superproject. I have the feeling that some behaviours of b2 in terms of relative paths are hard-coded somewhere. That is to say: "cd doc; b2" is not at all independent of the superproject folder structure, and the apparent modularity is not a real one. But again, I agree that it just works for the purpose of boost; it is just tightly coupled to hidden things.
4- It is in fact - I believe - possible to do a "cd doc; make" with CMake, not from the source tree but from a subfolder of the build tree. You have to generate the superproject (or part of it) first, though. OTOH, this first generation of the superproject is done intrinsically by b2 anyway: b2 does it in memory right before executing your command (look at the time it takes before it starts processing the "cd doc; b2"). So if we think about nesting: CMake is better in the sense that it has a hierarchy of projects, so the dependencies are forced to be a tree, while b2 fakes (in my opinion) the nesting (apparent hierarchy) and flattens everything to extract a DAG.
c) CMake is indispensable to me:
* it creates IDE projects for "any?" platform. I use an IDE a lot.
* everyone else uses it - so it makes it easier to promote my work to others.
* it's easier to make work - still a pain though.
* There is lots of information around the net about how to use CMake, how easy it is, etc. Although these help when you're looking for an answer (which is all the time), they really betray how complex and arbitrary the system is.
* It has an almost idiot-proof GUI version which I use a lot. I really like this.
* CMake is well maintained and supported by its promoters.
So what I do now is also this: I am maintaining my CMakeLists.txt for the purpose of having a proper development environment, but it has no other purpose at all. Also, I have to say that it does not work well with this horrible "b2 headers": in my cmake, I am hitting the headers of the library (in libX/include), and not those of the main superproject. My IDE shows me 2 different files because of that. But apart from that, it is good enough (and yes, I have one CMakeLists.txt for build+doc+test; I do not see any good reason for splitting that).
d) Boost Build
* either just works (great!) or doesn't. If it doesn't, it's almost impossible to fix without getting help.
* I've never run across anyone outside of boost who uses it. It makes it harder to promote my work.
* It's natural to compose projects into "super projects".
* it's almost impossible to integrate with an IDE. At one time, I had things set up so I could debug executables created with boost build with the Visual Studio IDE and debugger. But it was just too fragile and time consuming to keep everything in sync.
* it has a lot of "automatic" behavior which can be really, really confusing. A small example: you've got multiple compilers on your system. When it discovers this, it just picks the "best" one and you don't know which one you got until the project builds (or not). I'm sure this was implemented this way to make usage of boost build "simple" but it has the opposite effect. Much better would be to fail immediately with a message "multiple compilers found: ... use toolset=<compiler name> to select the desired one."
I do not like the current state of b2 for many reasons (even though I think it could be really a good build system), but CMake is not covering many features that are currently required by the boost superproject. Until the point where we can consistently build the lib (including the possibly many flavors of the same library - STATIC/SHARED at the same time, etc), run the tests, and generate the documentation (including the possibility to have the boostdoc/quickbook/doxygen toolchain), I do not see any *good* reason to move to cmake.
Cmake can do all that you listed there. In addition, I found it's easy to generate targets at configure time. So the Fit library actually adds additional tests to test examples and header files. Plus, I can create configuration headers based on whether something compiles or runs. I believe BB can do the same using virtual targets or something, but I haven't clearly figured it out. Finally, for boost, it could provide some high-level cmake functions so all of these things can happen consistently across libraries.
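For example, a configuration header driven by a compile check can look like this (a sketch; the macro and file names are made up, not taken from the Fit library):

    include(CheckCXXSourceCompiles)

    # probe the compiler at configure time
    check_cxx_source_compiles("
      int main() { auto f = [](auto x) { return x; }; return f(0); }
    " HAS_GENERIC_LAMBDAS)

    # config.hpp.in contains the line: #cmakedefine HAS_GENERIC_LAMBDAS
    configure_file(config.hpp.in ${CMAKE_CURRENT_BINARY_DIR}/config.hpp)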
Some Conclusions - I'm trying to not make this a rant
a) The ideal, platform-independent build system does not yet exist. I'm guessing it never will. I'm sure it won't happen in my lifetime - but then I'm 68 - maybe you'll get lucky.
b) Both systems are much more fragile, complicated and opaque than their promoters try to make you believe. It's not that they're lying; they truly believe in their own stuff. There is much re-invention of the wheel - they each created their own (half-assed) little language, for goodness sake!!!
c) Neither has really resolved the issue of nested projects in a clear way. Boost Build probably does or can do this. CMake has a system of "packages" and a whole 'nuther layer about "finding" them. Assuming it can be made to work - given the amount of time I've invested in CMake, I should know how to do this by now.
At least, one is more or less working right now, and the other has not proven to work in all varieties of cases.
d) I think it's time for Boost to be a little more open about tolerating/encouraging alternative build systems. I think our documentation approach is a model. Yeah, it's a hodgepodge. But the various ways of doing it pretty much work and generally don't stop working, and we don't have to constantly spend effort bringing things up to the latest and greatest system (which we couldn't agree upon anyway). We have libraries which have been going strong for 15 years - and people can still read the docs.
I have to say this does not scale at all, especially wrt. the resources (your point e/ below). I do not know if the documentation is a good example either: we have many different systems; I like quickbook, and I believe it properly serves the purpose of "documentation", where Doxygen fails. What is the cost of maintaining compatibility with many tools? I would promote the opposite in fact: some methods have proved to be good for boost; if someone wants to integrate with a currently unsupported tool, it should not impact so much the people that are e.g. maintaining the travis.yml (or they should do the work). Also, the tools are at the core of boost; we should not neglect them but rather promote them as we do boost libraries, and we should not integrate technologies that are weak "just because" it makes boost more appealing for a few developers.
e) We should find some way to recognize those who have made the system work as well as it has: Doug Gregor (boostbook), Eric Niebler and Joel Guzman (quickbook), Vladimir Prus, Rene Rivera, Steve Watanabe. I know there are others but these come to mind immediately.
Note that I have only addressed the issue of library development, which is my main interest. I'm really not addressing the issues related to users of libraries. In particular, CMake has the whole "find" thing, which I'm still not even seeing the need for. If I want to use a library, I can build the library and move it to a common place with the headers, specify the include directory, and I'm on my way. I'm sure someone will step up to enlighten me on this.
Robert Ramey

On 22/04/16 at 19:42, Paul Fultz II wrote:
On Apr 20, 2016, at 4:39 PM, Raffi Enficiaud <raffi.enficiaud@mines-paris.org> wrote: <snip> I do not like the current state of b2 for many reasons (even though I think it could be really a good build system), but CMake is not covering many features that are currently required by the boost superproject. Until the point where we can consistently build the lib (including the possibly many flavors of the same library - STATIC/SHARED at the same time, etc), run the tests, and generate the documentation (including the possibility to have the boostdoc/quickbook/doxygen toolchain), I do not see any *good* reason to move to cmake.
Cmake can do all that you listed there. In addition, I found it's easy to generate targets at configure time. So the Fit library actually adds additional tests to test examples and header files. Plus, I can create configuration headers based on whether something compiles or runs. I believe BB can do the same using virtual targets or something, but I haven't clearly figured it out.
Certainly, CMake can do everything with the appropriate effort. But so far, although I am a CMake user, I do not know how to do this:

- having the same target name with different target properties, like:

    set(PROJECT_SRC ....)
    add_library(myproject SHARED ${PROJECT_SRC})
    add_library(myproject STATIC ${PROJECT_SRC})

  How do you do that? How do you refer to the appropriate variant?

- having a set of dependencies that is not driven by a high-level CMakeLists.txt. You advocate the solution of packaging, but this does not target all platforms, and just translates the same problem to another layer in my opinion. As a developer, in order to work on a library X that depends on Y, you should install Y, and this information should appear in X (so this is implicit knowledge). What this process does is put X and Y at the same level of knowledge: a flattened set of packages. BJam already does the same, but at the compilation/build step, and without the burden of the extra management of packages (updating upstream Y for instance, when Y can be a set of many packages, and obviously in a confined, repeatable and isolated development environment). But maybe you are thinking of something else. To me this is a highly non-trivial task to do with CMake, and it ends up in half-baked solutions like ROS/Catkin (http://wiki.ros.org/catkin/conceptual_overview), which is really not CMake and just makes things harder for everyone.

- I can continue... such as target subset selection. It is doable with CMake with, "I think", some umbrella projects, but again this is hard to maintain and requires high-level orchestration. Take only the tests, for instance: suppose I do not want to compile them in my first pass, and then I change my mind and want to run a subset of them. What I also want is to not waste my time waiting for a billion files to compile; I just want the minimal compilation. So it comes to my mind that EXCLUDE_FROM_ALL might be used, but when I run ctest -R something*, I get an error... Maybe you know a good way of doing that in cmake?
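The closest I can get (a sketch, with made-up names) is an umbrella target that has to be built by hand before ctest can run anything:

    # not compiled by the default build
    add_executable(test_foo EXCLUDE_FROM_ALL test_foo.cpp)
    add_test(NAME test_foo COMMAND test_foo)

    # "make build_tests" compiles the test binaries on demand
    add_custom_target(build_tests)
    add_dependencies(build_tests test_foo)

but that is again orchestration by hand: ctest itself will not build test_foo before trying to run it.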
Finally, for boost, it could provide some high-level cmake functions so all of these things can happen consistently across libraries.
Sure. Or ... BJam should be given some more care and visibility, like a GSoC (bis) track?

On 4/22/16 2:56 PM, Raffi Enficiaud wrote:
Certainly, CMake can do everything with the appropriate effort. But so far, although I am a CMake user, I do not know how to do this:
- having the same target name with different target properties, like:

    set(PROJECT_SRC ....)
    add_library(myproject SHARED ${PROJECT_SRC})
    add_library(myproject STATIC ${PROJECT_SRC})
how do you do that? how do you refer to the appropriate variant?
I know it's beside the point of your post, but I can't resist. I do this in the following way:

a) I set up a cached boolean variable USE_STATIC
b) I use the CMake script

    if(USE_STATIC)
      add_library(myproject STATIC ${PROJECT_SRC})
    else()
      add_library(myproject SHARED ${PROJECT_SRC})
    endif()

Then I generate two different versions and can switch back and forth between them. You can see this in the serialization library CMake files.

Robert Ramey

On Apr 22, 2016, at 5:41 PM, Robert Ramey <ramey@rrsd.com> wrote:
On 4/22/16 2:56 PM, Raffi Enficiaud wrote:
Certainly, CMake can do everything with the appropriate effort. But so far, although I am a CMake user, I do not know how to do this:
- having the same target name with different target properties, like:
  set(PROJECT_SRC ....)
  add_library(myproject SHARED ${PROJECT_SRC})
  add_library(myproject STATIC ${PROJECT_SRC})
how do you do that? how do you refer to the appropriate variant?
I know it's beside the point of your post, but I can't resist.
I do this in the following way.
a) I set up a cached boolean variable USE_STATIC
b) I use the CMake script
if(USE_STATIC)
  add_library(myproject STATIC ${PROJECT_SRC})
else()
  add_library(myproject SHARED ${PROJECT_SRC})
endif()
This is not necessary at all. Cmake provides the `BUILD_SHARED_LIBS` variable that can be set for this same purpose: https://cmake.org/cmake/help/v3.5/variable/BUILD_SHARED_LIBS.html
Then I generate two different versions and can switch back and forth between them.
You see this in the serialization library CMake files
Robert Ramey

On 23 April 2016 at 02:36, Paul Fultz II <pfultz2@yahoo.com> wrote:
On Apr 22, 2016, at 5:41 PM, Robert Ramey <ramey@rrsd.com> wrote:
On 4/22/16 2:56 PM, Raffi Enficiaud wrote:
Certainly, CMake can do everything with the appropriate effort. But so far, although I am a CMake user, I do not know how to do this:
- having the same target name with different target properties, like:
  set(PROJECT_SRC ....)
  add_library(myproject SHARED ${PROJECT_SRC})
  add_library(myproject STATIC ${PROJECT_SRC})
how do you do that? how do you refer to the appropriate variant?
I know it's beside the point of your post, but I can't resist.
I do this in the following way.
a) I set up a cached boolean variable USE_STATIC
b) I use the CMake script
if(USE_STATIC)
  add_library(myproject STATIC ${PROJECT_SRC})
else()
  add_library(myproject SHARED ${PROJECT_SRC})
endif()
This is not necessary at all. Cmake provides the `BUILD_SHARED_LIBS` variable that can be set for this same purpose:
https://cmake.org/cmake/help/v3.5/variable/BUILD_SHARED_LIBS.html
It forces all the targets defined after setting it to use the defined linking mode. Most repositories of tools (outside boost at least) provide several library or executable targets. In my experience I end up using different linking modes for different libraries which are in the same repository. Or the target still needs to be built and distributed in several linking modes. Therefore, I can never use BUILD_SHARED_LIBS because it is not fine-grained enough. In my opinion, having multiple targets, one for each linking mode that makes sense for the library usage, works better.

On Apr 23, 2016, at 5:40 AM, Klaim - Joël Lamotte <mjklaim@gmail.com> wrote:
On 23 April 2016 at 02:36, Paul Fultz II <pfultz2@yahoo.com> wrote:
On Apr 22, 2016, at 5:41 PM, Robert Ramey <ramey@rrsd.com> wrote:
On 4/22/16 2:56 PM, Raffi Enficiaud wrote:
Certainly, CMake can do everything with the appropriate effort. But so far, although I am a CMake user, I do not know how to do this:
- having the same target name with different target properties, like:
  set(PROJECT_SRC ....)
  add_library(myproject SHARED ${PROJECT_SRC})
  add_library(myproject STATIC ${PROJECT_SRC})
how do you do that? how do you refer to the appropriate variant?
I know it's beside the point of your post, but I can't resist.
I do this in the following way.
a) I set up a cached boolean variable USE_STATIC
b) I use the CMake script
if(USE_STATIC)
  add_library(myproject STATIC ${PROJECT_SRC})
else()
  add_library(myproject SHARED ${PROJECT_SRC})
endif()
This is not necessary at all. Cmake provides the `BUILD_SHARED_LIBS` variable that can be set for this same purpose:
https://cmake.org/cmake/help/v3.5/variable/BUILD_SHARED_LIBS.html
It forces all the targets defined after setting it to use the defined linking mode.
There is only one library target.
Most repositories of tools (outside boost at least) provide several library or executable targets.
I don’t see why a library would build more than one library, and it wouldn’t make sense to change the linking mode for an executable (that is, if you build the library static then the executable will need to link against the static version of the library, since it's the only one available).
In my experience I end up using different linking modes for different libraries which are in the same repository.
Sounds way too complicated.
Or the target still needs to be built and distributed in several linking modes.
Yes, but you can just build it twice.
Therefore, I can never use BUILD_SHARED_LIBS because it is not fine-grained enough.
In my opinion, having multiple targets, one for each linking mode that makes sense for the library usage, works better.
I don’t see how that makes sense at all. A library could provide a flag to build both as an optimization; otherwise it should just fall back on the default.

Le 23/04/16 à 14:00, Paul Fultz II a écrit :
On Apr 23, 2016, at 5:40 AM, Klaim - Joël Lamotte <mjklaim@gmail.com> wrote:
<snip>
This is not necessary at all. Cmake provides the `BUILD_SHARED_LIBS` variable that can be set for this same purpose:
https://cmake.org/cmake/help/v3.5/variable/BUILD_SHARED_LIBS.html
It forces all the targets defined after setting it to use the defined linking mode.
There is only one library target.
I agree with Joël (please excuse me if it is "Klaim - Joël"). The "BUILD_SHARED_LIBS" is a global that is applied by default to all targets. It does not offer the granularity we are talking about here.
Most repositories of tools (outside boost at least) provide several library or executable targets.
I don’t see why a library would build more than one library, and it wouldn’t make sense to change the linking mode for an executable (that is, if you build the library static then the executable will need to link against the static version of the library, since it's the only one available).
The fact that you do not see why does not mean that this is not a practice in boost. We have that in boost.test for instance: we need to test several variants. Also, the fact that you do not use this feature is because of the limitations of cmake (meaning: if cmake were able to do that, you would maybe see a good reason to do it). This is not covered by CMake, *by design*, while it is in BJam, *by design*. The *by design* is important here: all attempts to cover the design limitations need extra effort.
In my experience I end up using different linking modes for different libraries which are in the same repository.
Sounds way too complicated.
Why?
Or the target still needs to be built and distributed in several linking modes.
Yes, but you can just build it twice.
I believe you are missing an important element: each target may be built in, say, M different link types (static, shared, whatever). Say you have N targets: we have M^N combinations of variants if we take the full set of targets. Of course, I can build it M^N times (worst case), with the corresponding switches set in the CMakeLists.txt each time ...
Therefore, I can never use BUILD_SHARED_LIBS because it is not fine-grained enough.
In my opinion, having multiple targets, one for each linking mode that makes sense for the library usage, works better.
I don’t see how that makes sense at all. A library could provide a flag to build both as an optimization; otherwise it should just fall back on the default.
Could? Should? CMake does not enforce that, at least, so it is not up to you to decide either. But the build variants are just one aspect of the problem; there are many others. Raffi

On Apr 23, 2016, at 8:34 AM, Raffi Enficiaud <raffi.enficiaud@mines-paris.org> wrote:
Le 23/04/16 à 14:00, Paul Fultz II a écrit :
On Apr 23, 2016, at 5:40 AM, Klaim - Joël Lamotte <mjklaim@gmail.com> wrote:
<snip>
This is not necessary at all. Cmake provides the `BUILD_SHARED_LIBS` variable that can be set for this same purpose:
https://cmake.org/cmake/help/v3.5/variable/BUILD_SHARED_LIBS.html
It forces all the targets defined after setting it to use the defined linking mode.
There is only one library target.
I agree with Joël (please excuse me if it is "Klaim - Joël"). The "BUILD_SHARED_LIBS" is a global that is applied by default to all targets. It does not offer the granularity we are talking about here.
It doesn’t offer the granularity. The granularity comes from the modularity of libraries. If that doesn’t give enough granularity then most likely a library needs to be split up into smaller libraries.
Most repositories of tools (outside boost at least) provide several library or executable targets.
I don’t see why a library would build more than one library, and it wouldn’t make sense to change the linking mode for an executable (that is, if you build the library static then the executable will need to link against the static version of the library, since it's the only one available).
The fact that you do not see why does not mean that this is not a practice in boost.
And just because it is a practice in boost doesn’t mean it is good practice.
We have that in boost.test for instance: we need to test several variants. Also, the fact that you do not use this feature is because of the limitations of cmake (meaning: if cmake were able to do that, you would maybe see a good reason to do it).
I understand testing all variants. I don’t see why one library is providing that many library targets.
This is not covered by CMake, *by design*, while it is in BJam, *by design*. The *by design* is important here: all attempts to cover the design limitations need extra effort.
That is not true. The object library was created to handle optimizing building both static/shared variants.
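A minimal sketch of that object-library approach, reusing the hypothetical names from earlier in the thread (object libraries exist since CMake 2.8.8; the PIC property is needed because the same objects feed the shared variant):

    add_library(myproject_obj OBJECT ${PROJECT_SRC})
    set_property(TARGET myproject_obj PROPERTY POSITION_INDEPENDENT_CODE ON)
    # both variants reuse the already-compiled objects
    add_library(myproject_shared SHARED $<TARGET_OBJECTS:myproject_obj>)
    add_library(myproject_static STATIC $<TARGET_OBJECTS:myproject_obj>)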
In my experience I end up using different linking modes for different libraries which are in the same repository.
Sounds way too complicated.
Why?
Linking static and shared libraries together can lead to a lot of complicated problems. You have to make sure the static code is built PIC, depending on how it needs to be linked together.
Or the target still needs to be built and distributed in several linking modes.
Yes, but you can just build it twice.
I believe you are missing an important element: each target may be built in, say, M different link types (static, shared, whatever). Say you have N targets: we have M^N combinations of variants if we take the full set of targets. Of course, I can build it M^N times (worst case), with the corresponding switches set in the CMakeLists.txt each time ...
Therefore, I can never use BUILD_SHARED_LIBS because it is not fine-grained enough.
In my opinion, having multiple targets, one for each linking mode that makes sense for the library usage, works better.
I don’t see how that makes sense at all. A library could provide a flag to build both as an optimization; otherwise it should just fall back on the default.
Could? Should? CMake does not enforce that, at least, so it is not up to you to decide either.
Yes, but a cmake module could take care of that under the hood.
But the build variants are just one aspect of the problem; there are many others.
Other build variants require building again; the static/shared variant is the only one that offers an optimization.
Raffi

On 23 April 2016 at 18:47, Paul Fultz II <pfultz2@yahoo.com> wrote:
Linking static and shared libraries together can lead to a lot of complicated problems. You have to make sure the static code is built PIC, depending on how it needs to be linked together.
That's the least problematic issue you can have, indeed; there are others related to static objects etc. Nonetheless, in big projects it can be necessary to have some parts as shared libraries and others as static libraries. If you want to use all boost libraries as static except boost.log, because of its singleton core object, you need to build boost both static and dynamic and then make sure you link with the right version of each boost library. This is a concrete example from a project I worked on, where this setup was necessary to achieve the features of the project.

On 23 April 2016 at 14:00, Paul Fultz II <pfultz2@yahoo.com> wrote:
On Apr 23, 2016, at 5:40 AM, Klaim - Joël Lamotte <mjklaim@gmail.com> wrote:
On 23 April 2016 at 02:36, Paul Fultz II <pfultz2@yahoo.com> wrote:
On Apr 22, 2016, at 5:41 PM, Robert Ramey <ramey@rrsd.com> wrote:
On 4/22/16 2:56 PM, Raffi Enficiaud wrote:
Certainly, CMake can do everything with the appropriate effort. But so far, although I am a CMake user, I do not know how to do this:
- having the same target name with different target properties, like:
  set(PROJECT_SRC ....)
  add_library(myproject SHARED ${PROJECT_SRC})
  add_library(myproject STATIC ${PROJECT_SRC})
how do you do that? how do you refer to the appropriate variant?
I know it's beside the point of your post, but I can't resist.
I do this in the following way.
a) I set up a cached boolean variable USE_STATIC
b) I use the CMake script
if(USE_STATIC)
  add_library(myproject STATIC ${PROJECT_SRC})
else()
  add_library(myproject SHARED ${PROJECT_SRC})
endif()
This is not necessary at all. Cmake provides the `BUILD_SHARED_LIBS` variable that can be set for this same purpose:
https://cmake.org/cmake/help/v3.5/variable/BUILD_SHARED_LIBS.html
It forces all the targets defined after setting it to use the defined linking mode.
There is only one library target.
Most repositories of tools (outside boost at least) provide several library or executable targets.
I don’t see why a library would build more than one library, and it wouldn’t make sense to change the linking mode for an executable (that is, if you build the library static then the executable will need to link against the static version of the library, since it's the only one available).
I said one repository, several libraries. Of course I was talking about the linking modes of the several libraries. If you think about boost libraries, then indeed only one library is in one repository. But if you build all of boost at once, then you still have to set a variable for each library you want static or dynamic. So it's still not fine-grained enough.
In my experience I end up using different linking modes for different libraries which are in the same repository.
Sounds way too complicated.
Yep, that's the problem with real-world projects. Sometimes you don't have a choice.
Or the target still needs to be built and distributed in several linking modes.
Yes, but you can just build it twice.
That is indeed an alternative. One which produces a lot of data for no good reason, but still an alternative.
Therefore, I can never use BUILD_SHARED_LIBS because it is not fine-grained enough.
In my opinion, having multiple targets, one for each linking mode that makes sense for the library usage, works better.
I don’t see how that makes sense at all. A library could provide a flag to build both as an optimization; otherwise it should just fall back on the default.
If you don't want to have to configure the library several times, then it's easier to configure once and then have your target link to the static or dynamic target at will. It makes things very simple, in particular with projects that need to evolve quickly. Anyway, what I am saying is that maybe your experience doesn't match all the actual use cases of projects, which leads to false assumptions about what is enough and what is not. On this particular point, I totally disagree that BUILD_SHARED_LIBS is useful. It's like a hack that doesn't help solve the issue.

This is not necessary at all. Cmake provides the `BUILD_SHARED_LIBS` variable that can be set for this same purpose:
https://cmake.org/cmake/help/v3.5/variable/BUILD_SHARED_LIBS.html
Of course I saw this. This is a great example of the kind of thing that drives me nuts when looking at the CMake documentation.
a) It's not clear how the variable is supposed to get set in the first place. Is one expected to include in the CMakeLists.txt file a statement like set(BUILD_SHARED_LIBS TRUE)? But wouldn't that constrain the build to a shared or a static library? What if you want to choose?
b) What is the default? - oh, it might depend upon whether "option()" is used somewhere. Instead of providing an answer it just gives you another question.
c) So it is not clear on its face whether the statement add_library(target_name ....) builds a static or shared library. It depends upon some other "global" variable. That is, you cannot know what a statement will do by looking at the statement. Worse yet, the behavior of the statement may depend upon some higher-level CMakeLists.txt file that you might not even be aware of.
d) Is BUILD_SHARED_LIBS a cached variable?
I could go on, but you get the idea. And this is just one single variable. There are dozens with similar behavior and implications. As far as I can tell, all other build systems suffer from the same problems. And the documentation actually makes it worse because it suggests that the system is simple to use when it's actually not. It makes naive users feel like they're stupid - whether they are or not. And this is made worse by the fact that there are lots of people who have made simple build scripts for small projects with limited requirements. These work the first time - now they think it IS easy and they think there's nothing to it. It's depressing. One cannot make a build script without deep trolling of the net to find the cause of arbitrary surprises. You can't just read a build script and know what it does.
Robert Ramey

On Apr 23, 2016, at 12:08 PM, Robert Ramey <ramey@rrsd.com> wrote:
This is not necessary at all. Cmake provides the `BUILD_SHARED_LIBS` variable that can be set for this same purpose:
https://cmake.org/cmake/help/v3.5/variable/BUILD_SHARED_LIBS.html
Of course I saw this. This is a great example of the kind of thing that drives me nuts. looking at the CMake documentation
a) It's not clear how the variable is supposed to get set in the first place. Is one expected to include in the CMakeLists.txt file a statement like set(BUILD_SHARED_LIBS TRUE)? But wouldn't that constrain the build to a shared or a static library? What if you want to choose?
I would always set it at configuration `cmake -DBUILD_SHARED_LIBS=On` or in my toolchain file, which would be how I would set your `USE_STATIC` variable as well.
b) What is the default? - oh it might depend upon whether "option()" is used somewhere. Instead of providing an answer it just gives you another question.
You are right they don’t make clear what the default value is. The `option()` part is so that the user could set it in cmake-gui or ccmake, which is not the clearest in the documentation either.
c) So it is not clear on its face whether the statement add_library(target_name ....) builds a static or shared library. It depends upon some other "global" variable. That is you cannot know what a statement will do by looking at the statement. Worse yet, the behavior of the statement may depend upon some higher level CMakeLists.txt file that you might not even be aware of.
The idea is that the user would set this (not the library author), so it shouldn’t be set in a CMakeLists.txt file.
d) is BUILD_SHARED_LIBS a cached variable?
I could go on, but you get the idea. And this is just one single variable. There are dozens with similar behavior and implications. As far as I can tell, all other build systems suffer from the same problems.
Yes, but cmake has a much larger community, so after a little googling I can find a solution to the problem, thanks to the wealth of examples and tutorials out there.
And the documentation actually makes it worse because it suggests that the system is simple to use when it's actually not. It makes naive users feel like they're stupid - whether they are or not.
And this is made worse by the fact that there are lots of people who have made simple build scripts for small projects with limited requirements. These work the first time - now they think it IS easy and they think there's nothing to it. It's depressing.
I don’t know what you are referring to here.
One cannot make a build script without deep trolling of the net to find the cause of arbitrary surprises. You can't just read a build script and know what it does.
Robert Ramey

I would always set it at configuration `cmake -DBUILD_SHARED_LIBS=On` or in my toolchain file, which would be how I would set your `USE_STATIC` variable as well.
LOL - I use the GUI
b) What is the default? - oh it might depend upon whether "option()" is used somewhere. Instead of providing an answer it just gives you another question.
You are right they don’t make clear what the default value is. The `option()` part is so that the user could set it in cmake-gui or ccmake, which is not the clearest in the documentation either.
c) So it is not clear on its face whether the statement add_library(target_name ....) builds a static or shared library. It depends upon some other "global" variable. That is you cannot know what a statement will do by looking at the statement. Worse yet, the behavior of the statement may depend upon some higher level CMakeLists.txt file that you might not even be aware of.
The idea is that the user would set this (not the library author), so it shouldn’t be set in a CMakeLists.txt file.
This statement is not about the particular CMake variable. It's about the fact that the "language" has many, many builtin ambiguities which make it impossible to know or agree on how to use it.
d) is BUILD_SHARED_LIBS a cached variable?
I could go on, but you get the idea. And this is just one single variable. There are dozens with similar behavior and implications. As far as I can tell, all other build systems suffer from the same problems.
Yes, but cmake has a much larger community, so after a little googling I can find a solution to the problem, thanks to the wealth of examples and tutorials out there.
Ahhh yes. That's the real problem. What you characterize as a solution/feature, I characterize as a symptom of a fundamental fault. Do you have to troll the whole net to differentiate a function? Of course not. The fact that this is now an acceptable answer is testament to the sad state of modern software development!
And the documentation actually makes it worse because it suggests that the system is simple to use when it's actually not. It makes naive users feel like they're stupid - whether they are or not.
And this is made worse by the fact that there are lots of people who have made simple build scripts for small projects with limited requirements. These work the first time - now they think it IS easy and they think there's nothing to it. It's depressing.
I don’t know what you are referring to here.
LOL - I'm referring to discussions such as this one. The fact that every time a question is raised, someone has an answer for some specific scenario. This is deemed to be evidence that the system is a good one. This is the exact wrong conclusion! It seems that it never occurs to anyone that the fact that such a question has to be asked in the first place is an indicator that something is fundamentally wrong with the concept and/or implementation. Robert Ramey

On Apr 23, 2016, at 1:09 PM, Robert Ramey <ramey@rrsd.com> wrote:
I would always set it at configuration `cmake -DBUILD_SHARED_LIBS=On` or in my toolchain file, which would be how I would set your `USE_STATIC` variable as well.
LOL - I use the GUI
But don’t you still need to run cmake first before running the GUI?
b) What is the default? - oh it might depend upon whether "option()" is used somewhere. Instead of providing an answer it just gives you another question.
You are right they don’t make clear what the default value is. The `option()` part is so that the user could set it in cmake-gui or ccmake, which is not the clearest in the documentation either.
c) So it is not clear on its face whether the statement add_library(target_name ....) builds a static or shared library. It depends upon some other "global" variable. That is you cannot know what a statement will do by looking at the statement. Worse yet, the behavior of the statement may depend upon some higher level CMakeLists.txt file that you might not even be aware of.
The idea is that the user would set this (not the library author), so it shouldn’t be set in a CMakeLists.txt file.
This statement is not about the particular CMake variable. It's about the fact that the "language" has many, many builtin ambiguities which make it impossible to know or agree on how to use it.
What ambiguities? Cmake has several variables that the user can set to control how to build a project, such as BUILD_SHARED_LIBS, CMAKE_PREFIX_PATH, CMAKE_INSTALL_PREFIX, CMAKE_CXX_FLAGS, CMAKE_CXX_COMPILER, etc. Cmake utilizes these variables when it builds targets, searches for dependencies, and installs components. These variables are documented as well. Furthermore, cmake is set up so you don’t have to think about these variables or implement the same infrastructure. For example, I can build a library like this:
find_library(FOO_LIBRARY_LIBS foo)
add_library(MyLib ${SOURCES})
target_link_libraries(MyLib ${FOO_LIBRARY_LIBS})
install(TARGETS MyLib DESTINATION lib)
Then users can use cmake variables to build the library shared or static, set the prefix directory to search for the library, set the path where the library will be installed, and set the compiler or compiler flags. Plus, the above can be cross-compiled as well.
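To make the division of labor concrete: a user of such a script drives all of those knobs from the configure invocation, without the CMakeLists.txt mentioning any of them (the paths and compiler below are arbitrary examples):

    cmake .. -DBUILD_SHARED_LIBS=ON \
             -DCMAKE_PREFIX_PATH=/opt/foo \
             -DCMAKE_INSTALL_PREFIX=$HOME/local \
             -DCMAKE_CXX_COMPILER=clang++ \
             -DCMAKE_CXX_FLAGS=-Wall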
d) is BUILD_SHARED_LIBS a cached variable?
I could go on, but you get the idea. And this is just one single variable. There are dozens with similar behavior and implications. As far as I can tell, all other build systems suffer from the same problems.
Yes, but cmake has a much larger community, so after a little googling I can find a solution to the problem, thanks to the wealth of examples and tutorials out there.
Ahhh yes. That's the real problem. What you characterize as a solution/feature, I characterize as a symptom of a fundamental fault. Do you have to troll the whole net to differentiate a function? Of course not. The fact that this is now an acceptable answer is testament to the sad state of modern software development!
And the documentation actually makes it worse because it suggests that the system is simple to use when it's actually not. It makes naive users feel like they're stupid - whether they are or not.
And this is made worse by the fact that there are lots of people who have made simple build scripts for small projects with limited requirements. These work the first time - now they think it IS easy and they think there's nothing to it. It's depressing.
I don’t know what you are referring to here.
LOL - I'm referring to discussions such as this one. The fact that every time a question is raised, someone has an answer for some specific scenario. This is deemed to be evidence that the system is a good one. This is the exact wrong conclusion! It seems that it never occurs to anyone that the fact that such a question has to be asked in the first place is an indicator that something is fundamentally wrong with the concept and/or implementation.
Nope, the concept is fine. I think it's best to actually learn how the tools are supposed to work instead of fighting the system.
Robert Ramey

On 4/23/16 4:04 PM, Paul Fultz II wrote:
On Apr 23, 2016, at 1:09 PM, Robert Ramey <ramey@rrsd.com> wrote:
I would always set it at configuration `cmake -DBUILD_SHARED_LIBS=On` or in my toolchain file, which would be how I would set your `USE_STATIC` variable as well.
LOL - I use the GUI
But don’t you still need to run cmake first before running the GUI?
No - I just start up the GUI, set the directory of the source and the directory of build. This is one thing I really like about CMake. I did run the command line version once 4 years ago though.
b) What is the default? - oh it might depend upon whether "option()" is used somewhere. Instead of providing an answer it just gives you another question.
You are right they don’t make clear what the default value is. The `option()` part is so that the user could set it in cmake-gui or ccmake, which is not the clearest in the documentation either.
c) So it is not clear on its face whether the statement add_library(target_name ....) builds a static or shared library. It depends upon some other "global" variable. That is you cannot know what a statement will do by looking at the statement. Worse yet, the behavior of the statement may depend upon some higher level CMakeLists.txt file that you might not even be aware of.
The idea is that the user would set this (not the library author), so it shouldn’t be set in a CMakeLists.txt file.
This statement is not about the particular CMake variable. It's about the fact that the "language" has many, many builtin ambiguities which make it impossible to know or agree on how to use it.
What ambiguities? Cmake has several variables that the user can set to control how to build a project, such as BUILD_SHARED_LIBS, CMAKE_PREFIX_PATH, CMAKE_INSTALL_PREFIX, CMAKE_CXX_FLAGS, CMAKE_CXX_COMPILER, etc. Cmake utilizes these variables when it builds targets, searches for dependencies, and installs components. These variables are documented as well. Furthermore, cmake is set up so you don’t have to think about these variables or implement the same infrastructure. For example, I can build a library like this:
find_library(FOO_LIBRARY_LIBS foo)
add_library(MyLib ${SOURCES})
target_link_libraries(MyLib ${FOO_LIBRARY_LIBS})
install(TARGETS MyLib DESTINATION lib)
Then users can use cmake variables to build the library shared or static, set the prefix directory to search for the library, set the path where the library will be installed, and set the compiler or compiler flags. Plus, the above can be cross-compiled as well.
There is one confusion here. There is one type of person or task - building the library as either shared or static. Then there is the person depending on the library/headers in some other CMake project. My concern has mostly been as a developer creating and testing the library. I haven't really addressed the "user" or consumer. So I haven't touched on implementing the "find" functionality for "mylib".
LOL - I'm referring to discussions such as this one. The fact that every time a question is raised, someone has an answer for some specific scenario. This is deemed to be evidence that the system is a good one. This is the exact wrong conclusion! It seems that it never occurs to anyone that the fact that such a question has to be asked in the first place is an indicator that something is fundamentally wrong with the concept and/or implementation.
Nope, the concept is fine. I think it's best to actually learn how the tools are supposed to work instead of fighting the system.
LOL - I don't undertake to fight the system - the "system" actually fights me. Actually, the problem is that people think there's a "system" when there isn't. It's really a grab bag of disjoint features which can be made to work. This can be and actually is useful, but it's a system with limited conceptual integrity. This makes it difficult and unintuitive to extend and apply to anything other than the "example" cases. Attempts to extend the "system" end up layering even more confusion on top of what's already there. This looks like it "solves" problems but just makes things more opaque, arbitrary and fragile. Quality has to be built in - it can't be added on.
Robert Ramey

On Apr 24, 2016, at 12:56 PM, Robert Ramey <ramey@rrsd.com> wrote:
On 4/23/16 4:04 PM, Paul Fultz II wrote:
On Apr 23, 2016, at 1:09 PM, Robert Ramey <ramey@rrsd.com> wrote:
I would always set it at configuration `cmake -DBUILD_SHARED_LIBS=On` or in my toolchain file, which would be how I would set your `USE_STATIC` variable as well.
LOL - I use the GUI
But don’t you still need to run cmake first before running the GUI?
No - I just start up the GUI, set the directory of the source and the directory of build.
Does the GUI provide a way to set variables at configure time? How do you set the toolchain?
This is one thing I really like about CMake. I did run the command line version once 4 years ago though.
I always use the command line. It seems like the GUI would be a pain setting up multiple build directories.
b) What is the default? - oh it might depend upon whether "option()" is used somewhere. Instead of providing an answer it just gives you another question.
You are right they don’t make clear what the default value is. The `option()` part is so that the user could set it in cmake-gui or ccmake, which is not the clearest in the documentation either.
c) So it is not clear on its face whether the statement add_library(target_name ....) builds a static or shared library. It depends upon some other "global" variable. That is you cannot know what a statement will do by looking at the statement. Worse yet, the behavior of the statement may depend upon some higher level CMakeLists.txt file that you might not even be aware of.
The idea is that the user would set this (not the library author), so it shouldn’t be set in a CMakeLists.txt file.
This statement is not about the particular CMake variable. It's about the fact that the "language" has many, many builtin ambiguities which make it impossible to know or agree on how to use it.
What ambiguities? Cmake has several variables that the user can set to control how to build a project, such as BUILD_SHARED_LIBS, CMAKE_PREFIX_PATH, CMAKE_INSTALL_PREFIX, CMAKE_CXX_FLAGS, CMAKE_CXX_COMPILER, etc. Cmake utilizes these variables when it builds targets, searches for dependencies, and installs components. These variables are documented as well. Furthermore, cmake is set up so you don’t have to think about these variables or implement the same infrastructure. For example, I can build a library like this:
find_library(FOO_LIBRARY_LIBS foo)
add_library(MyLib ${SOURCES})
target_link_libraries(MyLib ${FOO_LIBRARY_LIBS})
install(TARGETS MyLib DESTINATION lib)
Then users can use cmake variables to build the library shared or static, set the prefix directory to search for the library, set the path where the library will be installed, and set the compiler or compiler flags. Plus, the above can be cross-compiled as well.
There is one confusion here. There is one type of person or task - building the library as either shared or static. Then there is the person depending on the library/headers in some other CMake project.
Yes, but I was explaining how cmake has several variables set up so you as a build script writer don’t have to handle all the different ways a user might want to build or install the library.
My concern has mostly been as a developer creating and testing the library.
I haven't really addressed the "user" or consumer. So I haven't touched on implementing the "find" functionality for “mylib”.
It's not just how a user might want to find the library after it's built. It also affects how the user will want your library to find its dependencies when they want to build it.
LOL - I'm referring to discussions such as this one. The fact that every time a question is raised, someone has an answer for some specific scenario. This is deemed to be evidence that the system is a good one. This is the exact wrong conclusion! It seems that it never occurs to anyone that the fact that such a question has to be asked in the first place is an indicator that something is fundamentally wrong with the concept and/or implementation.
Nope, the concept is fine. I think it's best to actually learn how the tools are supposed to work instead of fighting the system.
LOL - I don't undertake to fight the system - the "system" actually fights me.
No, to start with you have chosen to put the CMakeLists.txt in a folder where it's not found by cmake, and to define your own system to handle shared/static libraries instead of using cmake’s.
Actually, the problem is that people think there's a "system" when there isn't. It's really a grab bag of disjoint features which can be made to work. This can be and actually is useful, but it's a system with limited conceptual integrity.
The problem is that a build system has to handle a large amount of variability and complexity. Cmake has ironed out a lot of this variability throughout the years (and Boost.Build does handle this well also). Newer build systems try to make things simpler but usually require me to modify the build script (this includes Qt) to make things work, which I think is a major fail, and shows the lack of knowledge the newer build systems have in regard to the full amount of complexity a build system has to manage.
This makes it difficult and unintuitive to extend and apply to anything other than the "example" cases.
Of course it makes it hard or unintuitive to extend a system when you are going against it.
Attempts to extend the "system" end up layering even more confusion on top of what's already there. This looks like it "solves" problems but just makes things more opaque, arbitrary and fragile. Quality has to be built in - it can't be added on.
I don’t think building abstractions on top of a mature system is problematic.

On 4/24/16 2:49 PM, Paul Fultz II wrote:
On Apr 24, 2016, at 12:56 PM, Robert Ramey <ramey@rrsd.com> wrote: No - I just start up the GUI, set the directory of the source and the directory of build.
Does the GUI provide a way to set variables at configure time? How do you set the toolchain?
Here's how the GUI works:
a) It shows you a form with two fields on it: the project source directory and the desired destination build directory.
b) You fill in these fields and hit "configure". The GUI asks you a couple of questions - which toolset you want and a couple more.
c) The CMakeLists.txt in the source directory runs and produces output which includes a list of variables it couldn't resolve, along with any message(STATUS ...) output. The variables are usually those marked "CACHE".
d) In my case I have a "CACHED" variable I call "static_build" which is a boolean variable. This shows up as an unresolved variable with a checkbox (because it's a boolean type). Other variables are related to boost - these are pathnames or file names.
e) Through the GUI I assign values to these variables, then invoke "configure" again. This process is repeated until there are no unresolved variables.
f) Then I hit "generate" and the build project is generated. In my case this is an IDE project on the system I'm using. Note that I put extra code into my CMakeLists.txt so that the IDE project includes links to the header files. CMake seems to try to track dependencies but it doesn't add these to the IDE project. I understand why it has to be this way. In any case it's not a big problem for me.
g) This leads to the problem of the BUILD_STATIC_LIBRARY (or whatever it's called). If you include add_library(libname STATIC ...) in your CMake script, you can't create a shared library. Then there is the fact that it was really unclear where else this variable might get set (command line? another CMakeLists.txt?). So my method is: 1) in the CMakeLists.txt, create a CACHED variable X; 2) set X in the configure part of the GUI; 3) make a small piece of CMake script like "if(X) set(LINK_TYPE "STATIC") else() set(LINK_TYPE "SHARED") endif()"; 4) then use add_library(library_name ${LINK_TYPE} ...).
I'm aware you're going to see this as another example of "fighting the system" - and you're right. It's just that sometimes you have to fight to make the system work. The final result is a system which works pretty well. I don't have to remember anything. In fact it works much, much better than actually editing the IDE. To add a new source file, I just tweak the CMakeLists.txt and regenerate the IDE.
This could be seen as an endorsement of CMake. It's not - the above description doesn't really capture the fact that there is a lot of experimentation and trial and error involved. It's like training an ant to train a flea. One can't just read the docs, know what to do, write an unambiguous script and expect it to work. All we can really do is "fix it up" by creating another level of abstraction on top. Unfortunately, most of the time these higher-level abstractions are made with the same type of thinking which leads to the original problem. Ambiguously defined rules, functions with side effects, etc. etc. So in addressing the "problem" things are made better in the short run, but made much worse as the system gets bigger. This is why CMake - though it has many merits - is not a definitive solution and why I believe it is not a good replacement for Boost Build.
Now on to Boost Build .... LOL - just joking
Robert Ramey
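Put together, steps 1) through 4) amount to something like the following in the CMakeLists.txt (a sketch reconstructed from the description above; static_build, LINK_TYPE and library_name are the names used in this thread):

    set(static_build ON CACHE BOOL "Build as a static rather than a shared library")
    if(static_build)
        set(LINK_TYPE STATIC)
    else()
        set(LINK_TYPE SHARED)
    endif()
    add_library(library_name ${LINK_TYPE} ${PROJECT_SRC})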

On Tuesday, April 26, 2016 at 11:03:26 AM UTC-5, Robert Ramey wrote:
On 4/24/16 2:49 PM, Paul Fultz II wrote:
On Apr 24, 2016, at 12:56 PM, Robert Ramey <ra...@rrsd.com> wrote:
No - I just start up the GUI, set the directory of the source and the directory of build.
Does the GUI provide a way to set variables at configure time? How do you set the toolchain?
Here's how the GUI works
a) it shows you a form with two fields on it. Project Source directory and desired destination build directory.
b) You fill in these fields and hit "configure". The GUI asks you a couple of questions - which toolset you want and a couple more.
c) The CMakeList.txt in the source directory runs and produces output which includes a list of variables it couldn't resolve along with any message(STATUS ...) output. The variables are usually those variables marked "CACHE".
Ok, I tried out the GUI to see for myself. It's as horrible as you are describing. You can specify the compiler or a toolchain during the questioning step. You can also set variables, and this part is not clear in the GUI. For example, if I want to build with warnings, from the command line I run `cmake .. -DCMAKE_CXX_FLAGS='-Wall -Werror'` and the project will be built with warning flags. To do the same with the GUI, you must first use the 'Add Entry' button to add the `CMAKE_CXX_FLAGS` variable before you configure. Then you can configure and generate, but the variable will no longer show in the GUI because it is neither an option nor a cached variable. However, you can verify that the variable is set by running `ccmake`, which will show the variable.
d) In my case I have a "CACHED" variable I call "static_build" which is a boolean variable. This shows up as an unresolved variable with a checkbox (because it's a boolean type). Other variables are related to boost - these are pathnames or file names.
e) Through the GUI I assign values to these variables then invoke "configure" again. This process is repeated until there are no unresolved variables
f) Then I hit "generate" and the Build project is generated. In my case this is an IDE project on the system I'm using. Note that I put extra code into my CMakeLists.txt so that the IDE project includes links to the header files. CMake seems to try to track dependencies but it doesn't add these to the IDE project. I understand why it has to be this way. In any case it's not a big problem for me.
g) This leads to the problem of the BUILD_STATIC_LIBRARY (or whatever it's called). If you include add_library(libname STATIC ...) in your CMake script, you can't create a shared library. Then there is the fact that it was really unclear where else this variable might get set (command line? another CMakeLists.txt?). So my method is: 1) in the CMakeLists.txt, create a CACHED variable X; 2) set X in the configure part of the GUI; 3) make a small piece of CMake script like "if(X) set(LINK_TYPE "STATIC") else() set(LINK_TYPE "SHARED") endif()"; 4) then use add_library(library_name ${LINK_TYPE} ...)
Yes, but cmake provides `BUILD_SHARED_LIBS` to do the same thing, and it even gives a recommendation in the documentation on how to configure it. It says "This variable is often added to projects as an option()".
I'm aware you're going to see this as another example of "fighting the system" - and you're right. It's just that sometimes you have to fight to make the system work.
But how does `option(BUILD_SHARED_LIBS "Build shared")` not work?
The final result is a system which works pretty well. I don't have to remember anything. In fact it works much, much better than actually editing the IDE. To add a new source file, I just tweak the CMakeLists.txt and regenerate the IDE.
I would say the exact opposite. The thing I like about cmake is that I can build my project with MSVC without ever needing to touch the IDE.
This could be seen as an endorsement of CMake. It's not - the above description doesn't really capture the fact that there is a lot of experimentation and trial and error involved. It's like training an ant to train a flea. One can't just read the docs, know what to do, write an unambiguous script and expect it to work. All we can really do is "fix it up" by creating another level of abstraction on top. Unfortunately, most of the time these higher-level abstractions are made with the same type of thinking which leads to the original problem.
Usually when the abstraction level goes up, it's because it's managing more complexity.
Ambiguously defined rules, functions with side-effects, etc. etc.
I don't know what you are talking about. There are no ambiguously defined rules.

On 4/26/16 10:33 AM, Paul Fultz II wrote:
On Tuesday, April 26, 2016 at 11:03:26 AM UTC-5, Robert Ramey wrote:
Ok, I tried out the GUI to see for myself. It's as horrible as you are describing.
LOL - I don't mean to make the case that it's horrible - I was just trying to describe how it works. I gather that my description more or less reflected your own experience.
You can also set variables, and this part is not clear in the gui. For example, if I want to build with warnings, from the command line I run `cmake .. -DCMAKE_CXX_FLAGS='-Wall -Werror'` and the project will be built with warning flags. To do the same with the GUI, you must first use the 'Add Entry'
Hmmm - this stuff I put in the CMakeLists.txt itself since it's not something I want to change. I use add_definitions so that it's compiler agnostic.
4) then use add_library(library_name ${LINK_TYPE} ...)
Yes, but cmake provides `BUILD_SHARED_LIBS` to do the same thing, and it even gives a recommendation in the documentation on how to configure it. It says "This variable is often added to projects as an option()".
LOL - I looked through that - but then I couldn't really figure out what "option" was really supposed to do. Also, it wasn't clear how BUILD_SHARED_LIBS would be set if I didn't specify STATIC or SHARED in add_library(...). It seems that the variable could be set in a number of different places (worse, in any CMakeLists.txt file which I might not know about). Then the question arose as to whether the STATIC / SHARED in the add_library was a string variable or a keyword. Basically it's the old "global variable" distributed-source problem. It's not that it can't be made to work; it's just that it's opaque, fragile, and time consuming. My method was simple and bulletproof because it isn't global and doesn't depend on anything it can't see. But I wasn't thinking of "theoretical" software design considerations. I was just making the damn thing work reliably in the most expedient way possible.
I'm aware you're going to see this as another example of "fighting the system" - and you're right. It's just that sometimes you have to fight to make the system work.
But how does `option(BUILD_SHARED_LIBS "Build shared")` not work?
Whoops - see above. I don't know that it doesn't work - I couldn't figure out what it's really supposed to do.
The final result is a system which works pretty well. I don't have to remember anything. In fact it works much, much better than actually editing the IDE. To add a new source file, I just tweak the CMakeLists.txt and regenerate the IDE.
I would say the exact opposite. The thing I like about cmake is that I can build my project with MSVC without ever needing to touch the IDE.
LOL - OK, you don't like using an IDE - fair enough. I'm sure you've got a good argument why using an IDE is something to avoid. Good luck convincing the world that you're right about it.
This could be seen as an endorsement of CMake. It's not - the above description doesn't really capture the fact that there is a lot of experimentation and trial and error involved. It's like training an ant to train a flea. One can't just read the docs, know what to do, write an unambiguous script and expect it to work. All we can really do is "fix it up" by creating another level of abstraction on top. Unfortunately, most of the time these higher-level abstractions are made with the same type of thinking which leads to the original problem.
Usually when the abstraction level goes up, it's because it's managing more complexity.
That's certainly the intention. Whether that in fact happens or not depends upon how the abstraction is designed and implemented. Abstraction alone is not sufficient to improve an interface. All libraries are an attempt at abstraction. How many times have you tried to use a library and found that it was so confusing that, rather than making the task at hand easier, it made it harder? It's a common occurrence. It's not that the library author didn't have good intentions, and it's not that designing a higher-level abstraction is not the right idea; it's that it's just hard to design something simple and unambiguous and decoupled from everything else.
Ambiguously defined rules, functions with side-effects, etc. etc.
I don't know what you are talking about. There are no ambiguously defined rules.
LOL - see my rant about option() above. It's not clear where one should put it, how it interacts with an option placed somewhere else, whether the options are string variables or some sort of intrinsic, what the default is, or how it interacts with someone placing the option on the command line. This is just a tiny example, and a very simple one compared to others. One could supply pages and pages and pages of such examples - but thankfully one is sufficient.
Robert Ramey
FWIW - it's not just CMake, it's the whole software development process and tools. We're swamped in tools which don't have enough formality to force us to be correct. We get stuff which we can write in 30 seconds and it usually works. But more often than not, it fails silently sometime later. The ones I know about are PHP, perl, javascript, basic, excel - and those are only the ones off the top of my head. The world is grinding to a halt on this stuff.

On 27/04/2016 06:02, Robert Ramey wrote:
LOL - see my rant about option() above. It's not clear where one should put it, how it interacts with an option placed somewhere else, whether the options are string variables or some sort of intrinsic, what the default is, or how it interacts with someone placing the option on the command line. This is just a tiny example, and a very simple one compared to others. One could supply pages and pages and pages of such examples - but thankfully one is sufficient.
https://cmake.org/cmake/help/v3.0/command/option.html
It seems fairly obvious how they're defined from that. Usually custom options would be activated with -D and tested for with if() (https://cmake.org/cmake/help/v3.0/command/if.html), but BUILD_SHARED_LIBS in particular (https://cmake.org/cmake/help/v3.0/variable/BUILD_SHARED_LIBS.html) alters the behaviour of add_library (https://cmake.org/cmake/help/v3.0/command/add_library.html) when defined (via any means). i.e.:
add_library(name STATIC sources...) will always build a static library.
add_library(name SHARED sources...) will always build a shared library.
add_library(name sources...) will build a shared library if BUILD_SHARED_LIBS is ON, otherwise a static library.
This information was not hard to find.

Mere moments ago, quoth I:
i.e.:
add_library(name STATIC sources...) will always build a static library.
add_library(name SHARED sources...) will always build a shared library.
add_library(name sources...) will build a shared library if BUILD_SHARED_LIBS is ON, otherwise a static library.
Or to put it another way, you can consider:
add_library(name sources...)
to be shorthand for:
if(BUILD_SHARED_LIBS)
  add_library(name SHARED sources...)
else()
  add_library(name STATIC sources...)
endif()
i.e. it's the same as your USE_STATIC (although inverted), but you're fighting the system instead of working with it. As previously noted, of course, this doesn't let you build both shared and static libraries from one build run (though you can do it from separate runs). To do that in one run, you need both add_library lines unconditionally, with different names for each library type (as discussed elsewhere). This is less commonly useful when building applications, but it's the way Boost Build prefers to build libraries (just in case).

On 4/27/16 6:47 PM, Gavin Lambert wrote:
On 27/04/2016 06:02, Robert Ramey wrote:
LOL - see my rant about option() above. It's not clear where one should put it, how it interacts with an option placed somewhere else, whether the options are string variables or some sort of intrinsic, what the default is, or how it interacts with someone placing the option on the command line. This is just a tiny example, and a very simple one compared to others. One could supply pages and pages and pages of such examples - but thankfully one is sufficient.
https://cmake.org/cmake/help/v3.0/command/option.html
It seems fairly obvious how they're defined from that.
LOL - sorry, it's not obvious to me. It still doesn't answer the questions that occurred to me above.
Usually custom options would be activated with -D
You mean a command line switch? How would one do this with the GUI?
and tested for with if (https://cmake.org/cmake/help/v3.0/command/if.html), but BUILD_SHARED_LIBS in particular (https://cmake.org/cmake/help/v3.0/variable/BUILD_SHARED_LIBS.html) alters the behaviour of add_library (https://cmake.org/cmake/help/v3.0/command/add_library.html) when defined (via any means).
i.e.:
add_library(name STATIC sources...) will always build a static library.
add_library(name SHARED sources...) will always build a shared library.
add_library(name sources...) will build a shared library if BUILD_SHARED_LIBS is ON, otherwise a static library.
This information was not hard to find.
LOL - it's not that it's hard to find - it's hard to make sense of when you do find it. Ambiguity is built in. Of course it's a matter of opinion. If it's clear to you, I won't dispute it. Robert Ramey

-----Original Message----- From: Boost [mailto:boost-bounces@lists.boost.org] On Behalf Of Robert Ramey Sent: 23 April 2016 19:10 To: boost@lists.boost.org Subject: Re: [boost] CMake - one more time
<snip>
The idea is that the user would set this (not the library author), so it shouldn’t be set in a CMakeLists.txt file.
This statement is not about the particular CMake variable. It's about the fact that the "language" has many, many builtin ambiguities which make it impossible to know or agree on how to use it.
Yes, but cmake has a much larger community, so after a little googling I can find a solution to the problem, thanks to the wealth of examples and tutorials out there.
Yes - and you can find a whole pile of b2/bjam queries and replies on lots of sites too. And very many of them are quite basic questions that should not need to be asked on the helpful sites.
Ahhh yes. That's the real problem. What you characterize as a solution/feature, I characterize as a symptom of a fundamental fault. Do you have to troll the whole net to differentiate a function? Of course not. The fact that this is now an acceptable answer is testament to the sad state of modern software development!
Absolutely! (And sadly I must also include C/C++ in this - the world's greatest software disaster, but let's not digress...)
And the documentation actually makes it worse because it suggests that the system is simple to use when it's actually not. It makes naive users feel like they're stupid - whether they are or not.
At least I know I'm stupid - but I still want to get things to work.
And this is made worse by the fact that there are lots of people who have made simple build scripts for small projects with limited requirements. These work the first time - now they think it IS easy and that there's nothing to it. It's depressing.
I don’t know what you are referring to here.
Many of the answers on the helpful sites.
LOL - I'm referring to discussions such as this one. The fact that every time a question is raised, someone has an answer for some specific scenario. This is deemed to be evidence that the system is a good one. This is exactly the wrong conclusion! It seems that it never occurs to anyone that the fact that such a question has to be asked in the first place is an indicator that something is fundamentally wrong with the concept and/or implementation.
+1 Paul --- Paul A. Bristow Prizet Farmhouse Kendal UK LA8 8AB +44 (0) 1539 561830

On 4/24/16 10:07 AM, Paul A. Bristow wrote:
Ahhh yes. That's the real problem. What you characterize as a solution/feature, I characterize as a symptom of a fundamental fault. Do we have to trawl the whole net to differentiate a function? Of course not. The fact that this is now an acceptable answer is testament to the sad state of modern software development!
Absolutely!
(And sadly I must also include C/C++ in this - the world's greatest software disaster, but let's not digress...)
LOL - +1 but ...
And the documentation actually makes it worse because it suggests that the system is simple to use when it's actually not. It makes naive users feel like they're stupid - whether they are or not.
At least I know I'm stupid - but I still want to get things to work.
LOL - note that when one manages to make it work, he feels smart - even though he might not be.
LOL - I'm referring to discussions such as this one. The fact that every time a question is raised, someone has an answer for some specific scenario. This is deemed to be evidence that the system is a good one. This is exactly the wrong conclusion! It seems that it never occurs to anyone that the fact that such a question has to be asked in the first place is an indicator that something is fundamentally wrong with the concept and/or implementation.
+1
Thanks for that, it gets lonely being me. Robert Ramey

On 23/04/2016 19:08, Robert Ramey wrote:
This is not necessary at all. Cmake provides the `BUILD_SHARED_LIBS` variable that can be set for this same purpose:
https://cmake.org/cmake/help/v3.5/variable/BUILD_SHARED_LIBS.html
Of course I saw this. This is a great example of the kind of thing that drives me nuts when looking at the CMake documentation.
And this has to be said for the Boost Build system: reading its documentation is not frustrating - I'm not even sure where to find it. Alain

On Apr 22, 2016, at 4:56 PM, Raffi Enficiaud <raffi.enficiaud@mines-paris.org> wrote:
Le 22/04/16 à 19:42, Paul Fultz II a écrit :
On Apr 20, 2016, at 4:39 PM, Raffi Enficiaud <raffi.enficiaud@mines-paris.org> wrote: <snip> I do not like the current state of b2 for many reasons (even though I think it could really be a good build system), but CMake is not covering many features that are currently required by the boost superproject. Until the point where we can consistently build the lib (including the possibly many flavors of the same library - STATIC/SHARED at the same time, etc), run the tests, and generate the documentation (including the possibility to have the boostdoc/quickbook/doxygen toolchain), I do not see any *good* reason to move to cmake.
Cmake can do all that you listed there. In addition, I found it easy to generate targets at configure time. So the Fit library actually adds additional tests to test examples and header files. Plus, I can create configuration headers based on whether something compiles or runs. I believe BB can do the same using virtual targets or something, but I haven’t clearly figured it out.
Certainly, CMake can do everything with the appropriate effort. But so far, although I am a CMake user, I do not know how to do this:
- having the same target name with different target properties: like

set(PROJECT_SRC ....)
add_library(myproject SHARED ${PROJECT_SRC})
add_library(myproject STATIC ${PROJECT_SRC})
how do you do that? how do you refer to the appropriate variant?
In cmake, you can create object libraries: https://cmake.org/cmake/help/v3.5/command/add_library.html#object-libraries This helps avoid the double compile, so you can write this:

add_library(objlib OBJECT ${SOURCES})
# shared libraries need PIC
set_property(TARGET objlib PROPERTY POSITION_INDEPENDENT_CODE 1)
add_library(MyLib_shared SHARED $<TARGET_OBJECTS:objlib>)
add_library(MyLib_static STATIC $<TARGET_OBJECTS:objlib>)

Then to set each one to the same name you can use the OUTPUT_NAME property:

set_target_properties(MyLib_shared PROPERTIES OUTPUT_NAME MyLib)
set_target_properties(MyLib_static PROPERTIES OUTPUT_NAME MyLib)
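As a usage sketch (the consumer target here is hypothetical), downstream targets link against whichever flavour they want by its CMake target name; the shared OUTPUT_NAME only affects the file on disk.

add_executable(myapp main.cpp)
target_link_libraries(myapp MyLib_static)  # or MyLib_shared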
- having a set of dependencies that is not driven by a high level CMakeLists.txt. You advocate the solution of packaging, but this does not target all platforms,
How does this not target all platforms?
and just translates the same problem to another layer, in my opinion. As a developer, in order to work on a library X that depends on Y, you should install Y, and this information should appear in X (so this is implicit knowledge). What this process does is put X and Y at the same level of knowledge: a flattened set of packages. BJam already does the same, but at the compilation/build step, and without the burden of the extra management of packages (updating upstream Y for instance, when Y can be a set of many packages, and obviously in a confined, repeatable and isolated development environment). But maybe you are thinking of something else.
I don’t follow this at all. For example, when I want to build the hmr library here: https://github.com/pfultz2/hmr All I have to do after cloning it is: `cget build`, then it will go and grab the dependencies because they have been listed in the requirements.txt file.
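For illustration, a cget requirements.txt is just a list of package sources, one per line; the entries below are assumptions about the syntax (GitHub shorthand and a name,URL alias), not taken from the hmr repository.

# hypothetical requirements.txt
pfultz2/Fit
zlib,http://zlib.net/zlib-1.2.8.tar.gz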
To me this is a highly non-trivial task to do with CMake, and ends up in half-baked solutions like ROS/Catkin (http://wiki.ros.org/catkin/conceptual_overview), which is really not CMake and is just making things harder for everyone.
Cmake already handles the packaging and finding dependencies; cget just provides the mechanism to retrieve the packages using the standard cmake process. This is why you can use it to install zlib or even blas, as it doesn’t require an extra dependency management system.
- I can continue... such as target subset selection. It is doable with CMake with, I think, some umbrella projects, but again this is hard to maintain and requires high-level orchestration. Take just the tests, for instance: suppose I do not want to compile them in my first step, and then I change my mind and want to run a subset of them. What I also want is to not waste my time waiting for a billion files to compile; I just want the minimal compilation. So it comes to my mind that EXCLUDE_FROM_ALL might be used, but when I run ctest -R something*, I get an error... Maybe you know a good way of doing that in cmake?
I usually add the tests using this (I believe Boost.Hana does the same):

add_custom_target(check COMMAND ${CMAKE_CTEST_COMMAND} -VV -C ${CMAKE_CFG_INTDIR})

function(add_test_executable TEST_NAME)
  add_executable(${TEST_NAME} EXCLUDE_FROM_ALL ${ARGN})
  if(WIN32)
    add_test(NAME ${TEST_NAME} WORKING_DIRECTORY ${LIBRARY_OUTPUT_PATH} COMMAND ${TEST_NAME}${CMAKE_EXECUTABLE_SUFFIX})
  else()
    add_test(NAME ${TEST_NAME} COMMAND ${TEST_NAME})
  endif()
  add_dependencies(check ${TEST_NAME})
  set_tests_properties(${TEST_NAME} PROPERTIES FAIL_REGULAR_EXPRESSION "FAILED")
endfunction(add_test_executable)

Then when I want to build the library I just run `cmake --build .`, and when I want to run the tests, I can run `cmake --build . --target check`. Now if I want to run just one of the tests I can do `cmake --build . --target test_name && ./test_name` just as easily. I have never had the need to run a subset of tests; that is usually the case when there are nested projects, but it is easily avoided when the project is separated into separate components.
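As a usage sketch of the helper above (the test name and source file are hypothetical):

add_test_executable(test_arithmetic test_arithmetic.cpp)  # registered with ctest and hooked into the check target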
Finally, for boost, it could provide some high-level cmake functions so all of these things can happen consistently across libraries.
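Such a high-level function might look roughly like the sketch below - hypothetical names, not an existing Boost module - combining the object-library trick above into one call:

function(boost_add_library name)
  add_library(boost_${name}_objs OBJECT ${ARGN})
  set_property(TARGET boost_${name}_objs PROPERTY POSITION_INDEPENDENT_CODE 1)
  add_library(boost_${name}_static STATIC $<TARGET_OBJECTS:boost_${name}_objs>)
  add_library(boost_${name}_shared SHARED $<TARGET_OBJECTS:boost_${name}_objs>)
endfunction()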
Sure. Or ... BJam should be given some more care and visibility, like a GSoC (bis) track?
But it's not entirely technology that is missing, it's the community that is missing, and I don’t think a GSoC will help create a large community for boost build.

Le 23/04/16 à 02:30, Paul Fultz II a écrit :
On Apr 22, 2016, at 4:56 PM, Raffi Enficiaud <raffi.enficiaud@mines-paris.org> wrote:
Le 22/04/16 à 19:42, Paul Fultz II a écrit :
On Apr 20, 2016, at 4:39 PM, Raffi Enficiaud <raffi.enficiaud@mines-paris.org> wrote: <snip> I do not like the current state of b2 for many reasons (even though I think it could be really a good build system), but CMake is not covering many features that are currently required by the boost superproject. Until the point where we can consistently build the lib (including the possibly many flavor of the same library - STATIC/SHARED at the same time, etc), run the tests, and generate the documentation (including the possibility to have the boostdoc/quickbook/doxygen toolchain), I do not see any *good* reason to move to cmake.
Cmake can do all that you listed there. In addition, I found its easy to generate targets at configure time. So the Fit library actually adds additional tests to test examples and header files. Plus, I can create configuration headers based on whether a something compiles or runs. I believe BB can do the same using virtual targets or something, but I haven’t clearly figured it out.
Certainly, CMake can do everything with the appropriate effort. But so far, although I am a CMake user, I do not know how to do this:
- having the same target name with different target properties: like

set(PROJECT_SRC ....)
add_library(myproject SHARED ${PROJECT_SRC})
add_library(myproject STATIC ${PROJECT_SRC})
how do you do that? how do you refer to the appropriate variant?
In cmake, you can create object libraries:
https://cmake.org/cmake/help/v3.5/command/add_library.html#object-libraries
This helps avoid the double compile, so you can write this:
add_library(objlib OBJECT ${SOURCES})
# shared libraries need PIC
set_property(TARGET objlib PROPERTY POSITION_INDEPENDENT_CODE 1)
add_library(MyLib_shared SHARED $<TARGET_OBJECTS:objlib>)
add_library(MyLib_static STATIC $<TARGET_OBJECTS:objlib>)
Then to set each one to the same name you can use the OUTPUT_NAME property:
set_target_properties(MyLib_shared PROPERTIES OUTPUT_NAME MyLib)
set_target_properties(MyLib_static PROPERTIES OUTPUT_NAME MyLib)
Exactly, so you artificially make CMake think that 2 different targets should end up with the same name on the filesystem. It does not work, for instance, on Win, because the import .lib of the shared gets overwritten by the static. This is not exactly a solution, but rather a hack (or workaround). We can of course iterate further (set the output folder per type, etc).
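The "iterate further" step might look like this sketch (directory names illustrative), giving each flavour its own output directory so the .lib files cannot collide; whether this suffices on Windows is exactly what is disputed below.

set_target_properties(MyLib_static PROPERTIES
  ARCHIVE_OUTPUT_DIRECTORY ${CMAKE_BINARY_DIR}/lib/static)
# on Windows the shared target's import .lib also goes to its ARCHIVE directory
set_target_properties(MyLib_shared PROPERTIES
  ARCHIVE_OUTPUT_DIRECTORY ${CMAKE_BINARY_DIR}/lib/shared)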
- having a set of dependencies that is not driven by a high level CMakeLists.txt. You advocate the solution of packaging, but this does not target all platforms,
How does this not target all platforms?
Do I have a centralized (or virtualized, like inside vagga/docker or virtualenv) and official package manager on Win32 or OSX? I know tools exist (brew, chocolatey, etc). What about the other platforms (Android)? What about cross compilation?
and just translates the same problem to another layer to my opinion. As a developer, in order to work on a library X that depends on Y, you should install Y, and this information should appear in X (so this is an implicit knowledge). What this process does is that it put X and Y in some same level of knowledge: a flatten set of packages. This is done by BJam already the same, but at compilation/build step, and without the burden of the extra management of packages (update upstream Y for instance, when Y can be a set of many packages, and obviously in a confined, repeatable and isolated development environment). But maybe you think of something else.
I don’t follow this at all. For example, when I want to build the hmr library here: https://github.com/pfultz2/hmr
All I have to do after cloning it is: `cget build`, then it will go and grab the dependencies because they have been listed in the requirements.txt file.
Then I am dependent on another tool, cget, maintained by ... you :) Also from the previous thread, if my project does not have the "standard" cget layout, then cget will not work (yet?). I also need another file, "requirements", that I need to maintain externally to the build system. I do that often for my python packages, and it is "easy" but difficult to stabilize sometimes, especially in a complex dependency graph (and there can be conflicting versions, etc). I can see good things in cget, I can also see weak points. What I am saying is that you delegate the complexity to another layer, call it cget or Catkin, or pip. And developers should also do the packaging, and this is not easy (and needs a whole infrastructure to make it right, like PPAs or pypi). BTW, is cget able to work offline?
To me this is a highly non trivial task to do with CMake, and ends up in half backed solutions like ROS/Catkin (http://wiki.ros.org/catkin/conceptual_overview), which is really not CMake and is just making things harder for everyone.
Cmake already handles the packaging and finding dependencies, cget just provides the mechanism to retrieve the packages using the standard cmake process. This why you can use it to install zlib or even blas, as it doesn’t require an extra dependency management system.
Well, I really cannot tell for cget. CMake finds things that are installed in expected locations, for instance; otherwise the FIND_PATHS should be indicated (and propagated to the dependency graph). What if, for instance, it needs an updated/downgraded version of the upstream? How does cget manage that? Is there an equivalent to virtualenv? Right now for boost, I clone the superproject, and the artifacts and dependencies are confined within this clone (up to doxygen, docbook etc).
- I can continue... such as targets subset selection. It is doable with CMake with, "I think" some umbrella projects, but again this is hard to maintain and requires a high level orchestration. Only for the tests for instance: suppose I do not want to compile them in my first step, and then I change my mind, I want to run a subset of them. What I also want is not wasting my time in waiting for a billion of files to compile, I just want the minimal compilation. So it comes to my mind that EXCLUDE_FROM_ALL might be used, but when I run ctest -R something*, I get an error... Maybe you know a good way of doing that in cmake?
I usually add the tests using this(I believe Boost.Hana does the same):
add_custom_target(check COMMAND ${CMAKE_CTEST_COMMAND} -VV -C ${CMAKE_CFG_INTDIR})
function(add_test_executable TEST_NAME)
  add_executable(${TEST_NAME} EXCLUDE_FROM_ALL ${ARGN})
  if(WIN32)
    add_test(NAME ${TEST_NAME} WORKING_DIRECTORY ${LIBRARY_OUTPUT_PATH} COMMAND ${TEST_NAME}${CMAKE_EXECUTABLE_SUFFIX})
  else()
    add_test(NAME ${TEST_NAME} COMMAND ${TEST_NAME})
  endif()
  add_dependencies(check ${TEST_NAME})
  set_tests_properties(${TEST_NAME} PROPERTIES FAIL_REGULAR_EXPRESSION "FAILED")
endfunction(add_test_executable)
Then when I want to build the library I just run `cmake --build .` and then when I want to run the test, I can run `cmake --build . --target check`. Now if I want to run just one of the tests I can do `cmake --build . --target test_name && ./test_name` just as easy. I have not ever had the need to run subset of tests, this is usually the case when there is nested projects, but is easily avoided when the project is separated into separate components.
You are strengthening my point: you write an umbrella target for your purpose. My example with the tests was a trap: if you run "cmake --build . --target check" you end up building "all" the tests. To have finer granularity, you should write "add_test_executable_PROJECTX" etc. BJam knows how to do that, also with e.g. a STATIC version of some upstream library, defined at the point it is consumed (and not at the point it is declared/defined), and built only if needed, without the need to do some mumbo-jumbo with object files. What I am saying is that it is indeed possible, I also know solutions, but this is not native to cmake.
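For what it's worth, test groups can also be expressed with ctest labels rather than hand-written umbrella targets; a sketch with hypothetical names (it still needs a per-group target to limit compilation, which rather proves the point):

set_tests_properties(test_foo PROPERTIES LABELS "projectX")
# build just this group's tests, then run only the matching label
add_custom_target(check_projectX
  COMMAND ${CMAKE_CTEST_COMMAND} -L projectX
  DEPENDS test_foo)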
Finally, for boost, it could provide some high-level cmake functions so all of these things can happen consistently across libraries.
Sure. Or ... BJam should be given some more care and visibility, like a GSoC (bis) track?
But its not entirely technology that is missing, its the community that is missing, and I don’t think a GSoC will help create a large community for boost build.
That is true. I see it as a chicken-and-egg problem also, and we have to start somewhere. Where Bjam will always lose is the ability to generate IDE environments natively, and this is a major reason why cmake will have a more lively community. I believe that a BJam-to-cmake translation is possible, but even in that case, Bjam will live in the shadow of cmake. I was proposing GSoC for, e.g., starting to think about the syntax in bjam ... Raffi

On Apr 22, 2016, at 8:39 PM, Raffi Enficiaud <raffi.enficiaud@mines-paris.org> wrote:
Le 23/04/16 à 02:30, Paul Fultz II a écrit :
On Apr 22, 2016, at 4:56 PM, Raffi Enficiaud <raffi.enficiaud@mines-paris.org> wrote:
Le 22/04/16 à 19:42, Paul Fultz II a écrit :
On Apr 20, 2016, at 4:39 PM, Raffi Enficiaud <raffi.enficiaud@mines-paris.org> wrote: <snip> I do not like the current state of b2 for many reasons (even though I think it could be really a good build system), but CMake is not covering many features that are currently required by the boost superproject. Until the point where we can consistently build the lib (including the possibly many flavor of the same library - STATIC/SHARED at the same time, etc), run the tests, and generate the documentation (including the possibility to have the boostdoc/quickbook/doxygen toolchain), I do not see any *good* reason to move to cmake.
Cmake can do all that you listed there. In addition, I found its easy to generate targets at configure time. So the Fit library actually adds additional tests to test examples and header files. Plus, I can create configuration headers based on whether a something compiles or runs. I believe BB can do the same using virtual targets or something, but I haven’t clearly figured it out.
Certainly, CMake can do everything with the appropriate effort. But so far, although I am a CMake user, I do not know how to do this:
- having the same target name with different target properties: like

set(PROJECT_SRC ....)
add_library(myproject SHARED ${PROJECT_SRC})
add_library(myproject STATIC ${PROJECT_SRC})
how do you do that? how do you refer to the appropriate variant?
In cmake, you can create object libraries:
https://cmake.org/cmake/help/v3.5/command/add_library.html#object-libraries
This help avoid the double compile, so you can write this:
add_library(objlib OBJECT ${SOURCES})
# shared libraries need PIC
set_property(TARGET objlib PROPERTY POSITION_INDEPENDENT_CODE 1)
add_library(MyLib_shared SHARED $<TARGET_OBJECTS:objlib>)
add_library(MyLib_static STATIC $<TARGET_OBJECTS:objlib>)
Then to set each one to the same name you can use the OUTPUT_NAME property:
set_target_properties(MyLib_shared PROPERTIES OUTPUT_NAME MyLib)
set_target_properties(MyLib_static PROPERTIES OUTPUT_NAME MyLib)
Exactly, so you artificially make CMake think that 2 different targets should end up with the same name on the filesystem. It does not work for instance on Win because the import .lib of the shared get overwritten by the static. This is not exactly a solution, but rather a hack (or workaround).
It's neither; it's an optimization, because otherwise the user would just build the library twice, once for shared and another for static. Of course, this type of optimization mainly affects system maintainers, so everyday users of cmake don’t see this as a big problem.
We can of course iterate further (set the output folder per type, etc).
- having a set of dependencies that is not driven by a high level CMakeLists.txt. You advocate the solution of packaging, but this does not target all platforms,
How does this not target all platforms?
Do I have a centralized (or virtualized like inside vagga/docker or virtualenv) and official packet manager on Win32 or OSX? I know tools exist (brew, chocolatey, etc). What about the other platforms (Android)? What about cross compilation?
There are bpm, cget, conan, and hunter, to name a few that are cross-platform and target all platforms.
and just translates the same problem to another layer to my opinion. As a developer, in order to work on a library X that depends on Y, you should install Y, and this information should appear in X (so this is an implicit knowledge). What this process does is that it put X and Y in some same level of knowledge: a flatten set of packages. This is done by BJam already the same, but at compilation/build step, and without the burden of the extra management of packages (update upstream Y for instance, when Y can be a set of many packages, and obviously in a confined, repeatable and isolated development environment). But maybe you think of something else.
I don’t follow this at all. For example, when I want to build the hmr library here: https://github.com/pfultz2/hmr
All I have to do after cloning it is: `cget build`, then it will go and grab the dependencies because they have been listed in the requirements.txt file.
Then I am dependent on another tool, cget, maintained by ... you :) Also from the previous thread, if my project has not the "standard" cget layout, then cget will not work (yet?).
There are no layout requirements. It just requires a CMakeLists.txt at the top level (which all cmake-supported libraries have), but the library can be organized however you like.
I also need another file, "requirements" that I need to maintain externally to the build system.
But building and installing the dependencies is external to the build system anyways. The requirements.txt just lets you automate this process.
I do that often for my python packages, and it is "easy" but difficult to stabilize sometimes, especially in complex dependency graph (and their can be conflicting versions, etc). I can see good things in cget, I can also see weak points.
Currently cget doesn’t handle versions. I plan to support channels in the future, which can support versions and resolve dependencies using a SAT solver (which pip does not do).
What I am saying is that you delegate the complexity to another layer, call it cget or Catkin, or pip. And developers should do also the packaging, and this is not easy (and needs a whole infrastructure to make it right, like PPAs or pypi).
The complexity is there, which I hope tools like bpm or cget can help with. However, resolving the dependencies by putting everything in a superproject is more of a hack and doesn’t scale.
BTW, is cget able to work offline?
Yes.
To me this is a highly non trivial task to do with CMake, and ends up in half backed solutions like ROS/Catkin (http://wiki.ros.org/catkin/conceptual_overview), which is really not CMake and is just making things harder for everyone.
Cmake already handles the packaging and finding dependencies, cget just provides the mechanism to retrieve the packages using the standard cmake process. This why you can use it to install zlib or even blas, as it doesn’t require an extra dependency management system.
Well, I really cannot tell for cget. CMake finds things that are installed in expected locations for instance, otherwise the FIND_PATHS should be indicated (and propagated to the dependency graph).
It sets the CMAKE_PREFIX_PATH(and a few other variables), which cmake uses to find libraries.
What if for instance, it needs an updated/downgraded version of the upstream? How cget does manage that?
`cget -U` will replace the current version.
Is there an equivalent to virtualenv? Right now for boost, I clone the superproject, and the artifacts and dependencies are confined withing this clone (up to doxygen, docbook etc).
By default it installs everything in the local directory `cget`, but this can be changed by using the `--prefix` flag or setting the `CGET_PREFIX` environment variable.
- I can continue... such as targets subset selection. It is doable with CMake with, "I think" some umbrella projects, but again this is hard to maintain and requires a high level orchestration. Only for the tests for instance: suppose I do not want to compile them in my first step, and then I change my mind, I want to run a subset of them. What I also want is not wasting my time in waiting for a billion of files to compile, I just want the minimal compilation. So it comes to my mind that EXCLUDE_FROM_ALL might be used, but when I run ctest -R something*, I get an error... Maybe you know a good way of doing that in cmake?
I usually add the tests using this(I believe Boost.Hana does the same):
add_custom_target(check COMMAND ${CMAKE_CTEST_COMMAND} -VV -C ${CMAKE_CFG_INTDIR})
function(add_test_executable TEST_NAME)
  add_executable(${TEST_NAME} EXCLUDE_FROM_ALL ${ARGN})
  if(WIN32)
    add_test(NAME ${TEST_NAME} WORKING_DIRECTORY ${LIBRARY_OUTPUT_PATH} COMMAND ${TEST_NAME}${CMAKE_EXECUTABLE_SUFFIX})
  else()
    add_test(NAME ${TEST_NAME} COMMAND ${TEST_NAME})
  endif()
  add_dependencies(check ${TEST_NAME})
  set_tests_properties(${TEST_NAME} PROPERTIES FAIL_REGULAR_EXPRESSION "FAILED")
endfunction(add_test_executable)
Then when I want to build the library I just run `cmake --build .` and then when I want to run the test, I can run `cmake --build . --target check`. Now if I want to run just one of the tests I can do `cmake --build . --target test_name && ./test_name` just as easy. I have not ever had the need to run subset of tests, this is usually the case when there is nested projects, but is easily avoided when the project is separated into separate components.
You are strengthening my point, you write an umbrella target for your purpose. My example with the tests was a trap: if you run "cmake --build . --target check" you end up building "all" the tests. To have a finer granularity, you should write "add_test_executable_PROJECTX" etc. BJam knows how to do that, also with a eg. STATIC version of some upstream library, defined at the point it is consumed (and not at the point it is declared/defined), and built only if needed, without the need to do some mumbo/jumbo with object files.
I don’t see how that is something that cmake doesn’t do either.
What I am saying is that it is indeed possible, I also know solutions, but this is not native to cmake.
Yes, it's possible, and a module would help make it possible in a simpler way, although I don’t know how common it is to group tests. In general, I usually just focus on one test or all the tests.
Finally, for boost, it could provide some high-level cmake functions so all of these things can happen consistently across libraries.
Sure. Or ... BJam should be given some more care and visibility, like a GSoC (bis) track?
But its not entirely technology that is missing, its the community that is missing, and I don’t think a GSoC will help create a large community for boost build.
That is true. I see it as an chicken and egg problem also, and we have to start somewhere.
Where Bjam will always loose is the ability to generate IDE environments, natively, and this is a major reason why cmake will have a more lively community. I believe that a BJam to cmake is possible, but even in that case, Bjam will live in the shadow of cmake.
Yep, and instead of competing with cmake, boost could collaborate with cmake and would have a much larger impact.
I was proposing GSoC for, eg. start thinking about the syntax in bjam …
Yes the bjam syntax is the worst part.
Raffi

On 23 April 2016 at 07:34, Paul Fultz II <pfultz2@yahoo.com> wrote:
Currently cget doesn’t handle versions.
cppan (a tool I wrote) can handle versions. E.g.: https://cppan.org/pvt.cppan.demo.sqlite3/versions

In general, cppan generates a CMakeLists.txt file with information about building dependencies, which is included into the main CMakeLists.txt. Example of a deps file (and demo project):

https://github.com/cppan/demo_project/blob/master/cppan.yml
https://github.com/cppan/cppan/wiki/Config-Commands#dependencies

For more info about cppan see http://lists.boost.org/Archives/boost/2016/03/228419.php -- Egor Pugin

Le 23/04/16 à 06:34, Paul Fultz II a écrit :
[snip]
Then to set each one to the same name you can use the OUTPUT_NAME property:
set_target_properties(MyLib_shared PROPERTIES OUTPUT_NAME MyLib)
set_target_properties(MyLib_static PROPERTIES OUTPUT_NAME MyLib)
Exactly, so you artificially make CMake think that 2 different targets should end up with the same name on the filesystem. It does not work for instance on Win because the import .lib of the shared get overwritten by the static. This is not exactly a solution, but rather a hack (or workaround).
Its neither, this is optimization because the user could just build the library twice, once for shared and another for static. Of course, this type of optimization mainly affects system maintainers and so everyday users of cmake don’t see this as a big problem.
Yet, having the same output name in the case where you build twice leads to undefined behaviour (the .lib gets overwritten), and this is not natively supported by CMake (using e.g. CMAKE_ARCHIVE_OUTPUT_DIRECTORY to make the distinction does not work on its own).
We can of course iterate further (set the output folder per type, etc).
- having a set of dependencies that is not driven by a high level CMakeLists.txt. You advocate the solution of packaging, but this does not target all platforms,
How does this not target all platforms?
Do I have a centralized (or virtualized like inside vagga/docker or virtualenv) and official packet manager on Win32 or OSX? I know tools exist (brew, chocolatey, etc). What about the other platforms (Android)? What about cross compilation?
There is bpm, cget, conan, and hunter to name a few that is cross platform and targets all platforms.
You missed the "official" and "centralized" parts. apt/dpkg or yum are official and centralized package manager, cget is not. Why should it be official and centralized? Because 1/ official is usually one, or at least all officials can work together (new Ubuntu for instance) 2/ centralized becase if we end up of having several package manager, then it is a mess (eg. apt for pip installed packages) as those do not communicate each other. Example: I have a pip python package compiled against openCV from the system, and then I update openCV on the system. Also I can definitely see a problem in supporting another tool. What would happen to boost if cget is "deprecated"? Example: Fink/MacPort/HomeBrew.
and just translates the same problem to another layer to my opinion. As a developer, in order to work on a library X that depends on Y, you should install Y, and this information should appear in X (so this is an implicit knowledge). What this process does is that it put X and Y in some same level of knowledge: a flatten set of packages. This is done by BJam already the same, but at compilation/build step, and without the burden of the extra management of packages (update upstream Y for instance, when Y can be a set of many packages, and obviously in a confined, repeatable and isolated development environment). But maybe you think of something else.
I don’t follow this at all. For example, when I want to build the hmr library here: https://github.com/pfultz2/hmr
All I have to do after cloning it is: `cget build`, then it will go and grab the dependencies because they have been listed in the requirements.txt file.
Then I am dependent on another tool, cget, maintained by ... you :) Also from the previous thread, if my project has not the "standard" cget layout, then cget will not work (yet?).
There is no layout requirements. It just requires a CMakeLists.txt at the top level(which all cmake-supported libraries have), but the library can be organized however.
The part "It just requires a CMakeLists.txt at the top level" is by definition a layout requirement, which is in contradiction with the part "There is no layout requirements". Also the "(which all cmake-supported libraries have)" is not a requirement of CMake itself, it is just a "good practice".
I also need another file, "requirements" that I need to maintain externally to the build system.
But building and installing the dependencies is external to the build system anyways. The requirements.txt just lets you automate this process.
I do that often for my python packages, and it is "easy" but difficult to stabilize sometimes, especially in complex dependency graph (and their can be conflicting versions, etc). I can see good things in cget, I can also see weak points.
Currently cget doesn’t handle versions. I plan to support channels in the future which can support versions and resolve dependencies using a SAT solver(which pip does not do).
SAT solver, interesting... why would I need that complexity for solving dependencies? I see versions as a "range of possible", which makes a (possibly empty) intersection of half spaces.
What I am saying is that you delegate the complexity to another layer, call it cget or Catkin, or pip. And developers should do also the packaging, and this is not easy (and needs a whole infrastructure to make it right, like PPAs or pypi).
The complexity is there, which I hope tools like bpm or cget can help with. However, resolving the dependencies by putting everything in a superproject is more of a hack and doesn’t scale.
Right now it scales pretty well with BJam.
BTW, is cget able to work offline?
Yes.
Good :)
To me this is a highly non trivial task to do with CMake, and ends up in half backed solutions like ROS/Catkin (http://wiki.ros.org/catkin/conceptual_overview), which is really not CMake and is just making things harder for everyone.
Cmake already handles the packaging and finding dependencies, cget just provides the mechanism to retrieve the packages using the standard cmake process. This why you can use it to install zlib or even blas, as it doesn’t require an extra dependency management system.
Well, I really cannot tell for cget. CMake finds things that are installed in expected locations for instance, otherwise the FIND_PATHS should be indicated (and propagated to the dependency graph).
It sets the CMAKE_PREFIX_PATH(and a few other variables), which cmake uses to find libraries.
What if we need conflicting CMAKE_PREFIX_PATH? eg one for openCV and another one for Qt?
What if for instance, it needs an updated/downgraded version of the upstream? How cget does manage that?
`cget -U` will replace the current version.
Does that downgrade as well?
Is there an equivalent to virtualenv? Right now for boost, I clone the superproject, and the artifacts and dependencies are confined withing this clone (up to doxygen, docbook etc).
By default it installs everything in the local directory `cget`, but this can be changed by using the `--prefix` flag or setting the `CGET_PREFIX` environment variable.
- I can continue... such as targets subset selection. It is doable with CMake with, "I think" some umbrella projects, but again this is hard to maintain and requires a high level orchestration. Only for the tests for instance: suppose I do not want to compile them in my first step, and then I change my mind, I want to run a subset of them. What I also want is not wasting my time in waiting for a billion of files to compile, I just want the minimal compilation. So it comes to my mind that EXCLUDE_FROM_ALL might be used, but when I run ctest -R something*, I get an error... Maybe you know a good way of doing that in cmake?
I usually add the tests using this(I believe Boost.Hana does the same):
add_custom_target(check COMMAND ${CMAKE_CTEST_COMMAND} -VV -C ${CMAKE_CFG_INTDIR})
function(add_test_executable TEST_NAME)
  add_executable(${TEST_NAME} EXCLUDE_FROM_ALL ${ARGN})
  if(WIN32)
    add_test(NAME ${TEST_NAME} WORKING_DIRECTORY ${LIBRARY_OUTPUT_PATH} COMMAND ${TEST_NAME}${CMAKE_EXECUTABLE_SUFFIX})
  else()
    add_test(NAME ${TEST_NAME} COMMAND ${TEST_NAME})
  endif()
  add_dependencies(check ${TEST_NAME})
  set_tests_properties(${TEST_NAME} PROPERTIES FAIL_REGULAR_EXPRESSION "FAILED")
endfunction(add_test_executable)
Then when I want to build the library I just run `cmake --build .` and then when I want to run the test, I can run `cmake --build . --target check`. Now if I want to run just one of the tests I can do `cmake --build . --target test_name && ./test_name` just as easy. I have not ever had the need to run subset of tests, this is usually the case when there is nested projects, but is easily avoided when the project is separated into separate components.
You are strengthening my point, you write an umbrella target for your purpose. My example with the tests was a trap: if you run "cmake --build . --target check" you end up building "all" the tests. To have a finer granularity, you should write "add_test_executable_PROJECTX" etc. BJam knows how to do that, also with a eg. STATIC version of some upstream library, defined at the point it is consumed (and not at the point it is declared/defined), and built only if needed, without the need to do some mumbo/jumbo with object files.
I don’t know see how that is something that cmake doesn’t do either.
Let me (try to) explain my point with an "analogy" with templates vs overloads.

What cmake can do is declare possibly N combinations:

targetA(variant1, compilation_options1);
targetA(variant1, compilation_optionsM);
...
targetA(variantN, compilation_optionsM);

and then consume a subset of the declared combinations:

targetA(variantX, compilation_optionsY);

with 1 <= X <= N, 1 <= Y <= M.

What BJam can do is:

template <class variants, class compilation_options>
targetA(variants, compilation_options);

and then consume any:

targetA(variantX, compilation_optionsY);

with the same flexibility as templates: the instance generating a version of targetA is defined at the point it is consumed. If you do not see to what extent this is useful, please compare the overload vs the template approach in C++.
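A rough CMake approximation of "instantiate at the point of consumption" would be a function that creates the requested variant lazily, on first use - a hand-rolled sketch with hypothetical names, i.e. precisely the non-native machinery being described:

function(consume_targetA variant)  # variant is STATIC or SHARED
  if(NOT TARGET targetA_${variant})
    add_library(targetA_${variant} ${variant} ${TARGETA_SOURCES})
  endif()
endfunction()

consume_targetA(STATIC)  # created here, at the point of consumption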
What I am saying is that it is indeed possible, I also know solutions, but this is not native to cmake.
Yes, its possible, and a module would help make it possible in a simpler way, although, I don’t know how common it is to group tests. In general, I usually just focus on one test or all the tests.
Tests was an example, and sometimes we end up doing things that are not common. At least I know that CMake or BJam do not tell me what to do, they offer the tools/language, it is up to me to implement it the way I need it.
Finally, for boost, it could provide some high-level cmake functions so all of these things can happen consistently across libraries.
Sure. Or ... BJam should be given some more care and visibility, like a GSoC (bis) track?
But its not entirely technology that is missing, its the community that is missing, and I don’t think a GSoC will help create a large community for boost build.
That is true. I see it as an chicken and egg problem also, and we have to start somewhere.
Where Bjam will always loose is the ability to generate IDE environments, natively, and this is a major reason why cmake will have a more lively community. I believe that a BJam to cmake is possible, but even in that case, Bjam will live in the shadow of cmake.
Yep, and instead of competing with cmake, boost could collaborate with cmake and would have a much larger impact.
Maybe the CMake people are interested, but I do not see to what extent. They are de facto limited by the capabilities of the IDEs.

On Apr 23, 2016, at 9:30 AM, Raffi Enficiaud <raffi.enficiaud@mines-paris.org> wrote:
Le 23/04/16 à 06:34, Paul Fultz II a écrit :
[snip]
Then to set each one to the same name you can use the OUTPUT_NAME property:
set_target_properties(MyLib_shared PROPERTIES OUTPUT_NAME MyLib) set_target_properties(MyLib_static PROPERTIES OUTPUT_NAME MyLib)
Exactly, so you artificially make CMake think that 2 different targets should end up with the same name on the filesystem. It does not work for instance on Win because the import .lib of the shared get overwritten by the static. This is not exactly a solution, but rather a hack (or workaround).
Its neither, this is optimization because the user could just build the library twice, once for shared and another for static. Of course, this type of optimization mainly affects system maintainers and so everyday users of cmake don’t see this as a big problem.
Yet, having the same output name in case you build twice led to an undefined behaviour (.lib gets overwritten), and is not natively supported by CMake (using eg. CMAKE_ARCHIVE_OUTPUT_DIRECTORY for making the distinction does not work alone).
Yes, that's just for Windows, which would need special treatment.
We can of course iterate further (set the output folder per type, etc).
- having a set of dependencies that is not driven by a high level CMakeLists.txt. You advocate the solution of packaging, but this does not target all platforms,
How does this not target all platforms?
Do I have a centralized (or virtualized like inside vagga/docker or virtualenv) and official packet manager on Win32 or OSX? I know tools exist (brew, chocolatey, etc). What about the other platforms (Android)? What about cross compilation?
There is bpm, cget, conan, and hunter to name a few that is cross platform and targets all platforms.
You missed the "official" and "centralized" parts. apt/dpkg or yum are official and centralized package manager, cget is not. Why should it be official and centralized?
Bpm wouldn’t be official and centralized?
Because 1/ official is usually one, or at least all officials can work together (new Ubuntu for instance) 2/ centralized becase if we end up of having several package manager, then it is a mess (eg. apt for pip installed packages) as those do not communicate each other. Example: I have a pip python package compiled against openCV from the system, and then I update openCV on the system.
But that's the same problem with boost now. If a boost library depended on openCV and then the system updated openCV, the user would have to rebuild boost; however, with some form of packaging system, only a small set of libraries needs to be rebuilt.
Also I can definitely see a problem in supporting another tool. What would happen to boost if cget is "deprecated”?
Cget is open source. Also, it's fairly non-intrusive, so it can be easily replaced by another tool if necessary.
Example: Fink/MacPort/HomeBrew.
and just translates the same problem to another layer to my opinion. As a developer, in order to work on a library X that depends on Y, you should install Y, and this information should appear in X (so this is an implicit knowledge). What this process does is that it put X and Y in some same level of knowledge: a flatten set of packages. This is done by BJam already the same, but at compilation/build step, and without the burden of the extra management of packages (update upstream Y for instance, when Y can be a set of many packages, and obviously in a confined, repeatable and isolated development environment). But maybe you think of something else.
I don’t follow this at all. For example, when I want to build the hmr library here: https://github.com/pfultz2/hmr
All I have to do after cloning it is: `cget build`, then it will go and grab the dependencies because they have been listed in the requirements.txt file.
Then I am dependent on another tool, cget, maintained by ... you :) Also from the previous thread, if my project has not the "standard" cget layout, then cget will not work (yet?).
There is no layout requirements. It just requires a CMakeLists.txt at the top level(which all cmake-supported libraries have), but the library can be organized however.
The part "It just requires a CMakeLists.txt at the top level" is by definition a layout requirement, which is in contradiction with the part "There is no layout requirements". Also the "(which all cmake-supported libraries have)" is not a requirement of CMake itself, it is just a "good practice”.
It is a requirement of cmake. If I call `cmake some-dir` then a CMakeLists.txt needs to be in ‘some-dir'. So then cget just clones the repository (or unpacks a tar file, or copies a directory on your computer) and calls cmake on that directory. There are no special layout requirements.
I also need another file, "requirements" that I need to maintain externally to the build system.
But building and installing the dependencies is external to the build system anyways. The requirements.txt just lets you automate this process.
I do that often for my python packages, and it is "easy" but difficult to stabilize sometimes, especially in complex dependency graph (and their can be conflicting versions, etc). I can see good things in cget, I can also see weak points.
Currently cget doesn’t handle versions. I plan to support channels in the future which can support versions and resolve dependencies using a SAT solver(which pip does not do).
SAT solver, interesting... why would I need that complexity for solving dependencies? I see versions as a "range of possible", which makes (an possibly empty) intersection of half spaces.
A SAT solver is what most package managers use to resolve constraints (such as dpkg).
What I am saying is that you delegate the complexity to another layer, call it cget or Catkin, or pip. And developers should do also the packaging, and this is not easy (and needs a whole infrastructure to make it right, like PPAs or pypi).
The complexity is there, which I hope tools like bpm or cget can help with. However, resolving the dependencies by putting everything in a superproject is more of a hack and doesn’t scale.
Right now it scales pretty well with BJam.
The fact that I need to download the entire boost to build and test hana using bjam suggests it doesn’t scale at all.
BTW, is cget able to work offline?
Yes.
Good :)
To me this is a highly non trivial task to do with CMake, and ends up in half backed solutions like ROS/Catkin (http://wiki.ros.org/catkin/conceptual_overview), which is really not CMake and is just making things harder for everyone.
Cmake already handles the packaging and finding dependencies, cget just provides the mechanism to retrieve the packages using the standard cmake process. This why you can use it to install zlib or even blas, as it doesn’t require an extra dependency management system.
Well, I really cannot tell for cget. CMake finds things that are installed in expected locations for instance, otherwise the FIND_PATHS should be indicated (and propagated to the dependency graph).
It sets the CMAKE_PREFIX_PATH(and a few other variables), which cmake uses to find libraries.
What if we need conflicting CMAKE_PREFIX_PATH? eg one for openCV and another one for Qt?
CMAKE_PREFIX_PATH is a list.
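For instance (the paths here are illustrative), several prefixes can be supplied at once as a semicolon-separated CMake list:

cmake -DCMAKE_PREFIX_PATH="/opt/opencv;/opt/qt5" path/to/source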
What if for instance, it needs an updated/downgraded version of the upstream? How cget does manage that?
`cget -U` will replace the current version.
Does that downgrade as well?
Yes, if you give it an older version it will replace the library with that version.
Is there an equivalent to virtualenv? Right now for boost, I clone the superproject, and the artifacts and dependencies are confined withing this clone (up to doxygen, docbook etc).
By default it installs everything in the local directory `cget`, but this can be changed by using the `--prefix` flag or setting the `CGET_PREFIX` environment variable.
- I can continue... such as targets subset selection. It is doable with CMake with, "I think" some umbrella projects, but again this is hard to maintain and requires a high level orchestration. Only for the tests for instance: suppose I do not want to compile them in my first step, and then I change my mind, I want to run a subset of them. What I also want is not wasting my time in waiting for a billion of files to compile, I just want the minimal compilation. So it comes to my mind that EXCLUDE_FROM_ALL might be used, but when I run ctest -R something*, I get an error... Maybe you know a good way of doing that in cmake?
I usually add the tests using this(I believe Boost.Hana does the same):
add_custom_target(check COMMAND ${CMAKE_CTEST_COMMAND} -VV -C ${CMAKE_CFG_INTDIR})
function(add_test_executable TEST_NAME)
  add_executable(${TEST_NAME} EXCLUDE_FROM_ALL ${ARGN})
  if(WIN32)
    add_test(NAME ${TEST_NAME} WORKING_DIRECTORY ${LIBRARY_OUTPUT_PATH} COMMAND ${TEST_NAME}${CMAKE_EXECUTABLE_SUFFIX})
  else()
    add_test(NAME ${TEST_NAME} COMMAND ${TEST_NAME})
  endif()
  add_dependencies(check ${TEST_NAME})
  set_tests_properties(${TEST_NAME} PROPERTIES FAIL_REGULAR_EXPRESSION "FAILED")
endfunction(add_test_executable)
Then when I want to build the library I just run `cmake --build .` and then when I want to run the test, I can run `cmake --build . --target check`. Now if I want to run just one of the tests I can do `cmake --build . --target test_name && ./test_name` just as easy. I have not ever had the need to run subset of tests, this is usually the case when there is nested projects, but is easily avoided when the project is separated into separate components.
You are strengthening my point, you write an umbrella target for your purpose. My example with the tests was a trap: if you run "cmake --build . --target check" you end up building "all" the tests. To have a finer granularity, you should write "add_test_executable_PROJECTX" etc. BJam knows how to do that, also with a eg. STATIC version of some upstream library, defined at the point it is consumed (and not at the point it is declared/defined), and built only if needed, without the need to do some mumbo/jumbo with object files.
I don’t know see how that is something that cmake doesn’t do either.
Let me (try to) explain my point with an "analogy" with templates vs overloads:
What cmake can do is declare possibly N combinations:

targetA(variant1, compilation_options1);
targetA(variant1, compilation_optionsM);
...
targetA(variantN, compilation_optionsM);

and then consume a subset of the declared combinations:

targetA(variantX, compilation_optionsY);

with 1 <= X <= N, 1 <= Y <= M.

What BJam can do is:

template <class variants, class compilation_options>
targetA(variants, compilation_options);

and then consume any:

targetA(variantX, compilation_optionsY);

with the same flexibility as templates: the instance generating a version of targetA is defined at the point it is consumed.
I do not follow this analogy at all.
If you do not see in what extent it is useful, please compare the overload vs the template approach in C++.
Cmake is a fairly dynamic language, so I don’t think it is as limited as you think.
What I am saying is that it is indeed possible, I also know solutions, but this is not native to cmake.
Yes, its possible, and a module would help make it possible in a simpler way, although, I don’t know how common it is to group tests. In general, I usually just focus on one test or all the tests.
Tests was an example, and sometimes we end up doing things that are not common. At least I know that CMake or BJam do not tell me what to do, they offer the tools/language, it is up to me to implement it the way I need it.
Yes, and the nice thing about cmake is that it leads you to a simpler, more modular design to solve the problem, instead of trying to link in 20 different library targets that are variations of shared and static from the same library.
Finally, for boost, it could provide some high-level cmake functions so all of these things can happen consistently across libraries.
Sure. Or ... BJam should be given some more care and visibility, like a GSoC (bis) track?
But its not entirely technology that is missing, its the community that is missing, and I don’t think a GSoC will help create a large community for boost build.
That is true. I see it as an chicken and egg problem also, and we have to start somewhere.
Where Bjam will always loose is the ability to generate IDE environments, natively, and this is a major reason why cmake will have a more lively community. I believe that a BJam to cmake is possible, but even in that case, Bjam will live in the shadow of cmake.
Yep, and instead of competing with cmake, boost could collaborate with cmake and would have a much larger impact.
Maybe CMake ppl are interested, but I do not see in what extent. They are de facto limited by the capabilities of the IDEs.

Le 23/04/16 à 19:19, Paul Fultz II a écrit :
Yet, having the same output name in case you build twice led to an undefined behaviour (.lib gets overwritten), and is not natively supported by CMake (using eg. CMAKE_ARCHIVE_OUTPUT_DIRECTORY for making the distinction does not work alone).
Yes, thats just for windows, which would need special treatment.
Which is "just" one platform that is targeted, and "just" one use case not covered by what you propose.
You missed the "official" and "centralized" parts. apt/dpkg or yum are official and centralized package manager, cget is not. Why should it be official and centralized?
Bpm wouldn’t be official and centralized?
I do not know what BPM is.
Because 1/ official is usually one, or at least all officials can work together (new Ubuntu for instance) 2/ centralized becase if we end up of having several package manager, then it is a mess (eg. apt for pip installed packages) as those do not communicate each other. Example: I have a pip python package compiled against openCV from the system, and then I update openCV on the system.
But thats the same problem with boost now. If a boost library depended on openCV and then the system updated openCV then the user would have to rebuild boost, however with some form of packaging system, it only needs to rebuild a small set of libraries.
Yet, we do not leave the developer/user with the false impression that he installed something properly.
Also I can definitely see a problem in supporting another tool. What would happen to boost if cget is "deprecated”?
Cget is open source. Also, its fairly non-intrusive, so it can be easily replaced by another tool if necessary.
There are a lot of dead open-source projects.
Example: Fink/MacPort/HomeBrew. [snip] The part "It just requires a CMakeLists.txt at the top level" is by definition a layout requirement, which is in contradiction with the part "There is no layout requirements". Also the "(which all cmake-supported libraries have)" is not a requirement of CMake itself, it is just a "good practice”.
It is a requirement of cmake. If I call `cmake some-dir` then a CMakeLists.txt needs to be in ‘some-dir'. So then cget just clones the repository(or a unpacks a tar file or copies a directory on your computer) and calls cmake on that directory. There is no special layout requirements.
I know how cmake works. From what I understood, your requirement is to have a top level CMakeLists.txt. This is not a CMake requirement (as I can have references to a parent dir in my CMakeLists.txt).
[snip] SAT solver, interesting... why would I need that complexity for solving dependencies? I see versions as a "range of possible", which makes (an possibly empty) intersection of half spaces.
SAT solver is what most package managers use to resolve constraints(such as dpkg).
I stand corrected. Yet this is something cget does not have.
What I am saying is that you delegate the complexity to another layer, call it cget or Catkin, or pip. And developers should do also the packaging, and this is not easy (and needs a whole infrastructure to make it right, like PPAs or pypi).
The complexity is there, which I hope tools like bpm or cget can help with. However, resolving the dependencies by putting everything in a superproject is more of a hack and doesn’t scale.
Right now it scales pretty well with BJam.
The fact I need to download entire boost to build and test hana using bjam seems like it doesn’t scale at all.
You made a point. But packaging is not the purpose of the boost superproject.
What if we need conflicting CMAKE_PREFIX_PATH? eg one for openCV and another one for Qt?
CMAKE_PREFIX_PATH is a list.
Right, it was not the case in 3.0 apparently.
[snip]
I don’t know see how that is something that cmake doesn’t do either.
Let me (try to) explain my point with an "analogy" with templates vs overloads:
What cmake can do is: -------- declare possibly N combinations targetA(variant1, compilation_options1); targetA(variant1, compilation_optionsM); ... targetA(variantN, compilation_optionM); --------
and then consume a subset of the declared combination:
-------- targetA(variantX, compilation_optionsY); -------- with 1<= X <= N, 1 <= Y <= M.
-------- What BJam can do is:
-------- template <class variants, class compilation_options> targetA(variants, compilation_options);
-------- and then consume any: targetA(variantX, compilation_optionsY); --------
with the same flexibility as templates: the instance of generating a version of targetA is defined at the point it is consumed.
I do not follow this analogy at all.
I felt smart when I made this analogy. And this is still the case :) BJam defines metatargets (or target functions) which is fundamentally different from simple targets: see here http://www.boost.org/build/doc/html/bbv2/overview/build_process.html. Properties associated to CMake targets are static. They may be associated with generating functions (https://cmake.org/cmake/help/v3.3/manual/cmake-generator-expressions.7.html) yet it is less powerful. I see it like targetA(f(variants, compilation_options)) which I believe BJam can do (maybe with a less sexy syntax...)
If you do not see in what extent it is useful, please compare the overload vs the template approach in C++.
Cmake is a fairly dynamic language, so I don’t think it is as limited as you think.
We have diverging opinions. I am using cmake for more than 10 years now, I do not feel like I am missing some big part of it. I feel more like I am yet in the learning curve of BJam instead (although it would be a risky choice for my other projects... but this is interesting).
What I am saying is that it is indeed possible, I also know solutions, but this is not native to cmake.
Yes, its possible, and a module would help make it possible in a simpler way, although, I don’t know how common it is to group tests. In general, I usually just focus on one test or all the tests.
Tests was an example, and sometimes we end up doing things that are not common. At least I know that CMake or BJam do not tell me what to do, they offer the tools/language, it is up to me to implement it the way I need it.
Yes and the nice thing about cmake, is it leads you to a simple more modular design to solve the problem instead of trying to link in 20 different library targets that are a variation of shared and static from the same library.
I do not see any problem for boost, which is the scope here. My opinion is this: *if* a CMake solution is "production ready" for boost, then let's continue the discussion. Right now, you exposed the "range of possible", while I tried to point out what is expected.

On Apr 23, 2016, at 2:12 PM, Raffi Enficiaud <raffi.enficiaud@mines-paris.org> wrote:
On 23/04/16 at 19:19, Paul Fultz II wrote:
Yet, having the same output name when you build twice leads to undefined behaviour (the .lib gets overwritten), and this is not natively supported by CMake (using e.g. CMAKE_ARCHIVE_OUTPUT_DIRECTORY to make the distinction does not work on its own).
Yes, that's just for Windows, which would need special treatment.
Which is "just" one platform that is targeted, and "just" one use case not covered by what you propose.
You missed the "official" and "centralized" parts. apt/dpkg or yum are official and centralized package managers; cget is not. Why should it be official and centralized?
Bpm wouldn’t be official and centralized?
I do not know what BPM is.
https://github.com/boostorg/bpm
Because 1/ official is usually one, or at least all officials can work together (new Ubuntu, for instance) 2/ centralized because if we end up having several package managers, then it is a mess (e.g. apt vs. pip-installed packages) as those do not communicate with each other. Example: I have a pip python package compiled against openCV from the system, and then I update openCV on the system.
But that's the same problem with boost now. If a boost library depended on openCV and the system then updated openCV, the user would have to rebuild boost; with some form of packaging system, however, only a small set of libraries needs to be rebuilt.
Yet we do not leave the developer/user with the false impression that he installed something properly.
I don't see how it is different with cget or bpm.
Also I can definitely see a problem in supporting another tool. What would happen to boost if cget is "deprecated”?
Cget is open source. Also, it's fairly non-intrusive, so it can be easily replaced by another tool if necessary.
There are a lot of dead open-source projects.
Example: Fink/MacPort/HomeBrew. [snip] The part "It just requires a CMakeLists.txt at the top level" is by definition a layout requirement, which contradicts the part "There is no layout requirements". Also, the "(which all cmake-supported libraries have)" is not a requirement of CMake itself, it is just a "good practice".
It is a requirement of cmake. If I call `cmake some-dir` then a CMakeLists.txt needs to be in `some-dir`. So cget just clones the repository (or unpacks a tar file, or copies a directory on your computer) and calls cmake on that directory. There are no special layout requirements.
I know how cmake works. From what I understood, your requirement is to have a top-level CMakeLists.txt. This is not a CMake requirement (as I can have references to a parent dir in my CMakeLists.txt).
Well, cget doesn't do anything fancy, it just calls cmake on the directory; if the CMakeLists.txt references files in another directory then cmake will find them (although I've never seen cmake do this before, and it's not very clearly documented either).
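In other words, what cget runs per package is roughly the stock CMake configure/build/install sequence, pointed at a private prefix; a minimal sketch of the equivalent manual steps (the prefix path here is illustrative, not cget's actual layout):
--------
# configure, build, and install into a private prefix
cmake -DCMAKE_INSTALL_PREFIX=$HOME/cget-prefix some-dir
cmake --build .
cmake --build . --target install
--------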
[snip] A SAT solver, interesting... why would I need that complexity for solving dependencies? I see versions as a "range of possible", which makes a (possibly empty) intersection of half-spaces.
A SAT solver is what most package managers (such as dpkg) use to resolve constraints.
I stand corrected. Yet this is something cget does not have.
What I am saying is that you delegate the complexity to another layer, call it cget or Catkin or pip. And developers should also do the packaging, and this is not easy (and needs a whole infrastructure to make it right, like PPAs or pypi).
The complexity is there, which I hope tools like bpm or cget can help with. However, resolving the dependencies by putting everything in a superproject is more of a hack and doesn't scale.
Right now it scales pretty well with BJam.
The fact that I need to download the entire boost tree to build and test Hana using bjam suggests it doesn't scale at all.
You have a point. But packaging is not the purpose of the boost superproject.
What if we need conflicting CMAKE_PREFIX_PATH settings? E.g. one for openCV and another one for Qt?
CMAKE_PREFIX_PATH is a list.
Right, it was not the case in 3.0 apparently.
It's a list in cmake 2.8 and 3.5.
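Since the variable is a semicolon-separated list, both prefixes can be passed in a single configure step; a minimal sketch (the paths are made up):
--------
cmake -DCMAKE_PREFIX_PATH="/opt/opencv;/opt/qt5" some-dir
--------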
[snip]
I don't see how that is something that cmake doesn't do either.
Let me (try to) explain my point with an "analogy" between templates and overloads:
What cmake can do is declare possibly N x M combinations:
--------
targetA(variant1, compilation_options1);
targetA(variant1, compilation_optionsM);
...
targetA(variantN, compilation_optionsM);
--------
and then consume a subset of the declared combinations:
--------
targetA(variantX, compilation_optionsY);
--------
with 1 <= X <= N, 1 <= Y <= M.
What BJam can do is:
--------
template <class variants, class compilation_options>
targetA(variants, compilation_options);
--------
and then consume any:
--------
targetA(variantX, compilation_optionsY);
--------
with the same flexibility as templates: the instantiation of a version of targetA is defined at the point where it is consumed.
I do not follow this analogy at all.
I felt smart when I made this analogy. And this is still the case :) BJam defines metatargets (or target functions), which are fundamentally different from simple targets: see http://www.boost.org/build/doc/html/bbv2/overview/build_process.html. Properties associated with CMake targets are static. They may be associated with generating functions (https://cmake.org/cmake/help/v3.3/manual/cmake-generator-expressions.7.html), yet that is less powerful. I see it like
targetA(f(variants, compilation_options))
which I believe BJam can do (maybe with a less sexy syntax…)
I don’t see why you couldn’t create metatargets in cmake as well.
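As a sketch of what a hand-rolled "metatarget" could look like in CMake - the function name add_meta_library and the variant set are invented for illustration, not an existing CMake API:
--------
# Declares one library target per variant from a single call, so the
# N x M combinations need not be written out by hand.
function(add_meta_library name)
  foreach(kind IN ITEMS STATIC SHARED)
    string(TOLOWER ${kind} suffix)
    add_library(${name}_${suffix} ${kind} ${ARGN})
  endforeach()
endfunction()

add_meta_library(targetA src/a.cpp)  # creates targetA_static and targetA_shared
--------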
If you do not see to what extent it is useful, please compare the overload vs. the template approach in C++.
Cmake is a fairly dynamic language, so I don’t think it is as limited as you think.
We have diverging opinions. I have been using cmake for more than 10 years now, and I do not feel like I am missing some big part of it. I feel more like I am still on the learning curve of BJam instead (although it would be a risky choice for my other projects... but this is interesting).
What I am saying is that it is indeed possible, I also know solutions, but this is not native to cmake.
Yes, it's possible, and a module would help make it possible in a simpler way, although I don't know how common it is to group tests. In general, I usually just focus on one test or all the tests.
Tests was an example, and sometimes we end up doing things that are not common. At least I know that CMake or BJam do not tell me what to do, they offer the tools/language, it is up to me to implement it the way I need it.
Yes, and the nice thing about cmake is that it leads you to a simpler, more modular design to solve the problem, instead of trying to link in 20 different library targets that are variations of shared and static from the same library.
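For comparison, the idiomatic cmake approach to the shared/static split is a single target whose kind is chosen at configure time via BUILD_SHARED_LIBS, rather than pre-declaring every variation (target and source names are placeholders):
--------
add_library(targetA src/a.cpp)  # STATIC or SHARED depending on BUILD_SHARED_LIBS
--------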
I do not see any problem for boost, which is the scope here.
My opinion is this: *if* a CMake solution is "production ready" for boost, then let's continue the discussion. Right now, you exposed the "range of possible", while I tried to point out what is expected.
Well, I don't expect to move the entire boost to cmake; that would be a very big mountain to move. Rather, some core libraries could start supporting cmake, and little by little more libraries can move to cmake. Most newer libraries already support cmake. Perhaps in the future it would be nice to allow cmake-only libraries in boost as well (although they won't be in the main distribution).

On Apr 24, 2016, at 3:45 AM, Raffi Enficiaud <raffi.enficiaud@mines-paris.org> wrote:
On 24/04/16 at 00:39, Paul Fultz II wrote:
[snip]
Because 1/ official is usually one, or at least all officials can work together (new Ubuntu, for instance) 2/ centralized because if we end up having several package managers, then it is a mess (e.g. apt vs. pip-installed packages) as those do not communicate with each other. Example: I have a pip python package compiled against openCV from the system, and then I update openCV on the system.
But that's the same problem with boost now. If a boost library depended on openCV and the system then updated openCV, the user would have to rebuild boost; with some form of packaging system, however, only a small set of libraries needs to be rebuilt.
Yet we do not leave the developer/user with the false impression that he installed something properly.
I don't see how it is different with cget or bpm.
Maybe there is a misunderstanding in the notion of package then. For me, a package is something you install into the system/global environment, or into an isolated environment that is able to remain consistent.
That is what happens when I build and install boost, but I can't easily remove the version (unless I installed it in a separate directory) if it later conflicts with another library. I could try to install over the top of it with a new version, but there could be files from the previous version that don't get removed and could cause problems. However, with a package manager I can remove the version of boost and completely replace it.
Also I can definitely see a problem in supporting another tool. What would happen to boost if cget is "deprecated”?
[snip]
It is a requirement of cmake. If I call `cmake some-dir` then a CMakeLists.txt needs to be in `some-dir`. So cget just clones the repository (or unpacks a tar file, or copies a directory on your computer) and calls cmake on that directory. There are no special layout requirements.
I know how cmake works. From what I understood, your requirement is to have a top-level CMakeLists.txt. This is not a CMake requirement (as I can have references to a parent dir in my CMakeLists.txt).
Well, cget doesn't do anything fancy, it just calls cmake on the directory; if the CMakeLists.txt references files in another directory then cmake will find them (although I've never seen cmake do this before, and it's not very clearly documented either).
If you think that cget is the way to go and is paving the way for a better use of boost, why not just propose it for integration into boost?
Well, first, it's written in Python. This is mainly to simplify distribution. Since I am doing this in my free time, I don't have a lot of time to prepare distributions for deb, RPM, homebrew, and windows, and I would rather focus my efforts on the functionality. Although some time in the future, I would like to write a C++ version. Secondly, it only supports cmake. Maybe once boost starts supporting more cmake, a C++ version could be integrated into boost.
I believe it needs some work and a consistent way of integrating libraries to cget should also be explained.
Yes, and it needs more documentation. I have had issues opened on features that were already supported. I hope to get to that sometime soon. What are you referring to by "a consistent way of integrating libraries"?
[snip] CMAKE_PREFIX_PATH is a list.
Right, it was not the case in 3.0 apparently.
It's a list in cmake 2.8 and 3.5.
I was referring to the (wrong) doc from https://cmake.org/cmake/help/v3.0/variable/CMAKE_PREFIX_PATH.html
Yes, the doc there is not clear, because the find_library page (https://cmake.org/cmake/help/v3.0/command/find_library.html#command:find_lib...) clearly shows it being used as a list. So they are improving the documentation.
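The list behaviour is easiest to see through find_library itself, which searches the lib/ subdirectory of every entry in CMAKE_PREFIX_PATH; a small sketch (library name and paths are examples only):
--------
set(CMAKE_PREFIX_PATH "/opt/opencv;/opt/qt5")
find_library(OPENCV_CORE_LIBRARY NAMES opencv_core)
message(STATUS "found: ${OPENCV_CORE_LIBRARY}")
--------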
[snip]
I don't see how that is something that cmake doesn't do either.
Let me (try to) explain my point with an "analogy" between templates and overloads:
What cmake can do is declare possibly N x M combinations:
--------
targetA(variant1, compilation_options1);
targetA(variant1, compilation_optionsM);
...
targetA(variantN, compilation_optionsM);
--------
and then consume a subset of the declared combinations:
--------
targetA(variantX, compilation_optionsY);
--------
with 1 <= X <= N, 1 <= Y <= M.
What BJam can do is:
--------
template <class variants, class compilation_options>
targetA(variants, compilation_options);
--------
and then consume any:
--------
targetA(variantX, compilation_optionsY);
--------
with the same flexibility as templates: the instantiation of a version of targetA is defined at the point where it is consumed.
I do not follow this analogy at all.
I felt smart when I made this analogy. And this is still the case :) BJam defines metatargets (or target functions), which are fundamentally different from simple targets: see http://www.boost.org/build/doc/html/bbv2/overview/build_process.html. Properties associated with CMake targets are static. They may be associated with generating functions (https://cmake.org/cmake/help/v3.3/manual/cmake-generator-expressions.7.html), yet that is less powerful. I see it like
targetA(f(variants, compilation_options))
which I believe BJam can do (maybe with a less sexy syntax…)
I don’t see why you couldn’t create metatargets in cmake as well.
Well, this is not CMake; this would be a "boost2cmake" layer, which might not be as thin as I would like. You are right and I am right: everything is possible given the appropriate amount of effort.
If you do not see to what extent it is useful, please compare the overload vs. the template approach in C++.
Cmake is a fairly dynamic language, so I don’t think it is as limited as you think.
We have diverging opinions. I have been using cmake for more than 10 years now, and I do not feel like I am missing some big part of it. I feel more like I am still on the learning curve of BJam instead (although it would be a risky choice for my other projects... but this is interesting).
What I am saying is that it is indeed possible, I also know solutions, but this is not native to cmake.
Yes, it's possible, and a module would help make it possible in a simpler way, although I don't know how common it is to group tests. In general, I usually just focus on one test or all the tests.
Tests was an example, and sometimes we end up doing things that are not common. At least I know that CMake or BJam do not tell me what to do, they offer the tools/language, it is up to me to implement it the way I need it.
Yes, and the nice thing about cmake is that it leads you to a simpler, more modular design to solve the problem, instead of trying to link in 20 different library targets that are variations of shared and static from the same library.
I do not see any problem for boost, which is the scope here.
My opinion is this: *if* a CMake solution is "production ready" for boost, then let's continue the discussion. Right now, you exposed the "range of possible", while I tried to point out what is expected.
Well, I don't expect to move the entire boost to cmake; that would be a very big mountain to move. Rather, some core libraries could start supporting cmake, and little by little more libraries can move to cmake. Most newer libraries already support cmake. Perhaps in the future it would be nice to allow cmake-only libraries in boost as well (although they won't be in the main distribution).
Fine, but I think that leaving some behind is not a good thing either.
Nope, it's not, so I would hope the community would take up the effort to integrate those libraries as well.

On Apr 23, 2016, at 5:56 PM, Peter Dimov <lists@pdimov.com> wrote:
Paul Fultz II wrote:
The fact that I need to download the entire boost tree to build and test Hana using bjam suggests it doesn't scale at all.
You don't need the entire Boost to test Hana with Boost.Build. There is no easy way at present to get just the subset you would need, or even determine what that subset is, but this is not a limitation of Boost.Build or of the current Boost structure. The modules are optional. The main test script does include everything but it should be easy to fix it to walk the tree instead. The main build script has already been made intelligent enough to build what's there and not build what isn’t.
Yes, it's capable. My point is that boost should move in the direction of being built and distributed as modules (perhaps using bpm) instead of putting everything in a superproject.
To fully test Hana, by the way, you do need a number of other Boost libraries, mostly because of the ext/ directory. Or so Boostdep informs me.

On 24/04/16 at 01:12, Paul Fultz II wrote:
You don't need the entire Boost to test Hana with Boost.Build. There is no easy way at present to get just the subset you would need, or even determine what that subset is, but this is not a limitation of Boost.Build or of the current Boost structure. The modules are optional. The main test script does include everything but it should be easy to fix it to walk the tree instead. The main build script has already been made intelligent enough to build what's there and not build what isn’t.
Yes, it's capable. My point is that boost should move in the direction of being built and distributed as modules (perhaps using bpm) instead of putting everything in a superproject.
There are good things in having a superproject though, especially (off the top of my head):
- a release is associated with a unique revision of the superproject (not as many revisions as there are libraries)
- library authors do not need to take care of releases; this is done by the (wonderful) release team
- the set of modules is scoped and its state is consistent. I believe this is harder to achieve if, e.g., at some point libraries external to boost are referenced.
To be honest, as a developer I do not care so much about being forced to clone the whole superproject; it works quite well on all the platforms I have tried so far. The number of libraries grows quite slowly as well, and I have had to clone only once per machine so far. I also rely on platform packagers: even if they are sometimes quite slow in delivering new versions (Brew is quite reactive, Debian much less), they are quite good at their job.
But as you point out, I believe that tools such as BPM can be augmented with some "intelligent cloning" of submodules, which would avoid that. You would then clone the superproject in a shallow manner, and then use BPM to clone the subset of libraries you need. This addresses the particular use case you are referring to with Hana, and I believe it would be useful for many people as well. The DAG that BPM maintains can be updated manually, or in a commit-oriented manner by a robot, for instance (although I do not like robots being empowered with commit ability).
To fully test Hana, by the way, you do need a number of other Boost libraries, mostly because of the ext/ directory. Or so Boostdep informs me.

Raffi Enficiaud wrote:
But as you point out, I believe that tools such as BPM can be augmented with some "intelligent cloning" of submodules, which would avoid that. You would then clone the superproject in a shallow manner, and then use BPM to clone the subset of libraries you need. This addresses the particular use case you are referring to with Hana, and I believe it would be useful for many people as well. The DAG that BPM maintains can be updated manually, or in a commit-oriented manner by a robot, for instance (although I do not like robots being empowered with commit ability).
My original idea for BPM was for the packages and the dependency file to be prepared by some kind of release script, to replace the monolithic release. But now I think that this is unnecessary; I plan to rework it to download the packages directly from Github, and to scan the dependencies in place, so as to eliminate the need for bpm-specific packaging. The idea is to be able to say
bpm -r <commit-or-tag> test filesystem
and it would go and download filesystem and all its test dependencies from Github and then execute the equivalent of
b2 <commit-or-tag>/libs/filesystem/test
This would be very useful in .travis.yml, except that you'd need to somehow bootstrap bpm first. On Windows, one would be able to just download bpm.exe.

On Apr 24, 2016, at 5:56 AM, Peter Dimov <lists@pdimov.com> wrote:
Raffi Enficiaud wrote:
But as you point out, I believe that tools such as BPM can be augmented with some "intelligent cloning" of submodules, which would avoid that. You would then clone the superproject in a shallow manner, and then use BPM to clone the subset of libraries you need. This addresses the particular use case you are referring to with Hana, and I believe it would be useful for many people as well. The DAG that BPM maintains can be updated manually, or in a commit-oriented manner by a robot, for instance (although I do not like robots being empowered with commit ability).
My original idea for BPM was for the packages and the dependency file to be prepared by some kind of release script, to replace the monolithic release.
What would be nice is if BPM were extensible to support libraries in the incubator. That is, the user would have some form of channel or PPA that could be added, which would install these unofficial libraries as well.
But now I think that this is unnecessary; I plan to rework it to download the packages directly from Github, and to scan the dependencies in place, so as to eliminate the need for a bpm-specific packaging.
Wouldn’t it need to download all of boost to do that? Otherwise, how does it know which header belongs to which library?
The idea is to be able to say
bpm -r <commit-or-tag> test filesystem
and it would go and download filesystem and all its test dependencies from Github and then execute the equivalent of
b2 <commit-or-tag>/libs/filesystem/test
This would be very useful in .travis.yml, except that you'd need to somehow bootstrap bpm first.
And b2 as well, correct?

Paul Fultz II wrote:
What would be nice is if BPM were extensible to support libraries in the incubator. That is, the user would have some form of channel or PPA that could be added, which would install these unofficial libraries as well.
I could in principle borrow a page from your book and make it treat bpm install pdimov:mp11 as referring to github.com/pdimov/mp11, placing it into libs/mp11, but the problem then is versioning. When everything is under the boostorg umbrella, the nice thing about the superproject is that it gives me a global version across all submodules. So when you install filesystem 1.60.0, it knows to get system 1.60.0 as well. (pdimov/mp11 doesn't work because we already have numeric/odeint.) Actually, now that I think about it, I'm not sure downloading the Github tarball of boostorg/boost would give me the information I need to download the correct revision of the submodules... and I'd rather not integrate git into bpm. :-)
But now I think that this is unnecessary; I plan to rework it to download the packages directly from Github, and to scan the dependencies in place, so as to eliminate the need for a bpm-specific packaging.
Wouldn’t it need to download all of boost to do that? Otherwise, how does it know which header belongs to which library?
My current plan is to autodetect the library from boost/{library}/... or boost/{library}.hpp and rely on a list with the headers that do not fit this form. I just added --list-exceptions to boostdep for this purpose; the main offenders are boost/archive, which belongs to serialization, boost/graph/distributed and boost/graph/parallel, which are in graph_parallel, and boost/detail/winapi which is the winapi module.
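The detection rule described here can be sketched in a few lines of CMake string handling; this illustrates the rule only and is not the actual boostdep code:
--------
# boost/{library}/... or boost/{library}.hpp -> {library}
foreach(header "boost/filesystem/path.hpp" "boost/filesystem.hpp")
  string(REGEX REPLACE "^boost/([^/.]+)[/.].*$" "\\1" module "${header}")
  message(STATUS "${header} -> ${module}")  # both print: filesystem
endforeach()
--------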
This would be very useful in .travis.yml, except that you'd need to somehow bootstrap bpm first.
And b2 as well, correct?
I could integrate the b2 engine into bpm; the idea being that bpm.exe should be all one needs to install and build everything else. I don't have the time at present to flesh all this out, though, so it could be that I've overlooked something.

On April 23, 2016 1:19:08 PM EDT, Paul Fultz II <pfultz2@yahoo.com> wrote:
On Apr 23, 2016, at 9:30 AM, Raffi Enficiaud <raffi.enficiaud@mines-paris.org> wrote:
Let me (try to) explain my point with an "analogy" between templates and overloads:
What cmake can do is declare possibly N x M combinations:
--------
targetA(variant1, compilation_options1);
targetA(variant1, compilation_optionsM);
...
targetA(variantN, compilation_optionsM);
--------
and then consume a subset of the declared combinations:
--------
targetA(variantX, compilation_optionsY);
--------
with 1 <= X <= N, 1 <= Y <= M.
What BJam can do is:
--------
template <class variants, class compilation_options>
targetA(variants, compilation_options);
--------
and then consume any:
--------
targetA(variantX, compilation_optionsY);
--------
with the same flexibility as templates: the instantiation of a version of targetA is defined at the point where it is consumed.
I do not follow this analogy at all.
With CMake, you often have to add special cases and conditional logic to account for variations that might be chosen, and those variations have to be chosen in separate invocations, if not in separate build trees. With BB, you express things at a more abstract level and let the tool do the lower-level work.
___
Rob (Sent from my portable computation engine)

On Apr 24, 2016, at 6:43 PM, Rob Stewart <rstewart@ptd.net> wrote:
On April 23, 2016 1:19:08 PM EDT, Paul Fultz II <pfultz2@yahoo.com> wrote:
On Apr 23, 2016, at 9:30 AM, Raffi Enficiaud <raffi.enficiaud@mines-paris.org> wrote:
Let me (try to) explain my point with an "analogy" between templates and overloads:
What cmake can do is declare possibly N x M combinations:
--------
targetA(variant1, compilation_options1);
targetA(variant1, compilation_optionsM);
...
targetA(variantN, compilation_optionsM);
--------
and then consume a subset of the declared combinations:
--------
targetA(variantX, compilation_optionsY);
--------
with 1 <= X <= N, 1 <= Y <= M.
What BJam can do is:
--------
template <class variants, class compilation_options>
targetA(variants, compilation_options);
--------
and then consume any:
--------
targetA(variantX, compilation_optionsY);
--------
with the same flexibility as templates: the instantiation of a version of targetA is defined at the point where it is consumed.
I do not follow this analogy at all.
With CMake, you often have to add special cases and conditional logic to account for variations that might be chosen,
Most variations, such as shared/static or debug/release, are already supported by cmake, so the build script usually doesn't need conditional logic. However,
those variations have to be chosen in separate invocations, if not in separate build trees.
Yes, that is generally true, although I don't consider it very problematic.
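Concretely, the per-variation invocations look like this - one build tree per configuration (directory names are arbitrary):
--------
(mkdir debug && cd debug && cmake -DCMAKE_BUILD_TYPE=Debug ..)
(mkdir release && cd release && cmake -DCMAKE_BUILD_TYPE=Release -DBUILD_SHARED_LIBS=ON ..)
--------
Each tree is then built independently with cmake --build .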

On Apr 20, 2016, at 10:49 AM, Robert Ramey <ramey@rrsd.com> wrote:
Inspired by the recent discussion regarding CMake, I've spend some more time looking at this. As a result, I've done the following.
a) I experimented with using CMake in the more customary manner in my canonical project Safe Numerics - (see Boost Library Incubator). The current version does distribute CMakeLists.txt around the subdirectories of the library root. My original objections to doing this were assuaged when I realized that boost build does the same thing. That is I typically have Jamfiles in side of test, example and sometimes other directories.
b) One of the features that I much like about Boost Build is the possibility of arbitrary nesting of Jamfiles. This permits one to build/test a portion of one's library (e.g. test, example) as well as the whole library. For a large library this is indispensable. In spite of what many people think, this is not easily supported with CMake. However, I have found that with some extra effort, it is possible to support this to the extent we need it. So in this project ONE CAN BUILD (actually create an IDE project or makefile that builds) ANY OF THE "SUBPROJECTS" (TEST, EXAMPLE) AS WELL AS THE LIBRARY “SUPERPROJECT"
You can create high-level targets to build a subset of a library. It's not automatic, but it is still possible.
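A sketch of such a high-level target - an umbrella target that drives only a library's tests; the target names here are hypothetical:
--------
add_custom_target(safe_numerics_tests)  # builds nothing by itself
add_dependencies(safe_numerics_tests test_add test_checked)  # assumed existing test targets
--------
Building that one target from the IDE or command line then builds just that subset.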
c) CMake is indispensable to me:
* it creates IDE projects for "any?" platform. I use IDE alot. * everyone else uses it - so it makes it easier to promote my work to others. * it's easier to make work - still a pain though * There is lots of information, around the net about how to use CMake, how easy it is etc. Although they help when you're looking for an answer (which is all the time) - they really betray how complex, arbitrary and complex the system is. * It has an almost of idiot proof GUI version which I use a lot. I really like this. * CMake is well maintained and supported by it's promoters.
d) Boost Build * either just works (great!) or doesn't. If it doesn't it's almost impossible fix without getting help * I've never run across anyone outside of boost who uses it. It makes it harder to promote my work. * It's natural to compose projects into "super projects" * it's almost impossible to integrate with and IDE. At one time, I had things setup so I could debug executables created with boost build with the Visual Studio IDE and debugger. But it was just too fragile and time consuming to keep everything in sync. * it has a lot of "automatic" behavior which can be really, really confusion. A small example: you've got multiple compilers on your system. When it discovers this, it just picks the "best" one and you don't know which one you got until the project builds (or not). I'm sure this was implemented this way to make usage of boost build "simple" but it has the opposite effect. Much better would be fail immediately with a message "multiple compilers found:... use toolset=<compiler name> to select desired one."
Some Conclusions - I'm trying to not make this a rant
a) The ideal, platform independent build system does not yet exist. I guessing it never will. I'm sure it won't happen in my life time - but then I'm 68 - maybe you'll get lucky.
b) Both systems are much more fragile, complicated and opaque than their promoters try to make you believe. It's not that they're lying, they truely believe in their own stuff. There is much re-invention of the wheel - The each created their own (half-assed) little language for goodness sake!!!
Yes, and boost could provide a cmake module with high-level utilities for some of these common tasks.
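As a sketch of the kind of utility such a module could offer - the function name boost_test is invented here, not an existing module:
--------
# Wraps the add_executable/add_test pair that every library's test
# CMakeLists.txt would otherwise repeat.
function(boost_test name)
  add_executable(${name} ${ARGN})
  add_test(NAME ${name} COMMAND ${name})
endfunction()
--------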
c) Neither has really resolved the issue of nested projects in a clear way. Boost Build probably does or can do this. CMake has a system of "packages" and a whole 'nuther layer about "finding" them. Assuming it can be made to work - given the amount of time I've invested in CMake, I should know how to do this by now.
I think nested projects are bad, and it's made worse because tooling doesn't support them well. I think the packaging approach is better. Dependencies can be found in a build-independent way using pkg-config. Eventually, the boost superproject with submodules would go away and people would build and install boost libraries using either bpm or cget. There may still be a boost superproject that depends on all boost libraries, so that all of them can be easily installed using bpm or cget, but it wouldn't rely on git submodules to bring the libraries together, as that brings its own set of problems.
d) I think it's time for Boost to be a little more open about tolerating/encouraging alternative build systems. I think our documentation approach is a model. Yeah, it's a hodgepodge. But the various ways of doing things pretty much work and generally don't stop working, and we don't have to constantly spend effort bringing things up to the latest greatest system (which we couldn't agree upon anyway). We have libraries which have been going strong for 15 years - and people can still read the docs.
e) We should find some way to recognize those who have made the system work as well as it has. Doug Gregor (boost book), Eric Niebler, Joel Guzman (quickbook), Vladimir Prus, Rene Rivera, Steve Watanabe. I know there are others, but these come to mind immediately.
Note that I have only addressed the issue of library development, which is my main interest. I'm really not seeing the issues related to users of libraries. In particular, CMake has the whole "find" thing, and I'm still not even seeing the need for it. If I want to use a library, I can build the library and move it to a common place with the headers, specify the include directory, and I'm on my way.
But if that library has dependencies then you need to find the dependencies as well. It's like when I want to use Boost.Filesystem: I need to link in Boost.System as well. If boost provided pkg-config files then I would just say I want Boost.Filesystem and Boost.System would be linked in automatically.
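Assuming boost shipped a hypothetical boost_filesystem.pc whose Requires: line names boost_system, the consuming side in cmake would be a sketch like this (app is a placeholder target):
--------
find_package(PkgConfig REQUIRED)
pkg_check_modules(BOOST_FS REQUIRED boost_filesystem)  # hypothetical .pc name
include_directories(${BOOST_FS_INCLUDE_DIRS})
link_directories(${BOOST_FS_LIBRARY_DIRS})
target_link_libraries(app ${BOOST_FS_LIBRARIES})  # boost_system comes along via Requires:
--------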
I'm sure someone will step up to enlighten me on this.
Robert Ramey

On 4/22/16 10:25 AM, Paul Fultz II wrote:
Note that I have only addressed the issue of library development, which is my main interest. I'm really not seeing the issues related to users of libraries. In particular, CMake has the whole "find" thing, and I'm still not even seeing the need for it. If I want to use a library, I can build the library and move it to a common place with the headers, specify the include directory, and I'm on my way.
But if that library has dependencies then you need to find the dependencies as well. It's like when I want to use Boost.Filesystem: I need to link in Boost.System as well. If boost provided pkg-config files then I would just say I want Boost.Filesystem and Boost.System would be linked in automatically.
FWIW - the way I handle this is: use boost build to build the whole of modular boost, and stage the libraries to some known directory. Then:
a) Create a CMake project for my project
b) use CMake "find" to find the boost libraries
c) hit configure, then generate - and I have an IDE project that builds.
I use the CMake GUI to do this. It's pretty brain-dead simple - no command line switches etc., etc. This is why I don't really understand what problem the "package" functionality is designed to solve. I should note that I rarely study any of these things in a systematic, exhaustive way - I'm only there because I've got some other monkey on my back, and I just experiment with the examples and maybe some cheat sheet I find on the web until it works. I know, I know ... But if I really learn it, I've forgotten it by the time I do it the next time, so I still have to go into hack mode. I like to see stuff have the hacker-friendly mode as the built-in default. Robert Ramey
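In CMake terms, steps b) and c) above are the stock FindBoost usage; a minimal sketch (version and component names are examples):
--------
find_package(Boost 1.60 REQUIRED COMPONENTS filesystem system)
add_executable(app main.cpp)
include_directories(${Boost_INCLUDE_DIRS})
target_link_libraries(app ${Boost_LIBRARIES})
--------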
participants (15): alainm, Boris Schäling, Egor Pugin, Gavin Lambert, Klaim - Joël Lamotte, Paul A. Bristow, Paul Fultz II, Peter Dimov, Raffi Enficiaud, Rene Rivera, Rob Stewart, Robert Ramey, Steven Watanabe, Vinnie Falco, Vladimir Prus