Hi All,

So I would like to start work on adding cmake support to boost libraries. To reduce the boilerplate needed to write the cmake, I wrote BCM (Boost Cmake Modules), here: https://boost-cmake.github.io/bcm/doc/html/ Of course, the documentation is still a WIP.

The core part is bcm_boost_package. To write cmake for a library, you would just write something like this:

    bcm_boost_package(core
      VERSION 1.0
      DEPENDS config
    )

It also provides a mechanism to add tests. My goal at first is to update libraries at the per-module level (one PR at a time), so to use cmake, the user would build and install each library individually with cmake. Later, we can update the cmake modules to support superproject building (which may be necessary for libraries that have circular dependencies). Right now, there are no modules for building documentation; this can be added later as well.

So does anyone have any feedback on this setup? If you have ideas, or would like to help update boost libraries, contributions are welcome.

Thanks, Paul
Hi,

I'm working in the same area, but more globally - dependency management. See [1]. There are a lot of packages there already, not only boost. I made a small announcement about CPPAN here (on the boost ML) some time ago. Cppan tries to manage C/C++ libs with a simple declarative syntax (YAML), with the ability to include and insert cmake scripts of any complexity.
From such a description or declaration, a complex CMakeLists.txt is generated and then used by CMake. See an example of the generated file for boost::log [6]. Of course, cppan supports including selected deps in your usual CMakeLists.txt project.
You can find all boost packages here [2]. They are currently under the 'pvt.cppan.demo.boost' namespace - 'private.username.my_demo_directory'. If boost is willing to adopt cppan officially, it could create and use an organization namespace 'org.boost' ('com.???' is also available). Such naming comes from other languages (java, c#) and also from the future C++ Modules proposal. For example, names could be 'org.boost.hana', 'org.boost.asio', 'org.google.protobuf', 'com.intel.tbb', 'com.ibm.whatever', 'org.qt.multimedia', 'org.kde.*', etc.

I have been adding boost there since 1.61.0, see [3]. Dependencies are not the strongest part of boost, because many of its libs are header only, so people don't track them properly. I wrote a program to track them automatically [4]. (It already builds with cppan alone, in script style: 'cppan --build main.cpp'.) A cppan specification example is on the page [5].

[1] https://cppan.org/
[2] https://cppan.org/projects/pvt.cppan.demo.boost
[3] https://cppan.org/pvt.cppan.demo.boost.asio
[4] https://github.com/egorpugin/boost_dependencies/blob/master/main.cpp
[5] https://cppan.org/pvt.cppan.demo.boost.asio/version/1.63.0/specification
[6] http://pastebin.com/wVwTQJFP

--
Egor Pugin
On Thu, 2017-01-12 at 02:55 +0300, Egor Pugin wrote:
Hi,
I'm working in the same area, but more globally - dependency management. See [1]. There are a lot of packages there already, not only boost. I made a small announcement about CPPAN here (on the boost ML) some time ago.
The goal of the cmake modules is to help libraries support the standard cmake process (building, installing, and testing). Once this is set up, other third-party tools can take care of installing the packages, or users can install each module manually if they choose.
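Concretely, the flow these modules aim to support is just the standard cmake invocation sequence (the repo URL and install prefix here are illustrative):

```shell
git clone https://github.com/boostorg/core
cd core && mkdir build && cd build
cmake -DCMAKE_INSTALL_PREFIX=$HOME/boost-install ..
cmake --build .
ctest .
cmake --build . --target install
```

Any tool that drives this sequence (cget, vcpkg, a distro package, or a user by hand) then works unchanged.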
Cppan tries to manage C/C++ libs with a simple declarative syntax (YAML), with the ability to include and insert cmake scripts of any complexity. From such a description or declaration, a complex CMakeLists.txt is generated and then used by CMake. See an example of the generated file for boost::log [6]. Of course, cppan supports including selected deps in your usual CMakeLists.txt project.
It does have quite an extensive list of packages, but one problem I find with most package management tools built for C++ is that they don't support the common cmake libraries. I see it has support for Boost.Fit; however, I don't see how I could install my other libraries, such as prove or args, without needing extra configuration (I could be wrong about this). With cget, I can install these libraries, plus unrelated libraries like cmark or tensile (which is not supported in CPPAN either), all because it supports the standard cmake flow.
You can find all boost packages here [2]. They are currently under the 'pvt.cppan.demo.boost' namespace - 'private.username.my_demo_directory'. If boost is willing to adopt cppan officially, it could create and use an organization namespace 'org.boost' ('com.???' is also available). Such naming comes from other languages (java, c#) and also from the future C++ Modules proposal. For example, names could be 'org.boost.hana', 'org.boost.asio', 'org.google.protobuf', 'com.intel.tbb', 'com.ibm.whatever', 'org.qt.multimedia', 'org.kde.*', etc. I have been adding boost there since 1.61.0, see [3]. Dependencies are not the strongest part of boost, because many of its libs are header only, so people don't track them properly. I wrote a program to track them automatically [4]. (It already builds with cppan alone, in script style: 'cppan --build main.cpp'.) A cppan specification example is on the page [5].
Actually, Peter Dimov already has a tool to find the dependencies, and he has recently added tracking of the test dependencies. Does CPPAN support building and running the tests? I see it lists the dependencies, but it doesn't seem to distinguish between usage, build, and test dependencies. Paul
The goal of the cmake modules is to help libraries support the standard cmake process (building, installing, and testing). Once this is set up, other third-party tools can take care of installing the packages, or users can install each module manually if they choose.
The standard cmake process works fine in most cases, that's true. But it's not very strict and uniform.
... but one problem I find with most package management tools built for C++ is that they don't support the common cmake libraries.
What do you mean here? Qt? What are common cmake libs?
I don't see how I could install my other libraries such as prove or args, without needing extra configuration(I could be wrong about this).
For libraries that have a default layout (include, src), cppan tries to detect it, and there's really no extra config needed. You just drop in a repo url (plus tag & version, optionally) when adding a version to your library (after registration).
With cget, I can install these libraries, plus unrelated libraries like cmark or tensile(which is not supported in CPPAN either), all because it supports the standard cmake flow.
Yes, cppan requires yet another build config; it cannot work with or process existing cmake rules. But it supports include()s of other cmake files (e.g. checks or setup files).
... Boost.Fit, ... prove or args ... cmark or tensile(which is not supported in CPPAN either)
I've added some of your libs [1]. As for cmark/tensile - I add new libs from time to time, when I hear about them from different sources. Cmark is added now too; tensile is on the todo list.
Does CPPAN support building and running the tests?
Not yet. They're on the roadmap, with things like build matrices, installation, doc generation, sanitizers, and the other useful things Niall described in his post. Cppan could bring a uniform way of doing this for every package added. I have some thoughts about these things, but I am still investigating the best way to implement them.

Cppan also aims for decentralization. Every client already downloads a database with info about all packages (dependencies only) [2], and the full specs for each project are available in the [3] repo. So it will be very easy to create mirrors, as with debian/ubuntu/... distro packages.

[1] https://cppan.org/projects/pvt.cppan.demo.pfultz2
[2] https://github.com/cppan/database
[3] https://github.com/cppan/specs

--
Egor Pugin
So does anyone have any feedback on this setup? If you have ideas, or would like to help update boost libraries, contributions are welcome.
Given the other reply to this describing their boost cmake design, I guess I'd better mention boost-lite's:

* 100% cmake3 based (and uses "modern cmake" throughout) with no other tooling needed
* Autodiscovers any library with the boost directory layout
* Autodiscovers any tests you have and sets them up with ctest
* Provides out of the box CI ctest scripting which uploads results to a CDash for your project and updates your github website with the output from doxygen
* Automatically uses C++ Modules and precompiled headers where available
* Automatically configures clang-tidy, asan, tsan, msan and ubsan sanitiser targets
* Automatically matches git SHA in dependent git subrepos in flat dependency configurations
* Automatically merges any develop commit passing all tests on all platforms according to CDash into master branch
* Automatically packages up your library and publishes it to tarball, vcpkg (with ubuntu launchpad and homebrew in progress right now)
* Libraries based on this are 100% standalone: when you clone the git repo or unpack the tarball you are 100% ready to go. Nothing else needed, not even configure and build. No arcane command line programs to run.
* 99% compatible with VS2017's new cmake project support (I have some bugs to file with them). It's very close to working perfectly.

I wouldn't recommend that anyone else use it yet. It is very much a work in progress, but all the above is working, and you can see it in action in proposed Boost.Outcome. It also has nil documentation.

Niall

--
ned Productions Limited Consulting
http://www.nedproductions.biz/
http://ie.linkedin.com/in/nialldouglas/
On Thu, 2017-01-12 at 15:19 +0000, Niall Douglas wrote:
So does anyone have any feedback on this setup? If you have ideas, or would like to help update boost libraries, contributions are welcome.
Given the other reply to this describing their boost cmake design, I guess I'd better mention boost-lite's:
* 100% cmake3 based (and uses "modern cmake" throughout) with no other tooling needed
These modules are cmake 3 based as well, without a need for extra tooling either.
* Autodiscovers any library with the boost directory layout
The `bcm_boost_package` function does rely on the layout to add and install headers; however, it doesn't for source files. I am concerned about boost libraries conditionally adding sources on some platforms. Also, some authors may not like globbing files (unless you have found a way to work around this problem). Perhaps it could glob the sources in lib by default, and then the author could override it with a `SOURCES` arg.
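To sketch the idea (the `SOURCES` argument is a proposal here, not necessarily an existing BCM parameter, and the file names are purely illustrative):

```cmake
# By default the sources under src/ could be globbed; an author who
# needs conditional, per-platform sources overrides that explicitly:
bcm_boost_package(filesystem
  VERSION 1.0
  DEPENDS system config
  SOURCES
    src/operations.cpp
    src/path.cpp
    # generator expression adds this source only on Windows
    $<$<PLATFORM_ID:Windows>:src/windows_file_codecvt.cpp>
)
```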
* Autodiscovers any tests you have and sets them up with ctest
This is interesting, but I don't think it scales. Some tests require linking in multiple sources, or require certain flags to be enabled. I really don't see how to autodiscover tests in a way that will work for most boost libraries. Perhaps a `bcm_auto_test` function could be called by the author to do that (I would like to know what kind of conventions you follow when discovering the tests), and libraries that have more complicated testing infrastructure could add their tests manually with `bcm_add_test`.
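For illustration, the split might look like this (the exact signatures of these BCM functions are assumptions for the sake of the example, not taken from the BCM docs):

```cmake
# Simple case: let BCM discover one-file tests by convention
bcm_auto_test()

# Complicated case: a test linking several sources with extra flags,
# declared explicitly by the author
bcm_add_test(core_swap_test
  SOURCES test/swap_test_a.cpp test/swap_test_b.cpp
  DEFINITIONS BOOST_CORE_TEST_MAIN
)
```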
* Provides out of the box CI ctest scripting which uploads results to a CDash for your project and updates your github website with the output from doxygen
This is probably useful for libraries that use doxygen. Using sphinx or mkdocs, I don't need to push changes out to github, as ReadTheDocs will update the documentation on push. Plus, it will store the documentation for different tagged versions as well. This scales much more nicely than using github pages. I haven't added any support for documentation yet, as the first documentation support will need to be for the boost documentation toolchain.
* Automatically uses C++ Modules and precompiled headers where available
* Automatically configures clang-tidy, asan, tsan, msan and ubsan sanitiser targets
Support for these things would be a nice addition.
* Automatically matches git SHA in dependent git subrepos in flat dependency configurations
I am not a fan of git submodules, as they break downloading the source tarball files from github.
* Automatically merges any develop commit passing all tests on all platforms according to CDash into master branch
* Automatically packages up your library and publishes it to tarball, vcpkg (with ubuntu launchpad and homebrew in progress right now)
Adding support for CPack to create tarballs, debian, and fedora packages would be a nice addition. Mapping dependency names between different package managers can be handled through convention for boost-only libraries; external dependencies (such as zlib), however, are not so easy. Also, as the library would support the standard cmake install flow, it can easily be installed with cget (and dependencies can be installed with a requirements.txt file). I find this flow preferable to trying to update system-level package managers like homebrew or vcpkg. Although, from what I've seen of vcpkg, it works very similarly to cget, except it is windows-centric.
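A minimal CPack sketch of that idea; the per-package-manager dependency names below are exactly where the mapping problem shows up (package name and versions are illustrative):

```cmake
set(CPACK_GENERATOR "TGZ;DEB;RPM")
set(CPACK_PACKAGE_NAME "boost-filesystem")
set(CPACK_PACKAGE_VERSION "1.0.0")
# The same external dependency needs a different name per manager:
set(CPACK_DEBIAN_PACKAGE_DEPENDS "zlib1g-dev")
set(CPACK_RPM_PACKAGE_REQUIRES "zlib-devel")
include(CPack)   # must come after the variables are set
```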
* Libraries based on this are 100% standalone, when you clone the git repo or unpack the tarball you are 100% ready to go. Nothing else needed, not even configure and build. No arcane command line programs to run.
I don't understand this. The focus of these modules is to support the standard configure, build, and install flow in cmake. Trying to hack cmake into a different conventional flow seems problematic. If users don't like this flow, or are scared of typing, then external tools can be created to automate it. However, creating a different flow in cmake will just cause dissonance with other cmake libraries.
* 99% compatible with VS2017's new cmake project support (I have some bugs to file with them). It's very close to working perfectly.
I wouldn't recommend that anyone else use it yet. It is very much a work in progress, but all the above is working, and you can see it in action in proposed Boost.Outcome. It also has nil documentation.
So I tried to install your Boost.Outcome library, with no luck. First, I did `cget install ned14/boost.outcome`, and that didn't work because of the missing git submodules. So I cloned it locally with its submodules and then did `cget install boost.outcome`. It still didn't work. However, it looks like you have done a lot of awesome work here, and it would be great to integrate it into these cmake modules so other boost libraries could take advantage of it. Paul
* Autodiscovers any tests you have and sets them up with ctest
This is interesting, but I don't think it scales. Some tests require linking in multiple sources, or require certain flags to be enabled. I really don't see how to autodiscover tests in a way that will work for most boost libraries. Perhaps a `bcm_auto_test` function could be called by the author to do that (I would like to know what kind of conventions you follow when discovering the tests), and libraries that have more complicated testing infrastructure could add their tests manually with `bcm_add_test`.
The way it scales is that it makes use of directory structure. So if you fire .cpp files into /test, each is considered a ctest target, but if you put them into /test/somedir they get new semantics. In particular, you'll see Boost.AFIO v2 uses a /test structure which says to use Boost.KernelTest which is a new meta-test infrastructure.
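The convention Niall describes can be approximated in a few lines of plain cmake (a sketch, not the actual boost-lite implementation; `my_library` is a placeholder target):

```cmake
# Every .cpp directly under /test becomes its own ctest target;
# subdirectories of /test are left free for special semantics.
file(GLOB TEST_SOURCES "${CMAKE_CURRENT_SOURCE_DIR}/test/*.cpp")
foreach(test_src ${TEST_SOURCES})
  get_filename_component(test_name ${test_src} NAME_WE)
  add_executable(${test_name} ${test_src})
  target_link_libraries(${test_name} my_library)
  add_test(NAME ${test_name} COMMAND ${test_name})
endforeach()
```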
* Provides out of the box CI ctest scripting which uploads results to a CDash for your project and updates your github website with the output from doxygen
This is probably useful for libraries that use doxygen. Using sphinx or mkdocs, I don't need to push changes out to github, as ReadTheDocs will update the documentation on push. Plus it will store the documentation for different tagged versions as well. This scales much nicer than using github pages.
I don't know anybody who has ever used doxygen for anything serious and been happy with it. The good folks over at DoxyPress did an amazing job of refactoring doxygen, but in the end the fundamental design is just broken. Problem is, and I think most would also agree here, there isn't anything better than doxygen for C++ reference docs. ReadTheDocs + Breathe generates what I find to be unusable reference docs. Formatting which suits Python well suits C++ terribly.

Many years ago Stefan (along with Dave Abrahams) championed a new C++ docs tool which was much better than doxygen, but in the end the effort required to finish it proved difficult to make happen. I'm sure most would agree it's a shame.

If anybody knows of a tool which can understand doxygen markup but generates much better reference docs, I would be *extremely* interested. The really key part is that new C++ docs tooling *needs* to grok doxygen markup. So many new tools don't, and therefore get no traction, because so many C++ codebases are locked into doxygen markup.
* Automatically matches git SHA in dependent git subrepos in flat dependency configurations
I am not a fan of git submodules, as it breaks downloading the source tarball files from github.
That's a long standing bug on github. And the fault of github, not of anyone else. I really wish they let you disable the tarball download and let you supply your own tarball URL. The current broken system is very confusing for users.
* Automatically merges any develop commit passing all tests on all platforms according to CDash into master branch
* Automatically packages up your library and publishes it to tarball, vcpkg (with ubuntu launchpad and homebrew in progress right now)
Adding support for CPack to create tarballs, debian, and fedora packages would be a nice addition. Mapping dependency names between different package managers can be handled through convention for boost-only libraries; external dependencies (such as zlib), however, are not so easy.
I bailed out on that question and simply have each boost-lite library maintain the metadata for each package repo. i.e. it's the long way round.
Also, as the library would support the standard cmake install flow, it can easily be installed with cget (and dependencies can be installed with a requirements.txt file). I find this flow preferable to trying to update system-level package managers like homebrew or vcpkg. Although, from what I've seen of vcpkg, it works very similarly to cget, except it is windows-centric.
I'd personally call cget an external tool dependency. I certainly had never heard of it before you mentioned it, and I would have no idea how to install it on Windows. I am assuming it is this: https://linux.die.net/man/1/cget

I think this stuff comes back to David Sankel's notion of libraries being anti-social. If you're anti-social, you force library users up this hill of preconfig and build just to test out your library. Bjarne's been railing against that for years, and it is one of his biggest bugbears with Boost; yet trying out Boost on all platforms except Windows is a simple install from that platform's package repos, and therefore a very low hill to climb. I'm therefore fond of package repositories, and end users like them too.
* Libraries based on this are 100% standalone, when you clone the git repo or unpack the tarball you are 100% ready to go. Nothing else needed, not even configure and build. No arcane command line programs to run.
I don't understand this. The focus of these modules is to support the standard configure, build, and install flow in cmake. Trying to hack cmake into a different conventional flow seems problematic. If users don't like this flow, or are scared of typing, then external tools can be created to automate it. However, creating a different flow in cmake will just cause dissonance with other cmake libraries.
Sorry, you misunderstood me. What I meant above is that the cmake is ready to go. You don't need to run cmake generators, or run some python master cmake control script, etc. The libraries themselves are header only currently, but sometime this year I'm going to write a preprocessor stage for cmake which will have cmake, at dev time, convert a header-only library which does preprocessor metaprogramming (like Outcome does) into a single large preexpanded include file. That should reduce the gap between C++ Module include times and non-C++ Module include times for users, plus it means I can provide an easy playpen on gcc.godbolt etc.
I wouldn't recommend that anyone else use it yet. It is very much a work in progress, but all the above is working, and you can see it in action in proposed Boost.Outcome. It also has nil documentation.
So I tried to install your Boost.Outcome library, with no luck. First, I did `cget install ned14/boost.outcome`, and that didn't work because of the missing git submodules. So I cloned it locally with its submodules and then did `cget install boost.outcome`. It still didn't work.
I'm not sure about this cget tool, but `cmake --build . --target install` should work on all platforms after you've done a *recursive* git submodule checkout. By "should" I mean install is not being CI tested yet, and it could be broken after some changes I did earlier this week, so caveat emptor. A lot of people only check out the first layer of git submodules. It's very important that it's recursive, as the recursion goes deep. Failing that, just use the prebuilt tarball at https://dedi4.nedprod.com/static/files/boost.outcome-v1.0-source-latest.tar..... That tarball is what passed the tests on the CI, so it's complete and verified working.
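For the record, the recursive checkout and install Niall describes would be (repo URL taken from the `cget install ned14/boost.outcome` attempt above):

```shell
git clone --recursive https://github.com/ned14/boost.outcome
cd boost.outcome && mkdir build && cd build
cmake ..
cmake --build . --target install
# If the clone was not recursive, repair it afterwards with:
#   git submodule update --init --recursive
```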
However, it looks like you have done a lot of awesome work here, and it would be great to integrate those into the cmake modules so other boost libraries could take advantage of it.
A lot of the cool stuff mentioned above is possible because this build system imposes very strict orthodoxy on libraries using it. It is a classic cathedral design: there is exactly one way of doing things, and libraries get no choice. This has the big advantage that all client libraries get all the cool stuff for free, plus stuff added to them afterwards without needing to change them, but it also has the big disadvantage that any design mistake is a showstopper for all, which cannot be worked around easily. In other words, if I didn't think of some build use case, or if I didn't think that build use case important enough to support, you are hosed.

That's why I don't recommend anyone else use it until I've fixed more of the systemic design flaws, and then put a year of maturity on it where it isn't being constantly chopped and changed. Only then would I recommend anyone else use it, and indeed you'll probably see a website go up and an announcement here of a new collection of C++ standards aspiring libraries to complement Boost (be aware this is likely 2020 or later at current rates of progress).

Niall

--
ned Productions Limited Consulting
http://www.nedproductions.biz/
http://ie.linkedin.com/in/nialldouglas/
-----Original Message----- From: Boost [mailto:boost-bounces@lists.boost.org] On Behalf Of Niall Douglas Sent: 13 January 2017 08:35 To: boost@lists.boost.org Subject: Re: [boost] Boost Cmake Modules
* Autodiscovers any tests you have and sets them up with ctest
This is interesting, but I don't think it scales. Some tests require linking in multiple sources, or require certain flags to be enabled. I really don't see how to autodiscover tests in a way that will work for most boost libraries. Perhaps a `bcm_auto_test` function could be called by the author to do that (I would like to know what kind of conventions you follow when discovering the tests), and libraries that have more complicated testing infrastructure could add their tests manually with `bcm_add_test`.
The way it scales is that it makes use of directory structure. So if you fire .cpp files into /test, each is considered a ctest target, but if you put them into /test/somedir they get new semantics. In particular, you'll see Boost.AFIO v2 uses a /test structure which says to use Boost.KernelTest which is a new meta-test infrastructure.
* Provides out of the box CI ctest scripting which uploads results to a CDash for your project and updates your github website with the output from doxygen
This is probably useful for libraries that use doxygen. Using sphinx or mkdocs, I don't need to push changes out to github, as ReadTheDocs will update the documentation on push. Plus it will store the documentation for different tagged versions as well. This scales much nicer than using github pages.
I don't know anybody who has ever used doxygen for anything serious and been happy with it. The good folks over at DoxyPress did an amazing job of refactoring doxygen, but in the end the fundamental design is just broken.
Problem is, and I think most would also agree here, there isn't anything better than doxygen for C++ reference docs. ReadTheDocs + Breathe generates what I find to be unusable reference docs. Formatting which suits Python well suits C++ terribly.
Many years ago Stefan (along with Dave Abrahams) championed a new C++ docs tool which was much better than doxygen, but in the end the effort required to finish it proved difficult to make happen. I'm sure most would agree what a shame.
If anybody knows of a tool which can understand doxygen markup but generates much better reference docs, I would be *extremely* interested. The really key part is that new C++ docs tooling *needs* to grok doxygen markup. So many new tools don't, and therefore get no traction because so many C++ codebases are locked into doxygen markup.
I agree that I don't find the C++ output from Doxygen itself attractive (though that is partly a look'n'feel prejudice). But I've been saying for years that the really important 'standard' is the Doxygen comment syntax, or markup. It's fairly simple (though it takes a lot of variants to meet the prejudices of many different languages), but it's the only thing that is anywhere near to a standard. So you are definitely right that any tool *must* grok Doxygen comment syntax, and that all new code needs to add its comments in this format. I'd like to see this as a Boost requirement - the comments are useful to everyone, from those just reduced to reading the source code up to those using tools that produce fancier output and indexes.

The tool that is best suited to the first pass over the C++ comments is the compiler itself, and that is what the Doxygen author Dimitri van Heesch (from 1997 to date) has been doing for a year or so with Clang.

How do we focus new tool builders' attention on starting from the right place?

Paul

---
Paul A. Bristow
Prizet Farmhouse
Kendal UK LA8 8AB
+44 (0) 1539 561830
If anybody knows of a tool which can understand doxygen markup but generates much better reference docs, I would be *extremely* interested. The really key part is that new C++ docs tooling *needs* to grok doxygen markup. So many new tools don't, and therefore get no traction because so many C++ codebases are locked into doxygen markup.
I agree that I don't find the C++ output from Doxygen itself attractive (though it is partly a look'n'feel prejudice).
It's also that the XML output is structured in a very unhelpful way for any other tool to consume. It's almost easier to write a libclang parser and work from the generated AST, that's the structure you actually need.
But I've been saying for years that the really important 'standard' is the Doxygen comment syntax, or markup. [snip] How do we focus new tool builders' attention on starting from the right place?
As with a lot of other parts of the C++ ecosystem, a decent C++ reference documentation generator, commensurate with what almost every other major programming language already has, is too big a project for open source to deliver without commercial sponsorship. You need to bring together two or three very experienced (i.e. expensive) C++ developers who know exactly what needs to be done and employ them on this for a year or two until it's done and shipped with every major C++ compiler.

This is exactly the kind of basic infrastructure stuff I think the C++ Standards Foundation needs to sponsor. In a conversation I had with Bjarne he agreed, but as much as $500k is a rounding error for most corporations, getting them to pony up for stuff which improves a whole ecosystem without a direct and immediate benefit to them has proven to be very hard - and again quite unlike most other programming languages, where a central guardianship enforces funds to be given to it and spent on what's best for the language ecosystem, not on what's best for the donor. Sponsoring Eric with Ranges v3 was a lucky one-off, not currently expected to be repeated (unfortunately).

Niall

--
ned Productions Limited Consulting
http://www.nedproductions.biz/
http://ie.linkedin.com/in/nialldouglas/
On Fri, 2017-01-13 at 08:34 +0000, Niall Douglas wrote:
* Autodiscovers any tests you have and sets them up with ctest
This is interesting, but I don't think it scales. Some tests require linking in multiple sources, or require certain flags to be enabled. I really don't see how to autodiscover tests in a way that will work for most boost libraries. Perhaps a `bcm_auto_test` function could be called by the author to do that (I would like to know what kind of conventions you follow when discovering the tests), and libraries that have more complicated testing infrastructure could add their tests manually with `bcm_add_test`.
The way it scales is that it makes use of directory structure. So if you fire .cpp files into /test, each is considered a ctest target, but if you put them into /test/somedir they get new semantics. In particular, you'll see Boost.AFIO v2 uses a /test structure which says to use Boost.KernelTest which is a new meta-test infrastructure.
That will work for the common case. I assume whether a test is expected to fail is based on whether its name has "fail" in it. But how do you handle setting flags for certain tests? Compile-only tests?
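Both cases do have plain-cmake primitives a naming convention could map onto (a sketch of the idea, not what boost-lite actually does; file and target names are illustrative):

```cmake
# Expected-failure test: invert the pass/fail interpretation
add_executable(throw_fail test/throw_fail.cpp)
add_test(NAME throw_fail COMMAND throw_fail)
set_tests_properties(throw_fail PROPERTIES WILL_FAIL TRUE)

# Compile-only test: build a target at ctest time instead of running it
add_library(compile_only_check OBJECT test/compile_only.cpp)
set_target_properties(compile_only_check PROPERTIES EXCLUDE_FROM_ALL TRUE)
add_test(NAME compile_only
  COMMAND ${CMAKE_COMMAND} --build ${CMAKE_BINARY_DIR}
          --target compile_only_check)
```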
* Provides out of the box CI ctest scripting which uploads results to a CDash for your project and updates your github website with the output from doxygen
This is probably useful for libraries that use doxygen. Using sphinx or mkdocs, I don't need to push changes out to github, as ReadTheDocs will update the documentation on push. Plus it will store the documentation for different tagged versions as well. This scales much nicer than using github pages.
I don't know anybody who has ever used doxygen for anything serious and been happy with it. The good folks over at DoxyPress did an amazing job of refactoring doxygen, but in the end the fundamental design is just broken.
Problem is, and I think most would also agree here, there isn't anything better than doxygen for C++ reference docs. ReadTheDocs + Breathe generates what I find to be unusable reference docs. Formatting which suits Python well suits C++ terribly.
I don't use doxygen at all, because it can't generate good reference documentation for my libraries. So I just write it in markdown, which seems a lot easier than trying to get doxygen to generate boost-style documentation. I've never tried the Breathe plugin for sphinx, but it would be nice if there were a plugin to use standardese.
Many years ago Stefan (along with Dave Abrahams) championed a new C++ docs tool which was much better than doxygen, but in the end the effort required to finish it proved difficult to make happen. I'm sure most would agree what a shame.
If anybody knows of a tool which can understand doxygen markup but generates much better reference docs, I would be *extremely* interested. The really key part is that new C++ docs tooling *needs* to grok doxygen markup. So many new tools don't, and therefore get no traction because so many C++ codebases are locked into doxygen markup.
Like I mentioned, there is standardese: https://github.com/foonathan/standardese It is still a WIP, but it looks like it is shaping up nicely.
* Automatically matches git SHA in dependent git subrepos in flat dependency configurations
I am not a fan of git submodules, as it breaks downloading the source tarball files from github.
That's a long standing bug on github. And the fault of github, not of anyone else. I really wish they let you disable the tarball download and let you supply your own tarball URL. The current broken system is very confusing for users.
However, beyond superprojects, I don't think submodules are a good way to manage dependencies. It's best to take an approach similar to llvm: there can be a superproject that has all the components together to build, or each component can be built and installed individually.
* Automatically merges any develop commit passing all tests on all platforms according to CDash into master branch * Automatically packages up your library and publishes it to tarball, vcpkg (with ubuntu launchpad and homebrew in progress right now)
Adding support for CPack to create tarballs, Debian, and Fedora packages would be nice to add. However, mapping dependency names between different package managers is tricky: it can be handled through convention for boost-only libraries, but external dependencies (such as zlib) are not so easy.
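For illustration, a minimal sketch of what enabling CPack for such a library might look like (project name, version, and the dependency mapping are hypothetical; real metadata would come from the library):

```cmake
# Minimal CPack setup sketch. Generates a binary tarball plus
# Debian and RPM packages from the existing install() rules.
set(CPACK_PACKAGE_NAME "boost_core")
set(CPACK_PACKAGE_VERSION "1.0.0")
set(CPACK_GENERATOR "TGZ;DEB;RPM")
# The DEB generator requires a maintainer field.
set(CPACK_DEBIAN_PACKAGE_MAINTAINER "Your Name <you@example.com>")
# Mapping a boost-only dependency by naming convention; external deps
# like zlib are the hard part, since their package names differ per distro.
set(CPACK_DEBIAN_PACKAGE_DEPENDS "libboost-config-dev")
include(CPack)
```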
I bailed out on that question and simply have each boost-lite library maintain the metadata for each package repo. i.e. it's the long way round.
Also, as the library would support the standard cmake install flow, it can easily be installed with cget (and dependencies can be installed with a requirements.txt file). I find this flow preferable to trying to update system-level package managers like homebrew or vcpkg. Although, from what I've seen of vcpkg, it works very similarly to cget except that it is windows-centric.
I'd call cget an external tool dependency personally. I certainly had never heard of it before you mentioning it, and I would have no idea how to install it on Windows. I am assuming it is this: https://linux.die.net/man/1/cget
No, it's here: https://github.com/pfultz2/cget And it can easily be installed with `pip install cget`. I believe on the latest windows, python is included by default, so users won't have to install python. Furthermore, I plan on adding support for cget generating a cmake or bash script with the commands it would go through to build and install dependencies. So if you consider python too much of an external dependency, you could commit a script to your repo that will take care of everything.
I think this stuff comes back to David Sankel's notion of libraries being anti-social. If you're anti-social, you force library users up this hill of preconfig and build just to test out your library.
All libraries go through the steps of configure, build and install:

    cmake ..
    cmake --build .
    cmake --build . --target install

Trying to support a non-conventional way to build the library would be anti-social. Furthermore, after I have installed a library I would expect to use it like this:

    find_package(YourLib)
    target_link_libraries(myLib ${YourLib_LIBRARIES})

Not supporting that I would consider anti-social as well. And the point of the cmake modules that I am writing is to support this easily, especially for boost libraries.
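As a sketch of what a library has to install for that find_package() flow to work with modern imported targets (all names here are illustrative, not from any actual Boost library), assuming a header-only library:

```cmake
# Producer side: export an INTERFACE target so consumers can do
# find_package(yourlib) followed by linking YourLib::yourlib.
add_library(yourlib INTERFACE)
target_include_directories(yourlib INTERFACE
    $<BUILD_INTERFACE:${CMAKE_CURRENT_SOURCE_DIR}/include>
    $<INSTALL_INTERFACE:include>)
install(DIRECTORY include/ DESTINATION include)
install(TARGETS yourlib EXPORT yourlib-targets)
install(EXPORT yourlib-targets
    FILE yourlib-config.cmake
    NAMESPACE YourLib::
    DESTINATION lib/cmake/yourlib)
```

A consumer would then write `find_package(yourlib)` and `target_link_libraries(myLib YourLib::yourlib)`, with include paths carried by the imported target rather than a `${YourLib_LIBRARIES}` variable.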
Bjarne's been railing against that for years, and it is one of his biggest bugbears with Boost, yet trying out Boost on all platforms except Windows is a simple install from that platform's package repos and therefore is a very low hill to climb. I'm therefore fond of package repositories, end users like them too.
* Libraries based on this are 100% standalone, when you clone the git repo or unpack the tarball you are 100% ready to go. Nothing else needed, not even configure and build. No arcane command line programs to run.
I don't understand this. My focus of these modules is to support the standard configure, build and install flow in cmake. Trying to hack cmake with a different conventional flow seems problematic. If users don't like this flow, or scared of typing, then external tools can be created to automate this. However, creating a different flow in cmake will just cause a dissonance with other cmake libraries.
Sorry you misunderstood me. What I meant above is that the cmake is ready to go. You don't need to run cmake generators, or run some python master cmake control script etc. The libraries themselves are header only currently, but sometime this year I'm going to write a preprocessor stage for cmake which will have cmake at dev time convert a header only library which does preprocessor metaprogramming like Outcome does into a single large preexpanded include file. That should reduce the gap between C++ Module include times and non-C++ Module include times for users, plus it means I can provide an easy playpen on gcc.godbolt etc.
I wouldn't recommend that anyone else use it yet. It is very much a work in progress, but all the above is working, and you can see it in action in proposed Boost.Outcome. It also has nil documentation.
So I tried to install your Boost.Outcome library with no luck. First, I did `cget install ned14/boost.outcome`. That didn't work because the git submodules were missing. So, I cloned it locally with its submodules, and then did `cget install boost.outcome`. It still didn't work.
I'm not sure about this cget tool, but cmake --build . --target install should work on all platforms after you've done a *recursive* git submodule checkout. By "should" I mean install is not being CI tested yet, and it could be broken after some changes I did earlier this week so caveat emptor.
I get an error like this:

    -- Found boost-lite depended upon by boost--outcome at embedded include/boost/outcome/boost-lite
    -- CMAKE_BUILD_TYPE =
    CMake Error at include/boost/outcome/boost-lite/cmake/BoostLiteUtils.cmake:148 (file):
      file failed to open for reading (No such file or directory):
        /home/paul/tmp/cget/cget/build/tmp-48d80d9e2c734b86800806772ac60260/boost.outcome/include/boost/outcome/boost-lite//home/paul/tmp/boost.outcome/.git/modules/include/boost/outcome/boost-lite/HEAD
    Call Stack (most recent call first):
      include/boost/outcome/boost-lite/cmake/BoostLiteUtils.cmake:188 (git_revision_from_path)
      include/boost/outcome/boost-lite/CMakeLists.txt:18 (UpdateRevisionHppFromGit)

This explains how cget calls cmake: http://cget.readthedocs.io/en/latest/src/building.html

Paul
The way it scales is that it makes use of directory structure. So if you fire .cpp files into /test, each is considered a ctest target, but if you put them into /test/somedir they get new semantics. In particular, you'll see Boost.AFIO v2 uses a /test structure which says to use Boost.KernelTest which is a new meta-test infrastructure.
That will work for the common case. I assume whether something is a fail test is based on whether the name has 'fail' in it. But how do you handle setting flags for certain tests? Compile-only tests?
KernelTest takes care of runtime failure testing (indeed, that's its main purpose, it systematically checks all failures fail as documented).

I haven't found a need for compile fail tests yet. But then I tend to expose as static constexpr bool all the logic the metaprogramming uses, and those can be static asserted and Expression SFINAE checked as part of a compile. So, in other words, if the compile succeeds then all the compile failure causes were checked (or at least the ones I thought of). I'm not ruling out adding fail tests later, but until now I haven't found a need.

In terms of setting flags for certain tests, because it's a 100% target based cmake design, the CMakeLists.txt for the project can very easily set very specific settings for any given individual target. Indeed I just recently mastered creating custom cmake properties which have global, directory and target level overrides. That technique is not well documented, anyone interested please do lift from https://github.com/ned14/boost-lite/blob/master/cmake/BoostLiteSetupProject.... where you'll see me adding the new CXX properties:

* CXX_EXCEPTIONS
* CXX_RTTI
* CXX_STATIC_RUNTIME
Many years ago Stefan (along with Dave Abrahams) championed a new C++ docs tool which was much better than doxygen, but in the end the effort required to finish it proved difficult to make happen. I'm sure most would agree what a shame.
If anybody knows of a tool which can understand doxygen markup but generates much better reference docs, I would be *extremely* interested. The really key part is that new C++ docs tooling *needs* to grok doxygen markup. So many new tools don't, and therefore get no traction because so many C++ codebases are locked into doxygen markup.
Like I mentioned, there is standardese:
https://github.com/foonathan/standardese
It still a WIP, but it looks like it is shaping up nicely.
Ah, I remember reading this on his blog a good while back. I'm glad it's "gained legs" judging from its commit history. However, I find the ISO C++ standard style not a great way of writing reference documentation for most libraries. For example, with Outcome it's way overkill: Outcome behaves like the STL, so you don't need to explicitly and laboriously say so per API. Just assume it does, and if it doesn't then it's a bug. But for AFIO standardese is a great fit. AFIO needs to specify in very great detail the exact semantics of, say, read() and write(), else users will write erroneous code. Thanks for reminding me of that tool.
However, beyond superprojects, I don't think submodules are a good way to manage dependencies. It's best to take an approach similar to llvm: there can be a superproject that has all the components together to build, or each component can be built and installed individually.
git submodules (as of the very latest git) give you wide flexibility in the .gitmodules file to specify all sorts of useful semantics about dependencies. It's not 100% there yet, but getting ever closer to becoming the de facto dependency management system for any git based program. The new shallow clone subrepo facility is particularly useful.
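For illustration, here is what one of those newer .gitmodules entries might look like (the `shallow` and `branch` keys are real git options; the path and URL mirror the boost-lite subrepo discussed in this thread):

```ini
[submodule "include/boost/outcome/boost-lite"]
	path = include/boost/outcome/boost-lite
	url = https://github.com/ned14/boost-lite
	# Only fetch a depth-1 clone of the dependency.
	shallow = true
	# Track a branch rather than a fixed SHA when updating with --remote.
	branch = master
```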
I think this stuff comes back to David Sankel's notion of libraries being anti-social. If you're anti-social, you force library users up this hill of preconfig and build just to test out your library.
All libraries go through the steps of configure, build and install:
    cmake ..
    cmake --build .
    cmake --build . --target install
Trying to support a non-conventional way to build the library would be anti-social.
Agreed. And that should work with my stuff (though there is one known bug which will be fixed before Outcome enters the review queue).
Furthermore, after I have installed a library I would expect to use it like this:
    find_package(YourLib)
    target_link_libraries(myLib ${YourLib_LIBRARIES})
Not supporting that I would consider anti-social as well. And the point of the cmake modules that I am writing is to support this easily, especially for boost libraries.
I'm less convinced of the necessity of this when your library is installed as part of the platform's package repos and therefore is now a system library. But in general, I would agree, and it's definitely a nice-to-have.
I'm not sure about this cget tool, but cmake --build . --target install should work on all platforms after you've done a *recursive* git submodule checkout. By "should" I mean install is not being CI tested yet, and it could be broken after some changes I did earlier this week so caveat emptor.
I get an error like this:
    -- Found boost-lite depended upon by boost--outcome at embedded include/boost/outcome/boost-lite
    -- CMAKE_BUILD_TYPE =
    CMake Error at include/boost/outcome/boost-lite/cmake/BoostLiteUtils.cmake:148 (file):
      file failed to open for reading (No such file or directory):
        /home/paul/tmp/cget/cget/build/tmp-48d80d9e2c734b86800806772ac60260/boost.outcome/include/boost/outcome/boost-lite//home/paul/tmp/boost.outcome/.git/modules/include/boost/outcome/boost-lite/HEAD
    Call Stack (most recent call first):
      include/boost/outcome/boost-lite/cmake/BoostLiteUtils.cmake:188 (git_revision_from_path)
      include/boost/outcome/boost-lite/CMakeLists.txt:18 (UpdateRevisionHppFromGit)
That's an unusual error. It's like the .git file has an absolute path in it. Can you tell me the contents of the file at /home/paul/tmp/cget/cget/build/tmp-48d80d9e2c734b86800806772ac60260/boost.outcome/include/boost/outcome/boost-lite/.git please? In the meantime, I'll make that bit of the cmake scripting check for a non-sane path in .git and bail out more usefully. Thanks for the bug report.

Niall

--
ned Productions Limited Consulting
http://www.nedproductions.biz/
http://ie.linkedin.com/in/nialldouglas/
On Sat, 2017-01-14 at 11:00 +0000, Niall Douglas wrote:
The way it scales is that it makes use of directory structure. So if you fire .cpp files into /test, each is considered a ctest target, but if you put them into /test/somedir they get new semantics. In particular, you'll see Boost.AFIO v2 uses a /test structure which says to use Boost.KernelTest which is a new meta-test infrastructure.
That will work for the common case. I assume fail test when is based on if the name has fail in it. But how do you handle setting flags for certain tests? Compile-only tests?
KernelTest takes care of runtime failure testing (indeed, that's its main purpose, it systematically checks all failures fail as documented).
As documented? Where or how would the user specify that the test should fail?
I haven't found a need for compile fail tests yet. But then I tend to expose as static constexpr bool all the logic the metaprogramming uses and those can be static asserted and Expression SFINAE checked as part of a compile. So, in other words, if the compile succeeds then all the compile failure causes were checked (or at least the ones I thought of). I'm not ruling out adding fail tests later, but until now I haven't found a need.
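For anyone who does want compile-fail tests later, one established pattern in plain CMake/CTest (the target and source file names here are purely illustrative) is to exclude the offending target from the default build, compile it as a ctest step, and invert the result:

```cmake
# Compile-fail test sketch: the test passes only if building the
# target FAILS. The source is expected to contain the ill-formed code.
add_executable(fail_narrowing EXCLUDE_FROM_ALL test/fail_narrowing.cpp)
add_test(NAME fail_narrowing_must_not_compile
    COMMAND ${CMAKE_COMMAND} --build ${CMAKE_BINARY_DIR}
            --target fail_narrowing --config $<CONFIG>)
# WILL_FAIL inverts the exit status: a successful compile fails the test.
set_tests_properties(fail_narrowing_must_not_compile
    PROPERTIES WILL_FAIL TRUE)
```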
In terms of setting flags for certain tests, because it's a 100% target based cmake design, the CMakeLists.txt for the project can very easily set very specific settings for any given individual target. Indeed I just recently mastered creating custom cmake properties which have global, directory and target level overrides. That technique is not well documented, anyone interested please do lift from https://github.com/ned14/boost-lite/blob/master/cmake/BoostLiteSetupProject.cmake#L9 where you'll see me adding the new CXX properties:
* CXX_EXCEPTIONS
* CXX_RTTI
* CXX_STATIC_RUNTIME
Those custom properties look quite cool; I always envisioned them being targets in cmake, so when a test needed to disable RTTI, it could just do:

    target_link_libraries(my_test disable_rtti)

Using custom properties looks like an interesting approach. I wonder what the pros and cons are of using a target vs a property.
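The "target" alternative Paul describes could be sketched with an INTERFACE library carrying the flags (the `disable_rtti` name comes from his example; the rest is an assumed implementation):

```cmake
# An INTERFACE library whose only job is to carry no-RTTI flags.
add_library(disable_rtti INTERFACE)
target_compile_options(disable_rtti INTERFACE
    $<$<CXX_COMPILER_ID:MSVC>:/GR->
    $<$<NOT:$<CXX_COMPILER_ID:MSVC>>:-fno-rtti>)

# Any test that links it picks up the flags transitively.
add_executable(my_test test/my_test.cpp)
target_link_libraries(my_test disable_rtti)
```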
However, beyond superprojects, I don't think submodules are a good way to manage dependencies. It's best to take an approach similar to llvm: there can be a superproject that has all the components together to build, or each component can be built and installed individually.
git submodules (as of the very latest git) give you wide flexibility in the .gitmodules file to specify all sorts of useful semantics about dependencies. It's not 100% there yet, but getting ever closer to becoming the de facto dependency management system for any git based program. The new shallow clone subrepo facility is particularly useful.
I don't see how submodules can be used for dependency management at all. If library A and library B both use zlib, then using submodules you would need to download and build zlib twice to use both libraries. Plus, there is no way to use the system's or a user's build of zlib either. Submodules work nicely for superprojects, such as boost or llvm, but they are no solution to dependency management.
I think this stuff comes back to David Sankel's notion of libraries being anti-social. If you're anti-social, you force library users up this hill of preconfig and build just to test out your library.
All libraries go through the steps of configure, build and install:
    cmake ..
    cmake --build .
    cmake --build . --target install
Trying to support a non-conventional way to build the library would be anti-social.
Agreed. And that should work with my stuff (though there is one known bug which will be fixed before Outcome enters the review queue).
Furthermore, after I have installed a library I would expect to use it like this:
    find_package(YourLib)
    target_link_libraries(myLib ${YourLib_LIBRARIES})
Not supporting that I would consider anti-social as well. And the point of the cmake modules that I am writing is to support this easily, especially for boost libraries.
I'm less convinced of the necessity of this when your library is installed as part of the platform's package repos and therefore is now a system library. But in general, I would agree, and it's definitely a nice-to-have.
I don't see how having it installed as a system library negates the need for `find_package`. If I say `find_package(boost_filesystem)`, then it will automatically link its dependencies (like Boost.System) for me. Installing it with a system package manager without installing the find_package files would require the user to manually figure out the dependencies. This is what the FindBoost module does currently in cmake; since this is a manual process, it requires updating cmake to support newer versions of boost. Instead, libraries like boost should provide the cmake find_package files (even when they are installed by the system package manager).
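The transitive-dependency part of this can be sketched with a hand-written config file (the `yourlib`/`boost_system` names are illustrative); `CMakeFindDependencyMacro` is the standard mechanism for this:

```cmake
# Sketch of yourlib-config.cmake, installed alongside the library.
# find_package(yourlib) runs this, which resolves yourlib's own
# dependencies so the consumer never has to spell them out.
include(CMakeFindDependencyMacro)
find_dependency(boost_system)  # transitive dep resolved here
# Pull in the imported targets exported at install time.
include("${CMAKE_CURRENT_LIST_DIR}/yourlib-targets.cmake")
```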
I'm not sure about this cget tool, but cmake --build . --target install should work on all platforms after you've done a *recursive* git submodule checkout. By "should" I mean install is not being CI tested yet, and it could be broken after some changes I did earlier this week so caveat emptor.
I get an error like this:
    -- Found boost-lite depended upon by boost--outcome at embedded include/boost/outcome/boost-lite
    -- CMAKE_BUILD_TYPE =
    CMake Error at include/boost/outcome/boost-lite/cmake/BoostLiteUtils.cmake:148 (file):
      file failed to open for reading (No such file or directory):
        /home/paul/tmp/cget/cget/build/tmp-48d80d9e2c734b86800806772ac60260/boost.outcome/include/boost/outcome/boost-lite//home/paul/tmp/boost.outcome/.git/modules/include/boost/outcome/boost-lite/HEAD
    Call Stack (most recent call first):
      include/boost/outcome/boost-lite/cmake/BoostLiteUtils.cmake:188 (git_revision_from_path)
      include/boost/outcome/boost-lite/CMakeLists.txt:18 (UpdateRevisionHppFromGit)
That's an unusual error. It's like the .git file has an absolute path in it.
Can you tell me the contents of the file at /home/paul/tmp/cget/cget/build/tmp-48d80d9e2c734b86800806772ac60260/boost.outcome/include/boost/outcome/boost-lite/.git please?
    $ cat /home/paul/tmp/cget/cget/build/tmp-48d80d9e2c734b86800806772ac60260/boost.outcome/include/boost/outcome/boost-lite/.git
    gitdir: /home/paul/tmp/boost.outcome/.git/modules/include/boost/outcome/boost-lite
In the meantime, I'll make that bit of the cmake scripting check for a non sane path in .git and bail out more usefully. Thanks for the bug report.
Niall
The way it scales is that it makes use of directory structure. So if you fire .cpp files into /test, each is considered a ctest target, but if you put them into /test/somedir they get new semantics. In particular, you'll see Boost.AFIO v2 uses a /test structure which says to use Boost.KernelTest which is a new meta-test infrastructure. That will work for the common case. I assume whether something is a fail test is based on whether the name has 'fail' in it. But how do you handle setting flags for certain tests? Compile-only tests? KernelTest takes care of runtime failure testing (indeed, that's its main purpose, it systematically checks all failures fail as documented).
As documented? Where or how would the user specify that the test should fail?
https://github.com/ned14/boost.afio/blob/master/test/tests/file_handle_creat... Note the tables of initialiser lists of preconditions, API calls and postconditions. That's KernelTest.
In terms of setting flags for certain tests, because it's a 100% target based cmake design, the CMakeLists.txt for the project can very easily set very specific settings for any given individual target. Indeed I just recently mastered creating custom cmake properties which have global, directory and target level overrides. That technique is not well documented, anyone interested please do lift from https://github.com/ned14/boost-lite/blob/master/cmake/BoostLiteSetupProject.cmake#L9 where you'll see me adding the new CXX properties:
* CXX_EXCEPTIONS
* CXX_RTTI
* CXX_STATIC_RUNTIME
Those custom properties look quite cool, I always envisioned them being targets in cmake. So when a test needed to disable RTTI, it could just do:
    target_link_libraries(my_test disable_rtti)
Using custom properties looks like an interesting approach. I wonder what the pros and cons of using target vs property.
Well, properties are the method by which a target specialises its configuration. Properties have the advantage that you can set them globally, per directory and per target and the property will inherit the nearest setting to the point of use in any newly created targets. That saves a bit of typing and is more "cmakeish". I couldn't figure out how to get per source properties to work right, there was no example in the cmake source code :(
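The global/directory/target inheritance Niall describes can be sketched with `define_property(... INHERITED)` (the `CXX_RTTI` name matches his list; the defaults shown are invented for the example):

```cmake
# Declare the property as INHERITED in each scope so an unset target
# property chains upward: target -> directory -> global.
define_property(TARGET PROPERTY CXX_RTTI INHERITED
    BRIEF_DOCS "Enable C++ RTTI" FULL_DOCS "Enable C++ RTTI")
define_property(DIRECTORY PROPERTY CXX_RTTI INHERITED
    BRIEF_DOCS "Enable C++ RTTI" FULL_DOCS "Enable C++ RTTI")

set_property(GLOBAL PROPERTY CXX_RTTI ON)      # project-wide default
set_property(DIRECTORY PROPERTY CXX_RTTI OFF)  # override for this dir
# Targets created in this directory with no explicit CXX_RTTI setting
# now report OFF; targets elsewhere fall back to the global ON.
```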
I don't see how submodules can be used for dependency management at all. If library A and library B both use zlib, then using submodules you would need to download and build zlib twice to use both libraries.
The platform's package repos take care of those dependencies. And their security updates.
Plus, there is no way to use the system's or a user's build of zlib either. Submodules work nicely for superprojects, such as boost or llvm, but they are no solution to dependency management.
Dependencies aren't all in one category. Stuff like zlib are almost system libraries. You don't need to pin to specific commits of zlib. If you do need to pin to specific commits, git submodules are a great dependency management solution, indeed the best if you're already on git.
I'm not sure about this cget tool, but cmake --build . --target install should work on all platforms after you've done a *recursive* git submodule checkout. By "should" I mean install is not being CI tested yet, and it could be broken after some changes I did earlier this week so caveat emptor. I get an error like this:
    -- Found boost-lite depended upon by boost--outcome at embedded include/boost/outcome/boost-lite
    -- CMAKE_BUILD_TYPE =
    CMake Error at include/boost/outcome/boost-lite/cmake/BoostLiteUtils.cmake:148 (file):
      file failed to open for reading (No such file or directory):
        /home/paul/tmp/cget/cget/build/tmp-48d80d9e2c734b86800806772ac60260/boost.outcome/include/boost/outcome/boost-lite//home/paul/tmp/boost.outcome/.git/modules/include/boost/outcome/boost-lite/HEAD
    Call Stack (most recent call first):
      include/boost/outcome/boost-lite/cmake/BoostLiteUtils.cmake:188 (git_revision_from_path)
      include/boost/outcome/boost-lite/CMakeLists.txt:18 (UpdateRevisionHppFromGit)

That's an unusual error. It's like the .git file has an absolute path in it.
Can you tell me the contents of the file at /home/paul/tmp/cget/cget/build/tmp-48d80d9e2c734b86800806772ac60260/boost.outcome/include/boost/outcome/boost-lite/.git please?
    $ cat /home/paul/tmp/cget/cget/build/tmp-48d80d9e2c734b86800806772ac60260/boost.outcome/include/boost/outcome/boost-lite/.git
    gitdir: /home/paul/tmp/boost.outcome/.git/modules/include/boost/outcome/boost-lite
As suspected, you've got absolute paths in your .git file. I've never seen that in the wild before. I'll fix the cmake to inspect the path and that ought to fix that problem at least. Thanks for the bug report.

Niall
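One possible shape for that fix (variable names are hypothetical; this is a sketch, not the actual BoostLiteUtils.cmake change): handle both relative and absolute `gitdir:` entries instead of blindly concatenating them onto the repo path, which is what produced the doubled path in Paul's error output.

```cmake
# Read the gitdir: line out of a submodule's .git file.
file(READ "${repo}/.git" gitfile)
string(REGEX MATCH "gitdir:[ \t]*([^\n]+)" _unused "${gitfile}")
set(gitdir "${CMAKE_MATCH_1}")
# Only prepend the repo path when the entry is relative.
if(NOT IS_ABSOLUTE "${gitdir}")
  get_filename_component(gitdir "${repo}/${gitdir}" ABSOLUTE)
endif()
# Bail out usefully instead of failing deep inside file(READ) later.
if(NOT EXISTS "${gitdir}/HEAD")
  message(FATAL_ERROR "Cannot find git metadata at '${gitdir}'; "
    "was this tree copied out of its original clone?")
endif()
```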
Niall Douglas wrote
* Provides out of the box CI ctest scripting which uploads results to a CDash for your project and updates your github website with the output from doxygen
This is probably useful for libraries that use doxygen. Using sphinx or mkdocs, I don't need to push changes out to github, as ReadTheDocs will update the documentation on push. Plus it will store the documentation for different tagged versions as well. This scales much nicer than using github pages.
I don't know anybody who has ever used doxygen for anything serious and been happy with it. The good folks over at DoxyPress did an amazing job of refactoring doxygen, but in the end the fundamental design is just broken.
Thank you very much for the compliment. We have worked hard on the refactoring and we are aware there are still internal issues we are working on redesigning. Our initial gripe was the inability to parse complex C++ templates. We have made a lot of progress on this front and soon will be switching over to using clang on the frontend. We encourage everyone to check out DoxyPress and let us know what you think.

Ansel Sermersheim
Cofounder, CopperSpice & DoxyPress
participants (6)
- agserm
- Egor Pugin
- Niall Douglas
- paul
- Paul A. Bristow
- Paul Fultz II