Re: [boost] Boost Modularization: did we get it right?

The headers are moved into the directory 'include' - how does the build system handle includes (headers from other Boost libs)? At least I get an error that header xyz is not found if I execute the tests using bjam. Oliver

On 09/05/2012 08:16, Oliver Kowalke wrote:
The headers are moved into the directory 'include' - how does the build system handle includes (headers from other Boost libs)? At least I get an error that header xyz is not found if I execute the tests using bjam.
You either use a build system setup that adds every directory to the include path or you run a script that generates flattened headers.

The headers are moved into the directory 'include' - how does the build system handle includes (headers from other Boost libs)? At least I get an error that header xyz is not found if I execute the tests using bjam.
You either use a build system setup that adds every directory to the include path...
so that means that the new build env does not work out of the box?! What about libs using boost.move or boost.utils...
...or you run a script that generates flattened headers.
nonsense

On 09/05/2012 12:49, Oliver Kowalke wrote:
so that means that the new build env does not work out of the box?!
The new one does. The old one doesn't, and requires you to use the second approach.
include path or you run a script that generates flattened headers.
nonsense
Running a script to make the new setup compatible with the old one is nonsense?

so that means that the new build env does not work out of the box?!
The new one does. The old one doesn't, and requires you to use the second approach.
include path or you run a script that generates flattened headers.
nonsense
Running a script to make the new setup compatible with the old one is nonsense?
yes

On 09/05/2012 13:05, Oliver Kowalke wrote:
Running a script to make the new setup compatible with the old one is nonsense?
yes
I'm afraid it makes perfect sense: not requiring a script would mean including many redundant files in the repository, which would be hard to maintain and would waste resources. If you want to complain about it, please highlight actual problems with this approach and make concrete suggestions.

On Wed, May 9, 2012 at 7:05 AM, Oliver Kowalke <oliver.kowalke@gmx.de> wrote:
so that means that the new build env does not work out of the box?!
The new one does. The old one doesn't, and requires you to use the second approach.
include path or you run a script that generates flattened headers.
nonsense
Running a script to make the new setup compatible with the old one is nonsense?
yes
Oliver, It would be informative if you would expand your comments beyond one-word replies. Otherwise it is easy to misunderstand what you mean. Most software packages require some kind of installation process. For Boost, it has been the "Getting Started" process documented on the web site. With modularization, some of the details of "Getting Started" will change. Why are you concerned that some of those installation process details ensure that user and developer scripts and setups continue to work? --Beman

Most software packages require some kind of installation process. For Boost, it has been the "Getting Started" process documented on the web site. With modularization, some of the details of "Getting Started" will change. Why are you concerned that some of those installation process details ensure that user and developer scripts and setups continue to work?
I've had a lot of trouble getting boost.build+bjam to compile the assembler code from boost.context. I'm annoyed that I get some additional burden to re-write this stuff for the new build env again, because the old one will not work with the modularized Boost. I think this is not a good migration strategy.

On 05/09/2012 02:34 PM, Oliver Kowalke wrote:
I'm annoyed that I get some additional burden to re-write this stuff for the new build env again because the old one will not work with the modularized boost.
Once you have run the script, the usual Boost.Build system will work just fine.
I think this is not a good migration strategy.
Any migration strategy from Boost.Build to CMake would necessarily require re-writing the building logic.

On Wed, May 9, 2012 at 8:52 AM, Mathias Gaunard <mathias.gaunard@ens-lyon.org> wrote:
On 05/09/2012 02:34 PM, Oliver Kowalke wrote:
I'm annoyed that I get some additional burden to re-write this stuff for the new build env again because the old one will not work with the modularized boost.
Once you have run the script, the usual Boost.Build system will work just fine.
And I've just done that and it did work and the test did pass. Windows 7 with VC++ 10.0. --Beman

I'm annoyed that I get some additional burden to re-write this stuff for the new build env again because the old one will not work with the modularized boost.
Once you have run the script, the usual Boost.Build system will work just fine.
And I've just done that and it did work and the test did pass. Windows 7 with VC++ 10.0.
And why not provide a wrapper that lets other developers use the old build env with modularized Boost?

On Wed, May 9, 2012 at 9:17 AM, Oliver Kowalke <oliver.kowalke@gmx.de> wrote:
I'm annoyed that I get some additional burden to re-write this stuff for the new build env again because the old one will not work with the modularized boost.
Once you have run the script, the usual Boost.Build system will work just fine.
And I've just done that and it did work and the test did pass. Windows 7 with VC++ 10.0.
And why not provide a wrapper that lets other developers use the old build env with modularized boost?
We do. That's what I used for testing. See forward_headers.cmake in the modularized root. Invoke it via "cmake -P forward_headers.cmake" The dependency on cmake will be eliminated eventually, but its fine for testing. --Beman

On 05/09/2012 03:17 PM, Oliver Kowalke wrote:
I'm annoyed that I get some additional burden to re-write this stuff for the new build env again because the old one will not work with the modularized boost.
Once you have run the script, the usual Boost.Build system will work just fine.
And I've just done that and it did work and the test did pass. Windows 7 with VC++ 10.0.
And why not provide a wrapper that lets other developers use the old build env with modularized boost?
Boost.Build *is* "the old build env". I've suggested to the people in charge of modularization to integrate the script within Boost.Build itself, so that this step is done automatically.

AMDG On 05/09/2012 06:31 AM, Mathias Gaunard wrote:
On 05/09/2012 03:17 PM, Oliver Kowalke wrote:
I'm annoyed that I get some additional burden to re-write this stuff for the new build env again because the old one will not work with the modularized boost.
Once you have run the script, the usual Boost.Build system will work just fine.
And I've just done that and it did work and the test did pass. Windows 7 with VC++ 10.0.
And why not provide a wrapper that lets other developers use the old build env with modularized boost?
Boost.Build *is* "the old build env".
I've suggested to the people in charge of modularization to integrate the script within Boost.Build itself, so that this step is done automatically.
It's a little bit tricky to make this work. If we hard link or copy each header, then Boost.Build can handle it trivially, but Jam's #include scanner currently has no way to search sym-linked directories that are created as part of the build. Actually, we could do a recursive glob of the target directory and make all the headers in the directory outputs of the sym-link action. Then it should just work. In Christ, Steven Watanabe

On 05/09/2012 05:11 PM, Steven Watanabe wrote:
Actually, we could do a recursive glob of the target directory and make all the headers in the directory outputs of the sym-link action. Then it should just work.
This is what the script should do IMO. This allows several modules to put files in the same directory.

on Wed May 09 2012, Mathias Gaunard <mathias.gaunard-AT-ens-lyon.org> wrote:
On 05/09/2012 03:17 PM, Oliver Kowalke wrote:
I'm annoyed that I get some additional burden to re-write this stuff for the new build env again because the old one will not work with the modularized boost.
Once you have run the script, the usual Boost.Build system will work just fine.
And I've just done that and it did work and the test did pass. Windows 7 with VC++ 10.0.
And why not provide a wrapper that lets other developers use the old build env with modularized boost?
Boost.Build *is* "the old build env".
I've suggested to the people in charge of modularization to integrate the script within Boost.Build itself, so that this step is done automatically.
However, the people working on modularization don't really want to touch Boost.Build, as we have enough on our plates trying to set up CMake as a potential replacement for Boost.Build and have forgotten more than we ever knew about how to work with it. We'd really appreciate it if someone else would volunteer to do this. -- Dave Abrahams BoostPro Computing http://www.boostpro.com

On 09/05/2012 18:15, Dave Abrahams wrote:
However, the people working on modularization don't really want to touch Boost.Build, as we have enough on our plates trying to set up CMake as a potential replacement for Boost.Build and have forgotten more than we ever knew about how to work with it.
If even the original creators of Boost.Build can't do anything with it, I guess it's not just me who finds it complicated.

AMDG On 05/09/2012 09:15 AM, Dave Abrahams wrote:
on Wed May 09 2012, Mathias Gaunard <mathias.gaunard-AT-ens-lyon.org> wrote:
Boost.Build *is* "the old build env".
I've suggested to the people in charge of modularization to integrate the script within Boost.Build itself, so that this step is done automatically.
However, the people working on modularization don't really want to touch Boost.Build, as we have enough on our plates trying to set up CMake as a potential replacement for Boost.Build and have forgotten more than we ever knew about how to work with it. We'd really appreciate it if someone else would volunteer to do this.
I'm attaching an initial cut at this. Usage:

import link ;
symlink boost/units : libs/include/boost/units ;
symlink boost/accumulators : libs/include/boost/accumulators ;
...

Behavior: if symlinks are supported, it creates a symbolic link to the directory; otherwise, if hard links are supported, it hard-links all files; otherwise, it copies all files. This still needs some work, but basic usage seems okay on Linux and Windows 7. In Christ, Steven Watanabe

2012/5/9 Steven Watanabe <watanabesj@gmail.com>: <snip>
Behavior: If symlinks are supported, creates a symbolic link to the directory.
This is fragile. Imagine Graph generates a link at boost/graph and GraphParallel generates a link at boost/graph/parallel. GraphParallel's link will end up in Graph's source directory! There is also the case that multiple libraries provide files in the same directory (e.g. boost/pending). They cannot all link the directory. The script should always link individual files. That is dead slow, yes. But it is the only safe approach. cheers, Daniel

On 9 May 2012 20:18, Daniel Pfeifer <daniel@pfeifer-mail.de> wrote:
2012/5/9 Steven Watanabe <watanabesj@gmail.com>:
<snip>
Behavior: If symlinks are supported, creates a symbolic link to the directory.
This is fragile. Imagine Graph generates a link at boost/graph and GraphParallel generates a link at boost/graph/parallel. GraphParallel's link will end up in Graph's source directory! There is also the case that multiple libraries provide files in the same directory (eg. boost/pending). They cannot all link the directory.
The script should always link individual files. That is dead slow, yes. But it is the only safe approach.
Have you seen GNU stow? It links directories, but if two packages clash, replaces the link with a new directory, and fills that with links.
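The stow-style merge Daniel James describes - link whole directories until two packages clash, then "unfold" the link into a real directory of links - can be sketched in Python. This is an illustrative sketch, not Boost.Build's actual implementation; the package layout in the demo (graph/boost, parallel/boost) is hypothetical.

```python
import os
import tempfile

def stow(src, dest):
    """Link directory `src` into `dest`, GNU-stow-style: link the whole
    directory if possible; on a clash, replace the link with a real
    directory filled with links and recurse on the members."""
    if not os.path.lexists(dest):
        os.symlink(os.path.abspath(src), dest)
        return
    if os.path.islink(dest):
        # A previous package claimed this directory: materialize it as
        # a real directory of per-member links, then fall through.
        old = os.readlink(dest)
        os.unlink(dest)
        os.mkdir(dest)
        for name in os.listdir(old):
            os.symlink(os.path.join(old, name), os.path.join(dest, name))
    # dest is now a real directory: merge each member recursively.
    for name in os.listdir(src):
        s = os.path.join(src, name)
        d = os.path.join(dest, name)
        if os.path.isdir(s) and not os.path.islink(s):
            stow(s, d)
        elif not os.path.lexists(d):
            os.symlink(os.path.abspath(s), d)

# Demo: two hypothetical modular packages that both populate boost/graph,
# mirroring the Graph / GraphParallel clash discussed above.
base = tempfile.mkdtemp()
os.makedirs(os.path.join(base, "graph/boost/graph"))
open(os.path.join(base, "graph/boost/graph/graph.hpp"), "w").close()
os.makedirs(os.path.join(base, "parallel/boost/graph/parallel"))
open(os.path.join(base, "parallel/boost/graph/parallel/algo.hpp"), "w").close()
os.makedirs(os.path.join(base, "include"))
dest = os.path.join(base, "include/boost")
stow(os.path.join(base, "graph/boost"), dest)
stow(os.path.join(base, "parallel/boost"), dest)
```

After both calls, both packages' headers are reachable under one include/boost tree, and only the clashing levels were expanded into real directories.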

2012/5/8 Dave Abrahams <dave@boostpro.com>:
Hi All,
As we head toward a modularized Boost, we on the Ryppl project (Daniel Pfeifer and I) would like confirmation that we've correctly (or at least sensibly, when there's no obvious "correct") identified the module boundaries in Boost's monolithic SVN repository. If library authors could take a few moments to examine the contents of your library's repo at https://github.com/boost-lib, and let us know, we'd be most grateful.
Conversion looks strange: there is no numeric_cast header, but there are numeric_cast's tests. Lexical cast, implicit cast, and polymorphic casts look good. -- Best regards, Antony Polukhin

2012/5/10 Antony Polukhin <antoshkka@gmail.com>:
2012/5/8 Dave Abrahams <dave@boostpro.com>:
Hi All,
As we head toward a modularized Boost, we on the Ryppl project (Daniel Pfeifer and I) would like confirmation that we've correctly (or at least sensibly, when there's no obvious "correct") identified the module boundaries in Boost's monolithic SVN repository. If library authors could take a few moments to examine the contents of your library's repo at https://github.com/boost-lib, and let us know, we'd be most grateful.
Conversion looks strange: there is no numeric_cast header, but there are numeric_cast's tests. Lexical cast, implicit cast, and polymorphic casts look good.
There is no numeric_cast.hpp header in SVN that is used by numeric_cast's tests. 'boost/cast.hpp' contains this: // Revision History // 23 JUn 05 numeric_cast removed and redirected to the new verion (Fernando Cacciola) This looks strange, yes. But it is not caused by the modularization. Cheers, Daniel

AMDG On 05/09/2012 01:06 PM, Daniel James wrote:
On 9 May 2012 20:18, Daniel Pfeifer <daniel@pfeifer-mail.de> wrote:
2012/5/9 Steven Watanabe <watanabesj@gmail.com>:
<snip>
Behavior: If symlinks are supported, creates a symbolic link to the directory.
This is fragile. Imagine Graph generates a link at boost/graph and GraphParallel generates a link at boost/graph/parallel. GraphParallel's link will end up in Graph's source directory! There is also the case that multiple libraries provide files in the same directory (eg. boost/pending). They cannot all link the directory.
The script should always link individual files. That is dead slow, yes. But it is the only safe approach.
Have you seen GNU stow? It links directories, but if two packages clash, replaces the link with a new directory, and fills that with links.
That should work. I've tried to implement it in the attached. Usage is:

import link ;
symlink boost-utility : utility/include/boost : <location>. ;
symlink boost-graph : graph/include/boost : <location>. ;
...
project : requirements <include>.
    <implicit-dependency>boost-utility
    <implicit-dependency>boost-graph
    ... ;

I haven't tested this beyond basic merging of two directories yet. Once I get it fully working, we should be able to just add an appropriate glob to Jamroot. In Christ, Steven Watanabe

AMDG On 05/09/2012 01:06 PM, Daniel James wrote:
On 9 May 2012 20:18, Daniel Pfeifer <daniel@pfeifer-mail.de> wrote:
2012/5/9 Steven Watanabe <watanabesj@gmail.com>:
Behavior: If symlinks are supported, creates a symbolic link to the directory.
This is fragile. Imagine Graph generates a link at boost/graph and GraphParallel generates a link at boost/graph/parallel. GraphParallel's link will end up in Graph's source directory! There is also the case that multiple libraries provide files in the same directory (eg. boost/pending). They cannot all link the directory.
The script should always link individual files. That is dead slow, yes. But it is the only safe approach.
Have you seen GNU stow? It links directories, but if two packages clash, replaces the link with a new directory, and fills that with links.
I think I have this working now. I've added a glob to Jamroot that finds all the include directories. Running 'bjam headers' should create all the links. The only thing left to make it work seamlessly is to go through all the libs and add <implicit-dependency>/boost//headers to all the tests and compiled libraries. The algorithm now looks like:

If symlinks are supported: sym-link the directory; if there is a conflict, create a subdirectory and symlink all the members.
Else if hardlinks are supported: hard-link all leaves.
Else: copy all leaves.

Any thoughts on this? The attached patch has all the changes I made. In Christ, Steven Watanabe
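The per-file fallback chain in that algorithm (symlink, else hard link, else copy) can be sketched as follows. This is a hypothetical Python helper for illustration, not the Jam code from the patch; which branch runs depends on the filesystem.

```python
import os
import shutil
import tempfile

def place_header(src, dest):
    """Make `dest` available: prefer a symlink, fall back to a hard
    link, and copy as a last resort (e.g. on filesystems such as AFS
    that restrict or lack link support)."""
    if os.path.lexists(dest):
        os.unlink(dest)  # regenerate: drop any stale link or copy
    try:
        os.symlink(os.path.abspath(src), dest)
        return "symlink"
    except OSError:
        pass  # symlinks unsupported (or not permitted) here
    try:
        os.link(src, dest)  # hard link: shares the inode with src
        return "hardlink"
    except OSError:
        shutil.copy2(src, dest)  # last resort; preserves timestamps
        return "copy"

# Demo on a throwaway file.
d = tempfile.mkdtemp()
src = os.path.join(d, "context.hpp")
with open(src, "w") as f:
    f.write("// original header\n")
dest = os.path.join(d, "forwarded.hpp")
method = place_header(src, dest)
```

Whichever branch succeeds, the destination reads identically to the source, which is what lets the rest of Boost.Build treat the flattened tree as ordinary headers.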

On 05/11/2012 01:55 AM, Steven Watanabe wrote:
If symlinks are supported: sym-link the directory; if there is a conflict, create a subdirectory and symlink all the members. Else if hardlinks are supported: hard-link all leaves.
Is there a rationale for preferring symbolically linked files over hard links? One point that comes to my mind is that the master/link relationship is more explicit, while a hard link is more like a shared pointer with common ownership. So renaming or deleting the master will have no effect on the forward header if it is a hard link. Other than that, I have always thought of hard links as something that must be a bit more performant, but I may be dead wrong on that. -- Bjørn

On Fri, May 11, 2012 at 05:22:09AM +0200, Bjørn Roald wrote:
On 05/11/2012 01:55 AM, Steven Watanabe wrote:
If symlinks are supported: sym-link the directory; if there is a conflict, create a subdirectory and symlink all the members. Else if hardlinks are supported: hard-link all leaves.
Is there a rationale for preferring symbolically linked files over hard links? One point that comes to my mind is that the master/link relationship is more explicit, while a hard link is more like a shared pointer with common ownership. So renaming or deleting the master will have no effect on the forward header if it is a hard link.
Other than that, I have always thought of hard links as something that must be a bit more performant, but I may be dead wrong on that.
AFS doesn't support hardlinks between files in different directories. Some filesystems probably don't support them at all, either. -- Lars Viklund | zao@acc.umu.se

On 05/11/2012 01:55 AM, Steven Watanabe wrote:
AMDG
On 05/09/2012 01:06 PM, Daniel James wrote:
On 9 May 2012 20:18, Daniel Pfeifer <daniel@pfeifer-mail.de> wrote:
2012/5/9 Steven Watanabe <watanabesj@gmail.com>:
Behavior: If symlinks are supported, creates a symbolic link to the directory.
This is fragile. Imagine Graph generates a link at boost/graph and GraphParallel generates a link at boost/graph/parallel. GraphParallel's link will end up in Graph's source directory! There is also the case that multiple libraries provide files in the same directory (e.g. boost/pending). They cannot all link the directory.
The script should always link individual files. That is dead slow, yes. But it is the only safe approach.
Have you seen GNU stow? It links directories, but if two packages clash, replaces the link with a new directory, and fills that with links.
I think I have this working now. I've added a glob to Jamroot that finds all the include directories. running 'bjam headers' should create all the links. The only thing left to make it work seamlessly is to go through all the libs and add <implicit-dependency>/boost//headers to all the tests and compiled libraries.
The algorithm now looks like:
If symlinks are supported: sym-link the directory; if there is a conflict, create a subdirectory and symlink all the members. Else if hardlinks are supported: hard-link all leaves. Else: copy all leaves.
Any thoughts on this?
Algorithm looks good to me as far as what to create when a missing link is detected. Whether you have captured the rest of what should be there is not clear to me. My ignorance of the details of bjam and Boost.Build, and thus of the effect of your patches, does not allow me to understand it at a glance. So I ask instead: does your solution provide a properly functioning build graph, including the derived forward header links or files?

Symbolic and hard links are services of the file system and, when set up correctly, they should work transparently to the build system as though it looks at the source files.

bjorn@frodo2:/tmp/linkjunk$ touch a
bjorn@frodo2:/tmp/linkjunk$ touch b
bjorn@frodo2:/tmp/linkjunk$ ls --full-time
total 0
-rw-rw-r-- 1 bjorn bjorn 0 2012-05-11 05:36:13.000000000 +0200 a
-rw-rw-r-- 1 bjorn bjorn 0 2012-05-11 05:36:17.000000000 +0200 b
bjorn@frodo2:/tmp/linkjunk$ ln a a_hard
bjorn@frodo2:/tmp/linkjunk$ ln -s a a_symb
bjorn@frodo2:/tmp/linkjunk$ ls --full-time
total 0
-rw-rw-r-- 2 bjorn bjorn 0 2012-05-11 05:36:13.000000000 +0200 a
-rw-rw-r-- 2 bjorn bjorn 0 2012-05-11 05:36:13.000000000 +0200 a_hard
lrwxrwxrwx 1 bjorn bjorn 1 2012-05-11 05:37:17.000000000 +0200 a_symb -> a
-rw-rw-r-- 1 bjorn bjorn 0 2012-05-11 05:36:17.000000000 +0200 b
bjorn@frodo2:/tmp/linkjunk$ touch a
bjorn@frodo2:/tmp/linkjunk$ ls --full-time
total 0
-rw-rw-r-- 2 bjorn bjorn 0 2012-05-11 05:37:34.000000000 +0200 a
-rw-rw-r-- 2 bjorn bjorn 0 2012-05-11 05:37:34.000000000 +0200 a_hard
lrwxrwxrwx 1 bjorn bjorn 1 2012-05-11 05:37:17.000000000 +0200 a_symb -> a
-rw-rw-r-- 1 bjorn bjorn 0 2012-05-11 05:36:17.000000000 +0200 b

Note that the time of a_symb is older than the file a after the last touch of a. Horror. Does the build graph analysis handle this correctly? Note also that hard links are OK.

Cleanup of derived forwarding headers and links when sources are moved or deleted is a bit more tricky than support for creating them. Any hope of support for this?
A side note: a really smart build system should detect changes in the content of the source files, not just a forward motion of file time. But this is probably both costly and a very fundamental change. I think lack of such a feature may bite us more with modularized Boost, as it is simpler to change parts of your build graph to earlier versions without realizing the need for clean builds. -- Bjørn

AMDG On 05/10/2012 08:48 PM, Bjørn Roald wrote:
Algorithm looks good to me as far as what to create when a missing link is detected.
Whether you have captured the rest of what should be there is not clear to me. My ignorance of the details of bjam and Boost.Build, and thus of the effect of your patches, does not allow me to understand it at a glance. So I ask instead.
Does your solution provide a properly functioning build graph, including the derived forward header links or files?
I haven't tested it, so it probably doesn't. It should just be a matter of adding any dependencies that I've forgotten. It looks like I need to add dependencies for hardlinks and copies. sym-links don't need the dependency, since they're guaranteed to be correct as long as they exist. The include scanner does work correctly with generated headers, so no source should be built before the headers that it depends on.
Symbolic and hard links are services of the file system and when set up correctly, they should work transparently to the build system as though it looks at the source files.
bjorn@frodo2:/tmp/linkjunk$ touch a <snip>
Note that the time of a_symb is older than the file a after the last touch of a. Horror. Does the build graph analysis handle this correctly? Note also that hard links are OK.
stat returns information on the target of the link, so it should be okay. I haven't checked what happens on Windows, yet.
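Steven's point - that stat follows the link to its target, so the symlink's own older timestamp is harmless for dependency checking - is easy to demonstrate. A small illustrative Python check (file names chosen to mirror Bjørn's shell session; not part of any Boost tooling):

```python
import os
import tempfile
import time

d = tempfile.mkdtemp()
target = os.path.join(d, "a")
link = os.path.join(d, "a_symb")
open(target, "w").close()
os.symlink(target, link)

# Bump the target's mtime well into the future, like `touch a` later on:
# the link itself keeps its original (older) timestamp.
future = time.time() + 100
os.utime(target, (future, future))

# os.stat() follows the link, so a dependency checker using it sees the
# target's newer timestamp; os.lstat() reports the link inode itself.
through_link = os.stat(link).st_mtime
link_itself = os.lstat(link).st_mtime
```

So as long as the build system stats through the link (the default), the "older symlink" in the directory listing does not confuse the out-of-date check.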
Cleanup of derived forwarding headers and links when sources are moved or deleted is a bit more tricky than support for creating them. Any hope of support for this?
Given that multiple directories are being merged, we have no way of knowing that any extra files aren't from a source that exists but didn't happen to be mentioned in the current build. With directory sym-links, this shouldn't be much of a problem, though. Anyway, the extra links should be harmless and it's easy enough to delete and regenerate the entire directory once in a while.
A side note: a really smart build system should detect changes in the content of the source files, not just a forward motion of file time. But this is probably both costly and a very fundamental change. I think lack of such a feature may bite us more with modularized Boost, as it is simpler to change parts of your build graph to earlier versions without realizing the need for clean builds.
It's not as costly as you might think, since the include scanner already reads in most of the source files, so the IO cost is already incurred. I'll consider this after I get around to updating HCACHE to work better with Boost.Build. In Christ, Steven Watanabe

On 11/05/2012 01:55, Steven Watanabe wrote:
The algorithm now looks like:
If symlinks are supported: sym-link the directory; if there is a conflict, create a subdirectory and symlink all the members. Else if hardlinks are supported: hard-link all leaves. Else: copy all leaves.
You don't want to copy the leaves, but rather create a dummy file with a #include directive with the relative path to the original.
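Such a forwarding header is a one-line stub. A sketch of a generator, assuming a hypothetical layout where the flat boost/ tree sits next to the modular libs/ tree (the helper name and paths are illustrative, not from the actual scripts):

```python
import os
import tempfile

def write_forwarding_header(real_header, forward_path):
    """Write a one-line stub at `forward_path` that #includes the real
    header via a path relative to the stub's own directory. A quoted
    #include is searched relative to the including file first, so a
    relative path resolves without extra -I flags."""
    rel = os.path.relpath(real_header, os.path.dirname(forward_path))
    with open(forward_path, "w") as f:
        f.write('#include "%s"\n' % rel.replace(os.sep, "/"))

# Hypothetical layout: modularized source tree plus a flat boost/ dir.
root = tempfile.mkdtemp()
os.makedirs(os.path.join(root, "libs/utility/include/boost"))
os.makedirs(os.path.join(root, "boost"))
real = os.path.join(root, "libs/utility/include/boost/utility.hpp")
open(real, "w").close()
fwd = os.path.join(root, "boost/utility.hpp")
write_forwarding_header(real, fwd)
```

The stub compiles identically to the original, which is Mathias's point: unlike a copy, it can never drift out of sync with the real header.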

On Fri, May 11, 2012 at 3:25 PM, Mathias Gaunard <mathias.gaunard@ens-lyon.org> wrote:
On 11/05/2012 01:55, Steven Watanabe wrote:
The algorithm now looks like:
If symlinks are supported: sym-link the directory; if there is a conflict, create a subdirectory and symlink all the members. Else if hardlinks are supported: hard-link all leaves. Else: copy all leaves.
You don't want to copy the leaves, but rather create a dummy file with a #include directive with the relative path to the original.
"Copy the leaves" if all else fails has the advantage of preserving existing uses (like HTML links) that won't be followed with a dummy forwarding #include. It may well be faster to use. OTOH, I can imagine cases on a Boost developer's machine where the developer would prefer dummy forwarding #includes. So perhaps exactly what happens if all else fails should be an option. A bit hard to know without some real-life experience. --Beman

on Fri May 11 2012, Beman Dawes <bdawes-AT-acm.org> wrote:
On Fri, May 11, 2012 at 3:25 PM, Mathias Gaunard <mathias.gaunard@ens-lyon.org> wrote:
On 11/05/2012 01:55, Steven Watanabe wrote:
The algorithm now looks like:
If symlinks are supported: sym-link the directory; if there is a conflict, create a subdirectory and symlink all the members. Else if hardlinks are supported: hard-link all leaves. Else: copy all leaves.
You don't want to copy the leaves, but rather create a dummy file with a #include directive with the relative path to the original.
"Copy the leaves" if all else fails has the advantage of preserving existing uses (like HTML links) that won't be followed with a dummy forwarding #include. It may well be faster to use.
OTOH, I can imagine cases on a Boost developer's machine where the developer would prefer dummy forwarding #includes.
Like, always.
So perhaps exactly what happens if all else fails should be an option. A bit hard to know without some real-life experience.
If I were going to invest in this I'd use forwarding headers and a link rewriter for the HTML. But I advise not investing too much in the options here, as this whole monolithic arrangement should be short-lived. -- Dave Abrahams BoostPro Computing http://www.boostpro.com

AMDG On 05/11/2012 07:36 PM, Dave Abrahams wrote:
on Fri May 11 2012, Beman Dawes <bdawes-AT-acm.org> wrote:
"Copy the leaves" if all else fails has the advantage of preserving existing uses (like HTML links) that won't be followed with a dummy forwarding #include. It may well be faster to use.
OTOH, I can imagine cases on a Boost developer's machine where the developer would prefer dummy forwarding #includes.
Like, always.
I disagree. I prefer copies for the following reasons: a) Disk space is not an issue for me. b) The copies can be used for any purpose, not just compiling. In particular, clicking through forwarding headers in the IDE is annoying. c) The behavior is more consistent: using copies, the preprocessor output should be identical to the preprocessor output using links. I can see others preferring forwarding headers, but "always" is too strong. In Christ, Steven Watanabe

On 14/05/2012 23:35, Steven Watanabe wrote:
I disagree. I prefer copies for the following reasons:
a) Disk space is not an issue for me. b) The copies can be used for any purpose, not just compiling. In particular, clicking through forwarding headers in the IDE is annoying. c) The behavior is more consistent: using copies, the preprocessor output should be identical to the preprocessor output using links.
I can see others preferring forwarding headers, but "always" is too strong.
Copies are not necessarily in sync with the original. Every time you modify the original, you need to regenerate the copies. That's not a very practical development environment.

AMDG On 05/15/2012 04:48 AM, Mathias Gaunard wrote:
On 14/05/2012 23:35, Steven Watanabe wrote:
I disagree. I prefer copies for the following reasons:
a) Disk space is not an issue for me. b) The copies can be used for any purpose, not just compiling. In particular, clicking through forwarding headers in the IDE is annoying. c) The behavior is more consistent: using copies, the preprocessor output should be identical to the preprocessor output using links.
I can see others preferring forwarding headers, but "always" is too strong.
Copies are not necessarily in sync with the original. Every time you modify the original, you need to regenerate the copies. That's not a very practical development environment.
Why? It's handled automatically. (Okay, the patch I posted doesn't handle this properly, but I've fixed it in my local version) In Christ, Steven Watanabe

On 15/05/2012 14:36, Steven Watanabe wrote:
Why? It's handled automatically. (Okay, the patch I posted doesn't handle this properly, but I've fixed it in my local version)
So the files are regenerated every time something is built? Doesn't sound very efficient. Also, I would think that not everyone uses b2 when doing development with Boost, and may compile things directly with their favourite compiler or IDE.

AMDG On 05/15/2012 06:37 AM, Mathias Gaunard wrote:
On 15/05/2012 14:36, Steven Watanabe wrote:
Why? It's handled automatically. (Okay, the patch I posted doesn't handle this properly, but I've fixed it in my local version)
So the files are regenerated every time something is built? Doesn't sound very efficient.
Only the files that are out of date are regenerated.
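The out-of-date check Steven describes boils down to a timestamp comparison per file. A minimal sketch (a hypothetical helper for illustration, not the actual Jam logic):

```python
import os
import shutil
import tempfile

def copy_if_stale(src, dest):
    """Copy only when `dest` is missing or older than `src`, so an
    up-to-date copy is left untouched on subsequent builds."""
    if (not os.path.exists(dest)
            or os.path.getmtime(dest) < os.path.getmtime(src)):
        shutil.copy2(src, dest)  # copy2 preserves src's timestamps
        return True
    return False

# Demo: the first call copies, the second is a no-op.
d = tempfile.mkdtemp()
src = os.path.join(d, "header.hpp")
dest = os.path.join(d, "copy.hpp")
open(src, "w").close()
first = copy_if_stale(src, dest)   # dest missing: copied
second = copy_if_stale(src, dest)  # dest up to date: skipped
```

Because copy2 preserves the source's modification time, an unchanged header costs only a stat on later builds, which is why regenerating copies is cheaper than it might sound.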
Also, I would think that not everyone uses b2 when doing development with Boost, and may compile things directly with their favourite compiler or IDE.
I already have this problem when using the IDE, because any time Boost changes, the compiled libraries have to be rebuilt. Remember also, that copying is a last resort fallback if neither sym links nor hard links are supported. In Christ, Steven Watanabe

on Mon May 14 2012, Steven Watanabe <watanabesj-AT-gmail.com> wrote:
AMDG
On 05/11/2012 07:36 PM, Dave Abrahams wrote:
on Fri May 11 2012, Beman Dawes <bdawes-AT-acm.org> wrote:
"Copy the leaves" if all else fails has the advantage of preserving existing uses (like HTML links) that won't be followed with a dummy forwarding #include. It may well be faster to use.
OTOH, I can imagine cases on a Boost developer's machine where the developer would prefer dummy forwarding #includes.
Like, always.
I disagree. I prefer copies for the following reasons:
a) Disk space is not an issue for me.
b) The copies can be used for any purpose, not just compiling. In particular, clicking through forwarding headers in the IDE is annoying.
c) The behavior is more consistent: using copies, the preprocessor output should be identical to the preprocessor output using links.
I can see others preferring forwarding headers, but "always" is too strong.
OK, thanks for setting me straight on that point. -- Dave Abrahams BoostPro Computing http://www.boostpro.com
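A "dummy forwarding header" as discussed in this thread is just a single #include line. A script that generates one might look roughly like the following sketch; the directory layout and function name are illustrative assumptions, not the actual Boost tooling:

```python
import os

def write_forwarding_header(flat_root, module_root, rel_path):
    """Create boost/<rel_path> under flat_root whose entire body
    is one #include of the real header under module_root."""
    real = os.path.join(module_root, "include", "boost", rel_path)
    fwd = os.path.join(flat_root, "boost", rel_path)
    os.makedirs(os.path.dirname(fwd), exist_ok=True)
    with open(fwd, "w") as f:
        # The forwarding header contains nothing but this line,
        # which is why clicking through to it in an IDE lands you
        # one indirection away from the real code.
        f.write('#include "%s"\n' % os.path.abspath(real))
    return fwd
```

This one-line body is also what Steven's point b) is about: a forwarding header is only useful to the compiler, whereas a copy looks like the real file to every other tool.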

On Mon, May 14, 2012 at 5:35 PM, Steven Watanabe <watanabesj@gmail.com> wrote:
AMDG
On 05/11/2012 07:36 PM, Dave Abrahams wrote:
on Fri May 11 2012, Beman Dawes <bdawes-AT-acm.org> wrote:
"Copy the leaves" if all else fails has the advantage of preserving existing uses (like HTML links) that won't be followed with a dummy forwarding #include. It may well be faster to use.
OTOH, I can imagine cases on a Boost developer's machine where the developer would prefer dummy forwarding #includes.
Like, always.
I disagree. I prefer copies for the following reasons:
a) Disk space is not an issue for me.
b) The copies can be used for any purpose, not just compiling. In particular, clicking through forwarding headers in the IDE is annoying.
c) The behavior is more consistent: using copies, the preprocessor output should be identical to the preprocessor output using links.
I can see others preferring forwarding headers, but "always" is too strong.
Why not symlinks? I've had really good results with the symlink approach. Did I miss something? --Beman

AMDG On 05/15/2012 07:15 AM, Beman Dawes wrote:
On Mon, May 14, 2012 at 5:35 PM, Steven Watanabe <watanabesj@gmail.com> wrote:
On 05/11/2012 07:36 PM, Dave Abrahams wrote:
on Fri May 11 2012, Beman Dawes <bdawes-AT-acm.org> wrote:
"Copy the leaves" if all else fails has the advantage of preserving existing uses (like HTML links) that won't be followed with a dummy forwarding #include. It may well be faster to use.
<snip>
Why not symlinks? I've had really good results with the symlink approach. Did I miss something?
Beman Dawes <bdawes-AT-acm.org> wrote:
"Copy the leaves" *if all else fails*
(emphasis mine) In Christ, Steven Watanabe

On 05/12/2012 04:36 AM, Dave Abrahams wrote:
If I were going to invest in this I'd use forwarding headers and a link rewriter for the HTML.
Agree. At least for headers.
Based on some experience, I think the really annoying thing is when your editor or IDE, during debugging or from log files or build output, takes you to the trouble spot in your code. You see the problem, but do not realize you are in the *wrong* place. So you fix it - you think.
So what happens then? If you are lucky, the build fails on the next compile because it notices that the content of the derived file, the copy, has changed. But most build tools may not, since only the derived file has been edited. So it is more likely you discover that a lot of your changes are overwritten the next time you do a rebuild all or make clean.
Making the copies read-only may help a bit and remind the annoyed developer of the issue earlier, but I think forwarding headers are better. If the indirection annoys you in other ways, use a more appropriate file system.
But I advise not investing too much in the options here, as this whole monolithic arrangement should be short-lived.
I am curious why you consider installing headers for build part of a monolithic arrangement. -- Bjørn
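Bjørn's read-only suggestion, making generated copies unwritable so that editing the wrong file fails loudly instead of being silently overwritten later, could be sketched as follows; this helper is an illustrative assumption, not part of any actual Boost tooling:

```python
import os
import shutil
import stat

def copy_read_only(src, dest):
    """Copy a header and strip all write permission, so an editor
    opened on the copy cannot quietly save changes that a later
    rebuild would throw away."""
    os.makedirs(os.path.dirname(dest), exist_ok=True)
    shutil.copy2(src, dest)
    mode = os.stat(dest).st_mode
    os.chmod(dest, mode & ~(stat.S_IWUSR | stat.S_IWGRP | stat.S_IWOTH))
```

The protection is advisory (a determined user can chmod the file back), which is why Bjørn describes it as only helping "a bit" compared to forwarding headers.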

on Thu May 17 2012, Bjørn Roald <bjorn-AT-4roald.org> wrote:
On 05/12/2012 04:36 AM, Dave Abrahams wrote:
If I were going to invest in this I'd use forwarding headers and a link rewriter for the HTML.
Agree. At least for headers.
Based on some experience, I think the really annoying thing is when your editor or IDE, during debugging or from log files or build output, takes you to the trouble spot in your code. You see the problem, but do not realize you are in the *wrong* place. So you fix it - you think.
So what happens then? If you are lucky, the build fails on the next compile because it notices that the content of the derived file, the copy, has changed. But most build tools may not, since only the derived file has been edited. So it is more likely you discover that a lot of your changes are overwritten the next time you do a rebuild all or make clean.
Making the copies read-only may help a bit and remind the annoyed developer of the issue earlier, but I think forwarding headers are better. If the indirection annoys you in other ways, use a more appropriate file system.
But I advise not investing too much in the options here, as this whole monolithic arrangement should be short-lived.
I am curious why you consider installing headers for build part of a monolithic arrangement.
I don't. I consider *forwarding* headers part of a monolithic arrangement. When you build modularized Boost using CMake no such headers are needed. -- Dave Abrahams BoostPro Computing http://www.boostpro.com

On 05/21/2012 06:54 PM, Dave Abrahams wrote:
on Thu May 17 2012, Bjørn Roald<bjorn-AT-4roald.org> wrote:
On 05/12/2012 04:36 AM, Dave Abrahams wrote:
If I were going to invest in this I'd use forwarding headers and a link rewriter for the HTML. Agree. At least for headers.
Based on some experience, I think the really annoying thing is when your editor or IDE, during debugging or from log files or build output, takes you to the trouble spot in your code. You see the problem, but do not realize you are in the *wrong* place. So you fix it - you think.
So what happens then? If you are lucky, the build fails on the next compile because it notices that the content of the derived file, the copy, has changed. But most build tools may not, since only the derived file has been edited. So it is more likely you discover that a lot of your changes are overwritten the next time you do a rebuild all or make clean.
Making the copies read-only may help a bit and remind the annoyed developer of the issue earlier, but I think forwarding headers are better. If the indirection annoys you in other ways, use a more appropriate file system.
But I advise not investing too much in the options here, as this whole monolithic arrangement should be short-lived. I am curious why you consider installing headers for build part of a monolithic arrangement. I don't. I consider *forwarding* headers part of a monolithic arrangement.
I assume you consider forwarding headers implemented with symbolic and hard links part of a monolithic arrangement as well, not only generated files with a single #include line pointing to where the real file is.
When you build modularized Boost using CMake no such headers are needed.
Is it modularized Boost, or CMake, that solves that? I suspect you refer to some mechanism deploying Boost source, and possibly binaries, into the development environment before you invoke CMake to generate your build system. If that is what you are thinking, then having an installer arrange the headers the way you need them for builds is, in your opinion, not monolithic, but having the build system do the same thing is. Sorry, I don't agree.
If you wish to reduce the number of directories in your projects' include paths, you somehow need to arrange headers in a common structure. You could do without such a common header file structure with any sensible build system, Boost.Build as well I would think, but such solutions have their own issues with scaling to a potentially high number of include paths. Providing a mechanism to arrange a common header file structure out of selected parts is not monolithic in my view. What is monolithic in the current arrangement is the way the pieces of Boost source are deployed: as one big chunk.
I am eager to see how the proposals for using 0Install work out and what change management workflows this will support. I am a bit concerned with how well it would play with Git workflows, but I am probably not seeing the whole picture. However, I see no reason why making Boost modularized needs to be tied to replacing Boost.Build.
A Git submodules + Boost.Build based solution has a very simple tool chain and the potential to be modularized, but may lack some desired features for managing multitudes of package interrelationships.
A Git + 0Install + Boost.Build based solution is a more complex tool chain, and possibly a more complex process to manage, but it may add flexible dependency management between packages, something Git submodules most likely lack. So this clearly can be modularized at the expense of some tool chain complexity.
A Git + 0Install + CMake + make/MSVC/... based solution is an even more complex tool chain, but some will argue that this adds the only correct way to build these days. It does not, however, add anything toward modularizing Boost over the alternatives above. -- Bjørn

on Thu May 10 2012, Steven Watanabe <watanabesj-AT-gmail.com> wrote:
Any thoughts on this? The attached patch has all the changes I made.
Haven't looked at the patch, but just wanted to express my thanks for your work on this. Thanks! -- Dave Abrahams BoostPro Computing http://www.boostpro.com

on Wed May 09 2012, "Oliver Kowalke" <oliver.kowalke-AT-gmx.de> wrote:
The headers are moved into directory 'include' - how does the build system handle includes (headers from other boost libs)? At least I get an error that header xyz is not found if I execute the tests using bjam.
You either use a build system setup that adds every directory to the
So that means the new build env does not work out of the box?! What about libs using boost.move or boost.utils ...
include path or you run a script that generates flattened headers.
nonsense
Relax, please. This is not meant to be a final rendition. Everything will work seamlessly when it's done; we're just trying to get input about the module boundaries. -- Dave Abrahams BoostPro Computing http://www.boostpro.com
participants (10)
- Antony Polukhin
- Beman Dawes
- Bjørn Roald
- Daniel James
- Daniel Pfeifer
- Dave Abrahams
- Lars Viklund
- Mathias Gaunard
- Oliver Kowalke
- Steven Watanabe