
2012/12/25 Rene Rivera <grafikrobot@gmail.com>
On 12/17/2012 12:25 PM, Dave Abrahams wrote:
On Sun Dec 16 2012, Rene Rivera <grafikrobot-AT-gmail.com> wrote:
On Wed, Dec 12, 2012 at 9:49 AM, Beman Dawes <bdawes@acm.org> wrote:
On Tue, Dec 11, 2012 at 10:52 AM, Rene Rivera <grafikrobot@gmail.com> wrote:
Hm.. That's barely a step :-\ ..And there's no need to branch. The tools already support multiple transport methods, so we can just add another. Which brings me to one of the transport methods regression testing currently supports: downloading the current trunk/release as a ZIP archive. I was hoping to use the GitHub facility for downloading ZIPs of the repos. But unfortunately I couldn't make it include the contents of the indirectly referenced library subrepos.
That's right, unfortunately. However, we can get the exact URLs of the ZIP files from the GitHub API. I've recently done some scripting with that, e.g. https://github.com/ryppl/ryppl/blob/develop/scripts/github2bitbucket.py#L40
In fact, I think someone has coded up what's needed to make a monolithic zip here: https://github.com/quarnster/sublime_package_control/commit/9fe2fc2cad9bd2e7e1a38d7e5d4aaa02fb2b4aea
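For concreteness, the API lookup boils down to roughly this (a minimal Python sketch; the owner/repo/ref arguments are placeholders, not Boost's actual repo names, and it simply follows the API's redirect to the archive):

import urllib.request

def zipball_url(owner, repo, ref="master"):
    # GET /repos/:owner/:repo/zipball/:ref redirects to the actual archive;
    # urllib follows the redirect, so geturl() gives the direct link.
    with urllib.request.urlopen(
            "https://api.github.com/repos/%s/%s/zipball/%s"
            % (owner, repo, ref)) as response:
        return response.geturl()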
After looking at both of those, I see no point in using the GitHub API (or the additional structure data from Sublime -- though I'm not totally sure where the submodule info comes from in that case) for this, as it provides no more information than one can get from just parsing the ".gitmodules" file.
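Since ".gitmodules" is an INI-style file, pulling out the submodule paths and URLs is only a few lines (a minimal Python sketch using the standard configparser, which copes with git's tab-indented keys):

import configparser

def read_gitmodules(filename=".gitmodules"):
    # Sections look like: [submodule "libs/algorithm"], each with
    # "path" and "url" keys; collect them as (path, url) pairs.
    cp = configparser.RawConfigParser()
    cp.read(filename)
    return [(cp[s]["path"], cp[s]["url"])
            for s in cp.sections() if s.startswith('submodule "')]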
Hence the complexity of supporting testing with ZIPs is now an order of magnitude larger, as it means dealing with fetching more than a hundred individual repos :-(
Which now seems the only choice. On the tester side I will have to get the boost-master archive, then parse out the ".gitmodules" file, and get each subrepo archive individually. Which increases the likelihood of failure considerably.
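Scripted, that workflow is roughly the following (a sketch building on the read_gitmodules() above; the boostorg organization, the "master" ref, and the /archive/<ref>.zip URL pattern are assumptions, and each download gets a small retry loop precisely because every extra fetch is another chance to fail):

import posixpath
import time
import urllib.request

def fetch(url, dest, retries=3):
    # Retry with a short backoff; a hundred-odd downloads means a
    # hundred-odd chances for a transient network failure.
    for attempt in range(retries):
        try:
            urllib.request.urlretrieve(url, dest)
            return
        except OSError:
            if attempt == retries - 1:
                raise
            time.sleep(2 ** attempt)

def fetch_submodule_zips(modules, org="https://github.com/boostorg"):
    # modules: (path, url) pairs from read_gitmodules(); Boost's submodule
    # URLs are relative ("../algorithm.git"), so only the repo name is used.
    for _path, url in modules:
        name = posixpath.basename(url)
        if name.endswith(".git"):
            name = name[:-len(".git")]
        fetch("%s/%s/archive/master.zip" % (org, name), name + ".zip")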
If you do it manually, yes.
And of course, after all that, even with direct git access, one has to recreate a boost header tree (either by moving files or by creating symlinks).
I repeat.. More testing complexity :-(
Again: if you do it manually.
But before any testing is done, it would be helpful if Boost.Build were updated to handle the generation of the boost-root/boost header file links, rather than relying on the workaround CMake script.
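For illustration, that generation step amounts to something like the following (a Python sketch, not the actual Boost.Build or CMake logic; it assumes the modularized libs/<module>/include/boost layout and a platform with symlinks):

import os

def link_headers(boost_root):
    # Mirror every header from libs/<module>/include/boost into a single
    # boost-root/boost tree, creating directories and symlinks as needed.
    dest_root = os.path.join(boost_root, "boost")
    libs_dir = os.path.join(boost_root, "libs")
    for module in sorted(os.listdir(libs_dir)):
        src_root = os.path.join(libs_dir, module, "include", "boost")
        if not os.path.isdir(src_root):
            continue
        for dirpath, _dirnames, filenames in os.walk(src_root):
            rel = os.path.relpath(dirpath, src_root)
            target_dir = os.path.normpath(os.path.join(dest_root, rel))
            os.makedirs(target_dir, exist_ok=True)
            for filename in filenames:
                link = os.path.join(target_dir, filename)
                if not os.path.lexists(link):
                    os.symlink(os.path.join(dirpath, filename), link)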
Well.. In an ideal world it would be possible to have a fully integrated "monolithic" repo that the testers can just use, as that is the simplest and likely most reliable path. But, alas, this hope of mine was essentially dismissed during the DVCS/git discussions.
This isn't about DVCS but about whether we're going to have real modularity.
I don't know what you mean by "real modularity".
Monolithic development (current): there is one repository and one release cycle.
Modularized development (proposed): each module has its own repository and release cycle. Optionally, multiple release cycles may be synced, and multiple modules may be delivered as one package.
Is there room for misunderstanding? Maybe it is unclear what Boost's future development/test/release process will be like. But the meaning of "real modularity" should be clear, no?
But the testers *must* test what Boost delivers as a package. At some point end users get Boost installed, and that's what we have to test. If we don't test that, we will have unknown issues to deal with.
Absolutely! Boost should continue to provide monolithic packages. And these packages need to be tested. What we want to modularize is the development, not the package that we provide to end users. However, if we want to provide a monolithic release from modularized sources, we cannot simply "not modularize the release". We need to "put it back together" instead. Of course, there is a slight increase in complexity. We try to keep it minimal, but we cannot avoid it completely.

cheers,
Daniel