[Git] Moving beyond arm waving?

To move discussion of Git beyond the arm waving stage, I'd like to suggest several concrete steps:
* Use the tag "[git]" to identify list postings discussing integration of Git into Boost daily life. If that gets unwieldy, we can start a separate mailing list.
* Start building documentation, with a lot of the initial effort going into rationale and "how to do it". To that end, I've started to populate a "Git" hierarchy on the Trac wiki: https://svn.boost.org/trac/boost/wiki/Git/GitHome. Please contribute!
* As a demonstration and proof-of-concept, a Boost library should begin using Git. Presumably a public repository (on GitHub?) can channel changes back to Boost svn. I'll volunteer the filesystem library.
Comments?
--Beman
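For concreteness, here is a minimal sketch of how a public mirror could hand changes back to an svn working copy. It assumes only that git is installed; the repository, file, and commit names are all hypothetical, and `git format-patch` stands in for a live git-svn bridge, which would need a reachable svn server.

```shell
#!/bin/sh
# Sketch only: one way a public Git mirror could channel changes back to
# svn. Assumes git is on PATH; repository and file names are hypothetical.
set -e
tmp=$(mktemp -d)
cd "$tmp"

git init -q mirror && cd mirror
git config user.email dev@example.com
git config user.name "Demo Dev"

# Hypothetical import of the filesystem library from svn.
echo 'initial import' > operations.cpp
git add . && git commit -qm "Import filesystem from svn"

# A contributor's fix arrives as an ordinary git commit...
echo 'bug fix' >> operations.cpp
git commit -qam "Fix a path bug"

# ...and is exported as a patch a maintainer could apply to an svn
# working copy (git-svn dcommit would automate this against a live URL).
git format-patch -1 -o "$tmp/outbox" >/dev/null
ls "$tmp/outbox"
```

With git-svn the last step collapses to a single `git svn dcommit`, but the patch hand-off works with nothing more than stock git.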

On Wed, Feb 2, 2011 at 3:44 PM, Beman Dawes <bdawes@acm.org> wrote:
To move discussion of Git beyond the arm waving stage, I'd like to suggest several concrete steps:
* Use the tag "[git]" to identify list postings discussing integration of Git into Boost daily life. If that gets unwieldy, we can start a separate mailing list.
* Start building documentation, with a lot of the initial effort going into rationale and "how to do it". To that end, I've started to populate a "Git" hierarchy on the Trac wiki: https://svn.boost.org/trac/boost/wiki/Git/GitHome. Please contribute!
* As a demonstration and proof-of-concept, a Boost library should begin using Git. Presumably a public repository (on GitHub?) can channel changes back to Boost svn. I'll volunteer the filesystem library.
Forgive me for being a bit slow here, but isn't 'moving beyond the arm waving stage' exactly what the Ryppl effort is doing? Is there a danger of treading on Ryppl's toes here? (Sorry if I've misunderstood what's what!) - Rob.

On 2 February 2011 16:49, Robert Jones <robertgbjones@gmail.com> wrote:
Forgive me for being a bit slow here, but isn't 'moving beyond the arm waving stage' exactly what the Ryppl effort is doing? Is there a danger of treading on Ryppl's toes here?
I don't think so, ryppl seems to be concerned with revolutionising package management and build systems. This is an attempt to see how git fits. I think that's a good idea. Daniel

On Wed, Feb 2, 2011 at 12:40 PM, Daniel James <dnljms@gmail.com> wrote:
On 2 February 2011 16:49, Robert Jones <robertgbjones@gmail.com> wrote:
Forgive me for being a bit slow here, but isn't 'moving beyond the arm waving stage' exactly what the Ryppl effort is doing? Is there a danger of treading on Ryppl's toes here?
I don't think so, ryppl seems to be concerned with revolutionising package management and build systems. This is an attempt to see how git fits. I think that's a good idea.
Ryppl was actually the inspiration for me looking seriously at Git, since Ryppl requires Git, IIUC. So (1) getting comfortable with Git is a prerequisite for Ryppl, and (2) I'm hoping that Git can be a net plus for Boost fairly soon, whereas Ryppl still seems off in the future. --Beman

At Wed, 2 Feb 2011 14:35:10 -0500, Beman Dawes wrote:
On Wed, Feb 2, 2011 at 12:40 PM, Daniel James <dnljms@gmail.com> wrote:
On 2 February 2011 16:49, Robert Jones <robertgbjones@gmail.com> wrote:
Forgive me for being a bit slow here, but isn't 'moving beyond the arm waving stage' exactly what the Ryppl effort is doing? Is there a danger of treading on Ryppl's toes here?
I don't think so, ryppl seems to be concerned with revolutionising package management and build systems. This is an attempt to see how git fits. I think that's a good idea.
Ryppl was actually the inspiration for me looking seriously at Git, since Ryppl requires Git, IIUC.
So (1) getting comfortable with Git is a prerequisite for Ryppl, and (2) I'm hoping that Git can be a net plus for Boost fairly soon, whereas Ryppl still seems off in the future.
Just to clarify, let me say this now: Ryppl *will* be ready for Boost by Boostcon. -- Dave Abrahams BoostPro Computing http://www.boostpro.com

At Wed, 2 Feb 2011 16:49:41 +0000, Robert Jones wrote:
To move discussion of Git beyond the arm waving stage, I'd like to suggest several concrete steps:
* Use the tag "[git]" to identify list postings discussing integration of Git into Boost daily life. If that gets unwieldy, we can start a separate mailing list.
* Start building documentation, with a lot of the initial effort going into rationale and "how to do it". To that end, I've started to populate a "Git" hierarchy on the Trac wiki: https://svn.boost.org/trac/boost/wiki/Git/GitHome. Please contribute!
* As a demonstration and proof-of-concept, a Boost library should begin using Git. Presumably a public repository (on GitHub?) can channel changes back to Boost svn. I'll volunteer the filesystem library.
Forgive me for being a bit slow here, but isn't 'moving beyond the arm waving stage' exactly what the Ryppl effort is doing? Is there a danger of treading on Ryppl's toes here?
Well, a little... but it can't hurt to experiment :-) -- Dave Abrahams BoostPro Computing http://www.boostpro.com

AMDG On 2/2/2011 7:44 AM, Beman Dawes wrote:
To move discussion of Git beyond the arm waving stage, I'd like to suggest several concrete steps:
* Use the tag "[git]" to identify list postings discussing integration of Git into Boost daily life. If that gets unwieldy, we can start a separate mailing list.
* Start building documentation, with a lot of the initial effort going into rationale and "how to do it". To that end, I've started to populate a "Git" hierarchy on the Trac wiki: https://svn.boost.org/trac/boost/wiki/Git/GitHome. Please contribute!
The advantages look like a good start. I'd like to add:
Con:
* The migration itself will cause a certain amount of disruption (transient).
* Those who just don't care will have to learn a new tool (transient, subjective).
* Links to svn will be broken. Trac and svn are currently heavily cross-linked. If someone has a way to avoid this problem, I'd be more than happy to withdraw it (long-term).
(I'd just put this up, but I wanted to post here first to make sure that I'm not just missing something and that they're expressed in a fair way.)
In Christ,
Steven Watanabe

On 2/2/2011 11:30 AM, Steven Watanabe wrote:
AMDG
On 2/2/2011 7:44 AM, Beman Dawes wrote:
To move discussion of Git beyond the arm waving stage, I'd like to suggest several concrete steps:
* Use the tag "[git]" to identify list postings discussing integration of Git into Boost daily life. If that gets unwieldy, we can start a separate mailing list.
* Start building documentation, with a lot of the initial effort going into rationale and "how to do it". To that end, I've started to populate a "Git" hierarchy on the Trac wiki: https://svn.boost.org/trac/boost/wiki/Git/GitHome. Please contribute!
The advantages look like a good start. I'd like to add:
Con:
* The migration itself will cause a certain amount of disruption (transient).
* Those who just don't care will have to learn a new tool (transient, subjective).
* Links to svn will be broken. Trac and svn are currently heavily cross-linked. If someone has a way to avoid this problem, I'd be more than happy to withdraw it (long-term).
If you want more criticism of Git.. You might want to read through the docs for Fossil <http://www.fossil-scm.org/index.html/doc/trunk/www/fossil-v-git.wiki>. Ostensibly a better VCS, IMO ;-) -- -- Grafik - Don't Assume Anything -- Redshift Software, Inc. - http://redshift-software.com -- rrivera/acm.org (msn) - grafik/redshift-software.com -- 102708583/icq - grafikrobot/aim,yahoo,skype,efnet,gmail

Am 02.02.2011 18:41, schrieb Rene Rivera:
If you want more criticism of Git.. You might want to read through the docs for Fossil <http://www.fossil-scm.org/index.html/doc/trunk/www/fossil-v-git.wiki>. Ostensibly a better VCS, IMO ;-)
This doesn't compare how the two VCSs (Git or Fossil) help the developer in his daily work, so it doesn't really help. Oliver

On 2/2/2011 12:38 PM, Oliver Kowalke wrote:
Am 02.02.2011 18:41, schrieb Rene Rivera:
If you want more criticism of Git.. You might want to read through the docs for Fossil <http://www.fossil-scm.org/index.html/doc/trunk/www/fossil-v-git.wiki>. Ostensibly a better VCS, IMO ;-)
This doesn't compare how the two VCSs (Git or Fossil) help the developer in his daily work, so it doesn't really help.
You have to read more of the docs than just that one page to get an idea of why Fossil is easier to use with the same "benefits" as Git. But the points mentioned in that page do matter for this discussion as they raise aspects regarding setup and maintenance. -- Rene Rivera

On Feb 2, 2011, at 12:41 PM, Rene Rivera wrote:
On 2/2/2011 11:30 AM, Steven Watanabe wrote:
AMDG
On 2/2/2011 7:44 AM, Beman Dawes wrote:
To move discussion of Git beyond the arm waving stage, I'd like to suggest several concrete steps:
* Use the tag "[git]" to identify list postings discussing integration of Git into Boost daily life. If that gets unwieldy, we can start a separate mailing list.
* Start building documentation, with a lot of the initial effort going into rationale and "how to do it". To that end, I've started to populate a "Git" hierarchy on the Trac wiki: https://svn.boost.org/trac/boost/wiki/Git/GitHome. Please contribute!
The advantages look like a good start. I'd like to add:
Con:
* The migration itself will cause a certain amount of disruption (transient).
* Those who just don't care will have to learn a new tool (transient, subjective).
* Links to svn will be broken. Trac and svn are currently heavily cross-linked. If someone has a way to avoid this problem, I'd be more than happy to withdraw it (long-term).
If you want more criticism of Git.. You might want to read through the docs for Fossil <http://www.fossil-scm.org/index.html/doc/trunk/www/fossil-v-git.wiki>. Ostensibly a better VCS, IMO ;-)
I like Fossil, and have used it in some projects lately, instead of a GitHub solution. The whole idea of having commits, tickets and documentation (Wiki) in the same repository is quite attractive. What I do not agree with in their claims is that it is easier to handle than Git.

Git is *conceptually* extremely simple, which is the beauty of it: a heap of two types of objects - commits and file trees - related in a DAG, where the branches and tags are just pointers to those heap-based objects. And this goes for everything in Git, including the "staging area" and the (extremely useful for real-world development) "stash" areas. I have never seen such conceptual clarity in a VCS - well, outside Darcs' patch theory, but patches are simply inferior to snapshots, for stability and performance-related reasons, IMHO. It is "just" that Linus (and others) created a host of convenience tools on top of that simple "DAG file system" ;-)

I do not see that simplicity in Fossil, really, but perhaps my vision is blurred somewhat by the goodies entangled into it, in the form of the aforementioned tickets and Wiki? And the separation of the source tree (or working directory) and the local repository in Fossil adds complexity, with another SQLite database containing the "staged" changes.

What is really cool about Fossil is that it is a one-stop solution for a lot of projects, and that one can deal with tickets while offline, even changing their states.

/David
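The "heap of objects plus pointers" model described above can be seen directly with git's plumbing commands. A throwaway sketch, assuming git is on PATH and the default loose-ref storage; all names are hypothetical:

```shell
#!/bin/sh
# Sketch: a commit is just pointers, and a branch is just a pointer file.
set -e
tmp=$(mktemp -d)
cd "$tmp"
git init -q dag-demo && cd dag-demo
git config user.email dev@example.com
git config user.name "Demo Dev"

echo hello > a.txt && git add . && git commit -qm "first"
echo world > b.txt && git add . && git commit -qm "second"

# A commit object is nothing more than a tree id, a parent id,
# author/committer lines, and a message.
git cat-file -p HEAD

# A branch is just a file holding one of those object ids
# (falling back to rev-parse in case the ref has been packed).
branch=$(git symbolic-ref --short HEAD)
cat ".git/refs/heads/$branch" 2>/dev/null || git rev-parse "refs/heads/$branch"
```

Everything else (tags, the index, the stash) is built from the same two object kinds plus pointers into the DAG.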

At Wed, 02 Feb 2011 11:41:45 -0600, Rene Rivera wrote:
If you want more criticism of Git.. You might want to read through the docs for Fossil <http://www.fossil-scm.org/index.html/doc/trunk/www/fossil-v-git.wiki>. Ostensibly a better VCS, IMO ;-)
This isn't about choosing the best VCS. It's about choosing the best VCS with the most momentum, that will continue to be maintained, and that has a design most appropriate to Boost's future.
Your page says:
The Git model works best for large projects, like the Linux kernel for which Git was designed. Linus Torvalds does not need or want to see a thousand different branches, one for each contributor...
Fossil is designed for smaller and non-hierarchical teams where all developers are operating directly on the master branch, or at most a small number of well defined branches....
and:
Git has a huge user community. If following the herd and being like everybody else is important to you, then you should choose Git. Fossil is clearly the "road less traveled"
These points all speak in favor of Git for Boost.
-- Dave Abrahams BoostPro Computing http://www.boostpro.com

On Wednesday 02 February 2011, Dave Abrahams wrote:
This isn't about choosing the best VCS. It's about choosing the best VCS with the most momentum, that will continue to be maintained, and that has a design most appropriate to Boost's future.
Your page says:
The Git model works best for large projects, like the Linux kernel for which Git was designed. Linus Torvalds does not need or want to see a thousand different branches, one for each contributor...
Fossil is designed for smaller and non-hierarchical teams where all developers are operating directly on the master branch, or at most a small number of well defined branches....
and:
Git has a huge user community. If following the herd and being like everybody else is important to you, then you should choose Git.
Fossil is clearly the "road less traveled"
These points all speak in favor of Git for Boost.
Although your first two points also speak in favor of svn over git: the market share of Subversion alone is more than all distributed VCS systems combined (over 10 times the adoption of git), and it's not going away soon.

On Wed, Feb 2, 2011 at 3:28 PM, Dave Abrahams <dave@boostpro.com> wrote:
At Wed, 02 Feb 2011 11:41:45 -0600, Rene Rivera wrote:
If you want more criticism of Git.. You might want to read through the docs for Fossil <http://www.fossil-scm.org/index.html/doc/trunk/www/fossil-v-git.wiki>. Ostensibly a better VCS, IMO ;-)
This isn't about choosing the best VCS. It's about choosing the best VCS with the most momentum, that will continue to be maintained, and that has a design most appropriate to Boost's future.
Exactly. This isn't about VCS in general. The context is Boost. I probably shouldn't have even mentioned Fossil since it isn't a serious contender AFAIK. What about Mercurial and Bazaar? It would be good to hear about personal experiences with these systems. --Beman

At Wed, 2 Feb 2011 17:36:24 -0500, Beman Dawes wrote:
On Wed, Feb 2, 2011 at 3:28 PM, Dave Abrahams <dave@boostpro.com> wrote:
At Wed, 02 Feb 2011 11:41:45 -0600, Rene Rivera wrote:
If you want more criticism of Git.. You might want to read through the docs for Fossil <http://www.fossil-scm.org/index.html/doc/trunk/www/fossil-v-git.wiki>. Ostensibly a better VCS, IMO ;-)
This isn't about choosing the best VCS. It's about choosing the best VCS with the most momentum, that will continue to be maintained, and that has a design most appropriate to Boost's future.
Exactly. This isn't about VCS in general. The context is Boost.
I probably shouldn't have even mentioned Fossil since it isn't a serious contender AFAIK.
What about Mercurial and Bazaar? It would be good to hear about personal experiences with these systems.
My personal experience is that Mercurial's "multiple heads on a branch" system is conceptually broken and hard to use. I've only used bzr enough to grab some source from somewhere. Part of this also has to do with what systems people are willing to invest in supporting. Several of us at least are enthusiastic about moving to Git. I wouldn't be so interested in working on it if it were going to be something else. I also have a personal opinion that Git is winning the DVCS war-of-popularity, FWIW. -- Dave Abrahams BoostPro Computing http://www.boostpro.com

On Feb 2, 2011, at 5:57 PM, Dave Abrahams wrote: [snip]
Part of this also has to do with what systems people are willing to invest in supporting. Several of us at least are enthusiastic about moving to Git. I wouldn't be so interested in working on it if it were going to be something else. I also have a personal opinion that Git is winning the DVCS war-of-popularity, FWIW.
My opinion is that Git is winning the VCS war-of-popularity - might take two years, though. Yes, Git is (obviously?) a much better alternative for Boost than is Bazaar or Mercurial (or Darcs...) /David

On Feb 2, 2011, at 8:07 PM, David Bergman wrote:
On Feb 2, 2011, at 5:57 PM, Dave Abrahams wrote:
[snip]
Part of this also has to do with what systems people are willing to invest in supporting. Several of us at least are enthusiastic about moving to Git. I wouldn't be so interested in working on it if it were going to be something else. I also have a personal opinion that Git is winning the DVCS war-of-popularity, FWIW.
My opinion is that Git is winning the VCS war-of-popularity - might take two years, though.
Yes, Git is (obviously?) a much better alternative for Boost than is Bazaar or Mercurial (or Darcs...)
For those of us who haven't been following the DVCS revolution closely, this isn't obvious. I look forward to seeing some more information on the wiki page (or perhaps links to messages that have passed by in this forum?) about how these different systems' features/designs relate to Boost's development process. Ron
/David

At Thu, 3 Feb 2011 08:31:33 -0500, Ronald Garcia wrote:
On Feb 2, 2011, at 8:07 PM, David Bergman wrote:
On Feb 2, 2011, at 5:57 PM, Dave Abrahams wrote:
[snip]
Part of this also has to do with what systems people are willing to invest in supporting. Several of us at least are enthusiastic about moving to Git. I wouldn't be so interested in working on it if it were going to be something else. I also have a personal opinion that Git is winning the DVCS war-of-popularity, FWIW.
My opinion is that Git is winning the VCS war-of-popularity - might take two years, though.
Yes, Git is (obviously?) a much better alternative for Boost than is Bazaar or Mercurial (or Darcs...)
For those of us who haven't been following the DVCS revolution closely, this isn't obvious. I look forward to seeing some more information on the wiki page (or perhaps links to messages that have passed by in this forum?) about how these different systems' features/designs relate to Boost's development process.
This conversation (except for any parts that may be boost-specific---but Boost isn't really that unique in any way that matters to the issue) has been had and re-had in many other fora. Like all such discussions, it has a tendency to inspire fla^H^H^Hunproductive interactions. Furthermore, it's my personal experience (FWIW) that the only way to really understand these tools and especially the user experiences they confer is to work with them. I don't think you can learn very much as a spectator, in this case. So to anyone who cares about this issue, and wants to catch up, I suggest you do the research yourself. -- Dave Abrahams BoostPro Computing http://www.boostpro.com

Dave Abrahams <dave@boostpro.com> writes:
At Wed, 2 Feb 2011 17:36:24 -0500, Beman Dawes wrote:
On Wed, Feb 2, 2011 at 3:28 PM, Dave Abrahams <dave@boostpro.com> wrote:
At Wed, 02 Feb 2011 11:41:45 -0600, Rene Rivera wrote:
If you want more criticism of Git.. You might want to read through the docs for Fossil <http://www.fossil-scm.org/index.html/doc/trunk/www/fossil-v-git.wiki>. Ostensibly a better VCS, IMO ;-)
This isn't about choosing the best VCS. It's about choosing the best VCS with the most momentum, that will continue to be maintained, and that has a design most appropriate to Boost's future.
Exactly. This isn't about VCS in general. The context is Boost.
I probably shouldn't have even mentioned Fossil since it isn't a serious contender AFAIK.
What about Mercurial and Bazaar? It would be good to hear about personal experiences with these systems.
My personal experience is that Mercurial's "multiple heads on a branch" system is conceptually broken and hard to use. I've only used bzr enough to grab some source from somewhere.
Interesting. I think it maps nicely to reality --- Bob clones the shared repo and makes changes whilst simultaneously Joe clones the shared repo and makes changes. Since they both made changes to the same branch you need to mark that somehow when they push/pull from each other --- creating multiple heads on the branch clearly highlights this. I use Mercurial extensively, on quite a few projects; it is my VCS of choice at the moment. I find it more intuitive and easier to use than git. Anthony -- Author of C++ Concurrency in Action http://www.stdthread.co.uk/book/ just::thread C++0x thread library http://www.stdthread.co.uk Just Software Solutions Ltd http://www.justsoftwaresolutions.co.uk 15 Carrallack Mews, St Just, Cornwall, TR19 7UL, UK. Company No. 5478976

On Feb 2, 2011, at 5:57 PM, Dave Abrahams wrote:
What about Mercurial and Bazaar? It would be good to hear about personal experiences with these systems.
My personal experience is that Mercurial's "multiple heads on a branch" system is conceptually broken and hard to use. I've only used bzr enough to grab some source from somewhere.
Is this feature of Mercurial something invasive, or can it be avoided? Ron

Am Mittwoch, den 02.02.2011, 09:30 -0800 schrieb Steven Watanabe:
* Those who just don't care will have to learn a new tool (transient, subjective).
No, they don't! You can continue to use TortoiseSVN for example. See: https://github.com/blog/626-announcing-svn-support cheers, Daniel

AMDG On 2/2/2011 11:34 AM, Daniel Pfeifer wrote:
Am Mittwoch, den 02.02.2011, 09:30 -0800 schrieb Steven Watanabe:
* Those who just don't care will have to learn a new tool (transient, subjective).
No, they don't! You can continue to use TortoiseSVN for example. See: https://github.com/blog/626-announcing-svn-support
"For now it's only read-only, but who knows what will happen in the future!" I doubt that it's possible to make this work perfectly because git and svn just handle some things differently. Otherwise, why isn't git-svn good enough for those who want to use git? In Christ, Steven Watanabe

On 2 February 2011 19:47, Steven Watanabe <watanabesj@gmail.com> wrote:
"For now it's only read-only, but who knows what will happen in the future!"
https://github.com/blog/644-subversion-write-support But not perfect. Daniel

On 2 February 2011 19:51, Daniel James <dnljms@gmail.com> wrote:
On 2 February 2011 19:47, Steven Watanabe <watanabesj@gmail.com> wrote:
"For now it's only read-only, but who knows what will happen in the future!"
https://github.com/blog/644-subversion-write-support
But not perfect.
I've looked into it a little and I don't think it would work for us. There doesn't seem to be any real support for branches or submodules. Daniel

On Thu, Feb 3, 2011 at 3:47 AM, Steven Watanabe <watanabesj@gmail.com> wrote:
AMDG
On 2/2/2011 11:34 AM, Daniel Pfeifer wrote:
Am Mittwoch, den 02.02.2011, 09:30 -0800 schrieb Steven Watanabe:
* Those who just don't care will have to learn a new tool (transient, subjective).
No, they don't! You can continue to use TortoiseSVN for example. See: https://github.com/blog/626-announcing-svn-support
"For now it's only read-only, but who knows what will happen in the future!" I doubt that it's possible to make this work perfectly because git and svn just handle some things differently. Otherwise, why isn't git-svn good enough for those who want to use git?
https://github.com/blog/644-subversion-write-support That should be fine for people who want to keep using Subversion for Git projects. -- Dean Michael Berris about.me/deanberris

On Feb 2, 2011, at 2:47 PM, Steven Watanabe wrote:
AMDG
On 2/2/2011 11:34 AM, Daniel Pfeifer wrote:
Am Mittwoch, den 02.02.2011, 09:30 -0800 schrieb Steven Watanabe:
* Those who just don't care will have to learn a new tool (transient, subjective).
No, they don't! You can continue to use TortoiseSVN for example. See: https://github.com/blog/626-announcing-svn-support
"For now it's only read-only, but who knows what will happen in the future!" I doubt that it's possible to make this work perfectly because git and svn just handle some things differently. Otherwise, why isn't git-svn good enough for those who want to use git?
I have used git-svn quite successfully in two bigger projects - definitely many more commits than Boost - to create isomorphically embedded images in Git, including proper treatment of tags and branches, using the standard mapping. Yes, I did cull old history (such as "older than two years"), admittedly... Additionally, in one of them, I continued to bridge Git and SVN for six months.

Nope, I did not prove the isomorphism by subsequently exporting the Git repository ("git fast-export --all") back into an SVN repository, so I can only formally claim "perceptual homomorphism"...

So, I would not say that git-svn is not good enough, in general, if the SVN project has reasonable (simplistic?) meta structures.

/David
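The fast-export check mentioned above can be sketched in a few lines: serialize the whole history and count what comes out. This assumes only that git is on PATH; the three-revision repo is a made-up stand-in for a migrated library.

```shell
#!/bin/sh
# Sketch: count commit records in a fast-export stream as a crude
# fidelity check on a migrated repository.
set -e
tmp=$(mktemp -d)
cd "$tmp"
git init -q src && cd src
git config user.email dev@example.com
git config user.name "Demo Dev"

for i in 1 2 3; do
  echo "revision $i" > file.txt
  git add . && git commit -qm "rev $i"
done

# The stream could be piped into `git fast-import` in a fresh repository
# to prove the round trip; here we just count the commit records.
commits=$(git fast-export --all | grep -c '^commit ')
echo "exported $commits commits"
```

Comparing the count (and, more rigorously, the re-imported tree hashes) against the source side is one way to make "perceptual homomorphism" a bit more formal.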

At Wed, 02 Feb 2011 09:30:28 -0800, Steven Watanabe wrote:
AMDG
On 2/2/2011 7:44 AM, Beman Dawes wrote:
To move discussion of Git beyond the arm waving stage, I'd like to suggest several concrete steps:
* Use the tag "[git]" to identify list postings discussing integration of Git into Boost daily life. If that gets unwieldy, we can start a separate mailing list.
* Start building documentation, with a lot of the initial effort going into rationale and "how to do it". To that end, I've started to populate a "Git" hierarchy on the Trac wiki: https://svn.boost.org/trac/boost/wiki/Git/GitHome. Please contribute!
The advantages look like a good start. I'd like to add:
Con:
* The migration itself will cause a certain amount of disruption (transient).
* Those who just don't care will have to learn a new tool (transient, subjective).
* Links to svn will be broken. Trac and svn are currently heavily cross-linked. If someone has a way to avoid this problem, I'd be more than happy to withdraw it (long-term).
John's modularization project maintains a complete correspondence of Git and SVN commits, so mapping those should be no problem.
(I'd just put this up, but I wanted to post here first to make sure that I'm not just missing something and that they're expressed in a fair way.)
Thank you, I think that's totally fair. -- Dave Abrahams BoostPro Computing http://www.boostpro.com

At Wed, 2 Feb 2011 10:44:47 -0500, Beman Dawes wrote:
To move discussion of Git beyond the arm waving stage, I'd like to suggest several concrete steps:
* Use the tag "[git]" to identify list postings discussing integration of Git into Boost daily life. If that gets unwieldy, we can start a separate mailing list.
* Start building documentation, with a lot of the initial effort going into rationale and "how to do it". To that end, I've started to populate a "Git" hierarchy on the Trac wiki: https://svn.boost.org/trac/boost/wiki/Git/GitHome. Please contribute!
* As a demonstration and proof-of-concept, a Boost library should begin using Git. Presumably a public repository (on GitHub?) can channel changes back to Boost svn. I'll volunteer the filesystem library.
Comments?
That could be interesting. IMO Boost (and especially boost-in-Git) makes more sense when modularized, though, so expect some things to seem quite different once that's set up. -- Dave Abrahams BoostPro Computing http://www.boostpro.com
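Modularization of this kind can be prototyped with stock git tooling. A minimal sketch, assuming git is on PATH; the directory layout is a made-up miniature of Boost's, and `filter-branch` is a coarse stand-in for whatever the real migration tooling does per submodule.

```shell
#!/bin/sh
# Sketch: extract one library's history into its own module.
set -e
tmp=$(mktemp -d)
cd "$tmp"
git init -q mono && cd mono
git config user.email dev@example.com
git config user.name "Demo Dev"

mkdir -p libs/filesystem libs/system
echo fs > libs/filesystem/path.hpp
echo sys > libs/system/error.hpp
git add . && git commit -qm "import two libraries"
echo fs2 > libs/filesystem/path.hpp
git commit -qam "filesystem-only change"

# Rewrite history in a clone so libs/filesystem becomes the repository
# root, keeping only the commits that touched it.
git clone -q . "$tmp/filesystem" && cd "$tmp/filesystem"
git config user.email dev@example.com
git config user.name "Demo Dev"
FILTER_BRANCH_SQUELCH_WARNING=1 git filter-branch -f \
  --subdirectory-filter libs/filesystem HEAD >/dev/null 2>&1
git reset -q --hard
count=$(git rev-list --count HEAD)
echo "extracted $count commits"
ls
```

Both commits above touched the filesystem subtree, so both survive the split; commits that never touch the subtree would simply drop out of the module's history.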

Beman Dawes <bdawes@acm.org> writes:
* As a demonstration and proof-of-concept, a Boost library should begin using Git. Presumably a public repository (on GitHub?) can channel changes back to Boost svn. I'll volunteer the filesystem library.
I've made a copy of the fully migrated Boost repository available online for review here: https://github.com/boost-lib/boost-history
In the course of creating the migration process, I had to fix several bugs in libgit2. This makes me less than 100% convinced of the fidelity of the result so far. I'd like anyone who can to review the sections familiar to them, to make sure nothing obvious has gone wrong.
A couple things to note in this repository:
- Any branch which was not committed to since 2008 has been migrated as a tag with the prefix "old-branches/".
- There is a 'flat-history' tag, which preserves the literal revision history as it appeared in Subversion, with no splitting up into branches or tags. This is for completeness, but can be ignored otherwise.
John
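For reviewers unfamiliar with the tag-for-branch convention, here is a sketch of how such a prefixed tag behaves and how a branch can be resurrected from it. It assumes git is on PATH; the branch name is hypothetical.

```shell
#!/bin/sh
# Sketch: a dormant branch migrated as an "old-branches/" tag, then
# resurrected as a live branch again.
set -e
tmp=$(mktemp -d)
cd "$tmp"
git init -q hist && cd hist
git config user.email dev@example.com
git config user.name "Demo Dev"
echo base > f && git add . && git commit -qm "base"

# The migration rule: a branch with no commits since 2008 becomes a tag.
git branch dormant-experiment
git tag old-branches/dormant-experiment dormant-experiment
git branch -q -D dormant-experiment

git tag -l 'old-branches/*'

# The old line of development is still fully recoverable.
git checkout -q -b dormant-experiment old-branches/dormant-experiment
git symbolic-ref --short HEAD
```

Nothing is lost by the convention: the tag pins the branch tip in the DAG, and `git checkout -b` turns it back into a branch on demand.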

On 3 February 2011 07:56, John Wiegley <johnw@boostpro.com> wrote:
In the course of creating the migration process, I had to fix several bugs in libgit2. This makes me less than 100% convinced of the fidelity of the result so far. I'd like anyone who can to review the sections familiar to them, to make sure nothing obvious has gone wrong.
Some of the header files seem to be corrupted, for example: https://github.com/boost-lib/boost-history/blob/trunk/boost/indirect_referen... I noticed this after noticing that in an old commit which removed the executable property from a load of files, the files were all marked as binary. The commit had hash '9b6259531efaa4f7f9bc207425951fdb77116640'. Daniel

Daniel James <dnljms@gmail.com> writes:
Some of the header files seem to be corrupted, for example:
https://github.com/boost-lib/boost-history/blob/trunk/boost/indirect_referen...
I noticed this after noticing that in an old commit which removed the executable property from a load of files, the files were all marked as binary. The commit had hash '9b6259531efaa4f7f9bc207425951fdb77116640'.
That's exactly the kind of feedback I need. Thanks, Daniel! I'll get this fixed and update the repository. John

Hi, do you (Boost devs) really have to keep all of the project's history in the git repo(s)? Or maybe just some years would be fine? The rest could be kept in a historical svn, still available read-only later. I've read that a lot of projects switching to (any DVCS) did that to avoid having to hack the conversion tools too much. But it depends on the real needs regarding the source history. Joël Lamotte

Klaim - Joël Lamotte <mjklaim@gmail.com> writes:
Do you (Boost devs) really have to keep all of the project's history in the git repo(s)? Or maybe just some years would be fine? The rest could be kept in a historical svn, still available read-only later.
Having to use two different tools to track down a past change you're looking for is onerous in the extreme. It would render "git blame" and "git bisect" a lot less useful.
I've read that a lot of projects switching to (any DVCS) did that to avoid having to hack the conversion tools too much. But it depends on the real needs regarding the source history.
Part of the reason we're putting this much effort into it is that we want a process which splits Boost up into submodules during the migration, while preserving as much history within each separate submodule as possible. There's just no tool out there that does that right now. So since we needed to write a tool anyway, why not solve the whole problem? John
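The cost of truncated history shows up concretely in commands like git bisect, which binary-searches the whole DAG. A throwaway demonstration, assuming git is on PATH; every file and revision name is made up, with "rev 7" planted as the regression.

```shell
#!/bin/sh
# Sketch: bisect finds a planted regression in O(log n) test runs,
# which only works if the relevant history is all in one repository.
set -e
tmp=$(mktemp -d)
cd "$tmp"
git init -q bisect-demo && cd bisect-demo
git config user.email dev@example.com
git config user.name "Demo Dev"

for i in 1 2 3 4 5 6 7 8 9 10; do
  echo "$i" > counter
  if [ "$i" -eq 7 ]; then echo oops > bug; fi  # the planted regression
  git add -A && git commit -qm "rev $i"
done

# HEAD is bad, the root commit is good; let bisect do the searching.
git bisect start HEAD "$(git rev-list --max-parents=0 HEAD)" >/dev/null
git bisect run sh -c 'test ! -f bug' >/dev/null 2>&1 || true
bad_subject=$(git log -1 --format=%s refs/bisect/bad)
echo "first bad commit: $bad_subject"
git bisect reset >/dev/null 2>&1
```

If the regression predates an arbitrary history cutoff, the search dead-ends at the cutoff and the two-tool archaeology described above begins.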

On Feb 3, 2011, at 1:48 PM, John Wiegley wrote:
Klaim - Joël Lamotte <mjklaim@gmail.com> writes:
I've read that a lot of projects switching to (any DVCS) did that to avoid having to hack the conversion tools too much. But it depends on the real needs regarding the source history.
Part of the reason we're putting this much effort into it is that we want a process which split Boost up into submodules during the migration process, while preserving as much history within each separate submodule as possible. There's just no tool out there that does that right now. So since we needed to write a tool anyway, why not solve the whole problem.
I haven't been following this closely, so ignore this if you've already discussed / decided it. I'd much prefer to leave the repo structure unchanged and migrate directly into git "as is", then restructure the repo into submodules after we've made the transition to git. It will be much easier to restructure the repo with everything already in git. There are two upsides: we lose no commit history, and it only perturbs one aspect at a time (first give people a chance to use the same repo layout with the new tool, followed by a restructure of the repo into submodules using the new tool). I worry about perturbing too many variables at once. -- Noel

AMDG On 2/3/2011 1:00 PM, Belcourt, K. Noel wrote:
I haven't been following this closely so ignore if you've already discussed / decided this.
I'd much prefer to leave the repo structure unchanged and migrate directly into git "as is", then restructure the repo into submodules after we've made the transition to git. It will be much easier to restructure the repo with everything already in git. There are two upsides: we lose no commit history, and it only perturbs one aspect at a time (first give people a chance to use the same repo layout with the new tool, followed by a restructure of the repo into submodules using the new tool). I worry about perturbing too many variables at once.
The disadvantage is that both are liable to cause massive disruption. I'd rather get it all over with at once, so I can get on with the things I actually care about. In Christ, Steven Watanabe

On Thu, Feb 3, 2011 at 10:11 PM, Steven Watanabe <watanabesj@gmail.com> wrote:
On 2/3/2011 1:00 PM, Belcourt, K. Noel wrote:
I'd much prefer to leave the repo structure unchanged and migrate directly into git "as is". Restructure the repo into submodules after we've made the transition to git.
The disadvantage is that both are liable to cause massive disruption. I'd rather get it all over with at once, so I can get on with the things I actually care about.
OTOH it would be simple to verify that the complete HEAD is identical after migration if the same structure was kept in the first step. /$

At Thu, 3 Feb 2011 14:00:50 -0700, Belcourt, K. Noel wrote:
On Feb 3, 2011, at 1:48 PM, John Wiegley wrote:
Klaim - Joël Lamotte <mjklaim@gmail.com> writes:
I've read that a lot of projects switching to (any DVCS) did that to avoid having to hack the conversion tools too much. But it depends on the real needs regarding the source history.
Part of the reason we're putting this much effort into it is that we want a process that splits Boost up into submodules during the migration, while preserving as much history within each separate submodule as possible. There's just no tool out there that does that right now. So since we needed to write a tool anyway, why not solve the whole problem?
I haven't been following this closely so ignore if you've already discussed / decided this.
I'd much prefer to leave the repo structure unchanged and migrate directly into git "as is". Restructure the repo into submodules after we've made the transition to git. It will be much easier to restructure the repo with everything already in git.
That's essentially what John is doing. There's no tool that faithfully does an "as is" translation of a history as complex as Boost's.
There are two upsides: we lose no commit history, and it only perturbs one aspect at a time (first give people a chance to use the same repo layout with the new tool, followed by a restructure of the repo into submodules using the new tool). I worry about perturbing too many variables at once.
I worry about having too many separate perturbations. -- Dave Abrahams BoostPro Computing http://www.boostpro.com

"Belcourt, K. Noel" <kbelco@sandia.gov> writes:
I'd much prefer to leave the repo structure unchanged and migrate directly into git "as is", then restructure the repo into submodules after we've made the transition to git. It will be much easier to restructure the repo with everything already in git. There are two upsides: we lose no commit history, and it only perturbs one aspect at a time (first give people a chance to use the same repo layout with the new tool, followed by a restructure of the repo into submodules using the new tool). I worry about perturbing too many variables at once.
A few of us have been discussing this at length off-list. There are arguments on both sides, so I can't say which is truly best, except that I think we may prefer to get all this disruption over with in one big step: 1. Move to Git, preserving monolithic history in a "boost-history" repo. 2. Separate the projects into submodules governed by a "boost" super-project. 3. Switch to CMake as the build process for these submodules. We could certainly stagger these changes -- and there are advantages to doing so -- but it means damaging the Boost development process in a big way on three separate occasions. That would really hurt progress, I think, more than getting it all over with at once. Also, you will not lose any commit history. Even when we move to submodules, there will always be "boost-history" to represent exactly the state of Subversion on the date we made the change. We should probably even keep a read-only Subversion repo around for a while, just in case. John

John Wiegley wrote:
"Belcourt, K. Noel" <kbelco@sandia.gov> writes:
I'd much prefer to leave the repo structure unchanged and migrate directly into git "as is", then restructure the repo into submodules after we've made the transition to git. It will be much easier to restructure the repo with everything already in git. There are two upsides: we lose no commit history, and it only perturbs one aspect at a time (first give people a chance to use the same repo layout with the new tool, followed by a restructure of the repo into submodules using the new tool). I worry about perturbing too many variables at once.
A few of us have been discussing this at length off-list. There are arguments on both sides, so I can't say which is truly best, except that I think we may prefer to get all this disruption over with in one big step:
1. Move to Git, preserving monolithic history in a "boost-history" repo. 2. Separate the projects into submodules governed by a "boost" super-project. 3. Switch to CMake as the build process for these submodules.
Oh fun. So, "a few of us", who are not really identified and whose level of participation in Boost development is therefore unknown, are apparently trying to force various changes as a result of off-list discussion? The history of this discussion seems to go like this: - Some folks proposed a switch to git, found some opposition, and no consensus was reached. - Beman suggests playing with git, using one library. That seems OK. - Suddenly, folks are starting to discuss various details as if the switch to git were already decided. - Then, folks start to discuss modularization as if it's already planned, despite there being different opinions on whether it's needed, and how exactly. - Then, obviously my favourite, it's proposed to switch to CMake in the same big bang, despite the fact that no discussion about that ever happened. It looks like either: - there's some hidden play going on, or - somebody is trying to just sneak his changes in without discussion, either by just doing them, or by talking about them as if they were certain until everybody starts to believe that. What's going on? - Volodya -- Vladimir Prus Mentor Graphics +7 (812) 677-68-40

Vladimir Prus <vladimir@codesourcery.com> writes:
Oh fun. So, "a few of us", who are not really identified and whose level of participation in Boost development is therefore unknown, are apparently trying to force various changes as a result of off-list discussion?
Hi Vladimir, I'm not a Boost developer, just a friendly assistant to some who are. I think you've made a valid point that it's not my place to direct this discussion, so I'll leave it up to the Boost moderators to continue. If it's decided we don't switch to Git, at least I've learned a few things during the process. Yours, John

Oh fun. So, "a few of us", who are not really identified and whose level of participation in Boost development is therefore unknown, are apparently trying to force various changes as a result of off-list discussion?
The history of this discussion seems to go like this: - Some folks proposed a switch to git, found some opposition, and no consensus was reached. - Beman suggests playing with git, using one library. That seems OK. - Suddenly, folks are starting to discuss various details as if the switch to git were already decided. - Then, folks start to discuss modularization as if it's already planned, despite there being different opinions on whether it's needed, and how exactly. - Then, obviously my favourite, it's proposed to switch to CMake in the same big bang, despite the fact that no discussion about that ever happened.
It looks like either: - there's some hidden play going on, or - somebody is trying to just sneak his changes in without discussion, either by just doing them, or by talking about them as if they were certain until everybody starts to believe that.
What's going on?
I second that question. Volodya, let me say that I'm deeply troubled by this and that I share your concerns. Regards Hartmut --------------- http://boost-spirit.com

Hi, On Feb 3, 2011, at 8:08 PM, Vladimir Prus wrote:
- Beman suggests playing with git, using one library. That seems OK.
I'd like to propose that we follow the Clang approach. Instead of playing with one Boost library converted to git, let's convert the entire SVN repo in its current structure to git, and publicly maintain it by adding an SVN post-commit hook that runs 'git svn fetch' after each SVN commit to bring the git repo up to date (or update the git mirror every few hours via cron). I'm happy to offer resources to establish and maintain a git mirror if that would help. -- Noel
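For reference, a Clang-style mirror like the one Noel describes is typically driven by a small post-commit hook on the SVN server. A minimal sketch, assuming git-svn is installed, the mirror clone lives at a hypothetical /srv/git/boost-mirror, and there is a remote named 'public' (all of these names are assumptions, not existing Boost infrastructure):

```shell
#!/bin/sh
# Hypothetical SVN post-commit hook (hooks/post-commit).  Subversion passes
# the repository path and revision as $1 and $2; this sketch ignores them
# and simply pulls whatever is new into the git mirror.
MIRROR=/srv/git/boost-mirror   # assumed location of a git-svn clone

cd "$MIRROR" || exit 1
git svn fetch --quiet          # bring the git mirror up to date with SVN
git push --quiet public        # publish to the public mirror (assumed remote)
```

The cron alternative Noel mentions is just these same two git commands on a timer instead of in a hook.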

On Thu, Feb 3, 2011 at 7:30 PM, Belcourt, K. Noel <kbelco@sandia.gov> wrote:
Hi,
On Feb 3, 2011, at 8:08 PM, Vladimir Prus wrote:
- Beman suggests playing with git, using one library. That seems OK.
I'd like to propose that we follow the Clang approach. Instead of playing with one Boost library converted to git, let's convert the entire SVN repo in its current structure to git, and publicly maintain it by adding an SVN post-commit hook that runs 'git svn fetch' after each SVN commit to bring the git repo up to date (or update the git mirror every few hours via cron).
I'm happy to offer resources to establish and maintain a git mirror if that would help.
-- Noel
This may be a dumb question, but if a group of developers wants to collaborate using git, do they really care if in the end they push to a git or a svn repository? Emil Dotchevski Reverge Studios, Inc. http://www.revergestudios.com/reblog/index.php?n=ReCode

On Feb 3, 2011, at 8:47 PM, Emil Dotchevski wrote:
On Thu, Feb 3, 2011 at 7:30 PM, Belcourt, K. Noel <kbelco@sandia.gov> wrote:
On Feb 3, 2011, at 8:08 PM, Vladimir Prus wrote:
- Beman suggests playing with git, using one library. That seems OK.
I'd like to propose that we follow the Clang approach. Instead of playing with one Boost library converted to git, let's convert the entire SVN repo in its current structure to git, and publicly maintain it by adding an SVN post-commit hook that runs 'git svn fetch' after each SVN commit to bring the git repo up to date (or update the git mirror every few hours via cron).
I'm happy to offer resources to establish and maintain a git mirror if that would help.
This may be a dumb question, but if a group of developers wants to collaborate using git, do they really care if in the end they push to a git or a svn repository?
Probably not. I made the proposal in the spirit of experimenting and public access to a live git mirror, which might help people better understand the pros and cons of a migration to git. -- Noel

On Fri, 4 Feb 2011, Belcourt, K. Noel wrote:
On Feb 3, 2011, at 8:47 PM, Emil Dotchevski wrote:
This may be a dumb question, but if a group of developers wants to collaborate using git, do they really care if in the end they push to a git or a svn repository?
Probably not. I made the proposal in the spirit of experimenting and public access to a live git mirror, which might help people better understand the pros and cons of a migration to git.
Yes it does matter. Very much! There is a common hope that this is not true, but SVN bindings hamstring the usage of git. Getting data from one version-control system into another is usually not that hard. Getting an accurate 1:1 mapping is well nigh impossible.

I have burned myself badly by trying shared development on a git mirror parallel to SVN. It only works if git can see only a single copy of git-svn doing the sync. It is manageable with small numbers of git-svn only because the pain is reduced when one mirror has to be rebuilt and its private branches grafted (even my desktop and laptop have desynced several times over a few years). It would be a complete disaster for a team of 5 or more.

Each user doing a commit from git necessarily has their own git-svn configured, thus they cannot share their git repos naturally. There is no way to have a central server for git-svn due to differences in the commit semantics: git pushes a new state and fails if the previous state changed at all; SVN tries to merge and fails if there is a conflict (even though other files may have changed). There are larger differences in merge and branching semantics. I suppose you could have people attempt the moderation of all commits (a real pain).

Once you have two or three people using git, all sorts of things can fail when they share git repos with multiple git-svn links. Users may have different URLs for the SVN repo (addressable via --rewrite-root), and there are several other parameters that must be manually set and agreed upon at all mirrors (e.g. --trunk=, --branches=, --tags=, --prefix=). On top of that, different versions of git-svn may do things differently, causing the git views to artificially diverge.

These issues and more are alluded to in the "DESIGN PHILOSOPHY" and "CAVEATS" sections of the git-svn manpage. Some of the information is outdated, but the overall points still stand: git usage should be linearized and only shared through SVN or via git-format-patch.
http://www.kernel.org/pub/software/scm/git/docs/git-svn.html In summary, a central git mirror can help individual developers seed their own mirrors; but they cannot use normal git workflows as long as SVN holds the master repository. All attempts to do otherwise will be punished by the laws of nature. Later, Daniel P.S. Sorry for the slight incoherence, I don't have time to edit things properly.
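To make Daniel's point concrete: every git-svn mirror participating in such a scheme must be initialized with identical layout parameters, or the resulting git views diverge. A sketch of the kind of invocation that would have to be agreed upon byte for byte (the URL and flag values are illustrative assumptions, and this needs network access, so it is not runnable as-is):

```shell
# Each developer must use exactly the same flags, or their
# git-svn-generated history will not match anyone else's.
git svn init https://svn.boost.org/svn/boost \
    --trunk=trunk --branches=branches --tags=tags \
    --prefix=svn/ --rewrite-root=https://svn.boost.org/svn/boost
git svn fetch   # rebuild the SVN history locally; can take hours
```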

On 4 February 2011 03:30, Belcourt, K. Noel <kbelco@sandia.gov> wrote:
I'd like to propose that we follow the Clang approach. Instead of playing with one Boost library converted to git, let's convert the entire SVN repo in its current structure to git, and publicly maintain it by adding an SVN post-commit hook that runs 'git svn fetch' after each SVN commit to bring the git repo up to date (or update the git mirror every few hours via cron).
I'd find that very useful. Daniel

On 2/4/2011 10:08 AM, Hartmut Kaiser wrote:
Oh fun. So, "a few of us", who are not really identified and whose level of participation in Boost development is therefore unknown, are apparently trying to force various changes as a result of off-list discussion?
The history of this discussion seems to go like this: - Some folks proposed a switch to git, found some opposition, and no consensus was reached. - Beman suggests playing with git, using one library. That seems OK. - Suddenly, folks are starting to discuss various details as if the switch to git were already decided. - Then, folks start to discuss modularization as if it's already planned, despite there being different opinions on whether it's needed, and how exactly. - Then, obviously my favourite, it's proposed to switch to CMake in the same big bang, despite the fact that no discussion about that ever happened.
It looks like either: - there's some hidden play going on, or - somebody is trying to just sneak his changes in without discussion, either by just doing them, or by talking about them as if they were certain until everybody starts to believe that.
What's going on?
I second that question. Volodya, let me say that I'm deeply troubled by this and that I share your concerns.
Totally valid questions and concerns. I think John got a little ahead of himself with his email. I've been on the off-list exchanges which took place between some folks at BoostPro and some folks at Kitware. The reason it took place off-list is that it involves Kitware business. I would like to dispel all doubt, confusion and conspiracy theories by posting relevant parts of the discussion here, but obviously I can't do that without first getting permission from the guys at Kitware. I think it's safe to at least sum it up like this: (a) The modularized, git-ified, cmake-ified boost distribution is being pursued by Kitware for valid business reasons, (b) That work will go ahead whether Boost ultimately adopts the result or not, and (c) Dave is well aware of the fact that Boost may choose not to adopt the result. He has reminded Kitware of that fact on several occasions. Nobody is going to force anything on Boost. The folks doing this work genuinely believe in the value of it and hope that the Boost community will agree once it sees the results, but that's up to everybody to decide. Apologies if anybody's feathers got ruffled. -- Eric Niebler BoostPro Computing http://www.boostpro.com

My two cents: first, I'm using git (at work) and I would be happy if I could use it for Boost too. git is a little bit difficult to use at the beginning (once you are more familiar with it, the possibilities you gain outweigh this), so IMHO it makes no sense to switch from svn to git and force people to use git. I would suggest (if not already done) that the main repo remain managed by svn (with regression tests etc.) and that for git we set up an alternative Boost repo (for instance at http://gitorious.org). The svn history should be importable into git, and the changes pushed to the git repo could be forwarded to svn on a daily (?) basis (and vice versa). The developers can use svn or git, whichever they prefer. Oliver

On Thu, Feb 3, 2011 at 5:01 PM, John Wiegley <johnw@boostpro.com> wrote:
"Belcourt, K. Noel" <kbelco@sandia.gov> writes:
I'd much prefer to leave the repo structure unchanged and migrate directly into git "as is". Restructure the repo into submodules after we've made the transition to git. It will be much easier to restructure the repo with everything already in git. There's two upsides, we lose no commit history and it only perturbs one aspect at a time (first give people chance to use same repo layout using new tool, followed by a restructure of the repo into submodules using the new tool). I worry about perturbing too many variables at once.
A few of us have been discussing this at length off-list. There are arguments on both sides, so I can't say which is truly best except I think we may prefer to get all this disruption over in one big step:
1. Move to Git, preserving monolithic history in a "boost-history" repo. 2. Separate the projects into submodules governed by a "boost" super-project.
I'm with you up to this point.
3. Switch to CMake as the build process for these submodules.
That's a whole different topic. There is something deeply flawed about the modularization design if it requires a switch to CMake. Indeed, one of the reasons I want to try the modularization design on one of my libraries is to verify it is compatible with both our current build system and build systems that have nothing to do with either Boost.Build/bjam or CMake. --Beman

At Sun, 6 Feb 2011 16:36:54 -0500, Beman Dawes wrote:
There is something deeply flawed about the modularization design if it requires a switch to CMake.
Modularization doesn't *require* a switch to CMake at all. There's no reason the modularized boost couldn't be used with Boost.Build, provided someone was willing to make the (relatively minor) Jamfile changes that would be necessary. I think there's one other element, the generation of forwarding headers into a common boost/ directory (to keep compiler command-line length under control), that would require a fairly trivial amount of additional Boost.Build programming. But nobody has volunteered to do the work so far.
Indeed, one of the reasons I want to try the modularization design on one of my libraries is to verify it is compatible with both our current build system and build systems that have nothing to do with either Boost.Build/bjam or CMake.
There's nothing fancy going on with the directory structure of a modularized Boost, and absolutely no reason it should cause a problem for any reasonable build system. -- Dave Abrahams BoostPro Computing http://www.boostpro.com
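The "forwarding headers" Dave mentions are just one-line redirects generated into a shared boost/ directory so that existing #include <boost/...> lines keep working against a modularized layout with a short include path. A minimal sketch of the idea (the libs/<name>/include/boost layout and the filesystem example are illustrative assumptions, not the actual Ryppl tooling):

```shell
set -e
# Fake a modularized library: each module keeps its own headers under
# libs/<name>/include/boost/.
mkdir -p libs/filesystem/include/boost
echo '// the real header' > libs/filesystem/include/boost/filesystem.hpp

# Generate one-line forwarding headers into a shared boost/ directory,
# keeping the compiler's include path (and command line) short.
mkdir -p boost
for hdr in libs/*/include/boost/*.hpp; do
  printf '#include "../%s"\n' "$hdr" > "boost/$(basename "$hdr")"
done

cat boost/filesystem.hpp
# prints: #include "../libs/filesystem/include/boost/filesystem.hpp"
```

A real generator would also have to handle subdirectories and name clashes; this only shows why the step is "fairly trivial" in principle.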

There is something deeply flawed about the modularization design if it requires a switch to CMake.
Modularization doesn't *require* a switch to CMake at all. There's no reason the modularized boost couldn't be used with Boost.Build, provided someone was willing to make the (relatively minor) Jamfile changes that would be necessary.
If we organize like the sandbox, there are no real Boost.Build changes needed at all, just a requirement for the user to set a variable pointing to a complete (integrated) boost tree somewhere (probably the release branch, which I'm assuming would be integrated). HTH, John.

At Mon, 7 Feb 2011 09:26:22 -0000, John Maddock wrote:
There is something deeply flawed about the modularization design if it requires a switch to CMake.
Modularization doesn't *require* a switch to CMake at all. There's no reason the modularized boost couldn't be used with Boost.Build, provided someone was willing to make the (relatively minor) Jamfile changes that would be necessary.
If we organize like the sandbox,
Last I looked, there was no real organization there. Or, there were at least two organizations: http://svn.boost.org/svn/boost/sandbox/boost/library-name http://svn.boost.org/svn/boost/sandbox/libs/library-name which is unsustainable, and http://svn.boost.org/svn/boost/sandbox/library-name/boost/ http://svn.boost.org/svn/boost/sandbox/library-name/libs/ The planned/proposed organization is roughly like the latter. If you want to look at the organization in detail, see https://github.com/boost-lib/boost -- Dave Abrahams BoostPro Computing http://www.boostpro.com

If we organize like the sandbox,
Last I looked, there was no real organization there. Or, there were at least two organizations:
http://svn.boost.org/svn/boost/sandbox/boost/library-name http://svn.boost.org/svn/boost/sandbox/libs/library-name
which is unsustainable, and
And deprecated.
http://svn.boost.org/svn/boost/sandbox/library-name/boost/ http://svn.boost.org/svn/boost/sandbox/library-name/libs/
The planned/proposed organization is roughly like the latter. If you want to look at the organization in detail, see https://github.com/boost-lib/boost
Nod, either could be supported by Boost.Build trivially, provided there's a complete (integrated) release tree sitting around somewhere - otherwise, as you mention, the compiler command paths get stupidly long... John.

At Mon, 7 Feb 2011 12:27:21 -0000, John Maddock wrote:
If we organize like the sandbox,
Last I looked, there was no real organization there. Or, there were at least two organizations:
http://svn.boost.org/svn/boost/sandbox/boost/library-name http://svn.boost.org/svn/boost/sandbox/libs/library-name
which is unsustainable, and
And deprecated.
http://svn.boost.org/svn/boost/sandbox/library-name/boost/ http://svn.boost.org/svn/boost/sandbox/library-name/libs/
The planned/proposed organization is roughly like the latter. If you want to look at the organization in detail, see https://github.com/boost-lib/boost
Nod, either could be supported by Boost.Build trivially, provided there's a complete (integrated) release tree sitting around somewhere - otherwise, as you mention, the compiler command paths get stupidly long...
I don't know what you mean by "complete (integrated) release tree", but we're not planning to do that. We're only planning, as part of the build process, to generate forwarding headers in an integrated include tree. -- Dave Abrahams BoostPro Computing http://www.boostpro.com

On Mon, Feb 7, 2011 at 7:39 AM, Dave Abrahams <dave@boostpro.com> wrote:
At Mon, 7 Feb 2011 12:27:21 -0000, John Maddock wrote:
http://svn.boost.org/svn/boost/sandbox/library-name/boost/ http://svn.boost.org/svn/boost/sandbox/library-name/libs/
The planned/proposed organization is roughly like the latter. If you want to look at the organization in detail, see https://github.com/boost-lib/boost
Nod, either could be supported by Boost.Build trivially provided there's a complete (integrated) release tree sitting around somewhere - otherwise as you mention the compiler command paths get stupidly long....
I don't know what you mean by "complete (integrated) release tree", but we're not planning to do that. We're only planning, as part of the build process, to generate forwarding headers in an integrated include tree.
By "we", do you mean ryppl? I've gone through the process John Wiegley kindly sent me: Grab the supermodule: git clone git://github.com/boost-lib/boost.git Then 'cd' into the "boost" directory it created, and run: git submodule update --init Then continue as described here: http://ryppl.github.com/gettingstarted.html That produced a completely new tree with the forwarding headers, not under version control. It seemed oriented to what a user might want. What about a library developer? What does the tree structure they work with look like? How does it integrate with their development repo? I guess the non-version-controlled tree produced by the above could be used as a "complete (integrated) release tree", but I'd like to know the specifics, and give them a try. Thanks, --Beman

On 2/7/2011 7:28 AM, Beman Dawes wrote:
On Mon, Feb 7, 2011 at 7:39 AM, Dave Abrahams<dave@boostpro.com> wrote:
At Mon, 7 Feb 2011 12:27:21 -0000, John Maddock wrote:
http://svn.boost.org/svn/boost/sandbox/library-name/boost/ http://svn.boost.org/svn/boost/sandbox/library-name/libs/
The planned/proposed organization is roughly like the latter. If you want to look at the organization in detail, see https://github.com/boost-lib/boost
Nod, either could be supported by Boost.Build trivially provided there's a complete (integrated) release tree sitting around somewhere - otherwise as you mention the compiler command paths get stupidly long....
I don't know what you mean by "complete (integrated) release tree", but we're not planning to do that. We're only planning, as part of the build process, to generate forwarding headers in an integrated include tree.
By "we", do you mean ryppl?
I've gone through the process John Wiegley kindly sent me:
Grab the supermodule: git clone git://github.com/boost-lib/boost.git
Then 'cd' into the "boost" directory it created, and run: git submodule update --init
Then continued as described here: http://ryppl.github.com/gettingstarted.html
That produced a completely new tree with the forwarding headers, not under version control. It seemed oriented to what a user might want.
What about a library developer? What does the tree structure they work with look like? How does it integrate with their development repo? I guess the non-version-controlled tree produced by the above could be used as a "complete (integrated) release tree", but I'd like to know the specifics, and give them a try.
I would also like to know: 1. How does that non-versioned complete integrated tree work as regards to updates/pulls? 2. What does it mean for testing? Specifically, complete incremental testing? 3. Or is there no way to get a complete with source integrated tree? I'm worried, as not having an easy way to get that would make testing rather difficult. Note, I don't consider "use ryppl", or "use cmake", as an acceptable answer ;-) as being locked into any particular tool is something I'm vehemently against. Worried-ly, yours. -- -- Grafik - Don't Assume Anything -- Redshift Software, Inc. - http://redshift-software.com -- rrivera/acm.org (msn) - grafik/redshift-software.com -- 102708583/icq - grafikrobot/aim,yahoo,skype,efnet,gmail

At Mon, 07 Feb 2011 09:53:52 -0600, Rene Rivera wrote:
What about a library developer? What does the tree structure they work with look like? How does it integrate with their development repo? I guess the non-version-controlled tree produced by the above could be used as a "complete (integrated) release tree", but I'd like to know the specifics, and give them a try.
I would also like to know:
1. How does that non-versioned complete integrated tree work as regards to updates/pulls?
Today, you call GenHeaders explicitly, but if you check out Marcus' work you don't have to.
2. What does it mean for testing? Specifically, complete incremental testing?
Nothing? What, specifically, are you worried about?
3. Or is there no way to get a complete with source integrated tree?
Please define "complete with source integrated tree" so I can answer that question.
I'm worried as not having an easy way to get that would make testing rather difficult.
Note, I don't consider "use ryppl", or "use cmake", as an acceptable answer ;-) As being locked into any particular tool is something I'm vehemently against.
I don't think I'd give you an answer like that to any of these questions. But that said, being "locked in" somehow is unavoidable; we're locked into SVN now aren't we? We're certainly locked into C++ and Python and Boost.Build and DocBook and BoostBook and QuickBook and... the list goes on. Or do you mean something distinct by "locked in?" -- Dave Abrahams BoostPro Computing http://www.boostpro.com

On 2/7/2011 7:57 PM, Dave Abrahams wrote:
At Mon, 07 Feb 2011 09:53:52 -0600, Rene Rivera wrote:
What about a library developer? What does the tree structure they work with look like? How does it integrate with their development repo? I guess the non-version-controlled tree produced by the above could be used as a "complete (integrated) release tree", but I'd like to know the specifics, and give them a try.
I would also like to know:
1. How does that non-versioned complete integrated tree work as regards to updates/pulls?
Today, you call GenHeaders explicitly, but if you check out Marcus' work you don't have to.
I tried finding where & how Marcus did what you say, by going to the github link you give in the other response... But I don't see what you are referring to there.
2. What does it mean for testing? Specifically, complete incremental testing?
Nothing? What, specifically, are you worried about?
OK... How would we run the existing testing in incremental mode with a "forwarding headers tree plus separate sources"? What would testers need to rely on? CMake? Python? Ryppl? CTest? SSH? Git? FTP? HTTP? HTTPS? Git protocol?
3. Or is there no way to get a complete with source integrated tree?
Please define "complete with source integrated tree" so I can answer that question.
I mean a tree in the form we have it now, or equivalent to what we intend to *release* as the *single* Boost C++ Libraries package. I ask from a testing manager POV. I need to know what the changes might be for testers and me managing the testing infrastructure. And testers need to know what will be required of them. I'm not that interested in knowing what the experience is from the library developer POV, because I know Beman and others will investigate that aspect more fully than I could.
I'm worried as not having an easy way to get that would make testing rather difficult.
Note, I don't consider "use ryppl", or "use cmake", as an acceptable answer ;-) Being locked into any particular tool is something I'm vehemently against.
I don't think I'd give you an answer like that to any of these questions. But that said, being "locked in" somehow is unavoidable; we're locked into SVN now aren't we? We're certainly locked into C++ and Python and Boost.Build and DocBook and BoostBook and QuickBook and... the list goes on. Or do you mean something distinct by "locked in?"
Note, I'm ignoring doc tools with this..

I guess I do mean something distinct by "locked in". The current set of tools, a good portion of them just scripts, do not in my view lock us in because they are relatively straightforward to move to something else that provides the basic equivalent functionality. This is because they map essentially to operations one could do by hand. But I do concede that is a rather thin definitional line ;-)

My fear is that we end up with a system that is conceptually hard to understand how to put together and hence hard to make work. For example, testing at the moment doesn't depend on SVN. It's possible to download the tree as a tar archive through the almost universally available HTTP protocol. This is because we have testers, or at least have had testers, that can't do anything else. So if there's a requirement to have Git protocol access, that's likely a killer for testing.

This is just one example. I'd have to see what the entire testing pipeline looks like to figure out where the "locked in" points are. I.e. I need to know what's actually going on, rather than just a "run this and that's it" answer.

Hoping-that-I-didn't-confuse-things-more-ly ;-)

-- 
-- Grafik - Don't Assume Anything
-- Redshift Software, Inc. - http://redshift-software.com
-- rrivera/acm.org (msn) - grafik/redshift-software.com
-- 102708583/icq - grafikrobot/aim,yahoo,skype,efnet,gmail

At Mon, 07 Feb 2011 20:30:54 -0600, Rene Rivera wrote:
On 2/7/2011 7:57 PM, Dave Abrahams wrote:
At Mon, 07 Feb 2011 09:53:52 -0600, Rene Rivera wrote:
What about a library developer? What does the tree structure they work with look like? How does it integrate with their development repo? I guess the non-version controlled tree produced by the above could be used as a "complete (integrated) release tree", but I'd like to know the specifics, and give them a try.
I would also like to know:
1. How does that non-versioned complete integrated tree work as regards to updates/pulls?
Today, you call GenHeaders explicitly, but if you check out Marcus' work you don't have to.
I tried finding where & how Marcus did what you say, by going to the github link you give in the other response.. But I don't see what you are referring to there.
I've asked him to follow up here with details, but I believe it's in the commits from 2011-01-19 at https://github.com/boost-lib/cmake/commits/master
2. What does it mean for testing? Specifically, complete incremental testing?
Nothing? What, specifically, are you worried about?
OK.. How would we run the existing testing
Hmm, "existing testing" is a little vague, but...
in incremental mode with a "forwarding headers tree plus separate sources"? What would testers need to rely on? Cmake? Python? Ryppl? Ctest? SSH? Git? FTP? HTTP? HTTPS? Git protocol?
Here's where I think we'll end up:

- testers need to set up and register a buildbot buildslave. I believe we can make that relatively trivial for people
- that would imply a reliance on, at a minimum:
  - Python
  - A net connection

Other things they will probably need to have, or the testing process will need to get for them:

- CMake
- C and C++ compilers
- BuildBot
- Twisted
- Git or LibGit2 or Dulwich
- Git or HTTP or HTTPS protocol capability
3. Or is there no way to get a complete with source integrated tree?
Please define "complete with source integrated tree" so I can answer that question.
I mean a tree in the form we have it now, or equivalent to what we intend to *release* as the *single* Boost C++ Libraries package.
Well, we need to decide the details, but I can imagine two possibilities for a Boost distro:

1. It is equivalent to the result of doing a "git clone" of the Boost superproject plus "git submodule update --init" to get all the subprojects (possibly with the .git directories deleted)
2. It's like #1 plus a generated forwarding headers directory.

Either of these is pretty trivial to generate.
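A minimal sketch of option #1 in Python (an editorial illustration: `make_distro`, `strip_git_dirs`, and the superproject URL argument are hypothetical names, not existing Boost tooling):

```python
import os
import shutil
import subprocess

def make_distro(superproject_url, dest):
    """Option #1: clone the superproject, pull in all submodules,
    then strip the VCS metadata so the result is a plain source tree."""
    subprocess.check_call(["git", "clone", superproject_url, dest])
    subprocess.check_call(["git", "submodule", "update", "--init"], cwd=dest)
    strip_git_dirs(dest)

def strip_git_dirs(root):
    """Delete every .git directory (or .git gitlink file) under root."""
    for dirpath, dirnames, filenames in os.walk(root, topdown=True):
        if ".git" in dirnames:
            shutil.rmtree(os.path.join(dirpath, ".git"))
            dirnames.remove(".git")  # don't descend into what we removed
        if ".git" in filenames:  # newer git records submodules as a file
            os.remove(os.path.join(dirpath, ".git"))
```

Option #2 would simply add a forwarding-header generation step after `strip_git_dirs`.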
I ask from a testing manager POV. I need to know what the changes might be for testers and me managing the testing infrastructure. And testers need to know what will be required of them.
My plan is to require as little as possible. I've had some good experiences with writing BuildBot recipes that get all the pieces needed into the environment before running the tests, and I think that can be done straightforwardly.
I'm worried that not having an easy way to get that would make testing rather difficult.
Note, I don't consider "use ryppl", or "use cmake", as an acceptable answer ;-) Being locked into any particular tool is something I'm vehemently against.
I don't think I'd give you an answer like that to any of these questions. But that said, being "locked in" somehow is unavoidable; we're locked into SVN now aren't we? We're certainly locked into C++ and Python and Boost.Build and DocBook and BoostBook and QuickBook and... the list goes on. Or do you mean something distinct by "locked in?"
Note, I'm ignoring doc tools with this..
I guess I do mean something distinct by "locked in". The current set of tools, a good portion of them just scripts, do not in my view lock us in because they are relatively straightforward to move to something else that provides the basic equivalent functionality. This is because they map essentially to operations one could do by hand. But I do concede that is a rather thin definitional line ;-)
Yes. And since you can do anything at all by hand... ;-)
My fear is that we end up with a system that is conceptually hard to understand how to put together and hence hard to make work. For example, testing at the moment doesn't depend on SVN. It's possible to download the tree as a tar archive through the almost universally available HTTP protocol.
Yes. Well, *incremental* testing today implies either having SVN or some kind of rsync-like mechanism to avoid updating unchanged files. For that, I think we should simply require Dulwich or LibGit2, or, failing that, Git. Clean-slate testing should only require HTTP and zip to get the code, since GitHub can supply zip- (or tar-)balls for any revision (http://github.com/ryppl/ryppl/zipball/master and http://github.com/ryppl/ryppl/tarball/master for example).
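The clean-slate path could be sketched like this, assuming only HTTP and zip as stated. The zipball URL pattern follows the examples above; `fetch_snapshot` and `extract_stripped` (which drops the `owner-repo-sha/` prefix GitHub puts at the top of its archives) are hypothetical helpers:

```python
import io
import os
import zipfile
try:
    from urllib.request import urlopen   # Python 3
except ImportError:
    from urllib2 import urlopen          # Python 2

def fetch_snapshot(owner, repo, ref, dest):
    """Clean-slate checkout over plain HTTP: grab GitHub's zipball for a ref."""
    url = "http://github.com/%s/%s/zipball/%s" % (owner, repo, ref)
    extract_stripped(io.BytesIO(urlopen(url).read()), dest)

def extract_stripped(fileobj, dest):
    """Extract a zipball into dest, stripping the top-level prefix directory."""
    zf = zipfile.ZipFile(fileobj)
    for name in zf.namelist():
        rel = name.split("/", 1)[1] if "/" in name else name
        if not rel:
            continue  # the prefix directory entry itself
        target = os.path.join(dest, rel)
        if name.endswith("/"):
            if not os.path.isdir(target):
                os.makedirs(target)
        else:
            d = os.path.dirname(target)
            if d and not os.path.isdir(d):
                os.makedirs(d)
            with open(target, "wb") as out:
                out.write(zf.read(name))
```

This keeps the tester-side requirements to Python plus outbound HTTP, matching the "HTTP and zip" constraint above.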
This is because we have testers, or at least have had testers, that can't do anything else. So if there's a requirement to have Git protocol access, that's likely a killer for testing.
Git already can use HTTP/HTTPS, so there's no reason Git protocol should be required.
This is just one example. I'd have to see what the entire testing pipeline looks like to figure out where the "locked in" points are. I.e. I need to know what's actually going on, rather than just a "run this and that's it" answer.
Provided I can automate everything else, I think all you need to know is what ports have to be open, right? -- Dave Abrahams BoostPro Computing http://www.boostpro.com

On 2/8/2011 1:17 PM, Dave Abrahams wrote:
At Mon, 07 Feb 2011 20:30:54 -0600, Rene Rivera wrote:
On 2/7/2011 7:57 PM, Dave Abrahams wrote:
At Mon, 07 Feb 2011 09:53:52 -0600, Rene Rivera wrote:
What about a library developer? What does the tree structure they work with look like? How does it integrate with their development repo? I guess the non-version controlled tree produced by the above could be used as a "complete (integrated) release tree", but I'd like to know the specifics, and give them a try.
I would also like to know:
1. How does that non-versioned complete integrated tree work as regards to updates/pulls?
Today, you call GenHeaders explicitly, but if you check out Marcus' work you don't have to.
I tried finding where & how Marcus did what you say, by going to the github link you give in the other response.. But I don't see what you are referring to there.
I've asked him to follow up here with details, but I believe it's in the commits from 2011-01-19 at https://github.com/boost-lib/cmake/commits/master
OK, I see that now.. So the answer is that it still generates forwarding headers, but now it does it "automatically" using CMake.
2. What does it mean for testing? Specifically, complete incremental testing?
Nothing? What, specifically, are you worried about?
OK.. How would we run the existing testing
Hmm, "existing testing" is a little vague, but...
I didn't think it was.. Since I'm talking about the context of this thread which is about using Git. And not about using Cmake, or Ryppl, or anything else. Hence the "existing testing" is using the current scripts but with Git sources.
in incremental mode with a "forwarding headers tree plus separate sources"? What would testers need to rely on? Cmake? Python? Ryppl? Ctest? SSH? Git? FTP? HTTP? HTTPS? Git protocol?
Here's where I think we'll end up:
- testers need to set up and register a buildbot buildslave. I believe we can make that relatively trivial for people
Even though I like buildbot.. I'm less than thrilled about adding setup steps for testers. As one of the testing goals is to make it simple for someone to provide testing resources, and hopefully fine-grained resources. Which means...
- that would imply a reliance on, at a minimum:
- Python
- A net connection
Other things they will probably need to have, or the testing process will need to get for them:
- CMake
- C and C++ compilers
- BuildBot
- Twisted
- Git or LibGit2 or Dulwich
- Git or HTTP or HTTPS protocol capability
...that any additional required programs/scripts would ideally be automatically installed to a sandbox (i.e. locally, to not need system administrator access). For example, I've had a good deal of experience dealing with installing Twisted, and it doesn't approach being nice about being installed anywhere other than in the system location. But my experience might be dated by now, since I haven't looked at it in more than a year. And as you can see from my other posts, I'm trying to figure out what the experience of installing Git is like :-) With less than pleasurable results so far :-(
3. Or is there no way to get a complete with source integrated tree?
Please define "complete with source integrated tree" so I can answer that question.
I mean a tree in the form we have it now, or equivalent to what we intend to *release* as the *single* Boost C++ Libraries package.
Well, we need to decide the details, but I can imagine two possibilities for a Boost distro:
1. It is equivalent to the result of doing a "git clone" of the Boost superproject plus "git submodule update --init" to get all the subprojects (possibly with the .git directories deleted)
2. It's like #1 plus a generated forwarding headers directory.
Either of these is pretty trivial to generate.
I see. Is there a way to do the same without Git? I.e. do the tar/zip archives available from github contain the same files as the clone+submodules?
I ask from a testing manager POV. I need to know what the changes might be for testers and me managing the testing infrastructure. And testers need to know what will be required of them.
My plan is to require as little as possible. I've had some good experiences with writing BuildBot recipes that get all the pieces needed into the environment before running the tests, and I think that can be done straightforwardly.
OK. A comment on that at the end...
My fear is that we end up with a system that is conceptually hard to understand how to put together and hence hard to make work. For example, testing at the moment doesn't depend on SVN. It's possible to download the tree as a tar archive through the almost universally available HTTP protocol.
Yes. Well, *incremental* testing today implies either having SVN or some kind of rsync-like mechanism to avoid updating unchanged files.
Currently yes. But it's possible to make it not be required. It would be straightforward to change the regression scripts to download a new tree using the tar archive option (assuming the time stamps are correctly preserved, which they are already, IIRC) and move over the "bin.v2" directory from the old tree to the fresh one. And since all the generated files and timestamps in the bin.v2 dir are appropriate, one could do an incremental test run on the new tree.
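That bin.v2 carry-over idea might look like this in the regression scripts (a hedged sketch; `carry_over_build_state` is an invented name, and the real scripts would also need the downloaded tree's timestamps preserved, as noted above):

```python
import os
import shutil

def carry_over_build_state(old_tree, new_tree, build_dir="bin.v2"):
    """Move the build-products directory from the previous tree into a
    freshly downloaded one, so the next test run is incremental: the build
    tool compares source timestamps against the preserved targets and only
    rebuilds what changed."""
    src = os.path.join(old_tree, build_dir)
    if not os.path.isdir(src):
        return None  # nothing to carry over; next run starts from scratch
    dst = os.path.join(new_tree, build_dir)
    if os.path.isdir(dst):
        shutil.rmtree(dst)  # a fresh tree shouldn't have one, but be safe
    shutil.move(src, dst)
    return dst
```

After the move, the old tree can be deleted; no SVN or rsync-like mechanism is needed, only whole-archive downloads.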
This is just one example. I'd have to see what the entire testing pipeline looks like to figure out where the "locked in" points are. I.e. I need to know what's actually going on, rather than just a "run this and that's it" answer.
Provided I can automate everything else, I think all you need to know is what ports have to be open, right?
...And the comment I promised above...

No, there's more. The assumption is that testers have the capability to open a port. It also assumes that we want them to open a port. I'd argue that we don't want them to open a port because it's yet another manual setup step they have to do. Essentially, if I want to make it possible for someone at home, with their cable/DSL connected PC, to run tests, I need to make it as simple as:

1. Have Python >= 2.4;
2. download "test.py";
3. run "python test.py".

In my dreams it would be even simpler: just download an installer, run it, and nothing else. Which brings up another item; your description so far also assumes the tester will need to install/setup a variety of programs. Which is moving in the wrong direction, IMO.

-- 
-- Grafik - Don't Assume Anything
-- Redshift Software, Inc. - http://redshift-software.com
-- rrivera/acm.org (msn) - grafik/redshift-software.com
-- 102708583/icq - grafikrobot/aim,yahoo,skype,efnet,gmail

At Tue, 08 Feb 2011 15:10:03 -0600, Rene Rivera wrote:
the "existing testing" is using the current scripts but with Git sources.
Thanks for explaining; it really wasn't obvious to me. Using the current scripts but with Git sources... obviously the scripts would need to be adjusted to fetch sources differently, and assuming we're modularized, Jamfiles would need to be adjusted to the new directory structure. Aside from that, I can't think of anything that would be different. Does that answer your question?
in incremental mode with a "forwarding headers tree plus separate sources"? What would testers need to rely on? Cmake? Python? Ryppl? Ctest? SSH? Git? FTP? HTTP? HTTPS? Git protocol?
Here's where I think we'll end up:
- testers need to set up and register a buildbot buildslave. I believe we can make that relatively trivial for people
Even though I like buildbot.. I'm less than thrilled about adding setup steps for testers.
What I mean by "relatively trivial" is "one command," or nearly so.
As one of the testing goals is to make it simple for someone to provide testing resources, and hopefully fine-grained resources. Which means...
- that would imply a reliance on, at a minimum:
- Python
- A net connection
Other things they will probably need to have, or the testing process will need to get for them:
- CMake
- C and C++ compilers
- BuildBot
- Twisted
- Git or LibGit2 or Dulwich
- Git or HTTP or HTTPS protocol capability
...that any additional required programs/scripts would ideally be automatically installed to a sandbox (i.e. locally to not need system administrator access).
Naturally. You know I've been thinking about these things. virtualenv is really good for that.
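That sandboxing step could be sketched as follows (a hypothetical helper; it uses the stdlib `venv` module, the modern descendant of virtualenv, with `--without-pip` just to keep creation cheap and offline):

```python
import os
import subprocess
import sys

def make_sandbox(root):
    """Create an isolated Python environment (no root access needed) into
    which test-harness dependencies like BuildBot and Twisted can be
    installed without touching the system site-packages."""
    subprocess.check_call([sys.executable, "-m", "venv", "--without-pip", root])
    bindir = "Scripts" if os.name == "nt" else "bin"
    # Return the sandboxed interpreter; the harness runs everything with it.
    return os.path.join(root, bindir, "python")
```

Everything the tester downloads then lands under one directory they own, which also makes cleanup trivial.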
For example; I've had a good deal of experience dealing with installing Twisted and it doesn't approach being nice about being installed anywhere other than in the system location.
https://gist.github.com/818662
But my experience might be dated by now, since I haven't looked at it in more than a year. And as you can see from my other posts, I'm trying to figure out what the experience is like of installing Git is :-) With less than pleasurable results so far :-(
Yeah, well as I said there are several alternatives to actually having Git. I'd rather not make testers install it either.
3. Or is there no way to get a complete with source integrated tree?
Please define "complete with source integrated tree" so I can answer that question.
I mean a tree in the form we have it now, or equivalent to what we intend to *release* as the *single* Boost C++ Libraries package.
Well, we need to decide the details, but I can imagine two possibilities for a Boost distro:
1. It is equivalent to the result of doing a "git clone" of the Boost superproject plus "git submodule update --init" to get all the subprojects (possibly with the .git directories deleted)
2. It's like #1 plus a generated forwarding headers directory.
Either of these is pretty trivial to generate.
I see. Is there a way to do the same without Git? I.e. do the tar/zip archives available from github contain the same files as the clone+submodules?
Yes. The tarball won't contain the submodules, but references to them that can be similarly downloaded and untar'd.
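Resolving those references could start from the superproject's `.gitmodules` file, an INI-like list of path/url pairs. A sketch (hypothetical helper; note the exact pinned revision of each submodule lives in the superproject tree as a gitlink entry, not in `.gitmodules`, so a real script would still have to look those revisions up):

```python
import re

def parse_gitmodules(text):
    """Extract (path, url) pairs from a .gitmodules file so that each
    submodule can be fetched as its own tar/zip archive over HTTP."""
    entries, path, url = [], None, None
    for line in text.splitlines():
        if re.match(r'\s*\[submodule ', line):
            if path and url:
                entries.append((path, url))  # flush the previous section
            path = url = None
            continue
        m = re.match(r'\s*path\s*=\s*(\S+)', line)
        if m:
            path = m.group(1)
        m = re.match(r'\s*url\s*=\s*(\S+)', line)
        if m:
            url = m.group(1)
    if path and url:
        entries.append((path, url))
    return entries
```

Each url can then be rewritten to the corresponding zipball/tarball address and unpacked into its path, giving the integrated tree without Git.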
My plan is to require as little as possible. I've had some good experiences with writing BuildBot recipes that get all the pieces needed into the environment before running the tests, and I think that can be done straightforwardly.
OK. A comment on that at the end...
My fear is that we end up with a system that is conceptually hard to understand how to put together and hence hard to make work. For example, testing at the moment doesn't depend on SVN. It's possible to download the tree as a tar archive through the almost universally available HTTP protocol.
Yes. Well, *incremental* testing today implies either having SVN or some kind of rsync-like mechanism to avoid updating unchanged files.
Currently yes. But it's possible to make it not be required. It would be straightforward to change the regression scripts to download a new tree, using the tar archive option
Don't forget Windows; no tar there usually.
and assuming the time stamps are correctly preserved (it does already IIRC), and move over the "bin.v2" directory from the old tree to the fresh one. And since all the generated files and timestamps in the bin.v2 dir are appropriate one could do an incremental test run on the new tree.
So there are straightforward ways of doing these things in either system... someone just needs to code them.
This is just one example. I'd have to see what the entire testing pipeline looks like to figure out where the "locked in" points are. I.e. I need to know what's actually going on, rather than just a "run this and that's it" answer.
Provided I can automate everything else, I think all you need to know is what ports have to be open, right?
...And the comment I promised above...
No, there's more. The assumption is that testers have the capability to open a port. It also assumes that we want them to open a port.
Wha? I never meant to imply we were going to ask testers to open a port! The current system has to communicate over the web, which means *some* port has to be open somewhere, to talk to the outside world. The same would apply, and we'd make sure any of the most commonly-open ports (e.g. 80) will work. Just as it is now. That's a no-brainer, isn't it?
I'd argue that we don't want them to open a port because it's yet another manual setup step they have to do.
Of course we don't!
Essentially, if I want to make it possible for someone at home, with their cable/DSL connected PC, to run tests, I need to make it as simple as: 1. Have Python >= 2.4; 2. download "test.py"; 3. run "python test.py". In my dreams it would be even simpler: just download an installer, run it, and nothing else.
Exactly.
Which brings up another item; Your description so far also assumes the tester will need to install/setup a variety of programs. Which is moving in the wrong direction, IMO.
I think you misunderstood my earlier remarks. The point of the list above was to enumerate both a. what you need to have to get started and b. what the system will ultimately cause you to have on your machine, either via manual or automatic installation. Some people care about what software will get installed, and that's what I thought you were asking about. As I said earlier, "My plan is to require as little as possible." I mean that literally.

-- Dave Abrahams
BoostPro Computing
http://www.boostpro.com

For what I'm not quoting.. Right, got it, yes. On 2/9/2011 9:58 AM, Dave Abrahams wrote:
At Tue, 08 Feb 2011 15:10:03 -0600, Rene Rivera wrote:
...that any additional required programs/scripts would ideally be automatically installed to a sandbox (i.e. locally to not need system administrator access).
Naturally. You know I've been thinking about these things. virtualenv is really good for that.
Ah, right, I had forgotten about virtualenv. Even though I've used it recently to do trac installs.
I see. Is there a way to do the same without Git? I.e. do the tar/zip archives available from github contain the same files as the clone+submodules?
Yes. The tarball won't contain the submodules, but references to them that can be similarly downloaded and untar'd.
OK, so it will take a bit more work on the download to create the integrated tree for the current testing. Hopefully it's easy to determine what one needs to do from the archive of the superproject and the references?
Currently yes. But it's possible to make it not be required. It would be straightforward to change the regression scripts to download a new tree, using the tar archive option
Don't forget Windows; no tar there usually.
Sorry.. Everywhere I mentioned tar replace it with tar/zip. I keep forgetting to be complete when writing the long emails ;-)
No, there's more. The assumption is that testers have the capability to open a port. Also assumes that we want them open a port.
Wha? I never meant to imply we were going to ask testers to open a port!
The current system has to communicate over the web, which means *some* port has to be open somewhere, to talk to the outside world. The same would apply, and we'd make sure any of the most commonly-open ports (e.g. 80) will work. Just as it is now. That's a no-brainer, isn't it?
Sorry.. At the mention of buildbot I assumed there was a port to open, since that's what I remember from my last use of it. I.e. the connections between the master & slaves are made through a non-standard port. And hence, if you happen to be behind a restrictive firewall, you'll have to fidget with opening ports (or using a proxy). One aspect of many restrictive firewalls is that they are stateful and will only allow HTTP traffic through 80 (and 443), but not through others. So what I'm saying is that it's not such a no-brainer ;-) It takes some consideration as to what you send through which ports. Or provide some central public proxy to provide the HTTP encapsulation to get around firewalls.

PS. Thanks for being patient with my interrogation :-)

-- 
-- Grafik - Don't Assume Anything
-- Redshift Software, Inc. - http://redshift-software.com
-- rrivera/acm.org (msn) - grafik/redshift-software.com
-- 102708583/icq - grafikrobot/aim,yahoo,skype,efnet,gmail

On 8 Feb 2011, at 19:17, Dave Abrahams wrote:
At Mon, 07 Feb 2011 20:30:54 -0600, Rene Rivera wrote:
On 2/7/2011 7:57 PM, Dave Abrahams wrote:
At Mon, 07 Feb 2011 09:53:52 -0600, Rene Rivera wrote:
What about a library developer? What does the tree structure they work with look like? How does it integrate with their development repo? I guess the non-version controlled tree produced by the above could be used as a "complete (integrated) release tree", but I'd like to know the specifics, and give them a try.
I would also like to know:
1. How does that non-versioned complete integrated tree work as regards to updates/pulls?
Today, you call GenHeaders explicitly, but if you check out Marcus' work you don't have to.
I tried finding where & how Marcus did what you say, by going to the github link you give in the other response.. But I don't see what you are referring to there.
I've asked him to follow up here with details, but I believe it's in the commits from 2011-01-19 at https://github.com/boost-lib/cmake/commits/master
2. What does it mean for testing? Specifically, complete incremental testing?
Nothing? What, specifically, are you worried about?
OK.. How would we run the existing testing
Hmm, "existing testing" is a little vague, but...
in incremental mode with a "forwarding headers tree plus separate sources"? What would testers need to rely on? Cmake? Python? Ryppl? Ctest? SSH? Git? FTP? HTTP? HTTPS? Git protocol?
Here's where I think we'll end up:
- testers need to set up and register a buildbot buildslave. I believe we can make that relatively trivial for people
- that would imply a reliance on, at a minimum:
- Python - A net connection
The current test requires these, and then requires running a python script. The current system can be built using the base system on both Linux and Mac, and that is very useful. If the new system required much more than that done by hand, or required installing things as root, I would find it hard to continue providing testing. While I have CPU power to hand, I don't have that much time, and run tests on systems where I do not have root access. Some dependencies, such as cmake and git, are possible to justify. Installing buildbot or twisted or dulwich(?) as root is much less likely.

Chris

At Tue, 8 Feb 2011 21:55:36 +0000, Christopher Jefferson wrote:
If the new system required much more than that done by hand, or required installing things as root, I would find it hard to continue providing testing. While I have CPU power to hand, I don't have that much time, and run tests on systems where I do not have root access.
Some dependencies, such as cmake and git, are possible to justify. Installing buildbot or twisted or dulwich(?) as root is much less likely.
Please see my reply to Rene. We have no intention of requiring any "as-root" installation steps. -- Dave Abrahams BoostPro Computing http://www.boostpro.com

Dave Abrahams wrote:
At Mon, 07 Feb 2011 09:53:52 -0600, Rene Rivera wrote:
I would also like to know:
1. How does that non-versioned complete integrated tree work as regards to updates/pulls?
Today, you call GenHeaders explicitly, but if you check out Marcus' work you don't have to.
I worked with a build system that did something like that: generate headers into a common directory tree that forward to the real header files. This has some drawbacks:

* it takes time to generate the headers
* the user will need to jump to the real headers to look for details
* we need a cleanup to ensure that removed files are no longer in the common directory, having as a consequence a complete rebuild.

Then we moved to copying instead of generating forwarding headers, and last we moved to adding, in the SCM, soft links to the real headers. While this is done by hand, we found it was the best compromise. I don't know if Svn or Git allows creating soft links on all the platforms we use. Just in case, the build system you are proposing may have already solved some of these issues.

Best, Vicente

At Mon, 7 Feb 2011 22:27:20 -0800 (PST), Vicente Botet wrote:
Dave Abrahams wrote:
At Mon, 07 Feb 2011 09:53:52 -0600, Rene Rivera wrote:
I would also like to know:
1. How does that non-versioned complete integrated tree work as regards to updates/pulls?
Today, you call GenHeaders explicitly, but if you check out Marcus' work you don't have to.
I worked with a build system that did something like that, generate headers to a common directory tree that forwards to the real header files. This has some drawbacks:
* it takes time to generate the headers
I suppose. You only have to do it once.
* the user will need to jump to the real headers to look for details
Jump there from where? Any error messages will point at the real headers. What else is there?
* we need a cleanup to ensure that removed files are no more on the common directory
Yes.
having as consequence a complete rebuild.
Huh? I don't see why that should be necessary. Just remove the right forwarding headers and you're golden.
Then we moved to copying instead of generating forwarding headers, and last we moved to adding, in the SCM, soft links to the real headers. While this is done by hand, we found it was the best compromise. I don't know if Svn or Git allows creating soft links on all the platforms we use.
If you mean a soft link in the filesystem, I think the answer is that some OS platforms don't support them. If you mean something else, I need clarification. -- Dave Abrahams BoostPro Computing http://www.boostpro.com
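One way to split the difference, sketched here as a hypothetical helper (not part of any proposed tooling): use a filesystem soft link where the OS supports it and fall back to a generated forwarding header elsewhere.

```python
import os

def expose_header(real_header, alias_path):
    """Expose a library's real header at a forwarding location: a symlink
    where the platform supports it, otherwise a one-line forwarding header."""
    alias_dir = os.path.dirname(alias_path)
    if alias_dir and not os.path.isdir(alias_dir):
        os.makedirs(alias_dir)
    if hasattr(os, "symlink"):
        try:
            os.symlink(os.path.abspath(real_header), alias_path)
            return "symlink"
        except (OSError, NotImplementedError):
            pass  # e.g. Windows without symlink privilege; fall through
    # Fallback: generate a forwarding header with a relative include path.
    rel = os.path.relpath(real_header, alias_dir or ".").replace(os.sep, "/")
    with open(alias_path, "w") as out:
        out.write('#include "%s"\n' % rel)
    return "forward"
```

Either way the generator has to track its output so stale aliases can be removed when the real header goes away, which is the cleanup issue raised above.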

AMDG On 2/8/2011 11:20 AM, Dave Abrahams wrote:
At Mon, 7 Feb 2011 22:27:20 -0800 (PST), Vicente Botet wrote:
* the user will need to jump to the real headers to look for details
Jump there from where? Any error messages will point at the real headers. What else is there?
At least for me, using the VS IDE, I open files via the #include directives on a regular basis. Having to jump through a forwarding header every time will get annoying. In Christ, Steven Watanabe

At Tue, 08 Feb 2011 12:00:03 -0800, Steven Watanabe wrote:
AMDG
On 2/8/2011 11:20 AM, Dave Abrahams wrote:
At Mon, 7 Feb 2011 22:27:20 -0800 (PST), Vicente Botet wrote:
* the user will need to jump to the real headers to look for details
Jump there from where? Any error messages will point at the real headers. What else is there?
At least for me, using the VS IDE, I open files via the #include directives on a regular basis. Having to jump through a forwarding header every time will get annoying.
I guess we can arrange for CMake to not use forwarding headers when it generates VS IDE projects, where I assume there's no issue about command line length. -- Dave Abrahams BoostPro Computing http://www.boostpro.com

On 02/08/2011 09:00 PM, Steven Watanabe wrote:
AMDG
On 2/8/2011 11:20 AM, Dave Abrahams wrote:
At Mon, 7 Feb 2011 22:27:20 -0800 (PST), Vicente Botet wrote:
* the user will need to jump to the real headers to look for details
Jump there from where? Any error messages will point at the real headers. What else is there?
At least for me, using the VS IDE, I open files via the #include directives on a regular basis. Having to jump through a forwarding header every time will get annoying.
It is not clear to me why this cannot be a hard or soft link to the real header that is in SCM. I much prefer that to an annoying forwarding header or a copy. No avoidable look-alike duplicates causing confusion, please. Even Emacs will take you to the wrong copy of the header, or at least the one you do not want to edit. You do not need Visual Studio to get annoyed or confused by this stuff. I assure you that having the build system write over your recent changes gets this point burned in. After a few times of that, it is hopefully just an annoyance to have to look up the right copy of the header before you make your changes.

Git support for managing symbolic links exists, so that is a possibility. However, platform portability of these features must be ensured. Also, I generally think this is better handled by the build system, not the SCM; that seems more appropriate. Note also that using symbolic links may break build dependency tracking; I guess it needs testing. In any case you need rules in the build system touching the links each time the header is changed. There are also issues with cleaning up when headers are no longer in use.

For platforms not having links, a copy can do just fine if we accept the annoyance. This should be minor though, as I think all major operating systems have file systems with proper links these days.

-- Bjørn

Dave Abrahams wrote:
At Mon, 7 Feb 2011 22:27:20 -0800 (PST), Vicente Botet wrote:
Dave Abrahams wrote:
At Mon, 07 Feb 2011 09:53:52 -0600, Rene Rivera wrote:
I would also like to know:
1. How does that non-versioned complete integrated tree work as
regards
to updates/pulls?
Today, you call GenHeaders explicitly, but if you check out Marcus' work you don't have to.
I worked with a build system that did something like that: it generated headers in a common directory tree that forwarded to the real header files. This has some drawbacks:
* it takes time to generate the headers
I suppose. You only have to do it once.
Once every time a new file might appear. That means once after each update, cleanup, or manually created header. For non-incremental builds this must be done every time. If we don't want to worry about it, we need to ensure coherency every time.
* the user will need to jump to the real headers to look for details
Jump there from where? Any error messages will point at the real headers. What else is there?
The problem wasn't checking for error messages, but just reading the sources. I remember now: the problem was that the directory tree structure was not preserved (the include directory was flat), and people didn't know which directory the files were generated from until they looked at their contents. If the directory structure is preserved there is no issue.
* we need a cleanup to ensure that removed files are no more on the common directory
Yes.
with a complete rebuild as a consequence.
Huh? I don't see why that should be necessary. Just remove the right forwarding headers and you're golden.
This is not so simple. The build system needs to compare both repositories and remove the generated headers that are missing from the new source repository. This again needs to be done each time a file could be removed, that is, after each update or explicit header removal. If we don't want to worry about it, we need to ensure coherency every time.
Then we moved to copying instead of generating forwarding headers, and finally we moved to adding soft links to the real headers in the SCM. While this was done by hand, we found it was the best compromise. I don't know whether Svn or Git allows creating soft links on all the platforms we use.
If you mean a soft link in the filesystem, I think the answer is that some OS platforms don't support them. If you mean something else, I need clarification.
You are right. We were on Unix-like systems. The same links were not visible under Windows. What I'm doing now when I test my ongoing libraries against a given Boost version, and I think most people developing new libraries do the same, is to check out the library's header directory directly in the Boost root directory. In this way there is no need to generate any header. I do it also for other libraries that use a git repository, without any trouble. Maybe this scheme merits some reflection if we want to work with multiple modular libraries, possibly with different SCM systems, and preserve the proven current system. Just my 2cts, Vicente

At Tue, 8 Feb 2011 12:13:37 -0800 (PST), Vicente Botet wrote:
Dave Abrahams wrote:
At Mon, 7 Feb 2011 22:27:20 -0800 (PST), Vicente Botet wrote:
Dave Abrahams wrote:
At Mon, 07 Feb 2011 09:53:52 -0600, Rene Rivera wrote:
I would also like to know:
1. How does that non-versioned complete integrated tree work as
regards
to updates/pulls?
Today, you call GenHeaders explicitly, but if you check out Marcus' work you don't have to.
I worked with a build system that did something like that: it generated headers in a common directory tree that forwarded to the real header files. This has some drawbacks:
* it takes time to generate the headers
I suppose. You only have to do it once.
Once every time a new file might appear. That means once after each update, cleanup, or manually created header. For non-incremental builds this must be done every time.
You only have to generate a forwarding header when you actually have a new header file. Yes, you have to check for the existence of forwarding headers each time, but that's not the same cost as actually generating them.
Jump there from where? Any error messages will point at the real headers. What else is there?
The problem wasn't checking for error messages, but just reading the sources. I remember now: the problem was that the directory tree structure was not preserved (the include directory was flat), and people didn't know which directory the files were generated from until they looked at their contents.
If the directory structure is preserved there is no issue.
We would preserve the structure.
Huh? I don't see why that should be necessary. Just remove the right forwarding headers and you're golden.
This is not so simple. The build system needs to compare both repositories
No repository action is needed. This all operates on regular directory trees.
and remove the generated headers that are missing from the new source repository.
What's not-so-simple about that?
This again needs to be done each time a file could be removed, that is, after each update or explicit header removal. If we don't want to worry about it, we need to ensure coherency every time.
So we'll do that. I don't get what you're worried about.
If you mean a soft link in the filesystem, I think the answer is that some OS platforms don't support them. If you mean something else, I need clarification.
You are right. We were on Unix-like systems. The same links were not visible under Windows.
What I'm doing now when I test my ongoing libraries against a given Boost version, and I think most people developing new libraries do the same, is to check out the library's header directory directly in the Boost root directory. In this way there is no need to generate any header. I do it also for other libraries that use a git repository, without any trouble.
Maybe this scheme merits some reflection if we want to work with multiple modular libraries, possibly with different SCM systems, and preserve the proven current system.
I don't mind reflecting on it, but this has nothing to do with SCMs. You'd have the same problem even if you (for some insane reason) wanted to do development without source control. As for preserving the current system, I have little interest in that. If you want to be able to test an *installed* boost (and I think it's crucial), you need to accept the idea that the headers being used for compiling tests are different than the ones in your source distribution, because an *installation* of Boost generally doesn't include all the sources. -- Dave Abrahams BoostPro Computing http://www.boostpro.com
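The stale-header concern raised above can be handled with a symmetric sweep over the generated tree. A sketch of that cleanup pass, again with illustrative paths rather than the real build-system code:

```shell
cd "$(mktemp -d)"

# Demo setup: two forwarders exist, but only one real header remains.
mkdir -p libs/filesystem/include/boost/filesystem boost/filesystem
echo '// real header' > libs/filesystem/include/boost/filesystem/path.hpp
echo '#include "../../libs/filesystem/include/boost/filesystem/path.hpp"' > boost/filesystem/path.hpp
echo '#include "../../libs/filesystem/include/boost/filesystem/removed.hpp"' > boost/filesystem/removed.hpp

# Sweep: delete any forwarder whose real header is gone. This is plain
# directory bookkeeping; no repository action or full rebuild is needed.
for f in boost/filesystem/*.hpp; do
  real="libs/filesystem/include/boost/filesystem/$(basename "$f")"
  [ -e "$real" ] || rm "$f"
done
ls boost/filesystem
```

Run after each update or header removal, this keeps the generated tree coherent without touching the forwarders that are still valid.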

Dave wrote:
You only have to generate a forwarding header when you actually have a new header file. Yes, you have to check for the existence of forwarding headers each time, but that's not the same cost as actually generating them.
Forwarding headers have the advantage of working regardless of the filesystem. But many Boost lib developers are running on filesystems that support directory symlinks. Directory symlinks have a lot of advantages: you only have to generate one per library, they are faster, and they are easier to use with IDEs that let you click on a header to open it. With a forwarding header, you have to do that twice to get at the real header. I've been testing on Windows with boost/filesystem being a directory symlink to ../../libs/filesystem/include/boost/filesystem and it works smoothly with no changes whatsoever to either bjam testing or VisualC++ IDE testing. Would it be possible to generate directory symlinks instead of forwarding headers if the filesystem supports them? --Beman
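The directory-symlink layout described above can be sketched with POSIX `ln -s`; one link per library replaces a whole tree of forwarders. The library name and layout here are illustrative, and on Windows Vista and later `mklink /D` creates the equivalent directory symlink:

```shell
cd "$(mktemp -d)"

# Demo setup mirroring the modular layout.
mkdir -p libs/filesystem/include/boost/filesystem
echo '// the real path.hpp' > libs/filesystem/include/boost/filesystem/path.hpp
mkdir -p boost

# One directory link per library. The relative target is resolved from the
# directory containing the link (boost/), hence the single "..".
# Windows equivalent (elevated prompt, Vista and later):
#   mklink /D boost\filesystem ..\libs\filesystem\include\boost\filesystem
ln -s ../libs/filesystem/include/boost/filesystem boost/filesystem
cat boost/filesystem/path.hpp
```

Opening boost/filesystem/path.hpp now lands directly in the real file, which is why IDE navigation works in one click instead of two.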

At Tue, 8 Feb 2011 16:03:33 -0500, Beman Dawes wrote:
Dave wrote:
You only have to generate a forwarding header when you actually have a new header file. Yes, you have to check for the existence of forwarding headers each time, but that's not the same cost as actually generating them.
Forwarding headers have the advantage of working regardless of the filesystem. But many Boost lib developers are running on filesystems that support directory symlinks. Directory symlinks have a lot of advantages; you only have to generate one per library, they are faster, and easier to use with IDE's that let you click on a header to open it. With a forwarding header, you have to do that twice to get at the real header.
I've been testing on Windows with boost/filesystem being a directory symlink to ../../libs/filesystem/include/boost/filesystem
Since when does Windows support symlinks?!
and it works smoothly with no changes whatsoever to either bjam testing or VisualC++ IDE testing.
Would it be possible to generate directory symlinks instead of forwarding headers if the filesystem supports them?
Sure! What filesystem doesn't? FAT? -- Dave Abrahams BoostPro Computing http://www.boostpro.com

Dave Abrahams wrote:
Beman Dawes wrote:
I've been testing on Windows with boost/filesystem being a directory symlink to ../../libs/filesystem/include/boost/filesystem
Since when does Windows support symlinks?!
They're called "junctions," IIRC, and they only work for local directories.
Would it be possible to generate directory symlinks instead of forwarding headers if the filesystem supports them?
Sure! What filesystem doesn't? FAT?
I think they're only for NTFS. _____ Rob Stewart robert.stewart@sig.com Software Engineer, Core Software using std::disclaimer; Susquehanna International Group, LLP http://www.sig.com

On Tue, Feb 8, 2011 at 4:48 PM, Dave Abrahams <dave@boostpro.com> wrote:
At Tue, 8 Feb 2011 16:03:33 -0500, Beman Dawes wrote:
Dave wrote:
You only have to generate a forwarding header when you actually have a new header file. Yes, you have to check for the existence of forwarding headers each time, but that's not the same cost as actually generating them.
Forwarding headers have the advantage of working regardless of the filesystem. But many Boost lib developers are running on filesystems that support directory symlinks. Directory symlinks have a lot of advantages; you only have to generate one per library, they are faster, and easier to use with IDE's that let you click on a header to open it. With a forwarding header, you have to do that twice to get at the real header.
I've been testing on Windows with boost/filesystem being a directory symlink to ../../libs/filesystem/include/boost/filesystem
Since when does Windows support symlinks?!
Since Vista. That's how Boost.Filesystem can now support symlinks on Windows. On the command line, mklink does the job.
and it works smoothly with no changes whatsoever to either bjam testing or VisualC++ IDE testing.
Would it be possible to generate directory symlinks instead of forwarding headers if the filesystem supports them?
Sure! What filesystem doesn't? FAT?
FAT, pre-Vista NTFS, and some CD-ROM and DVD file systems. --Beman

On Tue, Feb 08, 2011 at 04:59:25PM -0500, Beman Dawes wrote:
On Tue, Feb 8, 2011 at 4:48 PM, Dave Abrahams <dave@boostpro.com> wrote:
At Tue, 8 Feb 2011 16:03:33 -0500, Beman Dawes wrote: Since when does Windows support symlinks?!
Since Vista. That's how Boost.Filesystem can now support symlinks on Windows.
Note that they are not as usable as they may first appear, as you generally cannot make them as a limited user, even if you have the usual permissions for the directories and files.
and it works smoothly with no changes whatsoever to either bjam testing or VisualC++ IDE testing.
Would it be possible to generate directory symlinks instead of forwarding headers if the filesystem supports them?
Sure! What filesystem doesn't? FAT?
FAT, pre-Vista NTFS, and some CD-ROM and DVD file systems.
I would recommend that if you go down this path, that you _thoroughly_ test the behaviour of them under regular and limited users. In my experience, they're virtually useless as a non-elevated user. -- Lars Viklund | zao@acc.umu.se

At Wed, 9 Feb 2011 00:21:14 +0100, Lars Viklund wrote:
and it works smoothly with no changes whatsoever to either bjam testing or VisualC++ IDE testing.
Would it be possible to generate directory symlinks instead of forwarding headers if the filesystem supports them?
Sure! What filesystem doesn't? FAT?
FAT, pre-Vista NTFS, and some CD-ROM and DVD file systems.
I would recommend that if you go down this path, that you _thoroughly_ test the behaviour of them under regular and limited users. In my experience, they're virtually useless as a non-elevated user.
As long as we have forwarding headers as a fallback mechanism, we'll be fine I suppose. But, naturally, testing is essential. -- Dave Abrahams BoostPro Computing http://www.boostpro.com

At Tue, 8 Feb 2011 16:59:25 -0500, Beman Dawes wrote:
I've been testing on Windows with boost/filesystem being a directory symlink to ../../libs/filesystem/include/boost/filesystem
Since when does Windows support symlinks?!
Since Vista.
Oh, that explains why I've never seen them. I guess it's time for me to get some more up-to-date Windows OS installations!
Would it be possible to generate directory symlinks instead of forwarding headers if the filesystem supports them?
Sure! What filesystem doesn't? FAT?
FAT, pre-Vista NTFS, and some CD-ROM and DVD file systems.
OK, so much the better. I'm sure we can eliminate most of the forwarding headers most of the time, then! -- Dave Abrahams BoostPro Computing http://www.boostpro.com

On 2/8/2011 3:48 PM, Dave Abrahams wrote:
At Tue, 8 Feb 2011 16:03:33 -0500, Beman Dawes wrote:
Dave wrote:
You only have to generate a forwarding header when you actually have a new header file. Yes, you have to check for the existence of forwarding headers each time, but that's not the same cost as actually generating them.
Forwarding headers have the advantage of working regardless of the filesystem. But many Boost lib developers are running on filesystems that support directory symlinks. Directory symlinks have a lot of advantages; you only have to generate one per library, they are faster, and easier to use with IDE's that let you click on a header to open it. With a forwarding header, you have to do that twice to get at the real header.
I've been testing on Windows with boost/filesystem being a directory symlink to ../../libs/filesystem/include/boost/filesystem
Since when does Windows support symlinks?!
Since Windows Vista...
and it works smoothly with no changes whatsoever to either bjam testing or VisualC++ IDE testing.
Would it be possible to generate directory symlinks instead of forwarding headers if the filesystem supports them?
Sure! What filesystem doesn't? FAT?
And if you only need to link to directories, using NTFS junction points is an option. And that's available since Win2K. -- -- Grafik - Don't Assume Anything -- Redshift Software, Inc. - http://redshift-software.com -- rrivera/acm.org (msn) - grafik/redshift-software.com -- 102708583/icq - grafikrobot/aim,yahoo,skype,efnet,gmail

At Tue, 08 Feb 2011 16:08:06 -0600, Rene Rivera wrote:
And if you only need to link to directories using NTFS junction points is an option. And that's available since Win2K.
Yep, I thought of that. In that case we'd be limited to forwarding for headers that fall directly in boost/ (and a few others, like boost/detail). -- Dave Abrahams BoostPro Computing http://www.boostpro.com

On 02/08/2011 10:48 PM, Dave Abrahams wrote:
Since when does Windows support symlinks?!
API for NTFS since XP, I think. http://msdn.microsoft.com/en-us/library/aa363878%28v=vs.85%29.aspx Windows 7 and possibly Vista have an mklink command line utility. Symbolic directory links should work close to perfect for this, given that the directory structure supports it. Some note that there are limitations restricting symbolic-link targets to the same file system; I think that applies to Junctions, which have been around for a while as an API. There are differences between Junctions and Symbolic Links in Windows. http://msdn.microsoft.com/en-us/library/aa365006%28v=vs.85%29.aspx I know for sure that mklink on Win7 can make directory links even to a Samba SMB share on a remote Linux box. It just works. -- Bjørn

At Mon, 7 Feb 2011 08:28:56 -0500, Beman Dawes wrote:
On Mon, Feb 7, 2011 at 7:39 AM, Dave Abrahams <dave@boostpro.com> wrote:
At Mon, 7 Feb 2011 12:27:21 -0000, John Maddock wrote:
http://svn.boost.org/svn/boost/sandbox/library-name/boost/ http://svn.boost.org/svn/boost/sandbox/library-name/libs/
The planned/proposed organization is roughly like the latter. If you want to look at the organization in detail, see https://github.com/boost-lib/boost
Nod, either could be supported by Boost.Build trivially provided there's a complete (integrated) release tree sitting around somewhere - otherwise as you mention the compiler command paths get stupidly long....
I don't know what you mean by "complete (integrated) release tree", but we're not planning to do that. We're only planning, as part of the build process, to generate forwarding headers in an integrated include tree
By "we", do you mean ryppl?
I guess so. At least, I haven't heard of anyone planning to do it differently.
I've gone through the process John Wiegley kindly sent me:
Grab the supermodule: git clone git://github.com/boost-lib/boost.git
Then 'cd' into the "boost" directory it created, and run: git submodule update --init
Then continued as described here: http://ryppl.github.com/gettingstarted.html
That produced a completely new tree with the forwarding headers, not under version control.
Yeah, but to be clear: just a header tree, not the full Boost distro.
It seemed oriented to what a user might want.
IIUC, Marcus has already made the separate header-generation step obsolete, so it now happens automatically if you have his changes, which are here: https://github.com/boost-lib/boost/commits/1.45.0
What about a library developer? What does the tree structure they work with look like? How does it integrate with their development repo? I guess the non-version controlled tree produced by the above could be used as a "complete (integrated) release tree", but I'd like to know the specifics, and give them a try.
Once you have Marcus' header generation, that would work perfectly. Until then, it will probably work just fine, but you might have to explicitly build the GenHeaders target if your library's set of header paths changes. You might want to consider putting the individual include/ directories of the libraries on which you're working (e.g. https://github.com/boost-lib/filesystem/tree/master/include) in your #include path ahead of the generated headers to avoid that I guess. -- Dave Abrahams BoostPro Computing http://www.boostpro.com
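The clone-then-init sequence discussed above can be exercised offline with throwaway local repositories. A sketch, where the repository names are illustrative stand-ins for github.com/boost-lib/boost and one of its submodules:

```shell
top="$(mktemp -d)"; cd "$top"

# A "library" repository standing in for e.g. boost-lib/filesystem.
git init -q lib
( cd lib && echo '// header' > filesystem.hpp && git add . \
  && git -c user.email=demo@example.org -c user.name=Demo commit -qm 'library' )

# A supermodule that references the library as a submodule.
git init -q super
( cd super \
  && git -c protocol.file.allow=always submodule add "$top/lib" libs/filesystem \
  && git -c user.email=demo@example.org -c user.name=Demo commit -qm 'add submodule' )

# The consumer-side steps from the thread: clone, then populate submodules.
git clone -q super work
( cd work && git -c protocol.file.allow=always submodule update --init )
ls work/libs/filesystem
```

The protocol.file.allow setting is only needed on git 2.38 and later, where file-based submodule clones are disabled by default; it is harmless on older versions.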

Part of the reason we're putting this much effort into it is that we want a process which split Boost up into submodules during the migration process, while preserving as much history within each separate submodule as possible. There's just no tool out there that does that right now. So since we needed to write a tool anyway, why not solve the whole problem.
John
I see. Excellent! Joël Lamotte

On 2/3/2011 1:56 AM, John Wiegley wrote:
Beman Dawes<bdawes@acm.org> writes:
* As a demonstration and proof-of-concept, a Boost library should begin using Git. Presumably a public repository (on GitHub?) can channel changes back to Boost svn. I'll volunteer the filesystem library.
I've made a copy of the fully migrated Boost repository available online for review here:
https://github.com/boost-lib/boost-history
In the course of creating the migration process, I had to fix several bugs in libgit2. This makes me less than 100% convinced of the fidelity of the result so far. I'd like anyone who can to review the sections familiar to them, to make sure nothing obvious has gone wrong.
FYI... After doing a clone, which took about 9 minutes on my 80Mbps downlink (at 1.5M/s down from github), and trying to play with some exports/imports, I ran "git fsck" and got hundreds of warnings and errors. -- -- Grafik - Don't Assume Anything -- Redshift Software, Inc. - http://redshift-software.com -- rrivera/acm.org (msn) - grafik/redshift-software.com -- 102708583/icq - grafikrobot/aim,yahoo,skype,efnet,gmail

Rene Rivera <grafikrobot@gmail.com> writes:
FYI... After doing a clone, which took about 9 minutes on my 80Mbps downlink (at 1.5M/s down from github), and trying to play with some exports/imports, I ran "git fsck" and got hundreds of warnings and errors.
This has been fixed. There were two bugs in libgit2 having to do with writing out Git trees. John

On 2/2/2011 9:44 AM, Beman Dawes wrote:
Comments?
Could someone put together a guide as to how to install TortoiseGit successfully and minimally? -- -- Grafik - Don't Assume Anything -- Redshift Software, Inc. - http://redshift-software.com -- rrivera/acm.org (msn) - grafik/redshift-software.com -- 102708583/icq - grafikrobot/aim,yahoo,skype,efnet,gmail

On Thu, Feb 3, 2011 at 10:38 PM, Rene Rivera <grafikrobot@gmail.com> wrote:
On 2/2/2011 9:44 AM, Beman Dawes wrote:
Comments?
Could someone put together a guide as to how to install TortoiseGit successfully and minimally?
See https://svn.boost.org/trac/boost/wiki/Git/InstallTortoiseGit This was done hurriedly, but should be at least close. --Beman

On 2/6/2011 8:17 PM, Beman Dawes wrote:
On Thu, Feb 3, 2011 at 10:38 PM, Rene Rivera<grafikrobot@gmail.com> wrote:
On 2/2/2011 9:44 AM, Beman Dawes wrote:
Comments?
Could someone put together a guide as to how to install TortoiseGit successfully and minimally?
See https://svn.boost.org/trac/boost/wiki/Git/InstallTortoiseGit
This was done hurriedly, but should be at least close.
I eventually figured it out with help from some kind #boost IRC people. But you should mention in the guide that one needs to get the "Git-1.**.exe" download, as the rest are for doing development of Git itself. And in case anyone is curious, that install contains: * Approximately 72 executables, including: bash, perl, tcl, tk, ssh, bison, bzip2, vim, rxvt, tar, and many more. * 88 DLLs * A bunch of help files, as HTML, which is good I guess. But I would have preferred having the option to not install them. * A total of 2,578 files in 222 directories. For a total install size of 172MB; adding the 18MB for TortoiseGit makes for a 190MB footprint. -- -- Grafik - Don't Assume Anything -- Redshift Software, Inc. - http://redshift-software.com -- rrivera/acm.org (msn) - grafik/redshift-software.com -- 102708583/icq - grafikrobot/aim,yahoo,skype,efnet,gmail

On Sun, Feb 6, 2011 at 9:59 PM, Rene Rivera <grafikrobot@gmail.com> wrote:
On 2/6/2011 8:17 PM, Beman Dawes wrote:
On Thu, Feb 3, 2011 at 10:38 PM, Rene Rivera<grafikrobot@gmail.com> wrote:
On 2/2/2011 9:44 AM, Beman Dawes wrote:
Comments?
Could someone put together a guide as to how to install TortoiseGit successfully and minimally?
See https://svn.boost.org/trac/boost/wiki/Git/InstallTortoiseGit
This was done hurriedly, but should be at least close.
I eventually figured it out with help from some kind #boost IRC people. But you should mention in the guide that one needs to get the "Git-1.**.exe" download. As the rest are for doing development of Git itself. And in case anyone is curious that install contains:
* Approximately 72 executables, including: bash, perl, tcl, tk, ssh, bison, bzip2, vim, rxvt, tar, and many more.
* 88 DLLs
* A bunch of help files, as HTML, which is good I guess. But I would have preferred having the option to not install them.
* A total of 2,578 files in 222 directories.
For a total install size of 172MB, and then adding the 18MB for TortoiseGit, makes for a 190MB footprint.
The TortoiseGit and msysGit installs have lots of issues. If anyone cares, they should report the problems to the TortoiseGit and msysGit folks. --Beman

On Sun, Feb 6, 2011 at 9:59 PM, Rene Rivera <grafikrobot@gmail.com> wrote:
On 2/6/2011 8:17 PM, Beman Dawes wrote:
On Thu, Feb 3, 2011 at 10:38 PM, Rene Rivera<grafikrobot@gmail.com> wrote:
On 2/2/2011 9:44 AM, Beman Dawes wrote:
Comments?
Could someone put together a guide as to how to install TortoiseGit successfully and minimally?
See https://svn.boost.org/trac/boost/wiki/Git/InstallTortoiseGit
This was done hurriedly, but should be at least close.
I eventually figured it out with help from some kind #boost IRC people. But you should mention in the guide that one needs to get the "Git-1.**.exe" download.
Not sure what you mean by "Git-1.**.exe" download? I debugged those instructions on a freshly installed virtual machine, so was confident they aren't missing anything. But I need to change them to eliminate the mention of a Cygwin install. I didn't actually do that and it isn't required. --Beman

On 2/7/2011 6:53 AM, Beman Dawes wrote:
On Sun, Feb 6, 2011 at 9:59 PM, Rene Rivera<grafikrobot@gmail.com> wrote:
On 2/6/2011 8:17 PM, Beman Dawes wrote:
On Thu, Feb 3, 2011 at 10:38 PM, Rene Rivera<grafikrobot@gmail.com> wrote:
On 2/2/2011 9:44 AM, Beman Dawes wrote:
Comments?
Could someone put together a guide as to how to install TortoiseGit successfully and minimally?
See https://svn.boost.org/trac/boost/wiki/Git/InstallTortoiseGit
This was done hurriedly, but should be at least close.
I eventually figured it out with help from some kind #boost IRC people. But you should mention in the guide that one needs to get the "Git-1.**.exe" download.
Not sure what you mean by "Git-1.**.exe" download?
I mean that when you get to the msysgit downloads page, you pick the "Git-1.7.4-preview20110204.exe" (current version), as that makes for the minimal install. The others will install even more stuff that is not needed for just *using* Git. I was confused because the instructions from the TortoiseGit site were not clear, and neither were the ones from msysgit. -- -- Grafik - Don't Assume Anything -- Redshift Software, Inc. - http://redshift-software.com -- rrivera/acm.org (msn) - grafik/redshift-software.com -- 102708583/icq - grafikrobot/aim,yahoo,skype,efnet,gmail
participants (28)
-
Anthony Williams
-
Belcourt, K. Noel
-
Beman Dawes
-
Bjørn Roald
-
Christopher Jefferson
-
Daniel Herring
-
Daniel James
-
Daniel Pfeifer
-
Dave Abrahams
-
David Bergman
-
Dean Michael Berris
-
Emil Dotchevski
-
Eric Niebler
-
Frank Mori Hess
-
Hartmut Kaiser
-
Henrik Sundberg
-
John Maddock
-
John Wiegley
-
Klaim - Joël Lamotte
-
Lars Viklund
-
Oliver Kowalke
-
Rene Rivera
-
Robert Jones
-
Ronald Garcia
-
Steven Watanabe
-
Stewart, Robert
-
Vicente Botet
-
Vladimir Prus