
Git has a competitor called Mercurial? If we're moving to a Distributed-VCS, should we go to Mercurial instead of Git? They're kind-of like CVS vs. Subversion, except I think they came up in parallel. (While Subversion was designed as an updated CVS.) I think Git was made up of a bunch of script hacks, while Mercurial was a regimented single program. I don't have a preference, but I want to make sure we consider the rival options. Daryle W.

On 19/03/2012 14:02, Daryle Walker wrote:
Git has a competitor called Mercurial? If we're moving to a Distributed-VCS, should we go to Mercurial instead of Git? [...]
Actually, git was written from scratch in C. Scripts come in only if you write them on top of git. It's extremely efficient and has a steep learning curve, but it's rewarding to use! I'm using git daily together with p4 integration (for SOX-compliant history) and it's really great for team collaboration. B.

On 19/03/12 15:02, Daryle Walker wrote:
Git has a competitor called Mercurial? If we're moving to a Distributed-VCS, should we go to Mercurial instead of Git? They're kind-of like CVS vs. Subversion, except I think they came up in parallel. (While Subversion was designed as an updated CVS.)
It's not at all like CVS vs Subversion, in part for the reason you mentioned...
I think Git was made up of a bunch of script hacks, while Mercurial was a regimented single program.
That is incorrect. It's simply following the UNIX philosophy.
I don't have a preference, but I want to make sure we consider the rival options.
How about we stop wasting time discussing this? It has been way too long already. Git is the most powerful versioning system today, is increasingly popular, and has a vibrant community around it. While Mercurial is comparable, Git has built-in support for more advanced features and is more popular in the open-source world. Most importantly, Git is already being used by several boost libraries. Hasn't it been years since the idea of moving to Git has been submitted? What's left to discuss?
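For readers unfamiliar with the design being referenced above: modern Git is largely a suite of small single-purpose "plumbing" programs that the user-facing "porcelain" commands compose, in keeping with the UNIX philosophy. A minimal sketch, assuming git is installed; the repository and file names here are invented for illustration:

```shell
set -e
repo=$(mktemp -d)
cd "$repo"
git init -q
git config user.email dev@example.com
git config user.name Dev
echo data > file.txt
git add file.txt
git commit -q -m "first commit"
# Porcelain commands like `git log` are layered on plumbing such as:
git rev-parse HEAD              # resolve a ref to an object hash
git cat-file -t HEAD            # ask the object database for its type
git cat-file -p "HEAD^{tree}"   # pretty-print the tree the commit points at
```

Each plumbing command does one small job and emits plain text, so they compose in pipelines just like classic UNIX tools.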

What's left to discuss?
maybe, when does boost-trunk move to git?

Mathias Gaunard wrote:
Git is the most powerful versioning system today, is increasingly popular, and has a vibrant community around it. While Mercurial is comparable, Git has built-in support for more advanced features and is more popular in the open-source world.
This is unsupportable nonsense. Neither is more powerful. They are both quite comparable. If you believe one is more 'powerful', whatever that means, I suspect you are not looking at current information. For example, it's been quite some time since hg supported rebase.

On 19.03.2012 15:02, Daryle Walker wrote:
Git has a competitor called Mercurial? If we're moving to a Distributed-VCS, should we go to Mercurial instead of Git? [...]
While we're at it, Google's analysis of Git and Mercurial shouldn't be neglected: http://code.google.com/p/support/wiki/DVCSAnalysis

On 03/19/2012 10:48 AM, Sergiu Dotenco wrote:
On 19.03.2012 15:02, Daryle Walker wrote:
Git has a competitor called Mercurial? If we're moving to a Distributed-VCS, should we go to Mercurial instead of Git? [...]
While we're at it, Google's analysis of Git and Mercurial shouldn't be neglected: http://code.google.com/p/support/wiki/DVCSAnalysis
FYI, here is a somewhat more entertaining, but still relevant, comparison which frames git as MacGyver and mercurial as James Bond: http://importantshock.wordpress.com/2008/08/07/git-vs-mercurial/

I'm sorry if I am fanning the flames by resurrecting an old discussion. I did a little searching to see if this question had a distinct decision on the forums. I saw discussions that were primarily focused on "why DVCS?", not "which DVCS?". It looks like the Boost wiki asks the question of "Why Git?" (https://svn.boost.org/trac/boost/wiki/Git/WhyGit) but does not answer it. Perhaps the deciders can give more information on the wiki? Then when this comes up again, simply provide the link.

My anecdote, take it for what it's worth (maybe two cents): I first encountered DVCS by helping out on the Eigen C++ library project. I quickly became a convert to the concept of DVCS and, secondarily, to mercurial as an implementation. I talked the rest of the developers at my company into migrating our then-CVS repos forward to something more modern. I did not want to taint the decision by simply grabbing for what I knew, so we weighed the pros and cons of various distributed version control systems. The fight quickly devolved to hg vs. git. In the end we chose mercurial because:

1. Hg did everything we could envision needing to do, mostly through simple prebuilt commands and infrequently through more advanced scripting (see "hg help templates" or "hg help revsets" to get a flavor of the power).
2. The commands and concepts of mercurial are closer to the cvs/svn concepts we knew; many of the commands are actually the same. This minimized the cost of converting our coders, which far outweighs the cost of converting our code.
3. Hg is simpler than git for doing common tasks (or at least it seemed so to us -- see #2).

There were various other minor reasons that tipped us toward mercurial, like cross-platform consistency and hg's habit of keeping the repo compressed.
Let me end by applauding Boost for doing the right thing. Moving from svn to any DVCS is a step in the right direction. The detail of which one is a nuance.

on Mon Mar 19 2012, Sergiu Dotenco <sergiu.dotenco-AT-gmail.com> wrote:
While we're at it, Google's analysis of Git and Mercurial shouldn't be neglected:
That analysis completely ignores the (most?) important factors, mindshare and marketplace. -- Dave Abrahams BoostPro Computing http://www.boostpro.com

re mindshare: I don't understand. Mercurial chose to adopt cvs/svn commands and nomenclature when it made sense. Git chose to reinvent everything. Migrating developers from svn to mercurial should be much easier. re marketplace: Google trends shows "git" outpacing "mercurial" by roughly 2:1, although I'd guess that number is somewhat skewed by people using "git-r-done" in their blogs more often than "mercurial". Does it matter which is more popular, as long as the choice is popular *enough* that it won't vanish? On 03/19/2012 01:17 PM, Dave Abrahams wrote:
on Mon Mar 19 2012, Sergiu Dotenco<sergiu.dotenco-AT-gmail.com> wrote:
While we're at it, Google's analysis of Git and Mercurial shouldn't be neglected: http://code.google.com/p/support/wiki/DVCSAnalysis
That analysis completely ignores the (most?) important factors, mindshare and marketplace.

On 2012.03.19 13.17, Dave Abrahams wrote:
on Mon Mar 19 2012, Sergiu Dotenco <sergiu.dotenco-AT-gmail.com> wrote:
While we're at it, Google's analysis of Git and Mercurial shouldn't be neglected:
That analysis completely ignores the (most?) important factors, mindshare and marketplace.
Uh, can you provide some data for this, please? The two major surveys I know contradict this. http://www.eclipse.org/org/community_survey/Eclipse_Survey_2011_Report.pdf, page 16 http://blogs.forrester.com/application_development/2010/01/forrester-databyt... -- Bryce Lelbach aka wash STE||AR Group, Center for Computation and Science, LSU -- boost-spirit.com stellar.cct.lsu.edu llvm.wiki.kernel.org

Page 12, rather. On 2012.03.19 14.54, Bryce Lelbach wrote:
[...]
-- Bryce Lelbach aka wash STE||AR Group, Center for Computation and Science, LSU -- boost-spirit.com stellar.cct.lsu.edu llvm.wiki.kernel.org

On 19/03/12 20:54, Bryce Lelbach wrote:
Uh, can you provide some data for this, please?
The two major surveys I know contradict this.
http://www.eclipse.org/org/community_survey/Eclipse_Survey_2011_Report.pdf, page 16 http://blogs.forrester.com/application_development/2010/01/forrester-databyt...
Git is the leading DVCS according to both of those surveys. What were you trying to say? That Subversion is still a lot more popular than Git?

on Mon Mar 19 2012, Bryce Lelbach <blelbach-AT-cct.lsu.edu> wrote:
On 2012.03.19 13.17, Dave Abrahams wrote:
on Mon Mar 19 2012, Sergiu Dotenco <sergiu.dotenco-AT-gmail.com> wrote:
While we're at it, Google's analysis of Git and Mercurial shouldn't be neglected:
That analysis completely ignores the (most?) important factors, mindshare and marketplace.
Uh, can you provide some data for this, please?
Data? All you have to do is read the article to see that it ignores those factors.
The two major surveys I know contradict this.
http://www.eclipse.org/org/community_survey/Eclipse_Survey_2011_Report.pdf, page 16 http://blogs.forrester.com/application_development/2010/01/forrester-databyt...
Contradict what? -- Dave Abrahams BoostPro Computing http://www.boostpro.com

on Mon Mar 19 2012, Bryce Lelbach <blelbach-AT-cct.lsu.edu> wrote:
On 2012.03.19 13.17, Dave Abrahams wrote:
on Mon Mar 19 2012, Sergiu Dotenco <sergiu.dotenco-AT-gmail.com> wrote:
While we're at it, Google's analysis of Git and Mercurial shouldn't be neglected:
That analysis completely ignores the (most?) important factors, mindshare and marketplace.
Uh, can you provide some data for this, please?
Data? All you have to do is read the article to see that it ignores those factors.
The two major surveys I know contradict this.
http://www.eclipse.org/org/community_survey/Eclipse_Survey_2011_Report.pdf, page 16 http://blogs.forrester.com/application_development/2010/01/forrester-databyte-developer-scm-tool-adoption-and-use.html
Contradict what?
Well, it contradicts your claim that 'Git is winning in the marketplace', which is total nonsense if you look at the surveys (SVN 50% vs. GIT 13% 'marketshare'). Regards Hartmut --------------- http://boost-spirit.com http://stellar.cct.lsu.edu

on Mon Mar 19 2012, "Hartmut Kaiser" <hartmut.kaiser-AT-gmail.com> wrote:
on Mon Mar 19 2012, Bryce Lelbach <blelbach-AT-cct.lsu.edu> wrote:
The two major surveys I know contradict this.
http://www.eclipse.org/org/community_survey/Eclipse_Survey_2011_Report.pdf, page 16 http://blogs.forrester.com/application_development/2010/01/forrester-databyte-developer-scm-tool-adoption-and-use.html
Contradict what?
Well, it contradicts your claim that 'Git is winning in the marketplace', which is total nonsense if you look at the surveys (SVN 50% vs. GIT 13% 'marketshare').
If you read the thread carefully, you'll see I was talking about the DVCS marketplace (in fact, just about Git vs Mercurial), where SVN is not a contender. Please tone down the 'tude, friend. -- Dave Abrahams BoostPro Computing http://www.boostpro.com

On 03/20/2012 02:52 AM, Dave Abrahams wrote:
If you read the thread carefully, you'll see I was talking about the DVCS marketplace (in fact, just about Git vs Mercurial), where SVN is not a contender. Please tone down the 'tude, friend.
Let me try to wrap my head around all this... So essentially, the argument is to choose git over whatever else because of its marketshare, right? The only reason behind this I can think of is to attract new boost contributors (yeah ... I know, I am one of those hippies completely neglecting the commercial interest behind boost). Ok, I can see that as a possible advantage for git, especially since a lot of people have said that svn is _the_ major blocker for not contributing. So far so good; this is the argument for people already familiar with git. Let's check the statistics again: https://github.com/languages Right ... so many potential new developers ... Maybe we should provide Javascript bindings ...

So, what about people who are infected by their favorite poison and have to learn the VCS tool of choice? I keep reading about git having a steep learning curve, so maybe we won't attract those developers either ... I'd argue that writing code is not done in the VCS, be it writing a patch for existing software or a completely new library. The complexity is in writing the code itself, or in applying the patch and verifying it.

FWIW, I am the last person who will oppose such a change. But so far, no one has presented a fair case for git, or for how such a transition could be done. No one. The only things that have been discussed on this list are FUD from both sides. And this marketshare argument, completely disregarding a possible other option ... wow. Maybe you did the comparison once. Somehow people tend to forget in their Git crusade that other people haven't been through the transition yet, and are searching for arguments to actually make such a change. Regards, Thomas

on Tue Mar 20 2012, Thomas Heller <thom.heller-AT-googlemail.com> wrote:
Let me try to wrap my head around all this... So essentially, the argument is to choose git over whatever else because of its marketshare, right?
The only reason behind this I can think of is to attract new boost contributors (yeah ... I know, I am one of those hippies completely neglecting the commercial interest behind boost).
That's just one of many reasons. If you mentally amplify the difference in popularity between Mercurial and Git, I'm sure some of the others will become more apparent to you. It's all a matter of degrees.
FWIW, I am the last person who will oppose such a change. But so far, no one has presented a fair case for git, or for how such a transition could be done. No one. The only things that have been discussed on this list are FUD from both sides.
Careful; I'm sure you didn't mean it that way, but that term is quite inflammatory---to some people it means a lot more (and much worse) than to others. And besides, I totally disagree with you. Personal anecdotes of frustration with a tool are not FUD, no matter how you interpret the term. Arguments that it is easier to transition to a tool with similar commands (e.g. SVN->Hg) are not FUD. Human factors count---a lot. That's part of the reason the popularity measurement is important to me.
And this marketshare argument, completely disregarding a possible other option ... wow.
I'm not completely disregarding it. I've done enough evaluation to satisfy myself of the answer.
Maybe you did the comparison once.
Yes.
Somehow people tend to forget in their Git crusade that other people didn't go through the transition yet, and are searching for arguments to actually make such a change.
I'm not on a Git crusade. And I'm sorry that I can't help you further to find a killer argument for yourself; in the end, everyone makes his own choice of favorite. -- Dave Abrahams BoostPro Computing http://www.boostpro.com

To be honest I'm a bit wary of this discussion, but I think some important arguments are overlooked while other, less important arguments are being exaggerated. Thomas Heller wrote:
Let me try to wrap my head around all this... So essentially, the argument is to choose git over whatever else because of its marketshare, right?
I don't think this is the only argument, but you do agree that marketshare can make a difference, right?
The only reason behind this I can think of is to attract new boost contributors [...] Ok, I can see that as a possible advantage for git, especially since a lot of people have said that svn is _the_ major blocker for not contributing. So far so good; this is the argument for people already familiar with git. Let's check the statistics again: https://github.com/languages Right ... so many potential new developers ... Maybe we should provide Javascript bindings ...
This is a false comparison. The first line on boost.org is literally "Boost provides free peer-reviewed portable C++ source libraries". So it's not among the goals of Boost to provide Java, JavaScript or even Intercal bindings. Using any kind of VCS, on the other hand, is a means to the end of producing free peer-reviewed portable C++ source libraries.
So, what about persons who are infected by their favorite poison and have to learn the VCS tool of choice. I keep reading about git having a steep learning curve, so maybe we won't attract those developers either ...
I really don't see why there's such a fuss about git having a steep learning curve. Basic usage of git isn't any harder than basic usage of svn -- or probably any other VCS; you always need to cover about eight commands, two configuration files and a bunch of options. People tend to learn only basic usage at first (whatever VCS they're learning) and the more advanced stuff is only covered when we feel we need it. Perhaps the learning curve is steep if you want to become a black belt git master, but that's irrelevant for most Boost developers. Basic usage of git is different from basic usage of svn in some crucial aspects, but similar enough for anyone to be able to adjust even if you don't like it. It can definitely be learnt within a day. Why don't you just give it a try? It never hurts to learn something new.
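To put the "about eight commands" claim into concrete terms, here is a minimal sketch of an everyday svn-like workflow in git. It assumes only that git is installed; the file and branch names are invented for illustration:

```shell
set -e
work=$(mktemp -d)
git init -q "$work/project"
cd "$work/project"
git config user.email me@example.com
git config user.name Me
main=$(git symbolic-ref --short HEAD)  # default branch name varies by git version
echo "draft" > doc.txt
git status -s                 # 1. see what changed
git add doc.txt               # 2. stage it
git commit -q -m "Add doc"    # 3. record it
git checkout -q -b fix        # 4. start a branch
echo "final" > doc.txt
git diff                      # 5. inspect the edit
git commit -q -am "Finish doc"
git checkout -q "$main"
git merge -q fix              # 6. fold the branch back in
git log --oneline             # 7. review history
```

Add clone, pull and push for collaborating with a remote and you have the whole everyday vocabulary; everything beyond this is the advanced material that most users pick up only when they need it.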
I'd argue that writing code is not done in the VCS, be it writing a patch for existing software or a completely new library. The complexity is in writing the code itself, or in applying the patch and verifying it.
I think I'm missing your point here. Is it just an aside, or did you mean to argue for or against a particular VCS?
FWIW, I am the last person who will oppose such a change. But so far, no one has presented a fair case for git,
Well, allow me to present some fair reasoning to you. With regard to git versus svn: I think enough fair reasons have been given why git (or a DVCS in general) is better than svn. I'm not going to repeat those arguments here.

With regard to git versus mercurial: given that it's probably a good idea to switch to a DVCS, git and mercurial seem to be the primary candidates. I think everyone in this thread should be more willing to admit that they're close competitors. In many ways they're about equally "good", and when they aren't the differences are quite moot:

- mercurial has native support for Windows, but git is also fairly well supported on Windows and seems to be rapidly improving;
- git allows you to edit history while mercurial doesn't, but which you like better is a matter of preference;
- they seem to treat heads and branches differently, but again that appears to be mostly a matter of preference;
- git seems to be more "powerful" and less susceptible to errors, while mercurial is said to have better documentation -- while this doesn't make either objectively better than the other in the first place, they're also both catching up on their weaker side;
- they are built with very different architectures (many executables written in C versus a monolithic program in Python), but in the end both work well enough and both seem extensible enough for most purposes.

At this point one could say that it doesn't matter whether we pick git or mercurial, as long as we actually do it and move away from svn (preferably as soon as possible). However, as far as popularity and enthusiasm are concerned, git seems to win:

- Within the existing Boost community, git seems to be more popular than mercurial. I've seen several library proposals pass by that were versioned with git. I'm also sure that at least one existing Boost library is being maintained in a GitHub repository. AFAICT mercurial scores a solid zero on both of those.
- A lot of work has already been done in order to transition Boost from svn to git. From what I read in the "neglected aspects" thread, John Wiegley's subconvert tool seems to be almost ready, and that also seems to be the last (only?) thing we need in order to switch. For mercurial no work has been done yet.
- Git is also more popular than mercurial outside Boost. Like it or not, this simply means that git is a better bet.
or for how such a transition could be done.
To me, that doesn't seem like a hard problem. With some more thought one could probably produce a more robust plan, but basically I think it would be something like this:

1. Finish the subconvert tool.
2. Verify that everything builds and that all unit tests run as expected.
3. Give library maintainers a grace period in which they can switch development from svn to git. Keep syncing with subconvert in the meanwhile.
4. Once all libraries have transitioned, we can probably establish that the switch was successful and abandon the svn repository.
[...] Somehow people tend to forget in their Git crusade that other people haven't been through the transition yet, and are searching for arguments to actually make such a change.
Here's an argument for just learning the basics of git: most people who tried both svn and git seem to agree that git is sufficiently better than svn to make the switch. Surely you want to have a look? I recommend that you try the tutorial at http://gitimmersion.com/ . It doesn't have to take much of your time. -Julian

On 20 Mar 2012, at 11:03, Julian Gonggrijp wrote: I feel two points from this email are the most important ones:
- Within the existing Boost community, git seems to be more popular than mercurial. I've seen several library proposals pass by that were versioned with git. I'm also sure that at least one existing Boost library is being maintained in a GitHub repository. AFAICT mercurial scores a solid zero on both of those.
- A lot of work has already been done in order to transition Boost from svn to git. From what I read in the "neglected aspects" thread, John Wiegley's subconvert tool seems to be almost ready, and that also seems to be the last (only?) thing we need in order to switch. For mercurial no work has been done yet.
At the end of the day, no-one is paid to work on boost, and the people who are willing to put the work in all want to use git. Unless mercurial fans are willing to put serious work into building the infrastructure required for boost, the idea is a non-starter. Having used both git and mercurial extensively, I believe the practical difference to boost of whichever one was chosen would be minimal. They both accomplish broadly the same goals in broadly the same ways. Chris

On Tue, Mar 20, 2012 at
At the end of the day, no-one is paid to work on boost, and the people who are willing to put the work in all want to use git. Unless mercurial fans are willing to put serious work into building the infrastructure required for boost, then the idea is a non-starter.
This is not true. Have a look at boostpro.com. Some people _are_ paid to work on boost, even if indirectly. Julien

On 20/03/2012 11:03, Julian Gonggrijp wrote:
Here's an argument for just learning the basics of git: most people who tried both svn and git seem to agree that git is sufficiently better than svn to make the switch. Surely you want to have a look?
I recommend that you try the tutorial at http://gitimmersion.com/ . It doesn't have to take much of your time.
I personally recommend this book: http://progit.org/book/ (paid & printed versions are available for those who want to support the author). There is also a quite useful blog on the same site. B.

On 03/20/2012 12:03 PM, Julian Gonggrijp wrote: <snip>
Basic usage of git is different from basic usage of svn in some crucial aspects, but similar enough for anyone to be able to adjust even if you don't like it. It can definitely be learnt within a day. Why don't you just give it a try? It never hurts to learn something new.
*SIGH* You keep assuming that I never tried git. My last adventure with trying to use git was around half a year ago. I still have nightmares from that.
I'd argue that writing code is not done in the VCS, be it writing a patch for existing software or a completely new library. The complexity is in writing the code itself, or in applying the patch and verifying it.
I think I'm missing your point here. Is it just an aside, or did you mean to argue for or against a particular VCS?
No, I was trying to show how nonsensical the argument is that more patches get applied after switching to git or any other VCS, be it centralized or not. Maybe switching to a DVCS might increase the quantity of contributions, but quantity != quality. And that is what I personally fear most: tons of low-quality "forks" sprouting out of the ground. But really, the complexity of maintaining a boost library does not lie in the version control system. With that being said, I am ready to admit that something like git might improve the handling of patches etc., but it should be clear that this is totally unrelated to actually applying and verifying those patches.
FWIW, I am the last person who will oppose such a change.
*Nuff said*.

On Tue, Mar 20, 2012 at 12:30:31PM +0100, Thomas Heller wrote:
On 03/20/2012 12:03 PM, Julian Gonggrijp wrote: <snip>
Basic usage of git is different from basic usage of svn in some crucial aspects, but similar enough for anyone to be able to adjust even if you don't like it. It can definitely be learnt within a day. Why don't you just give it a try? It never hurts to learn something new.
*SIGH* you keep assuming that i never tried git. My last adventure with trying to use git was around half a year ago. I still have nightmares from that.
Could you please give some example of that? Git is so easy to learn and use that it is possible it was your "incompetence" which created your nightmares. "Incompetence" can have many meanings, good and bad ones.

But, as an outsider who follows discussions on the Boost mailing list, what I can see in this thread is that people arguing in favour of Git mostly use concrete arguments, supported by apparently quite some work done regarding the Boost-Git connection, while people arguing against it do not seem to present arguments, but mostly only negative emotions. Like the statement above --- where is some evidence?

I can understand that discussions get a bit heated, and then one reacts quickly. So no need for big elaborations, only some more concrete hints about the circumstances under which you applied Git, what your expectations were, and where your problems were, so that we can better understand the above statement. Oliver

On 03/20/2012 12:50 PM, Oliver Kullmann wrote:
On Tue, Mar 20, 2012 at 12:30:31PM +0100, Thomas Heller wrote:
On 03/20/2012 12:03 PM, Julian Gonggrijp wrote: <snip>
Basic usage of git is different from basic usage of svn in some crucial aspects, but similar enough for anyone to be able to adjust even if you don't like it. It can definitely be learnt within a day. Why don't you just give it a try? It never hurts to learn something new.
*SIGH* you keep assuming that i never tried git. My last adventure with trying to use git was around half a year ago. I still have nightmares from that.
Could you please give some examples of that? Git is so easy to learn and use that it is possible your "incompetence" created your nightmares. "Incompetence" can have many meanings, good and bad ones. But, as an outsider who follows discussions on the Boost mailing list, what I can see in this thread is that people arguing in favour of Git mostly use concrete arguments, supported by apparently quite some work done on the Boost-Git connection, while people arguing against it do not seem to present arguments, but mostly only negative emotions. Like the statement above --- where is some evidence?
Well, hard evidence is difficult ... but let me try to replay my experience. I am sure the next guy will step up and tell me that I did it totally wrong (which actually happened when I tried to collaborate on said project using git).

So, the journey starts about a year ago or so. I decided I needed to check out this new project I had heard about. I was (actually still am) very determined to contribute to it, so I cloned the repository, browsed the code, etc. Eventually I decided to fork the project because I wanted to get some hacking done. That is what I did. Then life happened and I had to postpone the work. A few months later, I got a new assignment to contribute a module for that project. Remember, I still had that (public) fork lying around, so I tried to get it up to date. First bummer: I don't remember which commands I tried in which order, but merge didn't really work, and I messed up during rebase. The result was that I spent an entire day trying to figure out how to get this outdated fork up to date so I could start hacking again. Also, since trying to learn this new git tool and its cool branches and stuff, I of course had multiple local branches lying around, never really figured out how to properly maintain them (origin branch, master fork branch, origin feature branch1, etc. ...), and constantly pushed to the wrong branches and/or repos (luckily, I didn't have write access to the repository I forked from). And not to forget that I wanted to try some feature X from branch Y, but needed to combine that with my feature Z on branch U. Essentially, whenever I tried to publicly show my progress to someone, I ended up totally confused, in a complete local litter box of branches, half of which didn't really do what they were supposed to (like remote tracking). I needed to search the internets for how to accomplish any task that wasn't a simple "git add" or "git commit".

Asking people once I was stuck led to the answer that I shouldn't have executed command X in the first place. D'oh, that was how I read it on the internets ... I am sorry this isn't a really detailed usage story, missing all the commands etc.; that is why I wasn't clearer in the first place. To be perfectly honest, I have even forgotten most of the git usage I learned back then. I hope you can still relate a little to what I am talking about. But yeah ... this is the memory that makes me argue against git. It is also the reason why I argue against all those "advantages" people see in git: I clearly fail to see them because I miserably failed in actually trying to use them. My $0.02 ... P.S.: In this case, the usage of git actually prevented me from making the contribution I wanted to make. Nevertheless I was able to contribute a tiny bit. But well ...
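For what it's worth, the stale-fork situation described above can usually be untangled with a plain fetch-and-rebase. A minimal sketch, acted out in a throwaway pair of repositories (all repository, file, and commit names here are invented for the example):

```shell
# A throwaway "upstream" repository plus a clone standing in for the fork.
set -e
tmp=$(mktemp -d); cd "$tmp"
git init -q upstream
git -C upstream config user.email you@example.com
git -C upstream config user.name You
echo v1 > upstream/file.txt
git -C upstream add file.txt
git -C upstream commit -qm "initial"
git clone -q upstream fork
git -C fork config user.email you@example.com
git -C fork config user.name You
# upstream moves on while the fork sleeps...
echo v2 > upstream/file.txt
git -C upstream commit -qam "upstream change"
# ...and the fork accumulates local work of its own
echo note > fork/note.txt
git -C fork add note.txt
git -C fork commit -qm "fork work"
# the refresh: fetch upstream, then replay the local commits on top of it
git -C fork fetch -q origin
git -C fork rebase -q origin/HEAD
commits=$(git -C fork rev-list --count HEAD)   # 3: initial + upstream + fork work
content=$(cat fork/file.txt)                   # v2: upstream's change is present
```

After the rebase the fork's history is linear again: upstream's change is present and the local commit sits on top of it, which is the state one wants before resuming work.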

I can't believe you said that: "… Git is so easy to learn and use ..." Some stats first: on Stackoverflow.com there are about 14000 git questions versus 11000 svn questions. If you consider the ratio (questions / market share), git is a clear loser. Even the people pushing for git on this ML tend to agree that it is not the simplest in terms of usage. Julien

On Tue, Mar 20, 2012 at 09:30:18PM +0900, Julien Nitard wrote:
I can't believe you said that:
… Git is so easy to learn and use ...
Some stats first: Stackoverflow.com - git 14000 questions - svn 11000 questions
If you consider the ratio (question / market share), git is a clear loser.
Intuitively, the above is quite clear to me: Git is the future, so many people explore it and want to know about it. Svn is the past, not of so much interest anymore. And Git is also more powerful than Svn (I think that's indisputable), so there are bound to be more questions (e.g., there should be more questions on C++ than on C). One could make an empirical study ... (not me).
Even the people pushing for git on this ML tend to agree that it is not the simplest in terms of usage.
I don't think that this was said: the task *is complex*, that's the point! Git is appropriate to the task, and thus it must be complex. The basic usage is unquestionably simple (using simple tools like git-gui and gitk), and then comes the rest --- and whoever wants the power first has to practice. If Boost provided good recommendations (best practices) for the common (Boost!) workflows, that would definitely be of great help, and I think it would sort out most of the problems. Oliver -- Dr. Oliver Kullmann Department of Computer Science College of Science, Swansea University Faraday Building, Singleton Park Swansea SA2 8PP, UK http://cs.swan.ac.uk/~csoliver/

On 03/21/2012 01:05 PM, Mathias Gaunard wrote:
On 20/03/12 13:30, Julien Nitard wrote:
Even the people pushing for git on this ML tend to agree that it is not the simplest in terms of usage.
So is C++.
With great power comes great responsibility.
100% agree. But to be honest, I'd rather focus my mental abilities on developing C++ code than on mastering the tool that is supposed to ease that development.

On 20/03/2012 12:21, Thomas Heller wrote:
On 03/20/2012 12:50 PM, Oliver Kullmann wrote:
On Tue, Mar 20, 2012 at 12:30:31PM +0100, Thomas Heller wrote:
On 03/20/2012 12:03 PM, Julian Gonggrijp wrote: <snip>
Basic usage of git is different from basic usage of svn in some crucial aspects, but similar enough for anyone to be able to adjust even if you don't like it. It can definitely be learnt within a day. Why don't you just give it a try? It never hurts to learn something new.
*SIGH* you keep assuming that i never tried git. My last adventure with trying to use git was around half a year ago. I still have nightmares from that.
Could you please give some examples of that? Git is so easy to learn and use that it is possible your "incompetence" created your nightmares. "Incompetence" can have many meanings, good and bad ones. But, as an outsider who follows discussions on the Boost mailing list, what I can see in this thread is that people arguing in favour of Git mostly use concrete arguments, supported by apparently quite some work done on the Boost-Git connection, while people arguing against it do not seem to present arguments, but mostly only negative emotions. Like the statement above --- where is some evidence?
Well the evidence is hard ... but let me try to replay my experience. I am sure, the next guy will step up and tell me that i did it totally wrong (actually happened when i tried to collaborate on said project using git).
So, the journey starts about a year ago or so. I decided i need to check out this new project i heard about. I was (actually still am) very determined to contribute to that project, so i cloned the repository, browsed the code etc. eventually i decided to fork this project cause i wanted to get some hacking done. That is what i did. Then life happened and i had to postpone the work on the project. A few months later, I got a new assignment to contribute a module for that project. Remember, i still got that (public) fork lying around. So i tried to get it up to date. First bummer. I don't remember which commands i tried in which order, but merge didn't really work, and i
You should have used rebase to refresh your repository, not merge :) Also when things are really starting to look bad, your best help are two commands "git reflog" and "git reset --hard" :) B.
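For the record, here is a minimal sketch of the "git reflog" / "git reset --hard" safety net mentioned above, in a throwaway repository (file contents and commit messages are invented):

```shell
set -e
tmp=$(mktemp -d); cd "$tmp"
git init -q .
git config user.email you@example.com
git config user.name You
echo one > f; git add f; git commit -qm "first"
echo two > f; git commit -qam "second"
# simulate the disaster: a hard reset that throws the latest commit away
git reset -q --hard HEAD~1
after_reset=$(cat f)                 # back to "one"
# the reflog still remembers where HEAD pointed before the reset...
lost=$(git rev-parse 'HEAD@{1}')
# ...so a second hard reset brings the "lost" commit back
git reset -q --hard "$lost"
recovered=$(cat f)                   # "two" again
```

The point is that "lost" commits usually aren't lost at all: the reflog keeps every position HEAD has occupied, so even after a botched merge or rebase the old state can be recovered.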

You should have used rebase to refresh your repository, not merge :)
Also when things are really starting to look bad, your best help are two commands "git reflog" and "git reset --hard" :)
Hmm strange - I never ran into something like that with SVN, where somebody told me 'you should have done it this way and not that way' (and yes, before you ask, I've had quite a bit of exposure to GIT myself). Let's face it, GIT is a usability nightmare (IMHO) and it will not enable anything we couldn't do with SVN (or with Mercurial for that matter) if we only wanted to (IMHO; at least I have yet to see somebody give me that use case). Regards Hartmut --------------- http://boost-spirit.com http://stellar.cct.lsu.edu

On 20/03/2012, at 13:45, Hartmut Kaiser wrote:
You should have used rebase to refresh your repository, not merge :)
Also when things are really starting to look bad, your best help are two commands "git reflog" and "git reset --hard" :)
Hmm strange - I never ran into something like that with SVN, where somebody told me 'you should have done it this way and not that way' (and yes, before you ask, I've had quite a bit of exposure to GIT myself).
Let's face it, GIT is a usability nightmare (IMHO) and it will not enable anything we couldn't do with SVN (or with Mercurial for that matter) if we only wanted to (IMHO; at least I have yet to see somebody give me that use case).
And SVN is a nightmare for team work; that's how I started to hate it. The simple fact that you can't commit before an update is a complete nightmare and a productivity killer, not to mention having to resolve conflicts before you can commit. With git I can commit, commit, commit… and at the end of the day push it to the central repo and resolve conflicts in one go if necessary; and if I screw up, I have the complete history of change sets. SVN is very poor in this regard; before git came out I had to resort to bash scripts and patch files to overcome its limitations.

On 20/03/2012 13:45, Hartmut Kaiser wrote:
You should have used rebase to refresh your repository, not merge :)
Also when things are really starting to look bad, your best help are two commands "git reflog" and "git reset --hard" :)
Hmm strange - I never ran into something like that with SVN, where somebody told me 'you should have done it this way and not that way' (and yes, before you ask, I've had quite a bit of exposure to GIT myself).
Let's face it, GIT is a usability nightmare (IMHO) and it will not enable anything we couldn't do with SVN (or with Mercurial for that matter) if we only wanted to (IMHO; at least I have yet to see somebody give me that use case).
If the ability to do distributed development and scalability are not convincing arguments for you, I don't know what will be. Yes, you have to learn git in order to use it efficiently, just as you once learned the basics of version control. Some of those basics no longer apply when you switch to a DVCS, and whilst some tools (e.g. hg) try to follow old habits, that approach brings its own idiosyncrasies (multiple heads per branch). Anyway, I'm not going to try to convince anybody. There are people doing the work to ensure the future scalability of Boost version control; I'm grateful for that and not going to stifle the effort. There will always be people complaining about the necessity to unlearn old habits and learn new tools, but I think in this case it's just that: a necessity. I believe the Boost code base simply won't scale without better version control. B.

On 20/03/2012 13:45, Hartmut Kaiser wrote:
You should have used rebase to refresh your repository, not merge :)
Also when things are really starting to look bad, your best help are two commands "git reflog" and "git reset --hard" :)
Hmm strange - I never ran into something like that with SVN, where somebody told me 'you should have done it this way and not that way' (and yes, before you ask, I've had quite a bit of exposure to GIT myself).
Let's face it, GIT is a usability nightmare (IMHO) and it will not enable anything we couldn't do with SVN (or with Mercurial for that matter) if we only wanted to (IMHO; at least I have yet to see somebody give me that use case).
If ability to do distributed development and scalability are not convincing arguments for you, I don't know what will.
Nobody has shown to me that SVN is not capable of doing this - or Mercurial or ...put your favorite VCS name here... If the community decides we need to switch, then GIT is definitely the worst possible choice in terms of usability, code quality, error messages, robustness, user friendliness, etc. Again, all of this IMHO.
Yes you have to learn git in order to use it efficiently, just as you once learned basics of version control. Some of these basics no longer apply when you switch to DVCS and whilst some tools (e.g. hg) try to follow old habits, such approach brings its own idiosyncrasies (many heads to branch).
Sure. The question is whether you need to switch in the first place.
Anyway I'm not going to try to convince anybody. There are people doing the work to ensure future scalability of boost version control, I'm grateful for that and not going to stifle the effort. There will be always people complaining about necessity to unlearn old habits and learn new tools, but I think in this case it's just this: necessity. I believe boost code base simply won't scale without better version control.
Your implicit assumptions related to my 'unwillingness' to learn new things are wrong and I don't know where you got those from. Regards Hartmut --------------- http://boost-spirit.com http://stellar.cct.lsu.edu

Nobody has shown to me that SVN is not capable of doing this - or Mercurial or ...put your favorite VCS name here...
Here is a (simple, inaccurate) list of what a DVCS can do that a centralized VCS usually cannot:
1. commit when you don't have a working internet connection
2. see the diffs between two very old revisions without a working internet connection
3. avoid waiting 5-10 seconds to see the diffs between old revisions because of internet latency
4. use branches and merge them without fear of making mistakes (because you can simply undo what you did before pushing to the remote)
Regarding the first two points: if you suggest that you can do this with svn by setting up your own local svn server, making a deep copy of the central server, and then using that to commit/view diffs when you don't have a working internet connection, that is just too much work compared to a DVCS for it to be a real alternative. Points 1 and 2 are more important than you think; they free people to hack and work on the project without bothering others on the central repo. They can hack and throw away changes and only present stuff when it's ready (because you can throw away unpushed commits). Point 3 is not crucial, but in the long run it's very nice to have; now when I switch back to projects using SVN I get frustrated having to wait. Point 4 is the really important point here, imho. My 0.02$ Philippe
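Points 1, 2 and 4 above can be acted out in a throwaway local repository; nothing in the sketch below touches the network (revision contents are invented for the example):

```shell
set -e
tmp=$(mktemp -d); cd "$tmp"
git init -q .
git config user.email you@example.com
git config user.name You
echo a > f; git add f; git commit -qm "r1"
echo b > f; git commit -qam "r2"
# point 2: diff two old revisions straight from local history, no server
diff_lines=$(git diff HEAD~1 HEAD -- f | grep -c '^[+-][ab]')   # 2 changed lines
# point 1: commit offline as often as you like
echo experiment > f; git commit -qam "throwaway hack"
# point 4: nothing has been pushed, so undoing is a purely local operation
git reset -q --hard HEAD~1
final=$(cat f)   # "b" again; the experiment is gone
```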

On 20/03/2012 14:54, Hartmut Kaiser wrote:
If ability to do distributed development and scalability are not convincing arguments for you, I don't know what will.
Nobody has shown to me that SVN is not capable of doing this - or Mercurial or ...put your favorite VCS name here...
I think points of distributed development (fast or offline work, freedom to experiment) have been already expressed in this thread. As for performance, hope you will trust http://git-scm.com/about (since I can't be bothered to find benchmarks for you).
Sure. The question is whether you need to switch in the first place.
I believe we do, and I base this on the trust I place in Dave's judgment on this. He knows the pains of maintaining the existing Boost code base, just as many of us know the pains of maintaining any large codebase in a central source control repository.
Anyway I'm not going to try to convince anybody. There are people doing the work to ensure future scalability of boost version control, I'm grateful for that and not going to stifle the effort. There will be always people complaining about necessity to unlearn old habits and learn new tools, but I think in this case it's just this: necessity. I believe boost code base simply won't scale without better version control.
Your implicit assumptions related to my 'unwillingness' to learn new things are wrong and I don't know where you got those from.
Apologies if this was implied. I literally meant "people", not you. B.

On 3/20/2012 10:54 AM, Hartmut Kaiser wrote:
Nobody has shown to me that SVN is not capable of doing this - or Mercurial or ...put your favorite VCS name here...
I haven't finished catching up with all the messages on this thread, but this was too bizarre to pass by. As is usual with "Turing equivalency" statements, it is irrelevant in practice. Yes, everything that is done in C++ can be done in C, or in Basic, or in JavaScript, or in Cobol, or in Assembly, or on a Turing machine with I/O added. But that doesn't make those alternatives a convenient or even practical replacement for C++ in all cases. Similarly, there is nothing that any version control system does that cannot be done with a network file server. The question is not what any of them *can* do; it's how convenient, pleasant, reliable and efficient it is to do what is needed with each of them (I include the learning curve issue under "convenient").

I'm really quite agnostic on the issue -- the limited amount of experience I have with git does not weigh heavily against learning Mercurial from scratch. So far, though, there really has not been a single substantive argument that I can remember being made for Mercurial -- only arguments *against* git, apparently based on atypical personal experience, including the blunder of trying to pick up use of a tool never mastered, after several months, without the elementary precaution of copying the local repository first. (Coincidentally, I was in pretty much the same situation only a few days ago. An old client using git to distribute to Heroku asked me to add some patches. Since I had some tests and tools in there I updated my old repository rather than cloning a new one -- after backing it up for safety, which is just a matter of copying the top directory.) Note that this would not be an option with a non-distributed VCS like svn -- the best you can do is back up your local sandbox.

I do have sympathy for your stance given your experience, but it does seem to be quite atypical. I have to wonder whether at least part of the problem wasn't a poorly structured repository. Topher Cooper

On 20/03/12 15:54, Hartmut Kaiser wrote:
If ability to do distributed development and scalability are not convincing arguments for you, I don't know what will.
Nobody has shown to me that SVN is not capable of doing this - or Mercurial or ...put your favorite VCS name here...
You were given a pretty simple explanation in the previous post: you cannot commit in SVN without updating first. For an analogy from parallel programming, SVN requires a global barrier every time you need to do something, while Git doesn't. Surely you can see that Git scales much better. Now, if you do very large commits anyway, scalability at this level doesn't matter so much. But good practice is to make relatively small commits, one commit being a meaningful atomic feature. Small commits make it much easier to trace the development that has been done, to identify when problems were introduced, etc. Git lets you make many small commits easily, without synchronization with the master repository. That improves not only development time, but the quality of the history as well.
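The "many small commits, one synchronization point" workflow can be sketched like this, with a throwaway bare repository standing in for the shared one (all repository, file, and commit names are invented):

```shell
set -e
tmp=$(mktemp -d); cd "$tmp"
git init -q --bare central.git
git clone -q central.git work 2>/dev/null   # cloning an empty repo warns; harmless
cd work
git config user.email you@example.com
git config user.name You
# three small, self-contained commits -- no server round-trip for any of them
for step in parse validate emit; do
  echo "$step" > "$step.c"
  git add "$step.c"
  git commit -qm "add $step stage"
done
# a single synchronization point at the end
git push -q origin HEAD
pushed=$(git -C ../central.git rev-list --count --all)   # all 3 commits arrived
```

Each of the three commits is an independent, traceable unit of history, yet the shared repository is contacted exactly once.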

AMDG On 03/21/2012 05:25 AM, Mathias Gaunard wrote:
On 20/03/12 15:54, Hartmut Kaiser wrote:
If ability to do distributed development and scalability are not convincing arguments for you, I don't know what will.
Nobody has shown to me that SVN is not capable of doing this - or Mercurial or ...put your favorite VCS name here...
You were given a pretty simple explanation in the previous post. You cannot commit in SVN without updating first.
Only if some of the files that you modify have been modified in the repository.
For an analogy in parallel programming, SVN requires a global barrier every time you need to do something, while Git doesn't. Surely you can see that Git scales much better.
Now, if you do very large commits anyway, scalability at this level doesn't matter so much. But good practice is to make relatively small commits, one commit being a meaningful atomic feature. Small commits make it much easier to trace the development that has been done, to identify when problems were introduced, etc.
Git enables to do many small commits easily without synchronization with the master repository. It not only improves development time, but quality of the history as well.
This is only a problem in SVN if multiple developers are working on the same files at the same time. I don't see this happening a lot for Boost, given our total man-power, average file granularity, and total code size. In Christ, Steven Watanabe

-----Original Message----- From: boost-bounces@lists.boost.org [mailto:boost-bounces@lists.boost.org] On Behalf Of Steven Watanabe Sent: Wednesday, March 21, 2012 4:39 PM To: boost@lists.boost.org Subject: Re: [boost] [git] Mercurial?
AMDG
You were given a pretty simple explanation in the previous post. You cannot commit in SVN without updating first.
Only if some of the files that you modify have been modified in the repository.
For an analogy in parallel programming, SVN requires a global barrier every time you need to do something, while Git doesn't. Surely you can see that Git scales much better.
Now, if you do very large commits anyway, scalability at this level doesn't matter so much. But good practice is to make relatively small commits, one commit being a meaningful atomic feature. Small commits make it much easier to trace the development that has been done, to identify when problems were introduced, etc.
Git enables to do many small commits easily without synchronization with the master repository. It not only improves development time, but quality of the history as well.
This is only a problem in SVN if multiple developers are working on the same files at the same time. I don't see this happening a lot for Boost, given our total man-power, average file granularity, and total code size.
Well, both in working collaboratively on Boost.Math and in a couple of GSoC projects, I have found that coming to commit and finding that someone else has committed a change is common. Unless you agree on who has temporary exclusive access to the code, it is all too easy to find that changes collide. How git alters that (if at all) I have yet to understand. Paul --- Paul A. Bristow, Prizet Farmhouse, Kendal LA8 8AB UK +44 1539 561830 07714330204 pbristow@hetp.u-net.com

On 03/21/2012 06:38 PM, Paul A. Bristow wrote:
-----Original Message----- From: boost-bounces@lists.boost.org [mailto:boost-bounces@lists.boost.org] On Behalf Of Steven Watanabe Sent: Wednesday, March 21, 2012 4:39 PM To: boost@lists.boost.org Subject: Re: [boost] [git] Mercurial?
AMDG
You were given a pretty simple explanation in the previous post. You cannot commit in SVN without updating first.
Only if some of the files that you modify have been modified in the repository.
For an analogy in parallel programming, SVN requires a global barrier every time you need to do something, while Git doesn't. Surely you can see that Git scales much better.
Now, if you do very large commits anyway, scalability at this level doesn't matter so much. But good practice is to make relatively small commits, one commit being a meaningful atomic feature. Small commits make it much easier to trace the development that has been done, to identify when problems were introduced, etc.
Git enables to do many small commits easily without synchronization with the master repository. It not only improves development time, but quality of the history as well.
This is only a problem in SVN if multiple developers are working on the same files at the same time. I don't see this happening a lot for Boost, given our total man-power, average file granularity, and total code size. Well both in working collaboratively on Boost.Math and a couple of GSoC projects, I have found that coming to commit and finding that someone else has committed a change is common.
Unless you agree who has temporary exclusive access to the code, it is all too easy to find that changes collide.
How git alters that (if at all) I have yet to understand.
Same here ... But if we believe the git advocates, resolving those conflicts is easier with git. From what I understand it is partly due to the fact that git's merge algorithm is slightly better than svn's. It might also be due to the fact that commits are broken into smaller pieces, so each conflict is more isolated.
Paul
--- Paul A. Bristow, Prizet Farmhouse, Kendal LA8 8AB UK +44 1539 561830 07714330204 pbristow@hetp.u-net.com

Thomas Heller <thom.heller@googlemail.com> writes:
On 03/21/2012 06:38 PM, Paul A. Bristow wrote:
Well both in working collaboratively on Boost.Math and a couple of GSoC projects, I have found that coming to commit and finding that someone else has committed a change is common.
Unless you agree who has temporary exclusive access to the code, it is all too easy to find that changes collide.
I'm unclear how git alters that (if at all) I have yet to understand.
Same here ... But if we believe the git advocates, resolving those conflicts is easier with git.
That's actually not true. DVCS tools are not magic, and when there is a genuine conflict in SVN (the same region was edited in parallel by two developers), then you also get a conflict in Git and Mercurial. The difference is that Git and Mercurial track the history more explicitly in their changeset graphs -- SVN has had merge tracking since version 1.5, but according to the SVN authors it's a fragile feature. I've heard from clients that they migrated to Mercurial after finding that Subversion had conflicts in the *metadata* that was supposed to track merges!
From what i understand it is partly due to the fact that gits merge algorithm is slightly better than svns.
True, Mercurial and Git have a clearer picture of what's going on. I've already presented a case where Subversion fails to merge cleanly but Mercurial and Git succeed: http://stackoverflow.com/a/2486662/110204 In the scenario, there is a rename on one branch and a modification on another branch.
This might also be due to the fact, that commits are broken into smaller pieces, thus the conflict is more isolated.
No, the number of commits is not important for a three-way merge. This is a very common misunderstanding, though, and many blog posts keep saying this since it sounds intuitively correct. But the fact is that a three-way merge is concerned with three things only: common ancestor, your version, and my version. The number of commits that went into producing your and my version is irrelevant. I've written a little about it here: http://stackoverflow.com/a/8592480/110204 http://stackoverflow.com/a/9500764/110204 -- Martin Geisler Mercurial links: http://mercurial.ch/
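The rename-on-one-branch, edit-on-the-other scenario mentioned above can be reproduced in a throwaway repository; git's rename detection lets the merge carry the edit over to the new file name (file, branch, and commit names are invented for the example):

```shell
set -e
tmp=$(mktemp -d); cd "$tmp"
git init -q .
git config user.email you@example.com
git config user.name You
printf 'line1\nline2\nline3\n' > foo.c
git add foo.c
git commit -qm "base"
git branch edit                      # second branch, still at the base commit
# branch 1: rename the file
git mv foo.c bar.c
git commit -qm "rename foo.c to bar.c"
rename_commit=$(git rev-parse HEAD)
# branch 2: edit the file under its old name
git checkout -q edit
printf 'line1\nCHANGED\nline3\n' > foo.c
git commit -qam "edit line2"
# merging the rename: git notices foo.c became bar.c and carries the edit over
git merge -q -m "merge rename" "$rename_commit"
merged_line=$(sed -n 2p bar.c)       # CHANGED
old_name_gone=$([ -e foo.c ] && echo no || echo yes)
```

The merge completes cleanly: the edited line ends up in bar.c and foo.c is gone, which is exactly the case the linked Stack Overflow answer says Subversion trips over.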

Hi,
That's actually not true. DVCS tools are not magic, and when there is a genuine conflict in SVN (the same region was edited in parallel by two developers), then you also get a conflict in Git and Mercurial.
Could you please confirm this? I am not an expert, but it seems that the diff algorithm in Git and Hg is completely different. For instance, it is able to detect that lines were moved, rather than seeing some lines deleted and some (unrelated) lines inserted, the SVN way. Regards, Julien

Julien Nitard <julien.nitard@m4tp.org> writes:
Hi,
That's actually not true. DVCS tools are not magic, and when there is a genuine conflict in SVN (the same region was edited in parallel by two developers), then you also get a conflict in Git and Mercurial.
Could you please confirm this ? I am not an expert on that, but it seems that the diff algo in Git and Hg is completely different. For instance it is able to detect that lines were moved rather than some lines deleted and some (unrelated) lines inserted in the SVN way.
No, that's not true. It's part of a popular myth around Git: people say that it tracks "content", not "files". This is being repeated and repeated all over the web as a defining and amazing feature of Git. An example is the bottom of this page: http://book.git-scm.com/3_normal_workflow.html What it really means is that a particular file is stored in the key-value store under a key derived from its content. The key is independent of the file name. This means that a renamed file is stored under the same key -- nothing more. A *changed* file (the situation you have when you move a code block from foo.c to bar.c) will still be stored under a different key and will look completely unrelated to Git. It is true that 'git blame' has -M and -C options you can use to make it look for moved blocks of code. But this is pure post-processing: Git is comparing the versions it has stored and can detect moved code based on that. Subversion could in principle also do this based on the data it has stored. Based on the "track content, not files" myth, people have been trying to make Git magically recognize that code was moved in one branch and changed in another. This question is a good example: http://stackoverflow.com/q/8843891/110204 You can try it out yourself with these repositories: https://bitbucket.org/mg/git-move-edit/changesets https://bitbucket.org/mg/hg-move-edit/changesets So, to recap: Mercurial, Git, and even Subversion use three-way merges to resolve conflicts. A three-way merge is a simple algorithm that I sketch in this answer: http://stackoverflow.com/a/9533927/110204 The three-way merge uses a common ancestor version (Subversion can mostly track this after version 1.5; Git and Mercurial have it as a core concept) and the two divergent versions.
For each hunk the merge table looks like this:

    ancestor  mine  yours  ->  merge
    old       old   old        old    (nobody changed the hunk)
    old       new   old        new    (I changed the hunk)
    old       old   new        new    (you changed the hunk)
    old       new   new        new    (hunk was cherry-picked onto both branches)
    old       foo   bar        <!>    (conflict: both changed the hunk, but differently)

Put simply: a three-way merge uses the ancestor to decide which hunk is new and which hunk is still old. Change trumps, so new hunks are copied to the merge result. Finally, just to make sure nobody complains that I claim Git and Mercurial merge 100% identically: Git will create a virtual common ancestor if there is more than one greatest common ancestor. That can help resolve some criss-cross merges. Currently, Mercurial does not let you select the ancestor, but I wrote a tiny extension for this: http://stackoverflow.com/a/9430810/110204 -- Martin Geisler aragost Trifork Professional Mercurial support http://www.aragost.com/mercurial/
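That table can be acted out directly with `git merge-file`, which performs exactly this kind of plain three-way merge on three files (the file contents below are invented):

```shell
set -e
tmp=$(mktemp -d); cd "$tmp"
printf 'alpha\nbeta\ngamma\n' > ancestor
printf 'ALPHA\nbeta\ngamma\n' > mine       # I changed the first hunk
printf 'alpha\nbeta\nGAMMA\n' > yours      # you changed the last hunk
cp mine merged
# merge-file rewrites "merged" in place; exit status 0 means no conflict
git merge-file merged ancestor yours
result=$(echo $(cat merged))               # both changes survive
# the <!> row: both sides change the same hunk, differently
printf 'MINE\nbeta\ngamma\n'  > mine2
printf 'YOURS\nbeta\ngamma\n' > yours2
cp mine2 merged2
if git merge-file merged2 ancestor yours2; then conflict=no; else conflict=yes; fi
```

The first merge resolves without intervention because the two sides changed different hunks; the second hits the last row of the table and leaves conflict markers, just as the algorithm predicts.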

Julien Nitard wrote:
That's actually not true. DVCS tools are not magic and when there is a genuine conflict in SVN (the same region was editited in parallel by two developers) then you also get a conflict in Git and Mercurial.
Could you please confirm this ? I am not an expert on that, but it seems that the diff algo in Git and Hg is completely different. For instance it is able to detect that lines were moved rather than some lines deleted and some (unrelated) lines inserted in the SVN way.
I've heard about that too, even that git (and probably also hg) is able to detect that some lines moved from one file to another. I haven't observed that neat trick myself yet (not sure whether it didn't happen or I just wasn't aware of it) and I can't remember where I read about it. I think it doesn't necessarily have anything to do with the diff tool; the magic could also be in the algorithms that process the diffs, or perhaps in the use of hashes. In fact I suspect it's not in the diff tool, since you can configure git to use a different diff engine. That said, when two developers really edit the very same line in parallel (say one changes it to all uppercase and the other to all lowercase), of course there is no way that any tool in the world would be able to resolve the conflict. -Julian

Julian Gonggrijp <j.gonggrijp@gmail.com> writes:
Julien Nitard wrote:
That's actually not true. DVCS tools are not magic, and when there is a genuine conflict in SVN (the same region was edited in parallel by two developers) then you also get a conflict in Git and Mercurial.
Could you please confirm this? I am not an expert on that, but it seems that the diff algorithm in Git and Hg is completely different. For instance, it is able to detect that lines were moved, rather than some lines deleted and some (unrelated) lines inserted, the SVN way.
I've heard about that too, even that git (and probably also hg) is able to detect that some lines moved from one file to another. I haven't observed that neat trick by myself yet (not sure whether it didn't happen or I just wasn't aware of it) and I can't remember where I read about it.
It seems that everybody has heard of this magic... but nobody has actually seen it, and nobody can remember where they read about it :)
I think it doesn't necessarily need to have something to do with the diff tool; the magic could also be in the algorithms that process the diffs or perhaps in the use of hashes. In fact I suspect it's not in the diff tool, since you can configure git to use a different diff engine.
Right -- both Git and Mercurial let you plug in merge and diff tools. The diff tool is used for presentation: you can add an 'hg oodiff' command that will diff OpenOffice documents for you. It's just post-processing, but still very nice. The merge tool is called when there is a conflict. It is called with three files: ancestor, mine, yours. It can do whatever it wants to merge the files. It can, for example, analyse the history from ancestor to mine and from ancestor to yours, detect moved code, and act accordingly.
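For example, both let you point at an external three-way merge program. A configuration-only sketch (kdiff3 is used purely as an illustration; exact option strings may differ per setup):

```shell
# Git: declare a merge tool once, then invoke it on conflicted files
#   git config merge.tool kdiff3
#   git mergetool

# Mercurial: the equivalent lives in ~/.hgrc; the tool receives the
# three files mentioned above (ancestor/base, mine/local, yours/other)
#   [merge-tools]
#   kdiff3.args = $base $local $other -o $output
```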
That said, when two developers really edit the very same line in parallel (say one changes it to all uppercase and the other to all lowercase), of course there is no way that any tool in the world would be able to resolve the conflict.
Agreed! -- Martin Geisler aragost Trifork Professional Mercurial support http://www.aragost.com/mercurial/

It seems that everybody has heard of this magic... but nobody has actually seen it, and nobody can remember where they read about it :)
I do, in case somebody is interested. It's in the introduction to Mercurial written by Joel Spolsky that was posted here not so long ago. http://hginit.com/00.html paragraph "One more big conceptual difference". Julien

Julien Nitard <julien.nitard@m4tp.org> writes:
It seems that everybody has heard of this magic... but nobody has actually seen it, and nobody can remember where they read about it :)
I do, in case somebody is interested. It's in the introduction to Mercurial written by Joel Spolsky that was posted here not so long ago.
http://hginit.com/00.html paragraph "One more big conceptual difference".
Oh, yeah, that guide... :) I'm afraid Joel didn't really know what he was talking about back when he wrote that piece. Let me hurry and say that I *also* used to think that Mercurial/Git/... would re-apply the changes made on the branches when merging. I thought that merging z into t in

          r --- s --- t
         /
    ... a
         \
          x --- y --- z

would mean applying the x, y, z diffs onto t. If that were true, then it would make a difference whether you have 10 or 1000 changes between a and z. But it's not what happens in a standard three-way merge: it depends only on (a, t, z). The history back to the ancestor is only used to resolve renamed files, so that foo.c in a can be merged with bar.c in z.

A related operation is rebasing. There the granularity of your changes *does* make a difference. A rebase is a series of merges. Rebasing x through z onto t would mean merging t and x to create x', then merging x' and y to create y', and finally merging y' and z to create z':

          r --- s --- t --- x' --- y' --- z'
         /      .----------'      /      /
    ... a      /      .----------'      /
         \    /      /      .----------'
          x --- y --- z

You then delete the second parents of x' to z' so that you have:

    ... a --- r --- s --- t --- x' --- y' --- z'

Since you do repeated merges, it can be easier to merge if you have smaller changesets. Changesets that are too small can mean that you have to resolve a conflict that is cancelled by a later changeset (imagine that x makes a change that conflicts with t and y undoes the change again). -- Martin Geisler aragost Trifork Professional Mercurial support http://www.aragost.com/mercurial/
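In Git command terms, the rebase described above is just the ordinary `git rebase`; a throwaway sketch of my own (branch names and file contents are made up for illustration):

```shell
#!/bin/sh
# Build a --- r on 'trunk' and a --- x --- y on 'feature', then replay
# x and y on top of r as x' and y' (each replay is itself a merge).
set -e
cd "$(mktemp -d)"
git init -q repo && cd repo
git config user.email demo@example.com
git config user.name demo

echo base > file && git add file && git commit -qm "a"   # common ancestor a
git branch -M trunk

git checkout -qb feature
echo x > x.txt && git add x.txt && git commit -qm "x"
echo y > y.txt && git add y.txt && git commit -qm "y"

git checkout -q trunk
echo r > r.txt && git add r.txt && git commit -qm "r"

git checkout -q feature
git rebase -q trunk     # merges r+x -> x', then x'+y -> y'
git log --oneline       # linear history: the second parents are dropped
```

A conflict can surface at any replayed step; after resolving, `git rebase --continue` moves on to the next one, which is why the size of each changeset matters here but not in a single three-way merge.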

on Thu Mar 22 2012, Martin Geisler <mg-AT-aragost.com> wrote:
Julien Nitard <julien.nitard@m4tp.org> writes:
It seems that everybody has heard of this magic... but nobody has actually seen it, and nobody can remember where they read about it :)
I do, in case somebody is interested. It's in the introduction to Mercurial written by Joel Spolsky that was posted here not so long ago.
http://hginit.com/00.html paragraph "One more big conceptual difference".
Oh, yeah, that guide... :)
I'm afraid Joel didn't really know what he was talking about back when he wrote that piece.
+1 -- Dave Abrahams BoostPro Computing http://www.boostpro.com

On 22 March 2012 17:17, Dave Abrahams <dave@boostpro.com> wrote:
on Thu Mar 22 2012, Martin Geisler <mg-AT-aragost.com> wrote:
Julien Nitard <julien.nitard@m4tp.org> writes:
It seems that everybody has heard of this magic... but nobody has actually seen it, and nobody can remember where they read about it :)
I do, in case somebody is interested. It's in the introduction to Mercurial written by Joel Spolsky that was posted here not so long ago.
http://hginit.com/00.html paragraph "One more big conceptual difference".
Oh, yeah, that guide... :)
I'm afraid Joel didn't really know what he was talking about back when he wrote that piece.
+1
Wrt. git, the 'magic' certainly predates 'hg init'. For example, see the second answer at: http://stackoverflow.com/questions/1897585/how-does-git-handle-merging-code-... I think it dates back to early git development, when people were arguing about rename tracking. I think people might have read too much into things like: http://article.gmane.org/gmane.comp.version-control.git/217 Or maybe got confused with the content tracking in 'git blame'. Or things just get distorted as they are repeated.

Daniel James <dnljms@gmail.com> writes:
On 22 March 2012 17:17, Dave Abrahams <dave@boostpro.com> wrote:
on Thu Mar 22 2012, Martin Geisler <mg-AT-aragost.com> wrote:
Julien Nitard <julien.nitard@m4tp.org> writes:
It seems that everybody has heard of this magic... but nobody has actually seen it, and nobody can remember where they read about it :)
I do, in case somebody is interested. It's in the introduction to Mercurial written by Joel Spolsky that was posted here not so long ago.
http://hginit.com/00.html paragraph "One more big conceptual difference".
Oh, yeah, that guide... :)
I'm afraid Joel didn't really know what he was talking about back when he wrote that piece.
+1
Wrt. git, the 'magic' certainly predates 'hg init'. For example, see the second answer at:
http://stackoverflow.com/questions/1897585/how-does-git-handle-merging-code-...
No, that answer is full of "I think" and "git should" and so on. It's not factually correct -- he doesn't give any steps that demonstrate that Git can merge a change to foo.c into bar.c just because *part* of foo.c was moved into bar.c. Please don't believe people who give that kind of fuzzy answer. Especially when it's *trivial* and easy to test! I made these repositories in less than five minutes:

https://bitbucket.org/mg/git-move-edit/changesets
https://bitbucket.org/mg/hg-move-edit/changesets

Clone them and inspect them. You'll see that there is a small change in one branch and a moved function in another. Try merging the branches and you'll get a merge conflict in both Git and Mercurial. It's of course clear that you could write a merge tool that would look at the history and try to be smarter. But neither Git nor Mercurial ships with such a merge tool out of the box.
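The experiment really is trivial to reproduce locally. Here is my own sketch, mirroring those repositories in spirit (all names and contents are made up): one branch moves a function from foo.c to bar.c, the other edits that function in place, and the merge conflicts instead of following the moved code:

```shell
#!/bin/sh
set -e
cd "$(mktemp -d)"
git init -q repo && cd repo
git config user.email demo@example.com
git config user.name demo

printf 'int f() { return 1; }\nint g() { return 2; }\n' > foo.c
git add foo.c && git commit -qm "base"
git branch -M trunk

# Branch 1: move f() out of foo.c into a new file bar.c
git checkout -qb move
printf 'int g() { return 2; }\n' > foo.c
printf 'int f() { return 1; }\n' > bar.c
git add foo.c bar.c && git commit -qm "move f to bar.c"

# Branch 2: edit f() where it originally lived
git checkout -q trunk
printf 'int f() { return 42; }\nint g() { return 2; }\n' > foo.c
git commit -qam "edit f"

# The merge stops with a content conflict in foo.c; the edit to f()
# is NOT magically applied to the copy that moved to bar.c.
git merge move || echo "merge conflict, as expected"
```

After the failed merge, bar.c still contains the unedited `return 1` version: the "track content, not files" storage model gives the merge no way to connect the moved block with the edit.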
I think it dates back to early git development, when people were arguing about rename tracking. I think people might have read too much into things like:
I think that post is about inferring file renames after the fact. I'm fine with that -- it's annoying to get a bad merge with Mercurial because someone did

    hg remove foo.c
    hg add bar.c

instead of

    hg rename foo.c bar.c

On the other hand, I've heard people complain about the lack of formal renames in Git -- they liked how Mercurial lets you be explicit about a file rename. We've talked about making Mercurial infer renames like Git does, so that we catch mistakes like the above.
Or maybe got confused with the content tracking in 'git blame'. Or things just get distorted as they are repeated.
Yes, that's very true. The "track content, not files" mantra has been repeated again and again by people who don't really understand what it means, and there are now a lot of developers who will swear that Git is fundamentally different because it "tracks content, not files". -- Martin Geisler Mercurial links: http://mercurial.ch/

On 22 March 2012 23:00, Martin Geisler <mg@aragost.com> wrote:
Daniel James <dnljms@gmail.com> writes:
On 22 March 2012 17:17, Dave Abrahams <dave@boostpro.com> wrote:
on Thu Mar 22 2012, Martin Geisler <mg-AT-aragost.com> wrote:
Julien Nitard <julien.nitard@m4tp.org> writes:
It seems that everybody has heard of this magic... but nobody has actually seen it, and nobody can remember where they read about it :)
I do, in case somebody is interested. It's in the introduction to Mercurial written by Joel Spolsky that was posted here not so long ago.
http://hginit.com/00.html paragraph "One more big conceptual difference".
Oh, yeah, that guide... :)
I'm afraid Joel didn't really know what he was talking about back when he wrote that piece.
+1
Wrt. git, the 'magic' certainly predates 'hg init'. For example, see the second answer at:
http://stackoverflow.com/questions/1897585/how-does-git-handle-merging-code-...
No, that answer is full of "I think" and "git should" and so on. It's not factually correct
I was linking to an example of someone who was wrong. My mail was about history, not what git is capable of. The point was that the misconception didn't start with 'hg init'.

Daniel James <dnljms@gmail.com> writes:
On 22 March 2012 23:00, Martin Geisler <mg@aragost.com> wrote:
Daniel James <dnljms@gmail.com> writes:
On 22 March 2012 17:17, Dave Abrahams <dave@boostpro.com> wrote:
on Thu Mar 22 2012, Martin Geisler <mg-AT-aragost.com> wrote:
Julien Nitard <julien.nitard@m4tp.org> writes:
It seems that everybody has heard of this magic... but nobody has actually seen it, and nobody can remember where they read about it :)
I do, in case somebody is interested. It's in the introduction to Mercurial written by Joel Spolsky that was posted here not so long ago.
http://hginit.com/00.html paragraph "One more big conceptual difference".
Oh, yeah, that guide... :)
I'm afraid Joel didn't really know what he was talking about back when he wrote that piece.
+1
Wrt. git, the 'magic' certainly predates 'hg init'. For example, see the second answer at:
http://stackoverflow.com/questions/1897585/how-does-git-handle-merging-code-...
No, that answer is full of "I think" and "git should" and so on. It's not factually correct
I was linking to an example of someone who was wrong. My mail was about history, not what git is capable of. The point was that the misconception didn't start with 'hg init'.
Oh, I see now! I'm sorry about the misunderstanding. -- Martin Geisler Mercurial links: http://mercurial.ch/

On 03/22/2012 07:59 AM, Martin Geisler wrote:
Thomas Heller<thom.heller@googlemail.com> writes:
On 03/21/2012 06:38 PM, Paul A. Bristow wrote:
Well both in working collaboratively on Boost.Math and a couple of GSoC projects, I have found that coming to commit and finding that someone else has committed a change is common.
Unless you agree who has temporary exclusive access to the code, it is all too easy to find that changes collide.
I'm unclear how git alters that (if at all) I have yet to understand.
Same here ... But if we believe the git advocates, resolving those conflicts is easier with git.
That's actually not true. DVCS tools are not magic, and when there is a genuine conflict in SVN (the same region was edited in parallel by two developers) then you also get a conflict in Git and Mercurial.
The difference is that Git and Mercurial track the history more explicitly in their changeset graphs -- SVN has merge tracking after version 1.5, but according to the SVN authors, it's a fragile feature. I've heard from clients that they migrated to Mercurial after finding that Subversion had conflicts in the *metadata* that was supposed to track merges!
From what I understand it is partly due to the fact that git's merge algorithm is slightly better than svn's.
True, Mercurial and Git have a clearer picture of what's going on. I've already presented a case where Subversion fails to merge cleanly but Mercurial and Git will:
http://stackoverflow.com/a/2486662/110204
In the scenario, there is a rename on one branch and a modification on another branch.
This might also be due to the fact that commits are broken into smaller pieces, thus the conflict is more isolated.
No, the number of commits is not important for a three-way merge. This is a very common misunderstanding, though, and many blog posts keep repeating it since it sounds intuitively correct.
But the fact is that a three-way merge is concerned with three things only: common ancestor, your version, and my version. The number of commits that went into producing your and my version is irrelevant.
I've written a little about it here:
http://stackoverflow.com/a/8592480/110204 http://stackoverflow.com/a/9500764/110204
Thanks for the clarifications!

on Thu Mar 22 2012, Martin Geisler <mg-AT-aragost.com> wrote:
No, the number of commits is not important for a three-way merge. This is a very common misunderstanding, though, and many blog posts keep saying this since it sounds intuitively correct.
But the fact is that a three-way merge is concerned with three things only: common ancestor, your version, and my version. The number of commits that went into producing your and my version is irrelevant.
I've written a little about it here:
http://stackoverflow.com/a/8592480/110204 http://stackoverflow.com/a/9500764/110204
It's true that 3-way merge doesn't in practice take things in bits and pieces. That said, I have considerably simplified several nasty merge problems by merging in intermediate commits along the path to the commit I eventually want to merge. An automated tool that could do the same could be very useful for those tough merges (and would be easy to build on top of Git, and, presumably, Mercurial). -- Dave Abrahams BoostPro Computing http://www.boostpro.com

On 03/21/2012 01:38 PM, Paul A. Bristow wrote:
[snip] ... it is all too easy to find that changes collide.
I'm unclear how git alters that (if at all) I have yet to understand.
Conflicts and merging happen. A merge in cvs or svn always caused me to get worried and annoyed, because there was a non-trivial chance I was going to lose something I wanted. It meant my code that I just spent some time working on had been hacked up and spit out by an automated merge routine. If I couldn't make sense of my code vs. the other guy's, I was in trouble.

With mercurial, git and other DVCSs, the merge is different. *EVERYTHING IS SAVED ALREADY* There is no fear. You have two saved changesets whose difference you wish to resolve. Often the merge is simple and can be done automatically. Sometimes there are conflicts. You can work on those conflicts without worrying about losing any work. You can rebase your changes on to those of the other guy (with saved change bundles). You can decide his fix is a better approach and strip out your local changes. Anything is possible. Just please don't rewrite history that has already been grabbed by others -- that's annoying.

The point is that history is maintained before you ever merge. In a DVCS the merge operation is a first-class citizen -- by necessity, since every sandbox is a first-class repo, complete with history and everything you need to continue your work on the plane, train, or in the cave of the dalai lama.

Here's a "don't fear the merge" answer I wrote a while back with a simple example for hg. The concepts should apply to any DVCS. http://stackoverflow.com/questions/2968905/merging-from-another-clone-with-m...

Note: Although I mentioned rebasing and stripping history as possible options for how to deal with conflicts, they are slightly riskier than a simple merge. This is why mercurial requires you to add a single line to your config file to enable these extensions. It is sort of a safety catch on a rifle. It is still possible to blow your foot off, but not quite as easy.

Mark Borgerding <mark@borgerding.net> writes:
On 03/21/2012 01:38 PM, Paul A. Bristow wrote:
[snip] ... it is all too easy to find that changes collide.
I'm unclear how git alters that (if at all) I have yet to understand.
Conflicts and merging happen. A merge in cvs or svn always caused me to get worried and annoyed, because there was a non-trivial chance I was going to lose something I wanted. It meant my code that I just spent some time working on had been hacked up and spit out by an automated merge routine. If I couldn't make sense of my code vs. the other guy's, I was in trouble.
With mercurial, git and other DVCSs, the merge is different. *EVERYTHING IS SAVED ALREADY* There is no fear. You have two saved changesets whose difference you wish to resolve.
That is an important difference: with Subversion you run $ svn commit only to discover that someone else has touched a file you also touch. You now need to run $ svn update to get the changes from the repository and *merge* them into your *uncommitted* changes. I know Subversion leaves .mine files behind, but we're still talking about uncommitted changes. The normal way to avoid this in Subversion is to work on a branch. You create the branch *up-front* and everybody promises each other that they won't mess around in each other's branches. Mercurial and Git have a non-linear history at the core. There the branch is created as needed, and automatically if people push to the central repository before you do. Afterwards, you can directly see what happened in parallel and what happened in sequence.
Often, the merge is simple and can be done automatically. Sometimes there are conflicts. You can work on those conflicts without worrying about losing any work.
You can even ask someone else to do the merge: he can pull your committed changes into his repository and do the merge there. Maybe someone restructured the code while you made some smaller changes. Then that guy might be better at merging than you are. -- Martin Geisler aragost Trifork Professional Mercurial support http://www.aragost.com/mercurial/

Well, the evidence is hard ... but let me try to replay my experience. I am sure the next guy will step up and tell me that I did it totally wrong (which actually happened when I tried to collaborate on said project using git).
So, the journey starts about a year ago or so. I decided I needed to check out this new project I had heard about. I was (actually still am) very determined to contribute to that project, so I cloned the repository, browsed the code, etc. Eventually I decided to fork this project because I wanted to get some hacking done. That is what I did. Then life happened and I had to postpone the work on the project.

A few months later, I got a new assignment to contribute a module for that project. Remember, I still had that (public) fork lying around. So I tried to get it up to date. First bummer. I don't remember which commands I tried in which order, but merge didn't really work, and I messed up during rebase. The result was that I spent an entire day trying to figure out how to get this outdated fork up to date to start hacking again.

Also, since trying to learn this new git tool and its cool branches and stuff, I of course had multiple local branches lying around, never really figured out how to properly maintain that (origin branch, master fork branch, origin feature branch1, etc. ...), and constantly pushed to the wrong branches and/or repos (luckily, I didn't have any write rights to the repository I forked from). And not to forget that I wanted to try some feature X from branch Y, but needed to combine that with my feature Z on branch U.

Essentially, whenever I tried to publicly show my progress to someone, I ended up totally confused, and in a complete local litter box of branches, where half of them didn't really do what they were supposed to (like remote tracking). I needed to search the internets for how to accomplish any task that wasn't a simple "git add" or "git commit". Asking people after I didn't know any further led to the answer that I shouldn't have executed command X in the first place. D'oh, that was how I read it on the internets ...

I am sorry, this isn't a really detailed usage story, missing all the commands etc.; that is why I wasn't clearer in the first place.
To be perfectly honest, I even forgot most of the git usage I learned back then. I hope you can still relate a little to what I am talking about.
But yeah ... this is the memory that makes me argue against git. Also, it is the reason why I argue against all those "advantages" people see in using git. I clearly fail to see them because I miserably failed in actually trying to use them.
My $0.02 ...
P.S.: In this case, the usage of git actually prevented me from making the contribution I wanted to make. Nevertheless I was able to contribute a tiny bit. But well ...
Thanks a lot! My first thought here is: it's all about the mental concepts!

Just a little, perhaps commonly understandable, example before coming to Git etc.: starting with C++, I found it fascinating (especially templates), and at the beginning it was all hard work --- but at some point things started to flow, I got an intuitive feeling for it, and could just "write" code (with templates). That's an example of the mental concepts, or perhaps "mental images", mental maps, I'm referring to, and which one needs in order to work (live).

Also a disclosure: I personally use Git all over the place --- private projects, software projects, university work --- and I am very satisfied with it. In the same sense, it flows, things work out (mostly) as expected, it helps me a lot. Compared to what you describe above, I am perhaps a more conservative user --- if I really want to use a new feature, I first create some experimental repositories and make experiments, until I start being able to predict what should happen.

It has been said that Mercurial and Git aren't that different: I think in a certain sense that's true, but then, more fundamentally, it's very misleading. Branching and the understanding of what is "history" are such complicated concepts (inherently!) that the differences crucially matter, and make a big difference, when it comes to the point that you want to do more complicated things, and this in a *comfortable* way (so that you (intuitively) know what you are doing).

Unfortunately, in the whole literature on computing (and also mathematics; sometimes computer science is a bit better, at least wants to do things better) little attention is paid to concepts: a stream of commands / actions is shown to you, and from that you are supposed to learn something. The Internet is great for technical details (amazing StackOverflow and related sites), but when it comes to bigger pictures, the underlying world models, there is not much (though, again, on StackOverflow there is some help).
It would be a great thing if Boost could come up with "best practice", mental maps, advice and role models for how to do the standard actions required to work with Git (w.r.t. Boost). If you really jump into the complexity of the world of SCM, which Git I believe offers you, without careful preparations, huhuhu, if I may say, there you go and the sharks are waiting. You can branch, stash, sub-something, remote here and there, a bit of history surgery, and all this at once, all mixed together ...

And there are also practical problems: you mention pushing to the wrong branches --- this can happen, and one has to develop a certain discipline to avoid it. Before starting to work, "git status" on the command line, or rescanning with git-gui, is a must.

So, to conclude, I believe that if Boost provides workflow models, with general explanations and examples of how to do it (command-line *and* git-gui+gitk would be great --- git-gui and gitk are such a great help, but they are overlooked so often), for the common actions, and there is a bit of a culture to sustain these efforts, expand and improve them, then Git will be a great tool for Boost. (There is, of course, a little devil in Git, the seduction of all these possibilities, and a bit of care is needed.) Oliver

On 20/03/12 13:21, Thomas Heller wrote:
So, the journey starts about a year ago or so. I decided i need to check out this new project i heard about. I was (actually still am) very determined to contribute to that project, so i cloned the repository, browsed the code etc. eventually i decided to fork this project cause i wanted to get some hacking done. That is what i did. Then life happened and i had to postpone the work on the project. A few months later, I got a new assignment to contribute a module for that project. Remember, i still got that (public) fork lying around. So i tried to get it up to date. First bummer. I don't remember which commands i tried in which order, but merge didn't really work, and i messed up during rebase. the result was, that i spent an entire day trying to figure out how to get this outdated fork uptodate to start hacking again. Also, since trying to learn this new git tool and its cool branches and stuff, i had of course multiple local branches lying around, never really figured how to properly maintain that (origin branch, master fork branch, origin feature branch1, etc. ...) and constantly pushed to the wrong branches and/or repos (luckily, I didn't have any write rights to the repository i forked from). And not to forget that i wanted to try some feature X from branch Y, but needed to combine that with my feature Z on branch U. Essentially, whenever I tried to publicly show my progress to someone, I ended up totally confused, and in a complete local litter box of branches, where half of them didn't really do what they were supposed to (like remote tracking).
I have an idea of what happened. You had three repositories: the project you were forking (let's call it master), your published fork, and your local repository. You wanted to update your local repository and fork to the latest version of master, and you decided to do that using a rebase. What rebase does is that it rewrites history of the local repository to undo some changes you've done, update the repo, then re-apply them. Of course, once you changed history, you weren't able to push your changes back to your fork, since history was not the same between the two. A forced push would have fixed it, but that isn't really accepted practice for anything that has been published. The important lesson here is to never rebase across boundaries of published repositories. Only rebase if only local repositories will be affected by it. Or if you want to treat forks as just a snapshot of your local repo, just force push, but people won't be able to maintain a clone of your fork. To be honest I don't know what's the right way to deal with published forks.
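For the "forced push" mentioned above, there is a safer variant worth knowing about. A throwaway sketch of my own (remote and branch names are made up): `--force-with-lease` refuses to overwrite the fork if someone else pushed to it since you last talked to it, so you only ever destroy history you published yourself.

```shell
#!/bin/sh
set -e
cd "$(mktemp -d)"
git init -q --bare fork.git            # stand-in for the published fork
git init -q work && cd work
git config user.email demo@example.com
git config user.name demo

echo v1 > file && git add file && git commit -qm "v1"
git branch -M main
git remote add fork ../fork.git
git push -q fork main

git commit -q --amend -m "v1, reworded"   # rewrite already-published history

# A plain push is now rejected (non-fast-forward); the lease variant
# succeeds only because nobody else pushed to the fork in the meantime.
git push -q fork main || echo "rejected, as expected"
git push -q --force-with-lease fork main
```

This doesn't change the social rule (people who cloned the fork still see rewritten history); it just limits the blast radius compared to a bare `--force`.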

On 03/21/2012 01:02 PM, Mathias Gaunard wrote:
On 20/03/12 13:21, Thomas Heller wrote:
So, the journey starts about a year ago or so. I decided i need to check out this new project i heard about. I was (actually still am) very determined to contribute to that project, so i cloned the repository, browsed the code etc. eventually i decided to fork this project cause i wanted to get some hacking done. That is what i did. Then life happened and i had to postpone the work on the project. A few months later, I got a new assignment to contribute a module for that project. Remember, i still got that (public) fork lying around. So i tried to get it up to date. First bummer. I don't remember which commands i tried in which order, but merge didn't really work, and i messed up during rebase. the result was, that i spent an entire day trying to figure out how to get this outdated fork uptodate to start hacking again. Also, since trying to learn this new git tool and its cool branches and stuff, i had of course multiple local branches lying around, never really figured how to properly maintain that (origin branch, master fork branch, origin feature branch1, etc. ...) and constantly pushed to the wrong branches and/or repos (luckily, I didn't have any write rights to the repository i forked from). And not to forget that i wanted to try some feature X from branch Y, but needed to combine that with my feature Z on branch U. Essentially, whenever I tried to publicly show my progress to someone, I ended up totally confused, and in a complete local litter box of branches, where half of them didn't really do what they were supposed to (like remote tracking).
I have an idea of what happened.
You had three repositories: the project you were forking (let's call it master), your published fork, and your local repository.
You wanted to update your local repository and fork to the latest version of master, and you decided to do that using a rebase. What rebase does is that it rewrites history of the local repository to undo some changes you've done, update the repo, then re-apply them.
Of course, once you changed history, you weren't able to push your changes back to your fork, since history was not the same between the two. A forced push would have fixed it, but that isn't really accepted practice for anything that has been published.
The important lesson here is to never rebase across boundaries of published repositories. Only rebase if only local repositories will be affected by it.
Yeah, something like that happened. It eventually worked out in the end.
Or if you want to treat forks as just a snapshot of your local repo, just force push, but people won't be able to maintain a clone of your fork.
To be honest I don't know what's the right way to deal with published forks.
Right, and isn't this exactly what is advocated here as one of the big advantages of git specifically and DVCSs in general?

I have an idea of what happened.
You had three repositories: the project you were forking (let's call it master), your published fork, and your local repository.
You wanted to update your local repository and fork to the latest version of master, and you decided to do that using a rebase. What rebase does is that it rewrites history of the local repository to undo some changes you've done, update the repo, then re-apply them.
Of course, once you changed history, you weren't able to push your changes back to your fork, since history was not the same between the two. A forced push would have fixed it, but that isn't really accepted practice for anything that has been published.
The important lesson here is to never rebase across boundaries of published repositories. Only rebase if only local repositories will be affected by it.
Yeah, something like that happened. It eventually worked out in the end.
Or if you want to treat forks as just a snapshot of your local repo, just force push, but people won't be able to maintain a clone of your fork.
To be honest I don't know what's the right way to deal with published forks.
Right, and isn't this exactly what is advocated here as one of the big advantages of git in specific and DCVS's in general?
In principle the solution is easy: Do not rewrite history of published repositories. This is mandated strongly by the Git people. And this is also practical: only under very special circumstances does the history of a public repository get rewritten (for example for strong legal reasons), and then it's a big thing.

And this will happen only very rarely: precisely because people have *local* repositories, they can handle their *local* repositories as they like. First keep everything, every little silly mistake (you never know); once it's finished, they clone this local repository, the clone is carefully prepared for publication and review, and this is then pushed to the main repository.

In the above example, we have the case where a "local repository" is also a "public repository". This should then be flagged as such, and thus everybody will understand the rewriting of history (since it is the purpose of the local repository to get integrated into the main repository). One could also handle this via branching --- the version with the changed history becomes a new branch.

So one needs some policies about that, but that's natural.

Oliver

On 03/21/2012 01:35 PM, Oliver Kullmann wrote:
I have an idea of what happened.
You had three repositories: the project you were forking (let's call it master), your published fork, and your local repository.
You wanted to update your local repository and fork to the latest version of master, and you decided to do that using a rebase. What rebase does is that it rewrites history of the local repository to undo some changes you've done, update the repo, then re-apply them.
Of course, once you changed history, you weren't able to push your changes back to your fork, since history was not the same between the two. A forced push would have fixed it, but that isn't really accepted practice for anything that has been published.
The important lesson here is to never rebase across boundaries of published repositories. Only rebase if only local repositories will be affected by it. Yeah, something like that happened. It eventually worked out in the end.
Or if you want to treat forks as just a snapshot of your local repo, just force push, but people won't be able to maintain a clone of your fork.
To be honest I don't know what's the right way to deal with published forks. Right, and isn't this exactly what is advocated here as one of the big advantages of git in specific and DCVS's in general?
In principle the solution is easy: Do not rewrite history of published repositories. This is mandated strongly by the Git-people. And this is also practical:
Only under very special circumstances the history of a public repository gets rewritten (for example some strong legal reasons), and then it's a big thing.
And this will only happen very rarely: Exactly since people have *local* repositories, they can handle their *local* repositories: First keep everything, every little silly mistake (you never know), and once its finished, they clone this local repository, the clone is carefully prepared for publication and review, and this is then pushed to the main repository.
In the above example, we have the case where a "local repository" is also a "public repository". This should then be flagged as such, and thus everybody will understand the rewriting of history (since it is the purpose of the local repository to get integrated into the main repository). One could also handle this via branching --- the version with the changed history becomes a new branch.
So one needs some policies about that, but that's natural.
I'm sorry, you totally lost me ... So on the one hand, git is the tool to use in order to better collaborate, but on the other hand it is totally unusable when you actually want to collaborate? Always remember, boost contributors are _not_ all behind corporate walls. We explicitly want to share. Also, collaboration and patches from other developers are what we try to encourage. And yes, as far as I understand, this should be explicitly possible via other people's public forks (as advertised by sites like github). This whole local squashing and rebasing sounds like a fun and very useful tool. I am totally ready to admit that. I can see that this history rewriting and pushing works well in a company development team with only 5-10 people involved, which is what most people here seem to have had an excellent experience with when using git. But let me remind you, $BOOST_ROOT/libs/maintainers.txt lists 112 maintainers! Not even speaking of the myriads of new contributors that will join us once the switch to git has been made!
But as this discussion evolves, I get the impression that it gets overly complicated when more than one public repository is involved. Utterly confused yours, Thomas

But as this discussion evolves, I get the impression that it gets overly complicated when more than one public repository is involved.
See how github.com handles it... I don't see what the problem is. There'd be one big central repository for boost, then a myriad of forks can exist if necessary and the maintainers would pick commits/patches from those forks as wanted. Philippe
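The workflow Philippe describes is just "add the fork as a remote, then cherry-pick". A minimal sketch with invented names (central, fork, contrib), showing a maintainer taking exactly one commit from a contributor's fork:

```shell
# Sketch: the central repo adds a contributor's fork as a remote and
# cherry-picks a single commit from it, recording its origin with -x.
set -e
tmp=$(mktemp -d) && cd "$tmp"

git init -q -b master central && cd central
git config user.email m@example.com && git config user.name maint
echo core > core && git add core && git commit -qm core
cd ..

git clone -q central fork && cd fork     # contributor's public fork
git config user.email c@example.com && git config user.name contrib
echo fix > fix && git add fix && git commit -qm bugfix
cd ../central

git remote add contrib ../fork
git fetch -q contrib
git cherry-pick -x contrib/master >/dev/null   # pick just the bugfix commit
git log --oneline -1 | grep -q bugfix && echo "picked"
```

The `-x` flag appends a "(cherry picked from commit ...)" line to the message, so the central history records where the patch came from even if the fork later disappears or rewrites itself.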

On 03/21/2012 02:15 PM, Philippe Vaucher wrote:
But as this discussion evolves, I get the impression that it gets overly complicated when more than one public repository is involved.
See how github.com handles it... I don't see what the problem is.
I am sorry to tell you that the described scenario is from using github :P
There'd be one big central repository for boost, then a myriad of forks can exist if necessary and the maintainers would pick commits/patches from those forks as wanted.
Yeah ... right ... So those forks would be "private" public, right? What if I, as a user, choose not to use the official repo, because it is not maintained properly, or because it has a feature I really care about that the maintainer doesn't want to integrate? I take it that those repositories would constantly rewrite history publicly? How is that usable then?
Philippe

Thomas Heller <thom.heller@googlemail.com> writes:
On 03/21/2012 01:35 PM, Oliver Kullmann wrote:
And this will only happen very rarely: Exactly since people have *local* repositories, they can handle their *local* repositories: First keep everything, every little silly mistake (you never know), and once its finished, they clone this local repository, the clone is carefully prepared for publication and review, and this is then pushed to the main repository.
In the above example, we have the case where a "local repository" is also a "public repository". This should then be flagged as such, and thus everybody will understand the rewriting of history (since it is the purpose of the local repository to get integrated into the main repository). One could also handle this via branching --- the version with the changed history becomes a new branch.
So one needs some policies about that, but that's natural.
I'm sorry, you totally lost me ... So on the one hand, git is the tool to use in order to better collaborate, but on the other hand it is totally unusable when you actually want to collaborate?
Collaborate means pulling commits from another repo, creating some more commits and pushing them back. Commits are meant to be immutable in both Git and Mercurial, and you collaborate by adding more changesets to the ever-growing public history. Both tools give you the option of changing your local commits. So you can create changesets x-z:

... a --- b --- x --- y --- z

You then change your mind and want to fix a typo in y. So you edit y, and when you are done you have created y', which is similar to y (but not exactly the same because of your change). You also end up with a different z, since the history of z has changed:

... a --- b --- x --- y' --- z'

Now, if someone else has already pulled x-z from you, he will still have

... a --- b --- x --- y --- z

in his repository. If he pulls *again*, then he ends up with:

... a --- b --- x --- y --- z
               \
                y' --- z'

The similarity between y and y' and between z and z' is not recognized by either tool, so he gets a kind of doppelgänger changesets. This is not really dangerous, but it's a mess and it defeats the purpose: you tried to edit y because you wanted to publish a better version of y. You have now ended up publishing both the old y and the new y', and made the history messier as a result.

The rule is that you should not modify changesets that have "escaped" into other repositories. Mercurial 2.1 will track whether a changeset has been published or not, and commands that modify history (such as rebase) can take this into account and abort if you're modifying public changes.

-- Martin Geisler aragost Trifork Professional Mercurial support http://www.aragost.com/mercurial/
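Martin's doppelgänger effect is easy to reproduce with git (Mercurial behaves analogously). In this sketch the names alice and bob are invented: alice amends a commit bob has already pulled, and after bob's next pull his history contains both the old and the rewritten version:

```shell
# Sketch: amending an already-pulled commit gives the puller duplicate
# ("doppelgänger") versions of it on the next pull.
set -e
export GIT_MERGE_AUTOEDIT=no      # don't open an editor for the merge message
tmp=$(mktemp -d) && cd "$tmp"

git init -q -b master alice && cd alice
git config user.email a@example.com && git config user.name alice
echo 1 > f && git add f && git commit -qm x
echo 2 >> f && git commit -qam y
cd ..

git clone -q alice bob                             # bob now has x and y
(cd alice && git commit -q --amend -m "y-fixed")   # alice rewrites y into y'

cd bob
git config user.email b@example.com && git config user.name bob
git pull -q --no-rebase ../alice master            # histories diverged: merge
echo "copies of y: $(git log --format=%s | grep -c '^y')"
```

Bob's log now lists both y and y-fixed (plus a merge commit tying them together), exactly the mess Martin describes.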

On 21/03/12 13:22, Thomas Heller wrote:
To be honest I don't know what's the right way to deal with published forks.
Right, and isn't this exactly what is advocated here as one of the big advantages of git in specific and DCVS's in general?
Is it? I didn't follow all the discussions; I wasn't aware the ability to publish forks was advocated as "one of the big advantages of Git". To me, it certainly comes well after the big advantages.

On 03/21/2012 02:04 PM, Mathias Gaunard wrote:
On 21/03/12 13:22, Thomas Heller wrote:
To be honest I don't know what's the right way to deal with published forks.
Right, and isn't this exactly what is advocated here as one of the big advantages of git in specific and DCVS's in general?
Is it? I didn't follow all discussions, I wasn't aware the ability to publish forks was advocated as "one of the big advantages of Git".
To me, it's certainly way past the big advantages.
That is what I understood, yes. The first big advantage I took out of the discussion was the ability to do local commits, where git has the advantage over mercurial (maybe not anymore, but that doesn't really matter) that you can polish up those local commits before you push them to some public repository. This public repository might not be the same as the official upstream repository, which brings me to the next big advantage: when different people work on a project, they might need to share their current work. This will be done by somehow notifying the others to pull from their repository, right? Of course, in order to ensure minimal disruption of your peer, you want him to pull a version which is compatible with what he has already published. Which is what I tried, and which Mathias confirmed does not work properly. Additionally, of course, when different forks with different features lie around the interwebs, the user will want to choose the repository which is maintained best. Thus a "private" public fork may suddenly become public and the havoc is complete. Maybe this is a complete misunderstanding of the workflows, but that is what I personally took out of these discussions.

Thomas Heller <thom.heller@googlemail.com> writes:
The first big advantage i took out of the discussion was the ability to do local commits. Where git has the advantage over mercurial (maybe not anymore, but doesn't really matter) that you can polish up those local commits before you push them to some public repository.
Please don't say that Git has an advantage over Mercurial here. Mercurial has shipped the MQ extension[1] for ages, which is frequently used to refine changesets before they are pushed. Mercurial later grew a rebase extension, a histedit extension (works like 'git rebase -i'), and various other extensions for editing changesets. [1]: http://hgbook.red-bean.com/read/managing-change-with-mercurial-queues.html -- Martin Geisler aragost Trifork Professional Mercurial support http://www.aragost.com/mercurial/

on Wed Mar 21 2012, Mathias Gaunard <mathias.gaunard-AT-ens-lyon.org> wrote:
Of course, once you changed history, you weren't able to push your changes back to your fork, since history was not the same between the two. A forced push would have fixed it, but that isn't really accepted practice for anything that has been published.
The important lesson here is to never rebase across boundaries of published repositories. Only rebase if only local repositories will be affected by it.
It's a little more complicated than that. IIUC you can rewrite published history, but it's got to be marked as volatile so people don't expect it to remain static. I usually do this by naming the branch volatile/<something>, but I think it might be reasonable to consider all feature or topic branches to be volatile. In git-flow, those are named feature/<something>. In topgit, they're named t/<something>. Seems everyone uses that kind of naming convention. You might do this, for example, when you submit a pull request and the upstream maintainer tells you to change something about the way you wrote your code (e.g. naming convention). In order to keep a clean and understandable revision history in the main repository, you might go back and rewrite your commits, then force-push the new branch head and resubmit the pull request.
Or if you want to treat forks as just a snapshot of your local repo, just force push, but people won't be able to maintain a clone of your fork.
They can, if they know how to deal with it. They have to rebase any work on the upstream master. The easy way is to "git pull --rebase" when incorporating upstream history. You can set this up to happen automatically with autosetuprebase (http://www.espians.com/getting-started-with-git.html#rebase) -- Dave Abrahams BoostPro Computing http://www.boostpro.com
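For reference, the setup Dave mentions amounts to a couple of configuration knobs. This is a sketch, not a prescription; note that `branch.autosetuprebase` affects only branches created after it is set:

```shell
# One-off: replay your local commits on top of the newly fetched upstream
# instead of creating a merge commit.
git pull --rebase origin master

# Make rebase the default behavior of plain 'git pull' on this branch's repo.
git config pull.rebase true

# Make every newly created tracking branch rebase on pull by default.
git config --global branch.autosetuprebase always
```

With this in place, clones of a force-pushed fork stay usable: each `git pull` transplants the cloner's local work onto whatever the fork's rewritten head currently is.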

Thomas Heller wrote:
*SIGH* You keep assuming that I never tried git. My last adventure with trying to use git was around half a year ago. I still have nightmares from that.
I'm sorry, I thought you didn't try it because you said you didn't go through the transition yet and that you were searching for arguments to make such a change. My (naive) assumption was that if you'd tried it you'd at least know about the arguments and either agree or disagree with them, but either way you wouldn't be searching for the arguments anymore. The description of your nightmare in your next post was very illuminating, thank you for that.
[...] Maybe switching to a DCVS might increase the quantity of contributions. quantity != quality.
I couldn't agree more. Though quantity can be good too, as long as the quality doesn't go down. That seems to be mostly a matter of peer-review.
[...] With that being said, I am ready to admit that something like git might improve the handling of patches etc. but it should be clear that this is totally unrelated to actually applying and verifying those patches.
How is handling patches not related to applying and verifying patches?
FWIW, I am the last person who will oppose such a change.
*Nuff said*.
Alright, I get your point. -Julian

On 3/20/2012 7:03 AM, Julian Gonggrijp wrote: ... snip
Well, allow me to present some fair reasoning to you.
With regard to git versus svn: I think enough fair reasons have been given why git (or a DVCS in general) is better than svn. I'm not going to repeat those arguments here.
I have never heard a single technical argument, in all the endless mentions of Git among the people riding that bandwagon, why Git is better than SVN, or even why any DVCS is better than a centralized SCCS. I consider this whole move to Git and/or DVCS among "hip" programmers little more than a move to conform with what others are doing and feel "cool".

I am perfectly willing to read well-chosen technical arguments, but not from people already sold on one side or the other. But I really despair of anyone being able to present such arguments in the atmosphere created by Git fanatics and DVCS fanatics. The only thing I have gotten from all this is "I've tried it, I like it, and therefore it's superior".

Feel free, anyone, to point me to a purely technical discussion, article, whatnot, explaining the practical reasons why using a DVCS, or Git, is more productive and more pleasurable than using a centralized SCCS like Subversion.

Technical? Well, both systems do track revisions...

This sounds like a "Turing Completeness" argument held by a Pascal programmer when hearing about that "cool" language called C a few decades ago.

Ask people who have extensively used both, and they will tell you. C is better. Period. Git is better.

There exists a simple litmus test for these kinds of comparisons: if a developer has used X and Y extensively and is to embark on a new venture, with no legacy or political ties enforcing one or the other, which one would he use? I am pretty certain that for X and Y being Git and SVN, the answer would dominantly be Git.

And, I am not a "cool programmer," unless you consider having developed code for 30+ years in 20 languages, commercially, cool.

/David

On Mar 20, 2012, at 8:41 PM, Edward Diener wrote:
On 3/20/2012 7:03 AM, Julian Gonggrijp wrote: ... snip
Well, allow me to present some fair reasoning to you.
With regard to git versus svn: I think enough fair reasons have been given why git (or a DVCS in general) is better than svn. I'm not going to repeat those arguments here.
I have never heard a single technical argument, in all the endless mentions of Git among the people riding that bandwagon, why Git is better than SVN, or even why any DVCS is better than a centralized SCCS. I consider this whole move to Git and/or DVCS among "hip" programmers little more than a move to conform with what others are doing and feel "cool".
I am perfectly willing to read well-chosen technical arguments but not from people already sold on one side or the other. But I really despair of anyone being able to present such arguments in the atmosphere created by Git fanatics and DVCS fanatics. The only thing I have gotten from all this is "I've tried it, I like it, and therefore its superior".
Feel free, anyone, to point me to a purely technical discussion, article, whatnot, explaining the practical reasons why using a DVCS, or Git, is more productive and more pleasurable than using a centralized SCCS like Subversion.
_______________________________________________ Unsubscribe & other changes: http://lists.boost.org/mailman/listinfo.cgi/boost

Ed(ward), I can give you an anecdote for DVCSes, though:

I spend around 50 hours in the air every month; yes, a full working week. I rarely have - or want - Internet access on those trips. Git (and most other DVCSes) allows me to work revisioned, trying out various paths of development, during that working week.

Related, I like to test branches and ideas without having anyone else observing my moves or caring about what I do; so, I can do that, locally, instead of creating obscure or sacred branches in SVN in a common repository.

There are a bunch of more technical reasons why Git is superior to SVN, related to the snapshot representation of files and trees of files, and commits, instead of deltas (such as in SVN). If a delta chain (or poset...) gets out of sync, you are kind of lost; it is very hard to recreate the current (or any) consistent state of files.

/David

On Mar 20, 2012, at 9:31 PM, David Bergman wrote:
Technical? Well, both systems do track revisions...
This sounds like a "Turing Completeness" argument held by a Pascal programmer when hearing about that "cool" language called C a few decades ago.
Ask people who have extensively used both, and they will tell you. C is better. Period. Git is better.
There exists a simple litmus test for these kinds of comparisons: if a developer has used X and Y extensively and is to embark on a new venture, with no legacy or political ties enforcing one or the other; which one would he use? I am pretty certain that for X and Y being Git and SVN, the answer would dominantly be Git.
And, I am not a "cool programmer," unless you consider having developed code for 30+ years in 20 languages, commercially, cool.
/David
On Mar 20, 2012, at 8:41 PM, Edward Diener wrote:
On 3/20/2012 7:03 AM, Julian Gonggrijp wrote: ... snip
Well, allow me to present some fair reasoning to you.
With regard to git versus svn: I think enough fair reasons have been given why git (or a DVCS in general) is better than svn. I'm not going to repeat those arguments here.
I have never heard a single technical argument, in all the endless mentions of Git among the people riding that bandwagon, why Git is better than SVN, or even why any DVCS is better than a centralized SCCS. I consider this whole move to Git and/or DVCS among "hip" programmers little more than a move to conform with what others are doing and feel "cool".
I am perfectly willing to read well-chosen technical arguments but not from people already sold on one side or the other. But I really despair of anyone being able to present such arguments in the atmosphere created by Git fanatics and DVCS fanatics. The only thing I have gotten from all this is "I've tried it, I like it, and therefore its superior".
Feel free, anyone, to point me to a purely technical discussion, article, whatnot, explaining the practical reasons why using a DVCS, or Git, is more productive and more pleasurable than using a centralized SCCS like Subversion.

I spend around 50 hours in the air every month; yes, a full working week. I rarely have - or want - Internet access on those trips.
You are an exception. Very, very few coders travel by plane that much.
Git (and most other DVCSes) allows me to work revisioned, trying our various paths of development, during that working week.
Local work is not a unique property of DVCSes. (See the shelving/checkpointing features planned for SVN 1.8.)
Related, I like to test branches and ideas without having anyone else observing my moves or caring about what I do; so, I can do that, locally, instead of creating obscure or sacred branches in SVN in a common repository.
This is a very good point, though it is still a specific need. The VCS is here to help the team. If individuals want to play on their own, it's only "nice to have" IMHO and shouldn't make the rest of the process more complex.
There are a bunch of more technical reasons why Git is superior to SVN, related to the snapshot representation of files and trees of files, and commits, instead of deltas (such as in SVN.) If a delta chain (or poset...) gets out of sync, you are kind of lost; very hard to recreate the current (or any) consistent state of files.
One could argue that it takes a bug in SVN to get stuff out of sync, while it only requires a user error in git, but yes, that's a valid argument. It is not as strong as you make it, though, because whether you have an SVN or a git repository, it would be very stupid not to have automatic backups for it. I'd be happy to see more details on that point. Julien

On 3/20/2012 8:52 PM, Julien Nitard wrote:
Related, I like to test branches and ideas without having anyone else observing my moves or caring about what I do; so, I can do that, locally, instead of creating obscure or sacred branches in SVN in a common repository.
This is a very good point. Though it is still a specific need. The VCS is here to help the team. If individuals want to play on their own, it's only "nice to have" IMHO and shouldn't make the other part of the process more complex.
I would argue that "hiding" changes is detrimental to software development. In particular it prevents sufficient software auditing and accountability. It would also curtail active review of the work, such that one could end up wasting time pursuing development avenues that others have already discounted, because they would not see the path you are taking and warn you about the cliff you are about to walk off. Hence I would be suspect of a VCS that "encourages" or "facilitates" the practice of sequestering work. That is, I don't consider non-accessible branching a qualified value proposition for DCVSs (or more accurately termed replicated VCSs -- but that's just semantics ;-). -- -- Grafik - Don't Assume Anything -- Redshift Software, Inc. - http://redshift-software.com -- rrivera/acm.org (msn) - grafik/redshift-software.com -- 102708583/icq - grafikrobot/aim,yahoo,skype,efnet,gmail

On Mar 20, 2012, at 11:03 PM, Rene Rivera wrote:
On 3/20/2012 8:52 PM, Julien Nitard wrote:
Related, I like to test branches and ideas without having anyone else observing my moves or caring about what I do; so, I can do that, locally, instead of creating obscure or sacred branches in SVN in a common repository.
This is a very good point. Though it is still a specific need. The VCS is here to help the team. If individuals want to play on their own, it's only "nice to have" IMHO and shouldn't make the other part of the process more complex.
I would argue that "hiding" changes is detrimental to software development.
Rene, it is not about hiding, it is about a thought process; of course one should make sure to (i) have a backup of such branches - but that does not necessarily have to happen via the version control system itself - and (ii) communicate when appropriate (which is not at every key stroke in my mental meta model.)
In particular it prevents sufficient software auditing and accountability.
Wow
It would also curtail active review of the work such that it could end up that one would waste time pursuing development avenues that others have already discounted. Because they would not see the path you are taking and warn you about the cliff you are about to walk off into. Hence I would be suspect about a VCS that "encourages" or "facilitates" the practice of sequestering work. That is I don't consider non-accessible branching as a qualified value proposition for DCVSs (or more accurately termed replicated VCSs -- but that's just semantics ;-).
We evidently have different styles of formal problem solving; mine is a balance between an internal - or semi-internal - process and an "accountable" collaborative effort. I do not see the value of everybody seeing every single key stroke I make, as long as they see certain sync points; actually, quite analogously to the operational semantics of C++ - that certain points in the execution have to follow some rules... /David
-- -- Grafik - Don't Assume Anything -- Redshift Software, Inc. - http://redshift-software.com -- rrivera/acm.org (msn) - grafik/redshift-software.com -- 102708583/icq - grafikrobot/aim,yahoo,skype,efnet,gmail

On 3/20/2012 10:15 PM, David Bergman wrote:
On Mar 20, 2012, at 11:03 PM, Rene Rivera wrote:
On 3/20/2012 8:52 PM, Julien Nitard wrote:
Related, I like to test branches and ideas without having anyone else observing my moves or caring about what I do; so, I can do that, locally, instead of creating obscure or sacred branches in SVN in a common repository.
This is a very good point. Though it is still a specific need. The VCS is here to help the team. If individuals want to play on their own, it's only "nice to have" IMHO and shouldn't make the other part of the process more complex.
I would argue that "hiding" changes is detrimental to software development.
Rene, it is not about hiding, it is about a thought process; of course one should make sure to (i) have a backup of such branches - but that does not necessarily have to happen via the version control system itself - and (ii) communicate when appropriate (which is not at every key stroke in my mental meta model.)
I didn't say anything about knowing your every action.. Although there are companies that would love to go into that much detail ;-)
In particular it prevents sufficient software auditing and accountability.
Wow
Is there more to that response?
It would also curtail active review of the work such that it could end up that one would waste time pursuing development avenues that others have already discounted. Because they would not see the path you are taking and warn you about the cliff you are about to walk off into. Hence I would be suspect about a VCS that "encourages" or "facilitates" the practice of sequestering work. That is I don't consider non-accessible branching as a qualified value proposition for DCVSs (or more accurately termed replicated VCSs -- but that's just semantics ;-).
We evidently have different styles of formal solving; mine is a balance between an internal - or semi-internal - process and an "accountable" collaborative effort. I do not see the value of everybody seeing every single key stroke I make, as long as they see certain sync points; actually, quite analogously to the operational semantics of C++ - that certain points at the execution have to follow some rules...
Hm.. I must be not understanding something.. Are you arguing that not all commits/check-ins you do to a local/private repository are important enough to merit the benefits of collaboration? I ask because my contention is that if it's important enough for you to put something into a VCS history, it's important enough for your collaborators to inspect it.. for perpetuity. And that the sooner that inspection happens, the better it is for everyone. Hence deleting such history is counter to collaboration. Note, I'm not arguing against DVCSs.. Just against the ones that encourage such a practice. Or alternatively, warning that having a process that encourages the practice of deleting history is something that should be avoided, regardless of VCS. -- -- Grafik - Don't Assume Anything -- Redshift Software, Inc. - http://redshift-software.com -- rrivera/acm.org (msn) - grafik/redshift-software.com -- 102708583/icq - grafikrobot/aim,yahoo,skype,efnet,gmail

On Mar 20, 2012, at 11:50 PM, Rene Rivera wrote: [snip]
On 3/20/2012 10:15 PM, David Bergman wrote:
We evidently have different styles of formal solving; mine is a balance between an internal - or semi-internal - process and an "accountable" collaborative effort. I do not see the value of everybody seeing every single key stroke I make, as long as they see certain sync points; actually, quite analogously to the operational semantics of C++ - that certain points at the execution have to follow some rules...
Hm.. I must be not understanding something.. Are you arguing that not all commits/check-ins you do to a local/private repository are important enough to merit the benefits of collaboration?
Yes. In my experience as a team member and lead, there are certain parts of the thinking - even the more concretely formal part of it - that are better done in isolation. What I have seen is that with a tool like Git, where people can easily create branches without anyone else seeing them immediately, people get rid of their fear of trying. This goes especially for junior developers (i.e., those who have not yet done their 10k hours of programming).
I ask because my contention is that if it's important enough for you to put something into a VCS history, it's important enough for you collaborators to inspect it.. for perpetuity.
That is not my contention at all. I do not see auditing as the primary reason for version control systems, but instead a tool to take development forward in a - yes - accountable and consistent way.
And that the sooner that inspection happens the better it is for everyone. Hence that deleting such history is counter to collaboration.
Note, I'm not arguing against DVCSs.. Just against the ones that encourage such a practice. Or alternatively, warning that having a process that encourages the practice of deleting history is something that should be avoided, regardless of VCS.
As I said, we have different views on software development. For *my* approach to formal problem solving, the conceptual - and actual - locality and lightness of branch creation is a big plus. And, yes, to bring stuff forward in a responsible way rather than keep an accurate audit of everything that has happened. /David

On Tue, Mar 20, 2012 at 11:50 PM, Rene Rivera <grafikrobot@gmail.com> wrote:
Hm.. I must be not understanding something.. Are you arguing that not all commits/check-ins you do to a local/private repository are important enough to merit the benefits of collaboration? I ask because my contention is that if it's important enough for you to put something into a VCS history, it's important enough for your collaborators to inspect it.. for perpetuity. And that the sooner that inspection happens the better it is for everyone. Hence that deleting such history is counter to collaboration.
I would guess, yes, the argument is that not all local commits are important enough. I hit Ctrl-S more often than I commit to a (central) VCS. I do a local commit at a frequency somewhere between Ctrl-S and central-commit. I see no problem there. Many people find that mid-frequency commit quite an attractive feature.

P.S. I am currently in the "I hate Git" camp. I'm trying to use it, but one of us is not yielding to the other. I typically understand things quite quickly (except women - still working on that), but I find git hard to wrap my head around. And I don't think VC should be hard to get started with - no other system I've used has been as hard. They can be hard for complicated things, and maybe git excels at those, but I find it harder than necessary for relatively simple things. I assume I will eventually appreciate how it works. Eventually. I also find all the docs, tutorials, etc. atrocious. I hope to be able to write something better once I understand it, but I fear by then I will have gone over to the other side and will just write the same only-makes-sense-if-you-already-understand-it stuff that everyone else has written. Tony

... Eventually. I also find all the docs, tutorials, etc atrocious. I hope to be able to write something better once I understand it, ...
Mandatory reference to loosen up the mood: http://xkcd.com/927/ Julien

Hi! I can hardly catch up on the discussion. There seem to be more messages appearing each day than I can read. Many participants and messages are a good thing, though. I might miss some things now, but I'd like to participate before the discussion is over. On 21.03.12 05:07, Gottlob Frege wrote:
I would guess, yes, the argument is that not all local commits are important enough.
I hit Ctrl-S more often than I commit to a (central) VCS. I do a local commit at a frequency somewhere between Ctrl-S and central-commit. I see no problem there. Many people find that mid-frequency commit quite an attractive feature.
I like the idea, too. I'm using SVN a lot and I commit quite often. I'm also using branches for most things. For those who see a limitation in SVN regarding "local" commits I recommend looking into the "git-svn" [1] bridge. It lets you use git locally and svn as the public repo. So one can commit, branch, revert, and change history locally using the power of git, and then push the changes to the svn repo. Currently I'd like Boost to stick with svn and suggest that people who like git try "git-svn" or "git svn" (git seems to have native support for svn). [1] https://git.wiki.kernel.org/articles/g/i/t/GitSvnCrashCourse_512d.html Frank

Frank Birbacher <bloodymir.crap@gmx.net> writes:
Currently I'd like Boost to stick with svn and suggest that people who like git try "git-svn" or "git svn" (git seems to have native support for svn).
The Mercurial fans should look into hgsubversion: http://mercurial.selenic.com/wiki/HgSubversion http://mercurial.aragost.com/kick-start/en/hgsubversion/ This is a Mercurial extension. -- Martin Geisler Mercurial links: http://mercurial.ch/

On 3/23/12 7:05 AM, Martin Geisler wrote:
Frank Birbacher<bloodymir.crap@gmx.net> writes:
Currently I'd like Boost to stick with svn and suggest that people who like git try "git-svn" or "git svn" (git seems to have native support for svn).
The Mercurial fans should look into hgsubversion:
http://mercurial.selenic.com/wiki/HgSubversion http://mercurial.aragost.com/kick-start/en/hgsubversion/
This is a Mercurial extension.
I tried both git-svn and hgsubversion. Both take ages (measured in hours for me) for the first clone of the Boost repo. When git-svn finally finished, I got a blank local repo. I don't know why. Yeah! blank. As in nothing happened. With Hg-svn, it reported an error at one point and stopped cloning. I had to do it again, but I got frustrated and lost interest. Perhaps it's user error on my part, but it's not clear what I did wrong. Have any of you guys successfully used git-svn and/or hgsubversion with the Boost repo? Could you try and give me some hints on how to go about it? I'm not an expert on either Git or Hg and if a mistake/error will cost many hours of waiting, with indeterminate hit-or-miss results, then it's simply not worth trying. Regards, -- Joel de Guzman http://www.boostpro.com http://boost-spirit.com

on Thu Mar 22 2012, Joel de Guzman <joel-AT-boost-consulting.com> wrote:
On 3/23/12 7:05 AM, Martin Geisler wrote:
Frank Birbacher<bloodymir.crap@gmx.net> writes:
Currently I'd like Boost to stick with svn and suggest that people who like git try "git-svn" or "git svn" (git seems to have native support for svn).
The Mercurial fans should look into hgsubversion:
http://mercurial.selenic.com/wiki/HgSubversion http://mercurial.aragost.com/kick-start/en/hgsubversion/
This is a Mercurial extension.
I tried both git-svn and hgsubversion. Both take ages (measured in hours for me) for the first clone of the Boost repo. When git-svn finally finished, I got a blank local repo. I don't know why. Yeah! blank. As in nothing happened. With Hg-svn, it reported an error at one point and stopped cloning. I had to do it again, but I got frustrated and lost interest. Perhaps it's user error on my part, but it's not clear what I did wrong.
Have any of you guys successfully used git-svn
This repository was created with git-svn https://github.com/ryppl/boost-svn It was being kept up-to-date at least until recently. You can clone that as a starting point and should be able to start using it with git-svn.
and/or hgsubversion with the Boost repo? Could you try and give me some hints on how to go about it? I'm not an expert on either Git or Hg and if a mistake/error will cost many hours of waiting, with indeterminate hit-or-miss results, then it's simply not worth trying.
Regards,
-- Dave Abrahams BoostPro Computing http://www.boostpro.com

On Thu, Mar 22, 2012 at 08:09:48PM -0400, Dave Abrahams wrote:
This repository was created with git-svn https://github.com/ryppl/boost-svn
It was being kept up-to-date at least until recently. You can clone that as a starting point and should be able to start using it with git-svn.
Should I read that as that this repository is unreliable and will not get updates in a timely manner, if at all? -- Lars Viklund | zao@acc.umu.se

on Fri Mar 23 2012, Lars Viklund <zao-AT-acc.umu.se> wrote:
On Thu, Mar 22, 2012 at 08:09:48PM -0400, Dave Abrahams wrote:
This repository was created with git-svn https://github.com/ryppl/boost-svn
It was being kept up-to-date at least until recently. You can clone that as a starting point and should be able to start using it with git-svn.
Should I read that as that this repository is unreliable and will not get updates in a timely manner, if at all?
I don't know how you should read it. We might have decided that it's obsolete because https://github.com/ryppl/boost-history is more complete (but it won't work bidirectionally with git-svn). Or, it might just be that some cron job needs to be restarted on our server. John, could you take a look? -- Dave Abrahams BoostPro Computing http://www.boostpro.com

On 03/23/2012 12:51 AM, Joel de Guzman wrote:
Have any of you guys successfully used git-svn and/or hgsubversion with the Boost repo? Could you try and give me some hints on how to go about it? I'm not an expert on either Git or Hg and if a mistake/error will cost many hours of waiting, with indeterminate hit-or-miss results, then it's simply not worth trying.
There have been several posts on this list of people setting up public git repositories synchronized with git-svn for testing. Their status is not known to me, but I remember trying to clone from one or two of them, which worked fine. So I assume they succeeded in using git-svn.

I have for my own boost repository used git-svn with success against the official boost svn. It is slow, but has worked fine for me from a kubuntu box. It is slow with anything involving use of SVN, especially the clone, as it is actually fetching all boost history (90,000+ commits), but it works. I have really only used it to track trunk, and I have never tried to push as I have no boost commit access. So as far as a two-way system goes, I really have no idea how well it works except for what bold statements you can find on the net.

I have other tracked remote branches in my repo as well. But I see now that I have not fetched data into the remote/release branch since 2010. That is probably just because I have only fetched trunk; nothing wrong with the tool. I find tracking multiple branches with git-svn to be somewhat broken, at least with regard to following merging from trunk to release in boost. I have not looked into why that is so. It may be that I am missing something, but trunk fit my needs so I left it at that.

All in all I do not feel like recommending an official git or mercurial gateway to boost svn. It feels wrong for many reasons. For personal or team work it may work OK. However, the main reason is simply that I see no reason why the official boost repository or repositories should not be a git repository made public by the release team. That will just work so much better. I am sure Mercurial would work fine as well, but I have little experience with it so I am more reluctant to say. -- Bjørn

2012/3/23 Bjørn Roald <bjorn@4roald.org>:
I have for my own boost repository used git-svn with success against official boost svn. It is slow, but has worked fine for me from a kubuntu box. It is slow with anything involving use of SVN, especially the clone as it is actually fetching all boost history (90000 + commits), but it works.
It's possible that since the point at which you created your clone, the boost repo crossed the line where it has become too much for git-svn to cope with. It's faster to create a shallow clone which only contains recent commits. Something like:

    mkdir boost
    cd boost
    git svn init https://svn.boost.org/svn/boost/trunk/

    # Fetch revision 77000, and don't try to find its history.
    # This still takes a little while (not hours) and should result
    # in a copy of the specified revision. If using a different
    # revision, pick one that actually changes trunk.
    git svn fetch --no-follow-parent -r77000

    # Bring the clone up to date, and then rebase the repo
    # on to the latest version.
    git svn rebase

There are lots of "Couldn't find revmap" errors because it's trying to reconcile the merge meta-data with revisions it doesn't have.
I have really only used it to track trunk, and I have never tried to push as I have no boost commit access. So as far as a two-way system I really have no idea how well it works except for what bold statements you can find on the net.
It works okay, it does occasionally break history for a moved file when it doesn't detect the move. As you probably know, it's very bad for collaborative work since it's constantly rebasing (rewriting history because changes that were in git are now in subversion, and they're not treated as the same thing). I wouldn't really recommend using git-svn for someone new to git, it's better to first understand how it's meant to work. Bazaar is supposed to have smarter subversion integration, but to do so it tracks a lot more data, so it has no chance of coping with the boost repo.
I find tracking multiple branches with git-svn to be somewhat broken, at least with regard to follow merging from trunk to release in boost. I have not looked into why that is so. It may be that I am missing something, but trunk fit my needs so I left it with that.
It's probably because we don't do proper merges, but cherry pick or do sub-tree merges, or even just copy changes over. If you look at the merge meta-data it's a real mess. I track a few branches in a single repo, it's very useful for comparing them. For example, to see the changes to spirit that are currently in trunk: git diff release trunk -- boost/spirit libs/spirit/

On 3/23/2012 4:52 PM, Daniel James wrote:
2012/3/23 Bjørn Roald <bjorn@4roald.org>:
I have for my own boost repository used git-svn with success against official boost svn. It is slow, but has worked fine for me from a kubuntu box. It is slow with anything involving use of SVN, especially the clone as it is actually fetching all boost history (90000 + commits), but it works.
It's possible that since the point at which you created your clone, the boost repo crossed the line where it has become too much for git-svn to cope with. It's faster to create a shallow clone which only contains recent commits. Something like:
mkdir boost cd boost git svn init https://svn.boost.org/svn/boost/trunk/
# Fetch revision 77000, and don't try to find its history.
# This still takes a little while (not hours) and should result
# in a copy of the specified revision. If using a different
# revision, pick one that actually changes trunk.
git svn fetch --no-follow-parent -r77000
# Bring the clone up to date, and then rebase the repo # on to the latest version. git svn rebase
There are lots of "Couldn't find revmap" errors because it's trying to reconcile the merge meta-data with revisions it doesn't have.
Ok, that works. Thank you, Daniel.

Now, I created a master Spirit-3 branch off this that I can publish into, say, Github. The purpose is to allow people to pull from and push into (those whom I give write access to) this. Wanting to make it modular, I searched around and found git filter-branch. Hence, I was able to create a modular Spirit-3 branch without the other boost libraries. All is well until I did a downstream merge to track the changes that are going into the Boost trunk into my Spirit-3 branch.

    boost-trunk (pull)--> spirit3

And hah! It pulls in everything (all boost libraries) again. So, how do you do what I intend to do? All I want is to have this repo structure:

    spirit3
      boost/spirit
      libs/spirit

that can do a merge both ways (upstream and downstream to and from the boost trunk); needless to say, with all the histories intact.

With SVN, it is very easy to extract sub-directories while still tracking changes both ways to and from the source. In Git, everything seems to be one whole global repository. This is one thing I dislike and which SVN has better control over: modularity. Sure, you can make many "modular" git repositories instead of one big one like boost. But the reality is, you don't predict up front how a library is modularized. Spirit itself spawned at least 3 libraries (Phoenix, Wave, Fusion) that stand on their own now. At one point in the life of a library, you may want to refactor and decouple parts somewhere else. Doing this in Git is not straightforward at all, unless I am missing something obviously simple (?). I'd love to hear from the Mercurial experts as well. Regards, -- Joel de Guzman http://www.boostpro.com http://boost-spirit.com
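For reference, the usual git recipe for the extraction step Joel describes - keeping only a couple of subdirectories while rewriting history - is an --index-filter that empties the index and restores just the wanted paths from each commit. A minimal, self-contained sketch (a throwaway toy repo stands in for the real Boost clone; all file names here are invented). It also shows why the later pull drags everything back in: every rewritten commit gets a new SHA-1, so the filtered branch shares no history with the unfiltered upstream.

```shell
# Toy stand-in for a monolithic Boost clone (paths invented).
set -e
tmp=$(mktemp -d) && cd "$tmp"
git init -q mono && cd mono
git config user.email you@example.com
git config user.name "You"
mkdir -p boost/spirit libs/spirit boost/fusion
echo '// core'  > boost/spirit/core.hpp
echo '// test'  > libs/spirit/test.cpp
echo '// other' > boost/fusion/fusion.hpp
git add . && git commit -qm 'monolithic import'

# Keep only boost/spirit and libs/spirit, rewriting every commit:
# empty the index, then restore just the two wanted paths from the
# commit being rewritten. The rewritten commits get brand-new hashes,
# which is why a later pull from the unfiltered trunk re-imports
# all the other libraries.
FILTER_BRANCH_SQUELCH_WARNING=1 git filter-branch --prune-empty --index-filter '
    git rm --cached -qr --ignore-unmatch -- . &&
    git reset -q "$GIT_COMMIT" -- boost/spirit libs/spirit
' -- --all

git ls-tree -r --name-only HEAD   # only the two spirit paths remain
```

The one-way direction (filtering) is well trodden; it is the round trip back to the unfiltered tree that git gives no good answer for, as the rest of the thread discusses.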

Now, I created a master Spirit-3 branch off this that I can publish into, say, Github. The purpose is to allow people to pull from and push into (those whom I give write access to) this.
Wanting to make it modular, I searched around and found git filter-branch. Hence, I was able to create a modular Spirit-3 branch without the other boost libraries. All is well until I did a downstream merge to track the changes that are going into the Boost trunk into my Spirit-3 branch.
boost-trunk (pull)--> spirit3
And hah! It pulls in everything (all boost libraries) again.
So, how do you do what I intend to do? All I want is to have this repo structure:
spirit3
  boost/spirit
  libs/spirit
that can do a merge both ways (upstream and downstream to and from the boost trunk); needless to say, with all the histories intact.
With SVN, it is very easy to extract sub-directories while still tracking changes both ways to and from the source. In Git, everything seems to be one whole global repository. This is one thing I dislike and which SVN has better control over: modularity. Sure, you can make many "modular" git repositories instead of one big one like boost. But the reality is, you don't predict up front how a library is modularized. Spirit itself spawned at least 3 libraries (Phoenix, Wave, Fusion) that stand on their own now. At one point in the life of a library, you may want to refactor and decouple parts somewhere else. Doing this in Git is not straightforward at all, unless I am missing something obviously simple (?).
The git-subtree program (at https://github.com/apenwarr/git-subtree, see the git-subtree.txt file for documentation) seems to be designed to address this problem (IIUC, I only used it for a basic case), which is maybe not that straightforward!

Rafaël Fourquet wrote:
[...]
With SVN, it is very easy to extract sub-directories while still tracking changes both ways to and from the source. In Git, everything seems to be one whole global repository. This is one thing I dislike and which SVN has better control over: modularity. Sure, you can make many "modular" git repositories instead of one big one like boost. But the reality is, you don't predict up front how a library is modularized. Spirit itself spawned at least 3 libraries (Phoenix, Wave, Fusion) that stand on their own now. At one point in the life of a library, you may want to refactor and decouple parts somewhere else. Doing this in Git is not straightforward at all, unless I am missing something obviously simple (?).
The git-subtree program (at https://github.com/apenwarr/git-subtree, see the git-subtree.txt file for documentation) seems to be designed to address this problem (IIUC, I only used it for a basic case), which is maybe not that straightforward!
I would suggest git-subtree too. Does it solve the issue, Joel? Perhaps it doesn't cooperate with git-svn; that would be a shame. -Julian
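For the single-directory case that git-subtree does handle, the split step can be tried locally in a few lines. A self-contained sketch with a toy repo (all paths invented; git subtree comes from git's contrib scripts and, depending on the platform, may need to be installed separately):

```shell
# Toy repo standing in for the Boost tree (paths invented).
set -e
tmp=$(mktemp -d) && cd "$tmp"
git init -q mono && cd mono
git config user.email you@example.com
git config user.name "You"
mkdir -p libs/spirit boost/fusion
echo '// spirit' > libs/spirit/spirit.hpp
echo '// other'  > boost/fusion/fusion.hpp
git add . && git commit -qm 'monolithic import'

# Split the history touching libs/spirit into its own branch, whose
# root directory is the old libs/spirit.
git subtree split --prefix=libs/spirit -b spirit-only

git ls-tree -r --name-only spirit-only   # spirit.hpp at the top level
```

Note the limitation mentioned elsewhere in the thread: a single --prefix per split, so a library living in two directories (boost/spirit plus libs/spirit) does not map onto one split cleanly.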

Joel de Guzman <joel@boost-consulting.com> writes:
Wanting to make it modular, I searched around and found git filter-branch. Hence, I was able to create a modular Spirit-3 branch without the other boost libraries. All is well until I did a downstream merge to track the changes that are going into the Boost trunk into my Spirit-3 branch.
boost-trunk (pull)--> spirit3
And hah! It pulls in everything (all boost libraries) again.
So, how do you do what I intend to do? All I want is to have this repo structure:
spirit3
  boost/spirit
  libs/spirit
that can do a merge both ways (upstream and downstream to and from the boost trunk); needless to say, with all the histories intact.
With SVN, it is very easy to extract sub-directories while still tracking changes both ways to and from the source. In Git, everything seems to be one whole global repository. This is one thing I dislike and which SVN has better control over: modularity. Sure, you can make many "modular" git repositories instead of one big one like boost. But the reality is, you don't predict up front how a library is modularized. Spirit itself spawned at least 3 libraries (Phoenix, Wave, Fusion) that stand on their own now. At one point in the life of a library, you may want to refactor and decouple parts somewhere else. Doing this in Git is not straightforward at all, unless I am missing something obviously simple (?).
I'm pretty sure you're not missing anything -- Git/Mercurial want you to operate on the entire repository.
I'd love to hear from the Mercurial experts as well.
You can split a Git or Mercurial repository into two new repositories and extract a sub-tree but you're rewriting history when you do it. Rewriting history implies getting new SHA-1 hash values for the changesets (since their content changes) and so you get *new* and unrelated repositories out of this. I agree with you that SVN is easier to work with here since you can checkout subtrees if you want. You can just toss everything into a single company-wide repository and it will work fine. However, in practice, it doesn't seem to be a big problem. One common solution is to just rename things at some point if you decide to split a repository:

    cd your-big-repo
    hg remove x-dir y-dir
    hg rename z-dir/* .
    hg commit -m "We now focus on z-dir only"

You still have the history for the x-dir and y-dir, but the files from z-dir are now at the top-level of the repository. Because you only renamed the files, you can still push/pull from other clones that have the x-dir and y-dir files. If you want to re-write history you enable the standard convert extension with

    [extensions]
    convert =

in your ~/.hgrc file, create a filemap.txt file with

    include z-dir
    rename z-dir .

and run

    hg convert --filemap filemap.txt your-big-repo your-z-repo

That's the equivalent of Git's filter-branch. -- Martin Geisler Mercurial links: http://mercurial.ch/

On 26 March 2012 06:48, Martin Geisler <mg@aragost.com> wrote:
You can split a Git or Mercurial repository into two new repositories and extract a sub-tree but you're rewriting history when you do it. Rewriting history implies getting new SHA-1 hash values for the changesets (since their content changes) and so you get *new* and unrelated repositories out of this.
git-subtree does annotate commits (in the commit message) so that it can track transferring changes between the two repositories. I don't know how well it works for transferring changes in both directions. But I think it only allows you to extract a single directory; spirit would need to extract two: boost/spirit and libs/spirit.

On 3/26/2012 6:14 PM, Daniel James wrote:
On 26 March 2012 06:48, Martin Geisler <mg@aragost.com> wrote:
You can split a Git or Mercurial repository into two new repositories and extract a sub-tree but you're rewriting history when you do it. Rewriting history implies getting new SHA-1 hash values for the changesets (since their content changes) and so you get *new* and unrelated repositories out of this.
git-subtree does annotate commits (in the commit message) so that it can track transferring changes between the two repositories. I don't know how well it works for transferring changes in both directions. But I think it only allows you to extract a single directory; spirit would need to extract two: boost/spirit and libs/spirit.
It's frustrating to have to go through all these hoops to do such a (IMO) trivial and necessary task. I'm now forming a conclusion that current DVCS systems (both Git and Hg) fail in this regard. Needless to say, this task is equally frustrating to do with SVN as well. All I want to do is to extract a modular Spirit repo from the monolithic Boost repo, with the ability to easily do upstream/ downstream merges. But I can't do it *easily* with any of the DVCS mentioned. I guess it's back to SVN. The amount of hair pulling is the same whatever D/VCS I try. Oh well... Regards, -- Joel de Guzman http://www.boostpro.com http://boost-spirit.com

Joel, I have been following this discussion for quite some time now and didn't want to get involved. Just one point here. On Tue, Mar 27, 2012 at 5:57 AM, Joel de Guzman <joel@boost-consulting.com> wrote:
All I want to do is to extract a modular Spirit repo from the monolithic Boost repo, with the ability to easily do upstream/ downstream merges. But I can't do it *easily* with any of the DVCS mentioned.
This can be done with git submodules really well. I was actually doing this in a major project (the size of which outmatches boost) quite recently. There was a massive code base hosted in svn; I transitioned it to git and at least partially broke it down into submodules (using subtree, by the way, which works like a charm) and assembled a new "main tree" out of those submodules, while maintaining a build system that enables build, dependency management and testing of either separate submodules (that pull in their dependencies) or the whole tree. Essentially, "entity you can depend on" and "submodule" are treated as the same thing. Boost strikes me as a textbook use case for this procedure, especially since modularity is already inherent in the code base and only needs to be cast into physical form.
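A minimal sketch of the submodule arrangement described above, using throwaway local repos (all names and paths invented; the protocol.file.allow setting is only needed on recent git versions, which block local-path submodules by default):

```shell
set -e
tmp=$(mktemp -d) && cd "$tmp"

# A standalone library repo, standing in for one extracted boost library.
git init -q spirit
( cd spirit &&
  git config user.email you@example.com &&
  git config user.name "You" &&
  echo '// spirit' > spirit.hpp &&
  git add . && git commit -qm 'spirit library' )

# The "main tree" references the library as a submodule pinned to a commit.
git init -q boost-super && cd boost-super
git config user.email you@example.com
git config user.name "You"
# Recent git refuses local-path submodules unless the file protocol
# is explicitly allowed; the option is harmlessly ignored by older git.
git -c protocol.file.allow=always submodule add "$tmp/spirit" libs/spirit
git commit -qm 'add spirit as a pinned submodule'
git submodule status   # lists the pinned commit next to libs/spirit
```

The superproject records only a commit id per submodule, so the main tree pins each library to an exact revision, which is what makes the dependency-management scheme above workable.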
I guess it's back to SVN. The amount of hair pulling is the same whatever D/VCS I try. Oh well...
This is just what I have learned from all this. I have spent years trying to craft the perfect system for this, but at the end of the day the quality and power of the system doesn't matter as much as you'd think it would. Emotion is key. You have to have everyone (or at least 90% of the contributors) ready and eager to give it a shot. As soon as you have to discuss it and convince people, it is doomed. People emotionally rejecting it will always find reasons why they can't work with it properly. No matter what technology attempts in terms of power, performance, security, ease of use, safety and so on, emotional sentiment will triumph. I have learned this the hard way.

The simple fact that there is a heated discussion about this here, which has been going on for almost a year now, means that it's not gonna work. As simple as that. You should not try it. If you have a small project with a few people all in for it, then of course it's a great way to work and it makes so many everyday things so much nicer, but as soon as you have a substantial number of users not buying into it, the mere trial will only cause bad blood. It just isn't possible to actually convince somebody else of this. Everyone has different feelings of what is ease of use, what is power, what is quality. And then there is a huge amount of fear among those who have no experience with DVCS and don't want to admit to it. So they will argue against it using undefinable terms and emotional responses. Or, even worse, not use it or circumvent usage. Been there, seen that.

Stick with SVN. Forget about all this. Plus, as many have pointed out during this discussion, there are ways of using the power of git without going the whole nine yards: git-svn and friends. Even though this is just a shadow of what could be done, it's still a great way of actually getting folks to try it and therefore lose their sentiment. Cheers, and excuse my wandering slightly off topic... Stephan

On 3/27/2012 4:08 PM, Stephan Menzel wrote:
Joel,
I was following this discussion for quite some time now and didn't want to get involved. Just one point here.
On Tue, Mar 27, 2012 at 5:57 AM, Joel de Guzman <joel@boost-consulting.com> wrote:
All I want to do is to extract a modular Spirit repo from the monolithic Boost repo, with the ability to easily do upstream/ downstream merges. But I can't do it *easily* with any of the DVCS mentioned.
This can be done with git submodules really well. I was actually doing this in a major project (the size of which outmatches boost) quite recently. There was a massive code base hosted in svn; I transitioned it to git and at least partially broke it down into submodules (using subtree, by the way, which works like a charm) and assembled a new "main tree" out of those submodules, while maintaining a build system that enables build, dependency management and testing of either separate submodules (that pull in their dependencies) or the whole tree. Essentially, "entity you can depend on" and "submodule" are treated as the same thing. Boost strikes me as a textbook use case for this procedure, especially since modularity is already inherent in the code base and only needs to be cast into physical form.
All in all, this was incredibly cool and I enjoy working with the resulting system every day. The only small downside that is left is the fact that I haven't gotten around to automatically generating the main tree by parsing a dependency graph in a little python script that puts the right submodules in the right refs at the right places. Which would work nice with the git python bindings.
With all this being said, the reason why I didn't bring all this up in this discussion is the unfortunate fact that all this is fruitless in large project as long as there are sentiments against it.
I guess it's back to SVN. The amount of hair pulling is the same whatever D/VCS I try. Oh well...
[snip anti-git rant]
Plus, as many have pointed out during this discussion, there are ways of using the power of git without going the whole nine yards. git-svn and friends. Even though this is just a shadow of what could be done, it's still a great way of actually getting folks to try it and therefore lose their sentiment.
Cheers, and excuse my wandering slightly off topic...
Believe me, I tried git-svn as suggested in this thread. And believe me, I want to make it work, but failed to do so, and in the process lost a weekend which could have been better allocated to C++ coding.

Ok, I open my challenge to you and any Git/Hg fans. Again, all I want is to:

1) Extract this modular repo structure from Boost:

    spirit
      boost/spirit
      libs/spirit

2) that can merge both ways (upstream and downstream to and from the boost SVN trunk); needless to say, with all the histories intact.

If anyone can do this and offer a way that's **actually tested**, I'm all ears. Emphasis: I don't want to waste any more time following dead ends! I want a procedure that's actually tested with the Boost repo and the Spirit library in particular (using git-svn or whatnot). Better yet, if someone can actually put the modular repo somewhere (github, gitorious), with specific and sane usage instructions, then I'd immediately use that. Regards, -- Joel de Guzman http://www.boostpro.com http://boost-spirit.com

1) Extract this modular repo structure from Boost:
spirit
  boost/spirit
  libs/spirit
2) that can merge both ways (upstream and downstream to and from the boost SVN trunk); needless to say, with all the histories intact.
To do that you first need to split the current repository so that each lib has its own repository... making point 2 impossible or just too painful. Your workflow would be possible in a simple manner once we moved to git/hg with each lib in its own repository, which would be better imho (lib-specific history instead of history from all the libs). Philippe

Joel, On Tue, Mar 27, 2012 at 10:52 AM, Joel de Guzman <joel@boost-consulting.com> wrote:
[snip anti-git rant]
I think I was being misunderstood. I am indeed a great fan and didn't want to rant against git. I just came to accept that as long as there's sentiment there's no transition possible.
Ok, I open my challenge to you and any Git/Hg fans.
Again, all I want is to:
1) Extract this modular repo structure from Boost:
    spirit
        boost
            spirit
        libs
            spirit
Easily doable
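[To make the "easily doable" half concrete, here is a hedged sketch of extracting two subdirectories, with their history, into a standalone repo. It runs against a throwaway toy repo whose paths merely mimic Joel's layout; `git filter-branch` is nowadays deprecated in favour of git-filter-repo, but it ships with stock git.]

```shell
#!/bin/sh
# Sketch: keep only boost/spirit and libs/spirit (with history) in a clone.
set -e
work=$(mktemp -d)

# A toy "boost" superproject with the two spirit directories plus noise.
git init -q "$work/boost"
cd "$work/boost"
git config user.name demo; git config user.email demo@example.org
mkdir -p boost/spirit libs/spirit libs/other
echo '// header' > boost/spirit/core.hpp
echo 'docs'      > libs/spirit/readme.txt
echo 'noise'     > libs/other/file.txt
git add -A; git commit -qm 'import everything'

# Clone, then rewrite history keeping only the spirit paths.
git clone -q "$work/boost" "$work/spirit"
cd "$work/spirit"
FILTER_BRANCH_SQUELCH_WARNING=1 git filter-branch -f --prune-empty \
    --index-filter '
        git ls-files |
            grep -v -e "^boost/spirit/" -e "^libs/spirit/" |
            xargs -r git rm -q --cached --ignore-unmatch
    ' HEAD
git ls-tree -r --name-only HEAD   # only the spirit paths remain
```

The harder half of the challenge, the two-way merging, is not addressed by this at all.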
2) that can merge both ways (upstream and downstream to and from the boost SVN trunk); needless to say, with all the histories intact.
Not at all doable. As I said, one may be able to hack this or make it work, but it's not a defined or supported way of working with it. And using such a system in a way that defies its intended use is pointless and will only support those who claim git itself is not a good thing. Sure, you may be able to cut your fingernails with a chainsaw, but life is just better and a little safer if you don't, and doing it gives the chainsaw a reputation it doesn't deserve.
If anyone can do this and offer a way that's **actually tested**, I'm all ears. Emphasis: I don't want to waste any more time following dead ends! I want a procedure that's actually tested with the Boost repo and the Spirit library in particular (using git-svn or whatnot).
Better yet, if someone can actually put the modular repo somewhere (github, gitorious), with specific and sane usage instructions, then I'd immediately use that.
git-svn specifically mentions that usage as being not supported. Two-way svn syncs with changes on both ends are a no-go. I have tried that in the past and it didn't work out. Which means: Challenge not accepted! ;-) I have substantial experience with transitions like that in several projects in my past and I have the strong opinion that either you do it right or you don't do it at all. Any compromise along the git-svn lines except an initial export is bogus. As I said, it would have been nice, but existing sentiment forbids chasing that any further. The boost community should just accept this as consensus and be done with it. Cheers, Stephan

On 3/27/2012 5:58 PM, Stephan Menzel wrote:
git-svn specifically mentions that usage as being not supported. Two-way svn syncs with changes on both ends are a no-go. I have tried that in the past and it didn't work out. Which means: Challenge not accepted! ;-) I have substantial experience with transitions like that in several projects in my past and I have the strong opinion that either you do it right or you don't do it at all. Any compromise along the git-svn lines except an initial export is bogus.
As I said, it would have been nice, but existing sentiment forbids chasing that any further. The boost community should just accept this as consensus and be done with it.
On 3/27/2012 5:54 PM, Philippe Vaucher wrote:
To do that you first need to split the current repository where each lib has its own repository... making point 2 impossible or just too painful.
Your workflow would be possible in a simple manner once we moved to git/hg where each lib has its own repository, which would be better imho (have lib-specific history instead of history from all the libs).
Ok, so it can't be done with git-svn. Is there no other way to do it, then? What you guys want is for the whole of Boost to migrate en-masse to Git because it's the Git way? All or nothing? In one fell swoop? I'm willing to migrate. I appreciate the benefits of a DVCS. I can convince my fellow Spirit/Phoenix/Fusion devs to migrate. But if you are telling me that I also have to convince all the other Boost libraries to migrate en-masse, then that is simply absurd! Regards, -- Joel de Guzman http://www.boostpro.com http://boost-spirit.com

On Tue, Mar 27, 2012 at 1:15 PM, Joel de Guzman <joel@boost-consulting.com> wrote:
What you guys want is for the whole of Boost to migrate en-masse to Git because it's the Git way? All or nothing? In one fell swoop?
Yes.
I'm willing to migrate. I appreciate the benefits of a DVCS. I can convince my fellow Spirit/Phoenix/Fusion devs to migrate. But if you are telling me that I also have to convince all the other Boost libraries to migrate en-masse, then that is simply absurd!
My point exactly. Sad. Very sad, but true. I guess that allows me to crawl back under my rock ;-) Stephan

On 27 March 2012 12:15, Joel de Guzman <joel@boost-consulting.com> wrote:
Ok, so it can't be done with git-svn. Is there no other way to do it, then?
What you guys want is for the whole of Boost to migrate en-masse to Git because it's the Git way? All or nothing? In one fell swoop?
I'm willing to migrate. I appreciate the benefits of a DVCS. I can convince my fellow Spirit/Phoenix/Fusion devs to migrate. But if you are telling me that I also have to convince all the other Boost libraries to migrate en-masse, then that is simply absurd!
That's what happened for the CVS to subversion migration, and it's pretty much what the boost steering committee was established for. But it isn't too hard to cope with transferring changes in a single direction. So if *all* development for a library does migrate to git, it's relatively easy to transfer changes from git to subversion. It depends on how accurate you need the subversion history to be. The problem is that subversion history needs to be linear, and git history generally isn't. I think it wouldn't be too hard to generate a linear series of patches with merged branches represented by a single commit, and then apply those patches to subversion. I'm not sure if that is a good idea just yet, because if there is a big migration, then your git repository will need to be reconciled with the new git repository.
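[The "linear series of patches with merged branches represented by a single commit" idea can be sketched with stock git on a toy repo. This is my own illustration, not a tested Boost procedure: following only the first-parent chain collapses every merged branch into one diff, which could then be applied to svn one commit at a time.]

```shell
#!/bin/sh
# Sketch: export a linear patch series where each merge is a single diff.
set -e
work=$(mktemp -d)
git init -q "$work/repo"; cd "$work/repo"
git config user.name demo; git config user.email demo@example.org

echo 1 > main.txt;   git add -A; git commit -qm 'mainline work'
git checkout -qb topic
echo a > topic.txt;  git add -A; git commit -qm 'topic part 1'
echo b >> topic.txt; git add -A; git commit -qm 'topic part 2'
git checkout -q -
git merge -q --no-ff -m 'merge topic' topic

mkdir patches; i=0
empty_tree=$(git hash-object -t tree /dev/null)
for rev in $(git rev-list --reverse --first-parent HEAD); do
    i=$((i + 1))
    # diff each mainline commit against its first parent;
    # the root commit is diffed against the empty tree
    parent=$(git rev-parse -q --verify "$rev^" 2>/dev/null || echo "$empty_tree")
    git diff "$parent" "$rev" > "patches/$(printf '%03d' "$i").patch"
done
ls patches   # 002.patch holds the whole topic branch as one diff
```

Applying such patches to an svn working copy would lose authorship and commit granularity inside merged branches, which is exactly the accuracy trade-off mentioned above.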

Joel de Guzman <joel@boost-consulting.com> writes:
I'm willing to migrate. I appreciate the benefits of a DVCS. I can convince my fellow Spirit/Phoenix/Fusion devs to migrate. But if you are telling me that I also have to convince all the other Boost libraries to migrate en-masse, then that is simply absurd!
It only seems absurd because you've ordered things in a particular way. It's important, I think, to separate making boost modular from choosing the VCS. If boost transitions to a new VCS before modularizing, then of course every library will have to transition. It's one big repository. There's nothing particularly absurd about this. If boost modularizes first then I see no reason why each library/subproject can't transition to a new VCS on its own time. Sure, it will be a bit more tedious to work with more than one VCS during the transition time but it certainly is doable. -Dave

On 27-3-2012 13:15, Joel de Guzman wrote:
I'm willing to migrate. I appreciate the benefits of a DVCS. I can convince my fellow Spirit/Phoenix/Fusion devs to migrate. But if you are telling me that I also have to convince all the other Boost libraries to migrate en-masse, then that is simply absurd!
Did not follow all of this thread (only some messages). But just to let you know, I'm willing to migrate too. I really love git, and since switching I see many of the shortcomings of SVN (especially svn merging, which is a waste of time), so I'm looking forward to the git convenience (or mercurial). Regards, Barend (Boost.Geometry)

Stephan Menzel wrote:
On Tue, Mar 27, 2012 at 10:52 AM, Joel de Guzman <joel@boost-consulting.com> wrote:
Ok, I open my challenge to you and any Git/Hg fans.
Again, all I want is to:
1) Extract this modular repo structure from Boost:
    spirit
        boost
            spirit
        libs
            spirit
Easily doable
2) that can merge both ways (upstream and downstream to and from the boost SVN trunk); needless to say, with all the histories intact.
Not at all doable. As I said, one may be able to hack this or make it work, but it's not a defined or supported way of working with it. And using such a system in a way that defies its intended use is pointless and will only support those who claim git itself is not a good thing. Sure, you may be able to cut your fingernails with a chainsaw, but life is just better and a little safer if you don't, and doing it gives the chainsaw a reputation it doesn't deserve.
Just because I could think of a way, here's a "cut your fingernails with a chainsaw" approach (excellent formulation, by the way). Using multiple git repositories:

- a git-svn mirror of the Boost trunk;
- a subtree repository for just /boost/spirit;
- a subtree repository for just /libs/spirit;
- optionally, more subtree repositories;
- the actual modular repo for Spirit, on which you do your work.

To merge upstream (assuming you don't edit non-Spirit subtrees):

(on the modular repo)
- git-subtree-push to the /boost/spirit repo
- git-subtree-push to the /libs/spirit repo
(on the trunk mirror)
- git-subtree-pull from the /boost/spirit repo
- git-subtree-pull from the /libs/spirit repo
- git-svn-push to the Boost trunk

To merge downstream:

(on the trunk mirror)
- git-svn-pull from the Boost trunk
- git-subtree-push to the /boost/spirit repo
- git-subtree-push to the /libs/spirit repo
- git-subtree-push to the other subtree repos
(on the modular repo)
- git-subtree-pull from the /boost/spirit repo
- git-subtree-pull from the /libs/spirit repo
- git-subtree-pull from the other subtree repos

It would need some scripts (perhaps triggers) to make it doable, and then it would still not be elegant, but I believe it could work. It would take some care because each commit must edit only a single subtree (I don't think the system would break otherwise, but you might introduce sibling commits with the same commit message as an artifact). I also expect new commits will appear in a different order after you merge upstream or downstream, i.e. stably sorted by subtree, but history will not be altered afterwards on either side of the "subtree bridge". This might need clarification---if so, please poke me.

I've not implemented this idea, both because I'm short on time and because I think it's best to wait for the insights of other people first. I wouldn't be surprised if the entire idea is hopeless for reasons I haven't thought of.
As Daniel pointed out, the entire thing will be slightly easier to handle if people only contribute to Spirit through the modular git repo. -Julian

Julian Gonggrijp <j.gonggrijp@gmail.com> writes:
Just because I could think of a way, here's a "cut your fingernails with a chainsaw" approach (excellent formulation by the way). Using multiple git repositories:
I'm setting up something very similar to this in the "real world" and it's working so far. It's not pretty at all but it does allow developers to work on a modular project in git. We don't have the additional complexity of re-merging two modular subtrees into a third subtree but that's almost incidental. Again, I really wouldn't recommend this for Boost, where the community has the power to switch entirely to git (or mercurial, or whatever). git-svn really hampers a lot of stuff. -Dave

On 27 March 2012 04:57, Joel de Guzman <joel@boost-consulting.com> wrote:
It's frustrating to have to go through all these hoops to do such a (IMO) trivial and necessary task. I'm now forming a conclusion that current DVCS systems (both Git and Hg) fail in this regard. Needless to say, this task is equally frustrating to do with SVN as well.
If it was a trivial and necessary task, then it'd be supported in at least one of the new version control tools. It's certainly not trivial, and most people don't seem to miss it. Anyway, as I said before, git-svn is really bad for collaborative work.

Frank Birbacher <bloodymir.crap@gmx.net> writes:
Currently I'd like Boost to stick with svn and suggest people who like git shall try "git-svn" or "git svn" (git seems to have native support for svn.)
I've used git-svn extensively. It's not the same. In particular, one has to be very careful in using rebase even in a local repository. That is very un-git. -Dave

(NB: it took me some time to write this post and in the meanwhile some of the issues I'm addressing have been covered. Hopefully what I wrote is still useful by making the central things very explicit.) Rene Rivera wrote:
On 3/20/2012 10:15 PM, David Bergman wrote:
We evidently have different styles of formal solving; mine is a balance between an internal - or semi-internal - process and an "accountable" collaborative effort. I do not see the value of everybody seeing every single key stroke I make, as long as they see certain sync points; actually, quite analogously to the operational semantics of C++ - that certain points at the execution have to follow some rules...
Hm.. I must be not understanding something.. Are you arguing that not all commits/check-ins you do to a local/private repository are important enough to merit the benefits of collaboration? I ask because my contention is that if it's important enough for you to put something into a VCS history, it's important enough for your collaborators to inspect it.. in perpetuity. And that the sooner that inspection happens the better it is for everyone. Hence deleting such history is counter to collaboration.
This already received several comments, but I think there is something very deep about this that deserves more attention. It revolves around the following fundamental question:

What is the meaning of a commit?

One possible interpretation is that a commit is a snapshot of your project. A snapshot is something that you store for future reference. Because in a sense it's a form of documentation, one will take care to submit well-crafted commits that include enough useful changes to license a new snapshot. In principle, every commit is assumed to introduce some form of progress compared to the previous. Making changes to such a history of snapshots is almost necessarily a form of fraud. This is the kind of mental model of a commit that is stimulated by svn. You can see it from the terminology: making a commit causes the repository to move to the next revision number.

Another possible interpretation is that a commit represents a unit of work. This tends to favour many small commits over few big commits. Since anything you do before you're sure that it's the right thing is also work, shabby commits are part of the deal. The consequence is that it must be very cheap to isolate any messy state in temporary side tracks. Now the VCS is not only a collection of snapshots, but also a tool to manage your recent pieces of work before you finally commit* to some of them. This kind of mental model is stimulated by git. It explains why git users make a fuss about amending, rebasing and efficient branching and merging.

There is no point in arguing that one mental model is superior to the other until you fully grasp both of them. I urge anyone who feels tempted to make agitated remarks to let this sink in for at least a few hours. That said, I'm confident enough to think that I can give two solid arguments why the units-of-work model is ultimately more productive.
The first argument is provided by historical evidence, and nicely illustrated by Christof Donat's most recent post in this thread. The units-of-work model was first: local VCSs of the early generation invited developers to commit often. In the centralised VCSs of the next generation, committing became too expensive for such a workflow and developers adapted to the snapshot model instead. From that perspective the snapshot model was a workaround rather than a preferred solution. Distributed VCSs of the current generation specifically intend to address that problem by making commits cheap again. Developers are now using the opportunity to switch back to the units-of-work model.

The second argument is more technical, and perhaps more convincing. It works even without branches or collaborators. All we need is a single developer who makes some changes to their working copy of a project.

1. If the developer applies the snapshot model, they'll implement all changes in one go and spend some time to verify that they seem to make sense. After that they'll probably make a commit.
2. If the developer applies the units-of-work model, they'll commit each change directly after implementing it. Let's say five commits are made in total.

A little later, our developer finds that they want to undo one of the changes.

1. In the snapshot scenario, they look up the pieces of code that were affected by the faulty change and edit them again.
2. In the units-of-work scenario, they cut the faulty commit out of history and they're done.

Result: the units-of-work developer is spending less time to get the same thing done with less opportunity for errors. Note that the pieces of history that tend to get altered in a units-of-work model generally don't make it into version control in a snapshot model at all.

-Julian

---- *) Commit as in, make a commitment. Pun not entirely unintended.
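[The "cut the faulty commit out of history" step can be sketched with a non-interactive rebase on a toy repo; this is my own illustration of the mechanism, and it only applies while the history is still unpublished. With unit-of-work commits, one bad change sits in exactly one commit, so dropping it is a single operation.]

```shell
#!/bin/sh
# Sketch: remove one unwanted commit from an unpublished linear history.
set -e
work=$(mktemp -d)
git init -q "$work/repo"; cd "$work/repo"
git config user.name demo; git config user.email demo@example.org

for n in 1 2 3 4 5; do
    echo "change $n" > "change-$n.txt"
    git add -A; git commit -qm "unit of work $n"
done

# Say commit 3 turned out to be wrong: replay everything after it onto
# its parent, which removes it (and its file) from history in one step.
bad=$(git log --format=%H --grep='unit of work 3')
git rebase -q --onto "$bad^" "$bad"
git log --oneline   # four commits left, number 3 gone
ls                  # change-3.txt no longer exists
```

In the snapshot scenario the same undo means hand-editing the affected files, because the faulty change is smeared across one big commit.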

On 03/21/2012 04:56 PM, Julian Gonggrijp wrote:
(NB: it took me some time to write this post and in the meanwhile some of the issues I'm addressing have been covered. Hopefully what I wrote is still useful by making the central things very explicit.)
Rene Rivera wrote:
On 3/20/2012 10:15 PM, David Bergman wrote:
We evidently have different styles of formal solving; mine is a balance between an internal - or semi-internal - process and an "accountable" collaborative effort. I do not see the value of everybody seeing every single key stroke I make, as long as they see certain sync points; actually, quite analogously to the operational semantics of C++ - that certain points at the execution have to follow some rules...

Hm.. I must be not understanding something.. Are you arguing that not all commits/check-ins you do to a local/private repository are important enough to merit the benefits of collaboration? I ask because my contention is that if it's important enough for you to put something into a VCS history, it's important enough for your collaborators to inspect it.. in perpetuity. And that the sooner that inspection happens the better it is for everyone. Hence deleting such history is counter to collaboration.

This already received several comments, but I think there is something very deep about this that deserves more attention. It revolves around the following fundamental question:
What is the meaning of a commit?
One possible interpretation is that a commit is a snapshot of your project. A snapshot is something that you store for future reference. Because in a sense it's a form of documentation, one will take care to submit well-crafted commits that include enough useful changes to license a new snapshot. In principle, every commit is assumed to introduce some form of progress compared to the previous. Making changes to such a history of snapshots is almost necessarily a form of fraud.
This is the kind of mental model of a commit that is stimulated by svn. You can see it from the terminology: making a commit causes the repository to move to the next revision number.
Another possible interpretation is that a commit represents a unit of work. This tends to favour many small commits over few big commits. Since anything you do before you're sure that it's the right thing is also work, shabby commits are part of the deal. The consequence is that it must be very cheap to isolate any messy state in temporary side tracks. Now the VCS is not only a collection of snapshots, but also a tool to manage your recent pieces of work before you finally commit* to some of them.
This kind of mental model is stimulated by git. It explains why git users make a fuss about amending, rebasing and efficient branching and merging.
There is no point in arguing that one mental model is superior to the other until you fully grasp both of them. I urge anyone who feels tempted to make agitated remarks to let this sink in for at least a few hours.
That said, I'm confident enough to think that I can give two solid arguments why the units-of-work model is ultimately more productive.
The first argument is provided by historical evidence, and nicely illustrated by Christof Donat's most recent post in this thread. The units-of-work model was first: local VCSs of the early generation invited developers to commit often. In the centralised VCSs of the next generation, committing became too expensive for such a workflow and developers adapted to the snapshot model instead. From that perspective the snapshot model was a workaround rather than a preferred solution. Distributed VCSs of the current generation specifically intend to address that problem by making commits cheap again. Developers are now using the opportunity to switch back to the units-of-work model.
The second argument is more technical, and perhaps more convincing. It works even without branches or collaborators. All we need is a single developer who makes some changes to their working copy of a project.
1. If the developer applies the snapshot model, they'll implement all changes in one go and spend some time to verify that they seem to make sense. After that they'll probably make a commit. 2. If the developer applies the units-of-work model, they'll commit each change directly after implementing it. Let's say five commits are made in total.
A little later, our developer finds that they want to undo one of the changes.
1. In the snapshot scenario, they look up the pieces of code that were affected by the faulty change and edit them again. 2. In the units-of-work scenario, they cut the faulty commit out of history and they're done.
Result: the units-of-work developer is spending less time to get the same thing done with less opportunity for errors.
Note that the pieces of history that tend to get altered in a units-of-work model generally don't make it into version control in a snapshot model at all.
-Julian
---- *) Commit as in, make a commitment. Pun not entirely unintended.
Sure, this all makes sense. Except that failures often only materialize _after_ you have made your changes public. As discussed in this thread already, rewriting public history should be avoided. With boost this is even more critical. As mentioned in another thread, we want, and need, to test on various platforms. In order to do that, we need to make changes public. So, to repeat, this all sounds nice and dandy, but after digging deeper, it doesn't sound like it is generally applicable. Unless you can test _everything_ on your local machine, or you push onto a volatile branch, which opens a whole other can of worms (from what I understand).

Some of the thread has alluded to the software development process, of which version control is only one component; it is not a silver bullet for all of the other pieces. Specifically, version control is not revision control. Version control is concerned with versioning state. Revision control is concerned with authorizing changes. Version control systems have access rights but aren't oriented toward revision control processes. Much of the discussion necessarily compares centralized vs. distributed, though the terms are intermixed in the discussion and mean something quite different in each context. I think if version control can be decoupled from the rest of the software development process, version control issues can be satisfied quite easily. The rest of the software development process seems much less defined to me for Boost, and I think it fuels much of the frustration sometimes seen.

Thomas Heller wrote:
On 03/21/2012 04:56 PM, Julian Gonggrijp wrote:
[...]
The second argument is more technical, and perhaps more convincing. It works even without branches or collaborators. All we need is a single developer who makes some changes to their working copy of a project.
[...]
Result: the units-of-work developer is spending less time to get the same thing done with less opportunity for errors.
Note that the pieces of history that tend to get altered in a units-of-work model generally don't make it into version control in a snapshot model at all.
Sure, this all makes sense. Except that failures often only materialize _after_ you made your changes public. [...] So, to repeat, this all sounds nice and dandy, but after digging deeper, it doesn't sound like it is generally applicable.
This is not just about bugs or private/public, this is about making changes to work. Even just changing your mind before you make something public already happens often enough to make local unit-of-work commits a feature. If a bug is found after the faulty code has gone public, small commits still help to better narrow down the changes that caused it and to reduce the amount of work that has to be done to solve the issue. Unit-of-work commits make it easier to find and review past work, reduce the burden on the developer to keep track of what they're doing until they're ready to publish it, and enable you to keep unfinished but versioned work around while working on other, more publish-ready changes. Unit-of-work commits really help you to manage and keep track of work, contrary to snapshot commits which mostly just provide a backup facility. This is way more generally applicable than you seem to be willing to admit.
Unless you can test _everything_ on your local machine, or you push onto a volatile branch, which opens a completely other can of worms (from what i understand).
What can of worms? I don't recall reading any post that described a can of worms associated with volatile branches. Besides, /if/ you're altering volatile branches anyway, that's again way easier to manage with unit-of-work commits than with snapshot commits. You seem to suggest in addition that what we've been discussing here has something to do with cans of worms. Do you actually intend to suggest that unit-of-work commits introduce problems that don't exist for snapshot commits? -Julian

Thomas Heller wrote:
On 03/21/2012 04:56 PM, Julian Gonggrijp wrote:
[...]
The second argument is more technical, and perhaps more convincing. It works even without branches or collaborators. All we need is a single developer who makes some changes to their working copy of a project.
[...]
Result: the units-of-work developer is spending less time to get the same thing done with less opportunity for errors.
Note that the pieces of history that tend to get altered in a units-of-work model generally don't make it into version control in a snapshot model at all. Sure, this all makes sense. Except that failures often only materialize _after_ you made your changes public. [...] So, to repeat, this all sounds nice and dandy, but after digging deeper, it doesn't sound like it is generally applicable. This is not just about bugs or private/public, this is about making changes to work. Even just changing your mind before you make something public already happens often enough to make local unit-of-work commits a feature. If a bug is found after the faulty code has gone public, small commits still help to better narrow down the changes that caused it and to reduce the amount of work that has to be done to solve the issue.
Unit-of-work commits make it easier to find and review past work, reduce the burden on the developer to keep track of what they're doing until they're ready to publish it, and enable you to keep unfinished but versioned work around while working on other, more publish-ready changes. Unit-of-work commits really help you to manage and keep track of work, contrary to snapshot commits which mostly just provide a backup facility.
This is way more generally applicable than you seem to be willing to admit.
Unless you can test _everything_ on your local machine, or you push onto a volatile branch, which opens a completely other can of worms (from what i understand). What can of worms? I don't recall reading any post that described a can of worms associated with volatile branches. Besides, /if/ you're altering volatile branches anyway, that's again way easier to manage with unit-of-work commits than with snapshot commits. You seem to suggest in addition that what we've been discussing here has something to do with cans of worms. Do you actually intend to suggest that unit-of-work commits introduce problems that don't exist for snapshot commits?
-Julian
Sure, this all makes perfect sense. But this is not restricted to a DVCS; this can be done in any version control system (be it centralized or not). It is a matter of good habit. Though, as has been pointed out numerous times now, every approach comes with its own set(s) of problems.
No, I am saying that altering history is dangerous! Which you described as one of the advantages of "the git approach". I completely lost track. I described the experience I had; I was told that this was not the way to go, and I am not sure if it is directly related to "changing history". But the problems existed. Now, force pushing (as I understand, the only way to rewrite published history) essentially breaks every other clone of that public repository. This is exactly the can of worms I am referring to. In the context of Boost, as a loosely coupled organisation where I might want to seamlessly switch, merge and whatever with other people's work, this looks like a serious problem. Or to formulate it differently: when is my public repository, which I intended for my use only, not private anymore?

Thomas Heller wrote:
On 03/21/2012 11:02 PM, Julian Gonggrijp wrote:
This is not just about bugs or private/public, this is about making changes to work. Even just changing your mind before you make something public already happens often enough to make local unit-of-work commits a feature. If a bug is found after the faulty code has gone public, small commits still help to better narrow down the changes that caused it and to reduce the amount of work that has to be done to solve the issue.
Unit-of-work commits make it easier to find and review past work, reduce the burden on the developer to keep track of what they're doing until they're ready to publish it, and enable you to keep unfinished but versioned work around while working on other, more publish-ready changes. Unit-of-work commits really help you to manage and keep track of work, contrary to snapshot commits which mostly just provide a backup facility.
This is way more generally applicable than you seem to be willing to admit. Sure, this all makes perfect sense. But this is not restricted to a DCVS, this can be done any version control system (be it centralized or not).
Nobody is going to make many small commits in a CVCS, because it adds too much overhead and because commits on a CVCS have to be "good enough". I've already said this. The "you can do this in svn too" argument really doesn't hold here.
It is a matter of good habit.
A good habit people are forced to give up when using a CVCS.
Though, as has been pointed out numerous times now, every approach comes with its own set(s) of problems.
Unit-of-work commits by themselves do not introduce any new problems in comparison with snapshot commits. It's rather the other way round.
Unless you can test _everything_ on your local machine, or you push onto a volatile branch, which opens a completely other can of worms (from what i understand). What can of worms? I don't recall reading any post that described a can of worms associated with volatile branches. Besides, /if/ you're altering volatile branches anyway, that's again way easier to manage with unit-of-work commits than with snapshot commits.
You seem to suggest in addition that what we've been discussing here has something to do with cans of worms. Do you actually intend to suggest that unit-of-work commits introduce problems that don't exist for snapshot commits? No, I am saying that altering history is dangerous! Which you described as one of the advantages of "the git approach".
We have to distinguish between published and unpublished history. The power to alter unpublished history isn't dangerous at all, it's a convenience that boosts productivity and which is better facilitated by smaller commits. Altering published history may admittedly come with caveats, but I think this is orthogonal to the size of commits; whether you alter published history or not, smaller commits will always help you to do a better job at what you do.
I completely lost track. I described an experience I had; I was told that this was not the way to go. I am not sure if it is directly related to "changing history", but the problems existed.
I read your nightmare story and I appreciate it. You're definitely right that the forking business isn't trivial. However, with the right preparation it can be fun and rewarding rather than a nightmare.
Now, force pushing (as I understand it, this is the only way to rewrite published history) essentially breaks every other clone of that public repository.
No, as Dave explained it doesn't have to break other repositories. If your peers know what to expect they'll pull --rebase from the volatile branch and nothing breaks.
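A minimal sketch of the `pull --rebase` workflow described here, assuming a POSIX shell with git installed; the repository and file names are made up. A peer who knows the upstream branch is volatile can follow a history rewrite without anything breaking:

```shell
set -e
tmp=$(mktemp -d) && cd "$tmp"
GIT="git -c user.name=demo -c user.email=demo@example.com"

# a bare repository holding the shared volatile branch
$GIT init -q --bare origin.git
$GIT clone -q origin.git alice 2>/dev/null && cd alice
echo v1 > file.txt
$GIT add file.txt && $GIT commit -qm 'volatile work'
$GIT push -q origin HEAD

# a peer clones the volatile branch and builds on it
cd .. && $GIT clone -q origin.git bob
cd bob
echo notes > notes.txt
$GIT add notes.txt && $GIT commit -qm 'peer work on top'

# the branch owner rewrites (rewords) the published commit and force-pushes
cd ../alice
$GIT commit -q --amend -m 'volatile work, reworded'
$GIT push -q --force origin HEAD

# the peer, expecting rewrites, rebases instead of merging
cd ../bob
$GIT pull -q --rebase
$GIT log --oneline   # both the reworded commit and the peer's own commit survive
```

Because the amended commit carries the same patch as the original, the rebase drops the stale copy and replays only the peer's own work on top of the rewritten branch.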
[...] This is exactly the can of worms I was mentioning. Or to formulate it differently: When is my public repository, which I intended for my use only, not private anymore?
A public repository is never private? If you want a repository to be private then don't make it public. Perhaps you meant to ask something else. -Julian

On Thu, Mar 22, 2012 at 11:01 AM, Julian Gonggrijp <j.gonggrijp@gmail.com> wrote:
Nobody is going to make many small commits in a CVCS, because it adds too much overhead and because commits on a CVCS have to be "good enough". I've already said this. The "you can do this in svn too" argument really doesn't hold here.
That's not entirely true. If it's fast enough (and if the team is small enough) I still do tons of tiny commits. But yeah, history-wise that's not good for others. Olaf

Thomas Heller <thom.heller@googlemail.com> writes:
On 03/21/2012 11:02 PM, Julian Gonggrijp wrote:
Thomas Heller wrote:
Unit-of-work commits make it easier to find and review past work, reduce the burden on the developer to keep track of what they're doing until they're ready to publish it, and enable you to keep unfinished but versioned work around while working on other, more publish-ready changes. Unit-of-work commits really help you to manage and keep track of work, contrary to snapshot commits which mostly just provide a backup facility.
This is way more generally applicable than you seem to be willing to admit.
Sure, this all makes perfect sense. But this is not restricted to a DVCS, this can be done in any version control system (be it centralized or not). It is a matter of good habit.
Indeed -- working in SVN or Git makes no difference here: you can (and should!) make small and self-contained commits in all systems.
You seem to suggest in addition that what we've been discussing here has something to do with cans of worms. Do you actually intend to suggest that unit-of-work commits introduce problems that don't exist for snapshot commits?
No, I am saying that altering history is dangerous! Which you described as one of the advantages of "the git approach".
Altering local (=unpublished) history can be convenient. It's considered an advanced feature in Mercurial -- you need to enable extensions for this. Git has a *bias* towards more history rewriting since it comes with these features enabled by default -- but it's still frowned upon if you rewrite public history in git.
In the context of boost, as a loosely coupled organisation, where I might want to seamlessly switch, merge and whatever with other people's work, this looks like a serious problem. This is exactly the can of worms I was mentioning.
It's not a serious problem in practice. DVCS sounds like anarchy at first, but it's not much different from a centralized setup. You have a main repository (possibly on boost.org) and things that are pushed there are by definition final/immutable/frozen: they've been published and you must assume that people depend on them. So you just don't rewrite those changes. If you push there and find a bug, then you do the same as in SVN: you make a new commit that fixes the bug. Collaboration with a DVCS is really a question of making incremental append-only changes to a code base. That hasn't changed much from centralized VCS. The D in DVCS does allow you to pull changes directly from Alice or Bob if you like. That can be convenient for working on a feature outside of the main repo. But when the changes go to the main repo, they are just as immutable as in SVN.
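The append-only fix described here can be sketched as follows (assuming git; the repository and file names are illustrative). The buggy commit stays in the published history, and the correction is simply a new commit on top:

```shell
set -e
tmp=$(mktemp -d) && cd "$tmp"
GIT="git -c user.name=demo -c user.email=demo@example.com"

$GIT init -q repo && cd repo
echo 'good code' > file.txt
$GIT add file.txt && $GIT commit -qm 'initial version'
echo 'buggy code' > file.txt
$GIT commit -qam 'published change that turns out to be buggy'

# no history rewriting: the fix is appended as a third commit
$GIT revert --no-edit HEAD >/dev/null
cat file.txt   # 'good code' again, with all three commits preserved
```

`git revert` records the undo as ordinary new history, which is exactly what a published branch requires; nothing that others may have pulled is invalidated.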
Or to formulate it differently: When is my public repository, which I intended for my use only, not private anymore?
In principle, it ceases to be private when you put it on a public server, tell others about it, and they begin basing their own work on your volatile changes. If you publish a repository on GitHub and tell me about it, then I might look at the commits there and give you feedback. If I'm not basing any work on the changes, then it's no problem if you later destroy the commits and even delete the repository. But if you push the commits to a main repository on boost.org you cannot just change your mind like that: you must expect that others will have pulled the changesets and the cat is out of the bag. -- Martin Geisler aragost Trifork Professional Mercurial support http://www.aragost.com/mercurial/

On 03/22/2012 11:32 AM, Martin Geisler wrote:
Thomas Heller<thom.heller@googlemail.com> writes:
On 03/21/2012 11:02 PM, Julian Gonggrijp wrote:
Thomas Heller wrote:
Unit-of-work commits make it easier to find and review past work, reduce the burden on the developer to keep track of what they're doing until they're ready to publish it, and enable you to keep unfinished but versioned work around while working on other, more publish-ready changes. Unit-of-work commits really help you to manage and keep track of work, contrary to snapshot commits which mostly just provide a backup facility.
This is way more generally applicable than you seem to be willing to admit. Sure, this all makes perfect sense. But this is not restricted to a DVCS, this can be done in any version control system (be it centralized or not). It is a matter of good habit. Indeed -- working in SVN or Git makes no difference here: you can (and should!) make small and self-contained commits in all systems.
You seem to suggest in addition that what we've been discussing here has something to do with cans of worms. Do you actually intend to suggest that unit-of-work commits introduce problems that don't exist for snapshot commits? No, I am saying that altering history is dangerous! Which you described as one of the advantages of "the git approach". Altering local (=unpublished) history can be convenient. It's considered an advanced feature in Mercurial -- you need to enable extensions for this. Git has a *bias* towards more history rewriting since it comes with these features enabled by default -- but it's still frowned upon if you rewrite public history in git.
In the context of boost, as a loosely coupled organisation, where I might want to seamlessly switch, merge and whatever with other people's work, this looks like a serious problem. This is exactly the can of worms I was mentioning. It's not a serious problem in practice. DVCS sounds like anarchy at first, but it's not much different from a centralized setup. You have a main repository (possibly on boost.org) and things that are pushed there are by definition final/immutable/frozen: they've been published and you must assume that people depend on them.
So you just don't rewrite those changes. If you push there and find a bug, then you do the same as in SVN: you make a new commit that fixes the bug.
Collaboration with a DVCS is really a question of making incremental append-only changes to a code base. That hasn't changed much from centralized VCS. The D in DVCS does allow you to pull changes directly from Alice or Bob if you like. That can be convenient for working on a feature outside of the main repo. But when the changes go to the main repo, they are just as immutable as in SVN. Right, makes perfect sense. I don't know what the plans of Dave Abrahams are, but if I remember correctly he imagines a loosely coupled collection of repositories of distinct libraries, where any of those is a potential candidate to either supersede an existing repository, or to eventually become part of boost. I don't know if I got that correctly, I guess it would be best to wait for Beman's talk to know the details here.
Or to formulate it differently: When is my public repository, which I intended for my use only, not private anymore? In principle, it ceases to be private when you put it on a public server, tell others about it, and they begin basing their own work on your volatile changes.
If you publish a repository on GitHub and tell me about it, then I might look at the commits there and give you feedback. If I'm not basing any work on the changes, then it's no problem if you later destroy the commits and even delete the repository. Right, the *second* I hit the "fork" button on GitHub, everyone sees my new repository.
But if you push the commits to a main repository on boost.org you cannot just change your mind like that: you must expect that others will have pulled the changesets and the cat is out of the bag.

On 22/03/2012 10:43, Thomas Heller wrote:
On 03/22/2012 11:32 AM, Martin Geisler wrote: ...
If you publish a repository on GitHub and tell me about it, then I might look at the commits there and give you feedback. If I'm not basing any work on the changes, then it's no problem if you later destroy the commits and even delete the repository. Right, the *second* I hit the "fork" button on GitHub, everyone sees my new repository.
right. The usual way to use (any) DVCS is that this publicly available fork repository (e.g. GitHub) is considered shared work, meaning you only append your work there (no rewriting history). If you want to use a DVCS in a distributed manner, your commits would go first to your local repository (i.e. the filesystem on your local machine) where you can tweak them to your heart's content before pushing them to the shared repository (after which point these commits are shared and thus should not be changed any more). Since everyone else (with the right permissions) does the same, it's very easy to keep track of others' work, because you will frequently rebase your local repository from the public one, with someone else's work appended to it. This triggers automatic merging of the changes in your repository which you have not shared yet (on top of the current head of the shared repository, as copied to your repository). Most of the time this merging process is fast and totally transparent. And when it is not, getting out of trouble is not really that difficult - normally you just manually fix files when git asks you to do so, or at worst you do "git rebase --abort" and try again. B.
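The "getting out of trouble" step can be sketched like this (a hypothetical repository, assuming git is available): a rebase that hits a conflict is simply abandoned with `git rebase --abort`, which restores the local branch exactly as it was:

```shell
set -e
tmp=$(mktemp -d) && cd "$tmp"
GIT="git -c user.name=demo -c user.email=demo@example.com"

$GIT init -q repo && cd repo
main=$($GIT symbolic-ref --short HEAD)   # 'master' or 'main', depending on config
echo base > file.txt
$GIT add file.txt && $GIT commit -qm 'common base'
$GIT checkout -qb feature
echo 'feature version' > file.txt
$GIT commit -qam 'local, unpublished work'
$GIT checkout -q "$main"
echo 'mainline version' > file.txt
$GIT commit -qam "someone else's change"

# rebasing the local work onto the moved mainline conflicts on file.txt ...
$GIT checkout -q feature
if ! $GIT rebase "$main" >/dev/null 2>&1; then
  $GIT rebase --abort   # ... so back out; the branch is left untouched
fi
```

After the abort, the `feature` branch and working copy are byte-for-byte what they were before the attempt, so a failed merge costs nothing but the time spent trying.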

On 03/22/2012 12:11 PM, Bronek Kozicki wrote:
On 22/03/2012 10:43, Thomas Heller wrote:
On 03/22/2012 11:32 AM, Martin Geisler wrote: ...
If you publish a repository on GitHub and tell me about it, then I might look at the commits there and give you feedback. If I'm not basing any work on the changes, then it's no problem if you later destroy the commits and even delete the repository. Right, the *second* I hit the "fork" button on GitHub, everyone sees my new repository.
right. The usual way to use (any) DVCS is that this publicly available fork repository (e.g. GitHub) is considered shared work, meaning you only append your work there (no rewriting history).
If you want to use a DVCS in a distributed manner, your commits would go first to your local repository (i.e. the filesystem on your local machine) where you can tweak them to your heart's content before pushing them to the shared repository (after which point these commits are shared and thus should not be changed any more).
Since everyone else (with the right permissions) does the same, it's very easy to keep track of others' work, because you will frequently rebase your local repository from the public one, with someone else's work appended to it. This triggers automatic merging of the changes in your repository which you have not shared yet (on top of the current head of the shared repository, as copied to your repository). Most of the time this merging process is fast and totally transparent. And when it is not, getting out of trouble is not really that difficult - normally you just manually fix files when git asks you to do so, or at worst you do "git rebase --abort" and try again.
B. I can see what advantages a DVCS setup can bring. And I understand the implications. What I am still opposed to is a tool that makes screw-ups possible, even if they just happen locally. I think it does not speak for a tool that it is quite easily possible to get in trouble in the first place. Maybe that's just me. For now, I will resign from this thread and see what actually will get proposed and judge again then.

On 3/22/2012 7:41 AM, Thomas Heller wrote:
I can see what advantages a DVCS setup can bring. And I understand the implications. What I am still opposed to is a tool that makes screw-ups possible, even if they just happen locally. I think it does not speak for a tool that it is quite easily possible to get in trouble in the first place. Maybe that's just me. For now, I will resign from this thread and see what actually will get proposed and judge again then.
Errr ... is this an argument for or against svn, or just an aside applicable to all the tools under discussion? I do think it is undeniably true that any "trouble" one gets into by interacting incorrectly with the primary, shared repository is more broadly damaging than localized trouble, and, pretty much by definition, that is the *only* kind of interaction one has with a CVCS. That encourages -- quite properly -- less frequent modification of the main repository, and thus, for a CVCS, less frequent use of the repository at all. Furthermore, with the alternative of more localized changes removed, the cost/benefit analysis of when to modify the central repository would logically -- all other things being equal -- shift at least a bit toward a higher probability of global damage. Topher

Thomas Heller <thom.heller@googlemail.com> writes:
On 03/22/2012 11:32 AM, Martin Geisler wrote:
If you publish a repository on GitHub and tell me about then I might look at the commits there and give you feedback. If I'm not basing any work on the changes, then it's no problem if you later destroy the commits and even delete the repository.
Right, the *second* I hit the "fork" button on GitHub, everyone sees my new repository.
That's true, but they see the new repository which is identical to the one you forked it from. After forking you will have to 1. Clone the fork back to your own machine 2. Do some work -- this work is local and unpublished, so you can play around all you want. 3. Push changes back to your fork -- this is the point where you commit yourself to your changes and allow others to be affected by them. There are more nuances: you might mark the fork as private so that only you can pull from it. You can also put a big fat message in the description of the repository saying "I'm playing around -- don't base work on this!". Using a branch named 'my-volatile-changes' would communicate the same message. It all boils down to human communication at this point. DVCS allows you to fork left and right, but it's not so chaotic in real life. -- Martin Geisler aragost Trifork Professional Mercurial support http://www.aragost.com/mercurial/

On 3/22/2012 3:57 AM, Thomas Heller wrote:
No, I am saying that altering history is dangerous! Which you described as one of the advantages of "the git approach".
Altering *code* is dangerous. The issue is to aim at wise judgment in regards to cost vs benefit in choosing to make changes either to code or to history (or documentation, or specification, or policies, or ...). Extraneous detail in change history detracts from its usefulness. If I set a backup point in my local history so I can easily back out of a small experiment if it fails, it may be very unlikely that this "snapshot" (and this would be a snapshot -- not a unit of work) would subsequently be of interest if the experiment succeeds (if it fails, it is somewhat more likely that the history may be worth keeping since it provides a record that the experiment was attempted, that it failed and why it did so; however, a simple log entry -- an underutilized tool since the advent of sophisticated VCSes -- might do as well, and the experiment might be of such limited scope that this might not be of any interest either). Extra detail obscures important detail, making it more difficult to find what is relevant in the history. I think that one of the most important areas where improvement could be made in VCSes (at least in any that I know) is in better understanding the contextual and hierarchical nature of history. Recent history should be much finer grained than older history. What is of importance to an individual developer during the coding of a "micro-task" is frequently of no interest to a group, and most of the history within a "sprint" (to use a term from agile methodology -- a unit of work requiring between one and three weeks to perform, whether that is performed by the group or by an individual within the group) is irrelevant afterwards. Branching, tagging and local sandboxes are the traditional tools for this. The DVCS approach adds local repositories. This is certainly a step in the right direction as I see it. Can a non-distributed VCS be used to effectively create a DVCS?
Of course, that has been done pretty much since the inception of centralized VCS systems, just as individual code management systems were used as centralized VCSes from their inception. A tool designed with that use in mind, however, has the potential to perform the necessary functions more smoothly, conveniently, and accurately than something cobbled together by conventions and ad hoc scripting. The question is whether particular tools have succeeded in doing this without sacrificing other important functionality. My limited experience with git (I have none with Mercurial) leads me to believe that it has. The statement has been made that (paraphrasing) "any change worth committing is worth preserving". This sees version history as a blunt instrument. Note the immediate and arguably harmful corollary (perhaps not even a corollary but simply a rephrasing): "only changes worth preserving (indefinitely) are worth committing". Personally, I try to locally commit (with whatever tool I have for such things, including simple directory copies) every few control-s'es (where few is a term relative to context). This provides me an opportunity for backtracking and to review recent changes. The chances that any but the last few of these routine backups (again there is a hierarchical or even fractal structure here, but let's keep it simple) are going to be of use to me are small, and the chances that they will be comprehensible as meaningful steps to anyone else without detailed study are virtually nil. That history rapidly becomes not only useless but harmful, since it obscures useful information. The statement was made in response to a challenge as to whether the claimant (I think it was Heller, but I'm not sure) recommended preserving a record of every save from the editor or every keystroke made during editing.
It is a valid question which was answered based on the bald assumption that interaction with a VCS represents a fundamental difference in kind from other activities -- an assumption that I believe to be completely false. The VCS is a tool whose implementation *creates* a totally artificial boundary for workflow. There is only a difference in mechanism, domain/specialization and degree, not of nature, between VCS commits, branches, tags, editor saves, commenting out code, todo comments, temporary flags, code comments about motivation, and even temporary monitoring and logging insertions and debugger breakpoints. A VCS version serves a purpose (one or more of a finite list of possible purposes). What is of use to one developer may not be of any use to others. What serves a critical function now may become useless at some time in the future. A belief that all and any detail conventionally and conveniently captured by an SVN-like system is precisely and immutably the correct level of detail to capture seems to me unlikely to be correct, and a poor assumption on which to base decisions. (The same is absolutely true about git-like tools -- the issue is whether a tool tends to steer choices closer to the ideal and/or make informed judgements easier to carry out. The arguments of the anti-DVCSers on this list are leading me to believe that this is so, where previously I thought it only an interesting claim whose truth I was neutral on.) Topher

Julian Gonggrijp <j.gonggrijp@gmail.com> writes:
What is the meaning of a commit?
One possible interpretation is that a commit is a snapshot of your project. A snapshot is something that you store for future reference. Because in a sense it's a form of documentation, one will take care to submit well-crafted commits that include enough useful changes to warrant a new snapshot. In principle, every commit is assumed to introduce some form of progress compared to the previous one. Making changes to such a history of snapshots is almost necessarily a form of fraud.
This is the kind of mental model of a commit that is stimulated by svn. You can see it from the terminology: making a commit causes the repository to move to the next revision number.
Another possible interpretation is that a commit represents a unit of work. This tends to favour many small commits over few big commits. Since anything you do before you're sure that it's the right thing is also work, shabby commits are part of the deal. The consequence is that it must be very cheap to isolate any messy state in temporary side tracks. Now the VCS is not only a collection of snapshots, but also a tool to manage your recent pieces of work before you finally commit* to some of them.
This kind of mental model is stimulated by git. It explains why git users make a fuss about amending, rebasing and efficient branching and merging.
I'm afraid I don't agree with this. The version control systems that came after CVS switched to storing repository-wide snapshots. CVS was a collection of RCS files and so completely file-centric. SVN, Mercurial, Git, ... are all repository-centric. Conceptually they work by storing a series (linear or not) of working copy snapshots. Darcs is a possible exception to this: I think it might fit more closely to your unit-of-work model since it models the repository state as a result of a number of patches (units-of-work).
There is no point in arguing that one mental model is superior to the other until you fully grasp both of them. I urge anyone who feels tempted to make agitated remarks to let this sink in for at least a few hours.
That said, I'm confident enough to think that I can give two solid arguments why the units-of-work model is ultimately more productive.
I also prefer people to do five units-of-work (commits) instead of one huge one. Smaller commits should be more self-contained and will be easier to review. But I much prefer that the project is in a working state after every single unit-of-work. This is because I think of each commit as a snapshot that might be checked out alone. This commonly happens when using the bisect command: the tool (Git or Mercurial) will assist you in a binary search for a faulty commit. It updates to the middle of the unchecked range and you have to build and test the commit. When doing that, it's annoying if you run into commits that fail because of other problems than the one you're investigating. Such false positives make automated bisecting impossible. Each project must make up its own policy here. -- Martin Geisler aragost Trifork Professional Mercurial support http://www.aragost.com/mercurial/

Martin Geisler wrote:
Julian Gonggrijp <j.gonggrijp@gmail.com> writes:
There is no point in arguing that one mental model is superior to the other until you fully grasp both of them. I urge anyone who feels tempted to make agitated remarks to let this sink in for at least a few hours.
That said, I'm confident enough to think that I can give two solid arguments why the units-of-work model is ultimately more productive.
I also prefer people to do five units-of-work (commits) instead of one huge one. Smaller commits should be more self-contained and will be easier to review.
But I much prefer that the project is in a working state after every single unit-of-work. This is because I think of each commit as a snapshot that might be checked out alone.
This commonly happens when using the bisect command: the tool (Git or Mercurial) will assist you in a binary search for a faulty commit. It updates to the middle of the unchecked range and you have to build and test the commit. When doing that, it's annoying if you run into commits that fail because of other problems than the one you're investigating. Such false positives make automated bisecting impossible.
Each project must make up its own policy here.
I think what you describe here is covered by integration branches on one hand, and the possibility to automate bisection with a custom program that checks only for a single aspect of the code on the other hand. Most projects will adopt those possibilities, especially the integration branch. Automated, specialised bisection is something that many people might not know about, but it's very useful. -Julian
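A sketch of the automated, specialised bisection mentioned here, assuming a POSIX shell with git; the toy history and the `grep` check stand in for a real test script that examines only the single aspect of the code under investigation:

```shell
set -e
tmp=$(mktemp -d) && cd "$tmp"
GIT="git -c user.name=demo -c user.email=demo@example.com"

$GIT init -q repo && cd repo
for i in 1 2 3 4 5; do                   # five commits; the 4th introduces the bug
  echo "version $i" > file.txt
  if [ "$i" -ge 4 ]; then echo BUG >> file.txt; fi
  $GIT add file.txt && $GIT commit -qm "commit $i"
done

$GIT bisect start HEAD HEAD~4 >/dev/null 2>&1          # HEAD is bad, first commit is good
$GIT bisect run sh -c '! grep -q BUG file.txt' >/dev/null 2>&1
bad=$($GIT rev-parse refs/bisect/bad)                  # the first bad commit
$GIT bisect reset >/dev/null 2>&1
$GIT show -s --format=%s "$bad"                        # -> commit 4
```

`git bisect run` drives the whole binary search itself: the script exits 0 for "good" and non-zero for "bad", so a check restricted to one aspect keeps unrelated breakage from producing false positives.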

On 3/22/2012 6:04 AM, Martin Geisler wrote:
This commonly happens when using the bisect command: the tool (Git or Mercurial) will assist you in a binary search for a faulty commit. It updates to the middle of the unchecked range and you have to build and test the commit. When doing that, it's annoying if you run into commits that fail because of other problems than the one you're investigating. Such false positives make automated bisecting impossible.
A perfect example of my assertion that preserving a historical state (to try to avoid tool-centric terminology) can be done for multiple reasons, and that inappropriate detail (inappropriate universally or for a particular task) interferes with the ability to make effective use of the historical record. VCS technology that cleanly distinguishes history appropriate to a particular task would be a great step forward. How to accomplish this -- without adding such a burden to the user (e.g., a long list of tags and annotations, to use the terms generically) that there would be strong motivation to partially or fully subvert the mechanism or its intent, and reduced productivity if that temptation is resisted -- I haven't the faintest idea. Policy that forbids preserving states unless they support a particular subset of the possible uses is certainly a practical, useful and sometimes strongly advisable solution, but it clearly has the downside of excluding or crippling other uses of the history that may be of value. Topher

on Thu Mar 22 2012, Martin Geisler <mg-AT-aragost.com> wrote:
This kind of mental model is stimulated by git. It explains why git users make a fuss about amending, rebasing and efficient branching and merging.
I'm afraid I don't agree with this. The version control systems that came after CVS switched to storing repository-wide snapshots. CVS was a collection of RCS files and so completely file-centric. SVN, Mercurial, Git, ... are all repository-centric. Conceptually they work by storing a series (linear or not) of working copy snapshots.
Darcs is a possible exception to this: I think it might fit more closely to your unit-of-work model since it models the repository state as a result of a number of patches (units-of-work).
People get hung up on this all the time, but whether the storage format is fundamentally snapshots or diffs is not really important. They're isomorphic to one another. Git, like any other modern tool, provides commands that support both views of the history. Rebase, for example, treats your commits as units-of-work and "replays" that work on a new base commit. Many other elements of the interface treat commits like snapshots. -- Dave Abrahams BoostPro Computing http://www.boostpro.com

Dave Abrahams <dave@boostpro.com> writes:
on Thu Mar 22 2012, Martin Geisler <mg-AT-aragost.com> wrote:
This kind of mental model is stimulated by git. It explains why git users make a fuss about amending, rebasing and efficient branching and merging.
I'm afraid I don't agree with this. The version control systems that came after CVS switched to storing repository-wide snapshots. CVS was a collection of RCS files and so completely file-centric. SVN, Mercurial, Git, ... are all repository-centric. Conceptually they work by storing a series (linear or not) of working copy snapshots.
Darcs is a possible exception to this: I think it might fit more closely to your unit-of-work model since it models the repository state as a result of a number of patches (units-of-work).
People get hung up on this all the time, but whether the storage format is fundamentally snapshots or diffs is not really important. They're isomorphic to one another.
Yes, that's mostly true. Both Git and Mercurial conceptually store snapshots of your working copy. These snapshots are of course heavily delta compressed -- otherwise we couldn't do DVCS in the first place. But I say that they conceptually store snapshots since that's what commands like 'hg diff' operate on. When you 'hg diff -r 1:2', then Mercurial has to go out and re-compute the patch that brings you from revision 1 to 2. People often think that changeset 2 "contains" this diff, but it's more complicated than that: the deltas we store on disk don't correspond directly to this diff. This is especially true for a merge changeset, where the deltas on disk are made on a per-file basis against the parent that produces the smallest delta.
Git, like any other modern tool, provides commands that support both views of the history. Rebase, for example, treats your commits as units-of-work and "replays" that work on a new base commit. Many other elements of the interface treat commits like snapshots.
It actually doesn't replay anything: it does a series of three-way merges, and three-way merges are an inherently snapshot-based thing. -- Martin Geisler Mercurial links: http://mercurial.ch/

on Thu Mar 22 2012, Martin Geisler <mg-AT-aragost.com> wrote:
Dave Abrahams <dave@boostpro.com> writes:
on Thu Mar 22 2012, Martin Geisler <mg-AT-aragost.com> wrote:
This kind of mental model is stimulated by git. It explains why git users make a fuss about amending, rebasing and efficient branching and merging.
I'm afraid I don't agree with this. The version control systems that came after CVS switched to storing repository-wide snapshots. CVS was a collection of RCS files and so completely file-centric. SVN, Mercurial, Git, ... are all repository-centric. Conceptually they work by storing a series (linear or not) of working copy snapshots.
Darcs is a possible exception to this: I think it might fit more closely to your unit-of-work model since it models the repository state as a result of a number of patches (units-of-work).
People get hung up on this all the time, but whether the storage format is fundamentally snapshots or diffs is not really important. They're isomorphic to one another.
Yes, that's mostly true. Both Git and Mercurial conceptually store snapshots of your working copy. These snapshots are of course heavily delta compressed -- otherwise we couldn't do DVCS in the first place.
But I say that they conceptually store snapshots since that's what commands like 'hg diff' operate on. When you 'hg diff -r 1:2', then Mercurial has to go out and re-compute the patch that brings you from revision 1 to 2.
People often think that changeset 2 "contains" this diff,
In the sense that the commit contains its ancestry and a snapshot, yes, it does contain the diff.
but it's more complicated than that: the deltas we store on disk don't correspond directly to this diff. This is especially true for a merge changeset, where the deltas on disk are made on a per-file basis against the parent that produces the smallest delta.
But "deltas stored on-disk" are completely irrelevant to the user unless she's fiddling about with the porcelain (low-level guts of the DVCS). Even for most expert users, it is *always, always, always* an implementation detail. My point is that we shouldn't be talking about this stuff here; it will just confuse the less-experienced people and adds /nothing/. One of the big problems with the way Git is often explained is that the explainers get into this stuff. Can't speak for Mercurial. There's absolutely no difference conceptually between storing the latest state plus a chain of diffs and storing a bunch of snapshots, except for performance issues like how long it takes you to reach a given snapshot or how much space things take up on disk, and every good VCS does all kind of implementation-detail-y tricks to smooth out the deficiencies of its storage format.
Git, like any other modern tool, provides commands that support both views of the history. Rebase, for example, treats your commits as units-of-work and "replays" that work on a new base commit. Many other elements of the interface treat commits like snapshots.
It actually doesn't replay anything: it does a series of three-way merges, and three-way merges are an inherently snapshot-based thing.
I wouldn't say that. If A is the ancestor of B and C, a 3-way merge can be done by taking the diffs B-A and C-A, finding the overlapping regions and marking those as conflicts, and applying the remaining diffs one by one. The line numbering of those diffs needs to be adjusted as you go along. At least, that's how I coded it 15 years ago in MPW (I was using these tools: http://www.mactech.com/articles/mactech/Vol.05/05.09/SADEDebugging/index.htm...). And it worked perfectly. -- Dave Abrahams BoostPro Computing http://www.boostpro.com
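The scheme Dave describes can be sketched in a few lines of Python. This is a deliberately crude toy (it assumes line-for-line edits only, so it sidesteps the insert/delete renumbering he mentions), but it shows the diff-based view of a 3-way merge: keep each side's non-overlapping changes against the ancestor, and mark overlapping, differing changes as conflicts.

```python
def merge3(base, ours, theirs):
    """Toy 3-way merge: all three versions must have the same number of
    lines (line-for-line edits only; inserts and deletes not handled)."""
    assert len(base) == len(ours) == len(theirs)
    merged = []
    for b, o, t in zip(base, ours, theirs):
        if o == t:       # both sides agree (unchanged, or the same edit)
            merged.append(o)
        elif o == b:     # only their side changed this line
            merged.append(t)
        elif t == b:     # only our side changed this line
            merged.append(o)
        else:            # overlapping, differing edits: conflict
            merged.append("<<<<<<< ours\n" + o + "=======\n" + t + ">>>>>>> theirs\n")
    return merged

base   = ["a\n", "b\n", "c\n", "d\n"]
ours   = ["a\n", "B\n", "c\n", "d\n"]   # we edited line 2
theirs = ["a\n", "b\n", "c\n", "D\n"]   # they edited line 4
print("".join(merge3(base, ours, theirs)), end="")
```

With non-overlapping edits as above, both changes merge cleanly; edit line 2 differently on both sides and the function instead emits conflict markers for that line, which is exactly the manual-resolution case discussed later in the thread.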

Dave Abrahams <dave@boostpro.com> writes:
on Thu Mar 22 2012, Martin Geisler <mg-AT-aragost.com> wrote:
Dave Abrahams <dave@boostpro.com> writes:
on Thu Mar 22 2012, Martin Geisler <mg-AT-aragost.com> wrote:
This kind of mental model is stimulated by git. It explains why git users make a fuss about amending, rebasing and efficient branching and merging.
I'm afraid I don't agree with this. The version control systems that came after CVS switched to storing repository-wide snapshots. CVS was a collection of RCS files and so completely file-centric. SVN, Mercurial, Git, ... are all repository-centric. Conceptually they work by storing a series (linear or not) of working copy snapshots.
People get hung up on this all the time, but whether the storage format is fundamentally snapshots or diffs is not really important. They're isomorphic to one another.
Yes, that's mostly true. Both Git and Mercurial conceptually store snapshots of your working copy. These snapshots are of course heavily delta-compressed -- otherwise we couldn't do DVCS in the first place.
But I say that they conceptually store snapshots since that's what commands like 'hg diff' operate on. When you run 'hg diff -r 1:2', Mercurial has to go out and re-compute the patch that brings you from revision 1 to 2.
People often think that changeset 2 "contains" this diff,
In the sense that the commit contains its ancestry and a snapshot, yes, it does contain the diff.
but it's more complicated than that: the deltas we store on disk don't correspond directly to this diff. This is especially true for a merge changeset, where the deltas on disk are made on a per-file basis against the parent that produces the smallest delta.
But "deltas stored on-disk" are completely irrelevant to the user unless she's fiddling about with the porcelain (low-level guts of the DVCS). Even for most expert users, it is *always, always, always* an implementation detail. My point is that we shouldn't be talking about this stuff here; it will just confuse the less-experienced people and adds /nothing/.
You're right -- I got carried away in details :-) I'm only here to try and help people who are confused about what a DVCS is and what it brings to the table. The finer details are best discussed on the tools' own mailing lists. -- Martin Geisler Mercurial links: http://mercurial.ch/

On 21/03/2012 03:03, Rene Rivera wrote:
On 3/20/2012 8:52 PM, Julien Nitard wrote:
Relatedly, I like to test branches and ideas without having anyone else observe my moves or care about what I do; so I can do that locally, instead of creating obscure or sacred branches in a common SVN repository.
This is a very good point. Though it is still a specific need. The VCS is here to help the team. If individuals want to play on their own, it's only "nice to have" IMHO and shouldn't make the other part of the process more complex.
I would argue that "hiding" changes is detrimental to software development. In particular, it prevents sufficient software auditing and accountability. It would also curtail active review of the work, so one could end up wasting time pursuing development avenues that others have already discounted.
this is a good point, but in practice I find it is just the opposite :) All members of my team have defined remote repositories of the other members, and "fetch" these repositories daily. Sometimes they peek into the repositories without invitation, as one would read commits in a shared repository. More often, they invite colleagues to review their recent work and, what's more important, any review feedback can be easily incorporated into the private repository without creating confusion in the shared one.

Quite often the feedback (invited or not) is something along the lines of "this file should not belong to this commit but that one", "the commit comment is not clear enough", or "you obviously have an artifact from an older version of the code here". Making such changes in a shared repository would create confusion, but thanks to the editable history of a private repository this really helps to keep the shared repository clean and readable.

The result of this style is that developers commit often (and also run regression and unit tests often, as is habitual before every commit) and they also often look back at their own and others' commits to ensure the best quality of the final, shared result (which BTW is synchronized one-way to P4). Personally I find this style very empowering - I am not only free to experiment but can also rely on reviews from my colleagues.

Or in other words, this style is more similar to writing a book than building a house. The history that you present to others is the one you intended, not the random result of your past mistakes and fixes. Others within your team have access (and are indeed invited) to see your mistakes, to help you improve the history before it is publicly presented.

B.

This sounds like a "Turing Completeness" argument held by a Pascal programmer when hearing about that "cool" language called C a few decades ago.
Ask people who have extensively used both, and they will tell you. C is better. Period. Apples and oranges -- Pascal was invented solely to teach good
On 3/20/2012 9:31 PM, David Bergman wrote: programming practices, C was invented solely as a somewhat higher level language than assembly for doing systems programming. Both were advocated for use well beyond their original intent -- and used successfully. The Turing Completeness argument was an answer to C fanatics who claimed that Pascal was a "toy language" that was incapable of doing things that could be done in C. There were many of us who used both extensively who felt that both had their place. C was better for writing compact efficient programs (though no one doubted that the other contender for a high level systems programming language, BLISS produced much higher performance than any existing C compiler*), while it was easier to write clear, maintainable programs in Pascal. Ultimately C survived largely because of UNIX, while Pascal was superceded by other languages it inspired for the same niche and others (such as ... C). Topher Cooper * I have to admit to some bias on that issue, since I was one of compiler writers for BLISS at DEC, and had been a sometime student of Bill Wulf at CMU before that. However, the level of optization produced by the BLISS compilers (after the original BLISS-10) is pretty indisputable. No credit to me -- we just extended the use of the optimization algorithms developed by Wulf and the grad students he was thesis advisor to, and used them in ports to other hardware.

On 3/21/2012 8:41 AM, Edward Diener wrote:
On 3/20/2012 7:03 AM, Julian Gonggrijp wrote: ... snip
Well, allow me to present some fair reasoning to you.
With regard to git versus svn: I think enough fair reasons have been given why git (or a DVCS in general) is better than svn. I'm not going to repeat those arguments here.
I have never heard a single technical argument, in all the endless mentions of Git among the people riding that bandwagon, why Git is better than SVN, or even why any DVCS is better than a centralized SCCS. I consider this whole move to Git and/or DVCS among "hip" programmers little more than a move to conform with what others are doing and feel "cool".
I am perfectly willing to read well-chosen technical arguments, but not from people already sold on one side or the other. But I really despair of anyone being able to present such arguments in the atmosphere created by Git fanatics and DVCS fanatics. The only thing I have gotten from all this is "I've tried it, I like it, and therefore it's superior".
Feel free, anyone, to point me to a purely technical discussion, article, whatnot, explaining the practical reasons why using a DVCS, or Git, is more productive and more pleasurable than using a centralized SCCS like Subversion.
I'm not a Git or DVCS fanatic. I'll just use whatever tool is required to get the job done. I'm OK with SVN. It works. If you read my previous comments on this topic (from last year when this was heavily discussed), you'll see that I question the Git move. To me, and I mentioned this before: "A Good Craftsman Never Blames His Tools". I know a very good luthier who crafts world-class guitars using only a pocket knife. And I take that to heart with crafting code as well. I find it funny when people blame SVN, the C++ compiler, etc., for inadequacies in order to flaunt these new shiny tools (Git, Java in the 90s, or name-your-new-compiler-here). That is to say, I am not among the "hip". I tend to use the simplest of tools: the most basic text editor and a decent compiler, at the very least.

Having presented my neutrality, let me present a case *FOR* DVCS...

From the beginning, Spirit had its own community, mildly detached from Boost. Spirit contributors come and go. We once developed code using SourceForge (using CVS, then SVN). I gave contributors write access as needed. Once stable, I moved or merged the code to Boost. It so happens that we had a more frequent release cycle than Boost (at the time). Each move was so frustratingly difficult and time consuming (not to mention that I lost, and never bothered about, the commit histories when moving code to Boost from SF; it just was not worth the hassle. After all, SF was the master with all the histories and the one in Boost was just a copy).

That was fine, but there was something in Boost that we needed: regular testing by multiple people on different platforms and compilers. At one point, because of that need, we stopped using SF and finally moved to Boost for development. One drawback that I sorely miss from being independent of Boost is the right to give write access. Now, whenever a new contributor comes along, I have to ask permission from the Boost owners for write access to the Boost repo. And write access gives everyone access to the whole Boost repo, instead of being limited to Spirit only. Also, I often wonder about past Spirit devs who are inactive now. They still have write access, but I just let them be. Not being in control is a major disadvantage for me. It's my library and I want to have more control.

What I want is a system where I can decouple Spirit from the Boost central repository again. I want to regain the right to give Spirit developers write access to this decoupled repository. I want Spirit devs to develop code, create branches, etc. in this repository. I want to be able to commit upstream into the Boost repo on a regular basis and thus take advantage of Boost testing. I want the commit histories of my upstream merges to be intact on all moves and merges.

It is obvious to me now that what Spirit needs is a DVCS. I don't care which (Git or Hg). I tried both on my own and I find both satisfactory for my minimal needs. I can certainly craft something using a pocket knife and a chisel, but I certainly wouldn't mind a Dremel power tool :-)

(PS. I tried git-svn and hgsubversion without luck. I simply can't get them to work. I'm guessing that these facilities are not well supported. In my experience, they simply bork out when I try to clone the Boost repository.)

Regards, -- Joel de Guzman http://www.boostpro.com http://boost-spirit.com

On 3/20/2012 10:56 PM, Joel de Guzman wrote:
On 3/21/2012 8:41 AM, Edward Diener wrote:
On 3/20/2012 7:03 AM, Julian Gonggrijp wrote: ... snip
Well, allow me to present some fair reasoning to you.
With regard to git versus svn: I think enough fair reasons have been given why git (or a DVCS in general) is better than svn. I'm not going to repeat those arguments here.
I have never heard a single technical argument, in all the endless mentions of Git among the people riding that bandwagon, why Git is better than SVN, or even why any DVCS is better than a centralized SCCS. I consider this whole move to Git and/or DVCS among "hip" programmers little more than a move to conform with what others are doing and feel "cool".
I am perfectly willing to read well-chosen technical arguments, but not from people already sold on one side or the other. But I really despair of anyone being able to present such arguments in the atmosphere created by Git fanatics and DVCS fanatics. The only thing I have gotten from all this is "I've tried it, I like it, and therefore it's superior".
Feel free, anyone, to point me to a purely technical discussion, article, whatnot, explaining the practical reasons why using a DVCS, or Git, is more productive and more pleasurable than using a centralized SCCS like Subversion.
I'm not a Git or DVCS fanatic. I'll just use whatever tool is required to get the job done. I'm OK with SVN. It works. If you read my previous comments on this topic (from last year when this was heavily discussed), you'll see that I question the Git move. To me, and I mentioned this before: "A Good Craftsman Never Blames His Tools". I know a very good luthier who crafts world-class guitars using only a pocket knife. And I take that to heart with crafting code as well. I find it funny when people blame SVN, the C++ compiler, etc., for inadequacies in order to flaunt these new shiny tools (Git, Java in the 90s, or name-your-new-compiler-here).
That is to say, I am not among the "hip". I tend to use the simplest of tools: the most basic text editor and a decent compiler, at the very least.
Having presented my neutrality, let me present a case *FOR* DVCS...
From the beginning, Spirit had its own community, mildly detached from Boost. Spirit contributors come and go. We once developed code using SourceForge (using CVS, then SVN). I gave contributors write access as needed. Once stable, I moved or merged the code to Boost. It so happens that we had a more frequent release cycle than Boost (at the time). Each move was so frustratingly difficult and time consuming (not to mention that I lost, and never bothered about, the commit histories when moving code to Boost from SF; it just was not worth the hassle. After all, SF was the master with all the histories and the one in Boost was just a copy).
That was fine, but there was something in Boost that we needed: regular testing by multiple people on different platforms and compilers. At one point, because of that need, we stopped using SF and finally moved to Boost for development. One drawback that I sorely miss from being independent of Boost is the right to give write access. Now, whenever a new contributor comes along, I have to ask permission from the Boost owners for write access to the Boost repo. And write access gives everyone access to the whole Boost repo, instead of being limited to Spirit only. Also, I often wonder about past Spirit devs who are inactive now. They still have write access, but I just let them be. Not being in control is a major disadvantage for me. It's my library and I want to have more control.
What I want is a system where I can decouple Spirit from the Boost central repository again. I want to regain the right to give Spirit developers write access to this decoupled repository. I want Spirit devs to develop code, create branches, etc. in this repository. I want to be able to commit upstream into the Boost repo on a regular basis and thus take advantage of Boost testing. I want the commit histories of my upstream merges to be intact on all moves and merges.
It is obvious to me now that what Spirit needs is a DVCS. I don't care which (Git or Hg). I tried both on my own and I find both satisfactory for my minimal needs.
I can certainly craft something using a pocket knife and a chisel, but I certainly wouldn't mind a Dremel power tool :-)
(PS. I tried git-svn and hgsubversion without luck. I simply can't get them to work. I'm guessing that these facilities are not well supported. In my experience, they simply bork out when I try to clone the Boost repository.)
Thanks! I respect you as a programmer and the experience you have related. You have given practical advantages of a DVCS in your explanation. But now let me argue the other side.

I am pretty sure you can take an SVN repository and give users access to whatever part of it you want while restricting access to the rest. When you do that, what is the difference from users having a DVCS to play with and their own branch of an SVN repository to manipulate? Conceptually in my mind it is the same thing. Yes, I recognize that psychologically the feeling that one has one's own local repository to play with, and then merge with other repositories, is enticing to users. But how is this different from:

1) Creating a local SVN repository and importing some branches from another SVN repository.

and/or

2) Having one's own branch of an SVN repository as one's own.

What I object to about the DVCS people is that they seem to assert that because DVCS has a model they like, where there is no concept of a central repository, this is automatically superior in some non-practical and perhaps personal way. I do not doubt that DVCS systems may have some very good tools for merging various local repositories into some other one(s), but what does this freedom really amount to? The end user feels better because it feels like one can work separately more easily and then join one's work with others, but in reality a central repository system has the same "tools". Furthermore, merging work with others is NEVER as easy as people would like to think it is. I am so tired of hearing about how all this merging of code just automatically works, and works flawlessly. Who are we kidding?

I can understand your desire to separate Spirit from Boost and then join back into Boost as you wish, and perhaps indeed a DVCS has better tools to do this than Subversion, but can you really say this is a matter of DVCSs being inherently better than a centralized SCCS in some way that enables this? How is this process different from merging whole branches, or parts of branches, back into Subversion? However it is done, merging is very hard and careful work, and it is impossible for me to believe that a DVCS has something inherent about it that automatically makes it better. I guess I am saying that on a practical basis a DVCS may be more flexible than a centralized SCCS, but I see no inherent reason for this.

Eddie

On 3/21/2012 11:45 AM, Edward Diener wrote:
Thanks ! I respect you as a programmer and the experience you have related. You have given practical advantages to a DVCS in your explanation. But now let me argue the other side.
Sorry, I am tired of arguing. I can enumerate certain problems with the scheme you outlined, but I'd rather not. I'll leave it as I said it. Yes, I can (perhaps) do all that (somehow) using SVN alone. But a DVCS makes it so much more pleasurable, at least to me. What we should probably do is simply agree to disagree and move on. Regards, -- Joel de Guzman http://www.boostpro.com http://boost-spirit.com

On 21/03/2012 03:45, Edward Diener wrote:
On 3/20/2012 10:56 PM, Joel de Guzman wrote: ... [BK snipped here]
From the beginning, Spirit had its own community, mildly detached from Boost. Spirit contributors come and go. We once developed code using SourceForge (using CVS, then SVN). I gave contributors write access as needed. Once stable, I moved or merged the code to Boost. It so happens that we had a more frequent release cycle than Boost (at the time). Each move was so frustratingly difficult and time consuming (not to mention that I lost, and never bothered about, the commit histories when moving code to Boost from SF; it just was not worth the hassle. After all, SF was the master with all the histories and the one in Boost was just a copy). ... [snip again]
Yes, I recognize that psychologically the feeling that one has one's own local repository to play with, and then merge with other repositories, is enticing to users. But how is this different from:
1) Creating a local SVN repository and importing some branches from another SVN repository.
and/or
2) Having one's own branch of an SVN repository as one's own.
What I object to about the DVCS people is that they seem to assert that because DVCS has a model they like, where there is no concept of a central repository, that this is automatically superior in some non-practical and perhaps personal way. I do not doubt that DVCS systems
Actually, there IS a concept of a central repository in DVCS - it is the one everyone else does "pull --rebase" from, and the one everyone else does "push" to in order to share their work. It's not all anarchy as some seem to believe. The difference is that both "git pull" and "git push" are very efficient (as efficient as e.g. "p4 sync"), and by default local work does not require online access to the central repository.

Also, back to the point - DVCSs are written to provide the best possible support for merging. This makes 1) or 2) as you propose above very efficient. In SVN or other centralized systems it is not so; just look at the bit I left in from Joel's message above.

B.

I'm trying to use the xor_combine adaptor with boost 1.48, but get a compiler error. Any ideas? A second question: the xor_combine adaptor did not make it into C++11, right? Here is the error I get, and the code causing it. /usr/local/include/boost/random/detail/seed_impl.hpp: In function ‘void boost::random::detail::fill_array_int_impl(Iter&, Iter, UIntType (&)[n]) [with int w = 32, long unsigned int n = 624ul, Iter = boost::random::mersenne_twister_engine<unsigned int, 32ul, 624ul, 397ul, 31ul, 2567483615u, 11ul, 4294967295u, 7ul, 2636928640u, 15ul, 4022730752u, 18ul, 1812433253u>, UIntType = unsigned int]’: /usr/local/include/boost/random/detail/seed_impl.hpp:324: instantiated from ‘void boost::random::detail::fill_array_int_impl(Iter&, Iter, IntType (&)[n], mpl_::false_) [with int w = 32, long unsigned int n = 624ul, Iter = boost::random::mersenne_twister_engine<unsigned int, 32ul, 624ul, 397ul, 31ul, 2567483615u, 11ul, 4294967295u, 7ul, 2636928640u, 15ul, 4022730752u, 18ul, 1812433253u>, IntType = unsigned int]’ /usr/local/include/boost/random/detail/seed_impl.hpp:330: instantiated from ‘void boost::random::detail::fill_array_int(Iter&, Iter, IntType (&)[n]) [with int w = 32, long unsigned int n = 624ul, Iter = boost::random::mersenne_twister_engine<unsigned int, 32ul, 624ul, 397ul, 31ul, 2567483615u, 11ul, 4294967295u, 7ul, 2636928640u, 15ul, 4022730752u, 18ul, 1812433253u>, IntType = unsigned int]’ /usr/local/include/boost/random/mersenne_twister.hpp:173: instantiated from ‘void boost::random::mersenne_twister_engine<UIntType, w, n, m, r, a, u, d, s, b, t, c, l, f>::seed(It&, It) [with It = boost::random::mersenne_twister_engine<unsigned int, 32ul, 624ul, 397ul, 31ul, 2567483615u, 11ul, 4294967295u, 7ul, 2636928640u, 15ul, 4022730752u, 18ul, 1812433253u>, UIntType = unsigned int, long unsigned int w = 32ul, long unsigned int n = 624ul, long unsigned int m = 397ul, long unsigned int r = 31ul, UIntType a = 2567483615u, long unsigned int u = 11ul, UIntType d = 
4294967295u, long unsigned int s = 7ul, UIntType b = 2636928640u, long unsigned int t = 15ul, UIntType c = 4022730752u, long unsigned int l = 18ul, UIntType f = 1812433253u]’ /usr/local/include/boost/random/mersenne_twister.hpp:112: instantiated from ‘boost::random::mersenne_twister_engine<UIntType, w, n, m, r, a, u, d, s, b, t, c, l, f>::mersenne_twister_engine(It&, It) [with It = boost::random::mersenne_twister_engine<unsigned int, 32ul, 624ul, 397ul, 31ul, 2567483615u, 11ul, 4294967295u, 7ul, 2636928640u, 15ul, 4022730752u, 18ul, 1812433253u>, UIntType = unsigned int, long unsigned int w = 32ul, long unsigned int n = 624ul, long unsigned int m = 397ul, long unsigned int r = 31ul, UIntType a = 2567483615u, long unsigned int u = 11ul, UIntType d = 4294967295u, long unsigned int s = 7ul, UIntType b = 2636928640u, long unsigned int t = 15ul, UIntType c = 4022730752u, long unsigned int l = 18ul, UIntType f = 1812433253u]’ /usr/local/include/boost/random/xor_combine.hpp:87: instantiated from ‘boost::random::xor_combine_engine<URNG1, s1, URNG2, s2>::xor_combine_engine(It&, It) [with It = boost::random::mersenne_twister_engine<unsigned int, 32ul, 624ul, 397ul, 31ul, 2567483615u, 11ul, 4294967295u, 7ul, 2636928640u, 15ul, 4022730752u, 18ul, 1812433253u>, URNG1 = boost::random::mersenne_twister_engine<unsigned int, 32ul, 624ul, 397ul, 31ul, 2567483615u, 11ul, 4294967295u, 7ul, 2636928640u, 15ul, 4022730752u, 18ul, 1812433253u>, int s1 = 0, URNG2 = boost::random::mersenne_twister_engine<unsigned int, 32ul, 624ul, 397ul, 31ul, 2567483615u, 11ul, 4294967295u, 7ul, 2636928640u, 15ul, 4022730752u, 18ul, 1812433253u>, int s2 = 1]’ /usr/local/include/boost/random/xor_combine.hpp:194: instantiated from ‘boost::random::xor_combine<URNG1, s1, URNG2, s2, v>::xor_combine(It&, It) [with It = main()::ENG, URNG1 = boost::random::mersenne_twister_engine<unsigned int, 32ul, 624ul, 397ul, 31ul, 2567483615u, 11ul, 4294967295u, 7ul, 2636928640u, 15ul, 4022730752u, 18ul, 1812433253u>, int s1 
= 0, URNG2 = boost::random::mersenne_twister_engine<unsigned int, 32ul, 624ul, 397ul, 31ul, 2567483615u, 11ul, 4294967295u, 7ul, 2636928640u, 15ul, 4022730752u, 18ul, 1812433253u>, int s2 = 1, typename URNG1::result_type v = 0u]’
.../sandbox/xor_combine_parallel_random.cpp:28: instantiated from here
/usr/local/include/boost/random/detail/seed_impl.hpp:308: error: no ‘operator++(int)’ declared for postfix ‘++’, trying prefix operator instead
/usr/local/include/boost/random/detail/seed_impl.hpp:308: error: no match for ‘operator++’ in ‘++first’
=================================================
#include <boost/random.hpp>

namespace rndlib = boost::random;

typedef rndlib::mt19937 ENG;
typedef rndlib::xor_combine<ENG,0,ENG,1> XORENG;

ENG eng1;
ENG eng2;
XORENG xoreng1(eng1,eng2); // <== LINE 28 ERROR "instantiated from here"

On Wed, Mar 21, 2012 at 11:34:31AM +0100, Thijs (M.A.) van den Berg wrote:
I'm trying to use the xor_combine adaptor with boost 1.48, but get a compiler error. Any ideas?
When posting to the list, do not reply to an existing message. That will cause your message to be threaded in the middle of whatever discussion the original message was part of. In this case, it's a GIGANTIC thread about which of Git and Mercurial tastes better, so your post will most probably be utterly ignored. When you wish to start a new thread, send a fresh message to the list address.

Additionally, your message may be better off on the boost-users@ list, as it is about using a library, not development of Boost itself.

-- Lars Viklund | zao@acc.umu.se

On 3/21/2012 5:55 AM, Bronek Kozicki wrote:
On 21/03/2012 03:45, Edward Diener wrote:
On 3/20/2012 10:56 PM, Joel de Guzman wrote: ... [BK snipped here]
From the beginning, Spirit had its own community, mildly detached from Boost. Spirit contributors come and go. We once developed code using SourceForge (using CVS, then SVN). I gave contributors write access as needed. Once stable, I moved or merged the code to Boost. It so happens that we had a more frequent release cycle than Boost (at the time). Each move was so frustratingly difficult and time consuming (not to mention that I lost, and never bothered about, the commit histories when moving code to Boost from SF; it just was not worth the hassle. After all, SF was the master with all the histories and the one in Boost was just a copy). ... [snip again]
Yes, I recognize that psychologically the feeling that one has one's own local repository to play with, and then merge with other repositories, is enticing to users. But how is this different from:
1) Creating a local SVN repository and importing some branches from another SVN repository.
and/or
2) Having one's own branch of an SVN repository as one's own.
What I object to about the DVCS people is that they seem to assert that because DVCS has a model they like, where there is no concept of a central repository, that this is automatically superior in some non-practical and perhaps personal way. I do not doubt that DVCS systems
Actually, there IS a concept of a central repository in DVCS - it is the one everyone else does "pull --rebase" from, and the one everyone else does "push" to in order to share their work. It's not all anarchy as some seem to believe. The difference is that both "git pull" and "git push" are very efficient (as efficient as e.g. "p4 sync"), and by default local work does not require online access to the central repository.
Also, back to the point - DVCSs are written to provide the best possible support for merging. This makes 1) or 2) as you propose above very efficient. In SVN or other centralized systems it is not so; just look at the bit I left in from Joel's message above.
I do not believe for even a second that any product can merge the same source file automatically and flawlessly. So when you say that "DVCSs are written to provide the best possible support for merging", you are talking in a language I do not understand. I think it is an imaginary dream that a number of developers, all working on the same source file, can "merge" their work together without breaking code completely, unless it is done manually in some way.

Hi,
I do not believe for even a second that any product can do merging of the same source file automatically and flawlessly.
None can, of course. There are techniques to minimize the problems, though. The most important one is small and frequent commits. These greatly reduce the conflicts that cannot be automatically resolved.

Another "trick" that git-managed projects use heavily is rebasing. That might sound strange to someone used to svn, but when you go through it, you'll find that rebasing really is a good thing to do when working with a lot of small branches.

And then of course you can invest a bit more or a bit less of your brainpower in the automatic merging algorithms when developing a VCS. Systems where branching and merging are seen as an advanced feature get away with less sophisticated algorithms. That is OK for them, because it is an area that is rarely used, and only by "advanced" users. Actually, svn can get away with being worse than cvs at supporting merges - at least that is the experience I have with both of them.

Distributed systems have branching and merging as their main concurrency idiom. That is why they don't get away with less sophisticated algorithms. They simply have to be good at that in order to compete with the other DVCSs. If git were bad at it, then e.g. monotone, bazaar, or mercurial would be a lot more successful.

Christof -- okunah gmbh Software nach Maß Werner-Haas-Str. 8 www.okunah.de 86153 Augsburg cd@okunah.de Registergericht Augsburg Geschäftsführer Augsburg HRB 21896 Christof Donat UStID: DE 248 815 055

On Thu, Mar 22, 2012 at 10:01:32AM +0100, Christof Donat wrote:
Hi,
I do not believe for even a second that any product can do merging of the same source file automatically and flawlessly.
None can, of course. There are techniques to minimize the problems. The most important one is small and frequent commits; these greatly reduce the conflicts that cannot be resolved automatically.
As has been said, the final *algorithmic* merge is not affected by the number of commits leading to it (there seems to be no reasonable way to exploit that history). However, the likelihood and severity of conflicts are diminished when pushing to the main repository happens often enough. And, last but not least, once there is a conflict, resolution has to be done manually, and then a more fine-grained history also helps, since one can see better what happened - that is, understand better how the conflict arose. Oliver
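Oliver's point about fine-grained history aiding manual resolution can be illustrated with git (a hedged sketch; all names invented): during a conflicted merge, `git log --merge` shows exactly the commits on both sides that touch the conflicted paths.

```shell
# Sketch: when a merge conflicts, the per-commit history of both sides
# is available to explain how the conflict arose. Names are invented.
set -e
dir=$(mktemp -d) && cd "$dir"
git init -q repo && cd repo
git config user.email dev@example.com && git config user.name Dev
main=$(git symbolic-ref --short HEAD)

echo base > a.txt && git add a.txt && git commit -qm "base"
git checkout -qb topic
echo topic > a.txt && git commit -qam "topic: rewrite a.txt"
git checkout -q "$main"
echo mainline > a.txt && git commit -qam "mainline: rewrite a.txt"

git merge topic || true       # both sides changed the same line: conflict
git log --merge --oneline     # lists only the commits behind the conflict
git merge --abort             # the resolution itself remains a manual job
```

The smaller the commits, the more precisely this view explains who changed what and why.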

On Wed, Mar 21, 2012 at 4:45 AM, Edward Diener <eldiener@tropicsoft.com> wrote:
I am pretty sure you can take an SVN repository and give users access to whatever part of it you want while restricting access to the rest. When you do that what is the difference from users having a DVCS to play with and their own branch of an SVN repository to manipulate ?
SVN requires the Boost repo admin to create accounts and set permissions. DVCS doesn't, as you can push to your personal branches.
Conceptually, in my mind it is the same thing. Yes, I recognize that psychologically the feeling that one has one's own local repository to play with, and then merge with other repositories, is enticing to users. But how is this different from:
It's not about concepts, it's about practice. Do you have experience with DVCSs? If not, you probably don't fully understand their concepts and what they allow in practice.
1) Creating a local SVN repository and importing some branches from another SVN repository.
Seriously? Have you ever done that? How well did it work?
and/or
2) Having one's own branch of an SVN repository as one's own.
What I object to about the DVCS people is that they seem to assert that because DVCS has a model they like, where there is no concept of a central repository, that this is automatically superior in some non-practical and
Lots of people have experience with DVCSs. They tell you the advantages are practical and real.
perhaps personal way. I do not doubt that DVCS systems may have some very good tools for merging together various local repositories into some other one(s), but what does this freedom really amount to ? The end-user feels better because it feels like one can work separately more easily and then join one's work with others, but in reality a central repository system has the same "tools". Furthermore merging work with others is NEVER as easy as people would like to think it is. I am so tired of hearing about how all this merging of code just automatically works, and works flawlessly. Who are we kidding ?
I can understand your feeling of separating Spirit from Boost and then joining back into Boost as you wish, and perhaps indeed a DVCS has better tools to do this than Subversion, but can you really say this is a matter of DVCS's being inherently better than a centralized SCCS in some way to enable this ? How is this process different than merging whole branches or parts of branches back into Subversion. However it is done merging is very hard and careful work and it is impossible for me to believe that a DVCS has something inherently about it that automatically makes it better.
Why is that impossible to believe? Because you don't have any experience with DVCSs? Some VCSs really are far better at merging than others.
I guess I am saying that on a practical basis a DVCS may be more flexible than a centralized SCCS, but I see no inherent reason for this.
So because *you* don't see the reason, others can't benefit from a DVCS? -- Olaf

On 3/21/2012 5:58 AM, Olaf van der Spek wrote:
On Wed, Mar 21, 2012 at 4:45 AM, Edward Diener<eldiener@tropicsoft.com> wrote:
I am pretty sure you can take an SVN repository and give users access to whatever part of it you want while restricting access to the rest. When you do that what is the difference from users having a DVCS to play with and their own branch of an SVN repository to manipulate ?
SVN requires the Boost repo admin to create accounts and set permissions.
I guess it takes a deep knowledge of molecular physics or string theory to do that.
DVCS doesn't, as you can push to your personal branches.
Conceptually, in my mind it is the same thing. Yes, I recognize that psychologically the feeling that one has one's own local repository to play with, and then merge with other repositories, is enticing to users. But how is this different from:
It's not about concepts, it's about practice. Do you have experience with DVCSs? If not, you probably don't fully understand their concepts and what they allow in practice.
Wonderful. I don't know because I don't have experience, but no one can actually explain the technical benefits of these concepts.
1) Creating a local SVN repository and importing some branches from another SVN repository.
Seriously? Have you ever done that? How well did it work?
It worked with no problems. Why should it have problems?
and/or
2) Having one's own branch of an SVN repository as one's own.
What I object to about the DVCS people is that they seem to assert that because DVCS has a model they like, where there is no concept of a central repository, that this is automatically superior in some non-practical and
Lots of people have experience with DVCSs. They tell you the advantages are practical and real.
If I tell you that the advantages of standing on your head 4 hours a day are "practical" and "real", would you find it strange to ask what that actually means?
perhaps personal way. I do not doubt that DVCS systems may have some very good tools for merging together various local repositories into some other one(s), but what does this freedom really amount to ? The end-user feels better because it feels like one can work separately more easily and then join one's work with others, but in reality a central repository system has the same "tools". Furthermore merging work with others is NEVER as easy as people would like to think it is. I am so tired of hearing about how all this merging of code just automatically works, and works flawlessly. Who are we kidding ?
I can understand your feeling of separating Spirit from Boost and then joining back into Boost as you wish, and perhaps indeed a DVCS has better tools to do this than Subversion, but can you really say this is a matter of DVCS's being inherently better than a centralized SCCS in some way to enable this ? How is this process different than merging whole branches or parts of branches back into Subversion. However it is done merging is very hard and careful work and it is impossible for me to believe that a DVCS has something inherently about it that automatically makes it better.
Why is that impossible to believe? Because you don't have any experience with DVCSs? Some VCSs really are far better at merging than others.
I guess I am saying that on a practical basis a DVCS may be more flexible than a centralized SCCS, but I see no inherent reason for this.
So because *you* don't see the reason, others can't benefit from a DVCS?
I do not see the reasons because no one appears to feel it important enough to actually enumerate them as technical arguments. All I get is what I have gotten from you: "try it, you'll love it, because I do and lots of others do." This is exactly what I wrote in my initial response. I do not care about what others do, but I am not doing anything unless I understand why I should do it.

Edward Diener <eldiener@tropicsoft.com> writes:
On 3/21/2012 5:58 AM, Olaf van der Spek wrote:
It's not about concepts, it's about practice. Do you have experience with DVCSs? If not, you probably don't fully understand their concepts and what they allow in practice.
Wonderful. I don't know because I don't have experience, but no one can actually explain the technical benefits of these concepts.
In short, it boils down to a more flexible way to work. DVCS is a superset of centralized VCS in that it allows you to do more than you could before. I tried to give a concrete technical example where merging is better in Mercurial/Git than in Subversion: http://lists.boost.org/Archives/boost/2012/03/191460.php I also described some workflow advantages of DVCS. Speed and flexibility are really the main theme. You have the history locally, so you can use it better. Your commits are local, so you can experiment more freely before you inflict the changes onto others. -- Martin Geisler, aragost Trifork, Professional Mercurial support, http://www.aragost.com/mercurial/

on Wed Mar 21 2012, Edward Diener <eldiener-AT-tropicsoft.com> wrote:
On 3/21/2012 5:58 AM, Olaf van der Spek wrote:
On Wed, Mar 21, 2012 at 4:45 AM, Edward Diener<eldiener@tropicsoft.com> wrote:
1) Creating a local SVN repository and importing some branches from another SVN repository.
Seriously? Have you ever done that? How well did it work?
It worked with no problems. Why should it have problems ?
In fact, there are tools for this: http://en.wikipedia.org/wiki/SVK Of course, that's a DVCS too :-) -- Dave Abrahams BoostPro Computing http://www.boostpro.com

On 03/22/2012 06:19 PM, Dave Abrahams wrote:
on Wed Mar 21 2012, Edward Diener<eldiener-AT-tropicsoft.com> wrote:
On 3/21/2012 5:58 AM, Olaf van der Spek wrote:
On Wed, Mar 21, 2012 at 4:45 AM, Edward Diener<eldiener@tropicsoft.com> wrote:
1) Creating a local SVN repository and importing some branches from another SVN repository.
Seriously? Have you ever done that? How well did it work?
It worked with no problems. Why should it have problems?
In fact, there are tools for this: http://en.wikipedia.org/wiki/SVK Of course, that's a DVCS too :-)
SVK was nice in its time, but its maintenance was discontinued at some point, with a pretty clear announcement that its time was in the past. I think the developer saw no need for the tool any longer, as simpler pure DVCSs had come along. These could even sync with SVN as well as SVK could, I think. -- Bjørn

Conceptually in my mind it is the same thing. Yes, I recognize that psychologically the feeling that one has one's own local repository to play with, and then merge with other repositories, is enticing to users. Bu how is this different from:
1) Creating a local SVN repository and importing some branches from another SVN repository.
The amount of work to do that in svn compared to the amount of work to do that with git/mercurial is ridiculous.
2) Having one's own branch of an SVN repository as one's own.
That could work, but it'll yield a repository where you have about 2-3 branches per developer (yes, people using git/mercurial often have lots of feature/test branches) that nobody cares about. Also, storing test branches on the public repo is just silly imho.
What I object to about the DVCS people is that they seem to assert that
because DVCS has a model they like, where there is no concept of a central repository, that this is automatically superior in some non-practical and perhaps personal way. I do not doubt that DVCS systems may have some very good tools for merging together various local repositories into some other one(s), but what does this freedom really amount to ? The end-user feels better because it feels like one can work separately more easily and then join one's work with others, but in reality a central repository system has the same "tools".
Simply creating a branch and then merging it back was a nightmare with svn. If you typed your command wrong, or made an error, then everyone suffered from your mistake. You then had to correct it in a rush before it created problems for others, etc. With git/hg, when you make a mistake, you simply cancel your local merge and redo it until you get it right, then you push.
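The abort-and-redo cycle described above can be sketched with git (invented names; a local bare repository stands in for the shared server): a botched merge is thrown away and redone locally, and only the finished result is pushed.

```shell
# Sketch: mistakes in a merge stay local until you push. A local bare
# repo stands in for the shared server; all names are made up.
set -e
dir=$(mktemp -d) && cd "$dir"
git init -q --bare server.git
git clone -q server.git work && cd work
git config user.email dev@example.com && git config user.name Dev
main=$(git symbolic-ref --short HEAD)

echo base > a.txt && git add a.txt && git commit -qm "base"
git push -q origin "$main"
git checkout -qb topic
echo topic > a.txt && git commit -qam "topic change"
git checkout -q "$main"
echo mainline > a.txt && git commit -qam "mainline change"

git merge topic || true    # conflict -- but nothing has left this machine
git merge --abort          # cancel and start over; nobody else is affected
git merge topic || true
echo merged > a.txt        # resolve by hand this time
git add a.txt && git commit -qm "merge topic"
git push -q origin "$main" # publish only the finished merge
```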
I guess I am saying that on a practical basis a DVCS may be more flexible than a centralizes SCCS, but I see no inherent reason for this.
To be honest, I feel that all the people that "cannot see the advantages of a DVCS" are people who either didn't try it, or tried it just enough to reassure themselves it wasn't worth it. Any tool can suck if you're not willing to *really* see what it's worth. Philippe

On 21.03.12 12:12, Philippe Vaucher wrote:
2) Having one's own branch of an SVN repository as one's own.
That could work, but it'll yield a repository where you have about 2-3 branches per developper (yes, people using git/mercurial often have lots of features/tests branches) that nobody cares about.
Why is that bad? At my work we are using svn and I have about 8 branches (not all of them currently active, I confess.) I put my branches not into /branches, but into /users/fbirbacher. This way no one has to care about them.
Also, storing test branches on the public repo is just silly imho.
I agree there is a difference between this approach on a company svn repo server and a public svn repo like boost. But still, even on boost I create my own sandbox branch to do development.
Simply creating a branch and then merging it back was a nightmare with svn. If you typed your command wrong, or did an error, then everyone suffered of your mistake. You then had to correct it in a rush before it created problems for others, etc.
With git/hg, when you do a mistake, you simply cancel your local merge and redo it again until you did the right thing, then you push.
This suggests, first, that mistakes you make in svn cannot be repaired, and second, that you will spot every mistake in git/hg before you push. As I understand it, once you push to a public repo and then discover a mistake, your mistake will be just as visible as with svn. I agree there is a chance to find some errors before publishing, but with svn I easily spot errors in commands I run on my working copy before committing.
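One concrete difference here, as a small hedged git sketch (invented names): history that has not yet been pushed can still be rewritten, e.g. a bad commit message fixed with `git commit --amend`, so the published history never shows the mistake. Once pushed, Frank is right that the mistake is as public as in svn.

```shell
# Sketch: an unpushed commit can be amended; the mistake never becomes
# part of the published history. Names are made up.
set -e
dir=$(mktemp -d) && cd "$dir"
git init -q repo && cd repo
git config user.email dev@example.com && git config user.name Dev

echo data > a.txt && git add a.txt
git commit -qm "inital import"              # oops: typo in the message
git commit -q --amend -m "initial import"   # rewrite it before pushing
git log --oneline                           # only the corrected commit exists
```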
To be honest, I feel that all the people that "cannot see the advantages of a DVCS" are people who either didn't try it, or tried it just enough to reassure themselves it wasn't worth it. Any tool can suck if you're not willing to *really* see what it's worth.
I feel you didn't try enough of svn: creating a branch and merging it back is really a simple thing to do in svn.

cd someemptydir
svn co svn://server/svn/trunk .
# create branch:
svn cp ^/trunk ^/branches/my-branch -m "branch"
# edit branch:
svn switch ^/branches/my-branch .
echo "new" > new.txt
svn add new.txt
svn ci -m "new file"
# switch back to trunk and merge:
svn switch ^/trunk .
svn merge ^/branches/my-branch .
# revise working copy before commit, or revert and try again
# svn revert -R .
svn ci -m "merged branch"

So how would that go with git or hg? Would it be easier?

Frank

Frank Birbacher wrote:
I feel you didn't try enough of svn: creating a branch and merging it back is really a simple thing to do in svn.
cd someemptydir
svn co svn://server/svn/trunk .
# create branch:
svn cp ^/trunk ^/branches/my-branch -m "branch"
# edit branch:
svn switch ^/branches/my-branch .
echo "new" > new.txt
svn add new.txt
svn ci -m "new file"
# switch back to trunk and merge:
svn switch ^/trunk .
svn merge ^/branches/my-branch .
# revise working copy before commit, or revert and try again
# svn revert -R .
svn ci -m "merged branch"
So how would that go with git or hg? Would it be easier?
In git (assuming that you configured aliases "co" and "ci"):

git clone ssh://user@server:projectname.git
cd projectname
# create and switch to a branch:
git co -b branch
# edit branch:
echo "new" > new.txt
git add new.txt
git ci -m "new file"
# switch back to develop and merge+commit:
git co develop
git merge branch
# update from server
git pull
# any conflict resolution here
# publish your work
git push

Same number of commands in total, but you need to type less on average in git, especially for the branching operations. Apart from that, committing, forking, switching and merging will all be significantly faster in git. -Julian

Frank Birbacher <bloodymir.crap@gmx.net> writes:
Am 21.03.12 12:12, schrieb Philippe Vaucher:
To be honest, I feel that all the people that "cannot see the advantages of a DVCS" are people who either didn't try it, or tried it just enough to reassure themselves it wasn't worth it. Any tool can suck if you're not willing to *really* see what it's worth.
I feel you didn't try enough of svn: creating a branch and merging it back is really a simple thing to do in svn.
cd someemptydir
svn co svn://server/svn/trunk .
# create branch:
svn cp ^/trunk ^/branches/my-branch -m "branch"
# edit branch:
svn switch ^/branches/my-branch .
echo "new" > new.txt
svn add new.txt
svn ci -m "new file"
# switch back to trunk and merge:
svn switch ^/trunk .
svn merge ^/branches/my-branch .
# revise working copy before commit, or revert and try again
# svn revert -R .
svn ci -m "merged branch"
So how would that go with git or hg? Would it be easier?
The equivalent Mercurial commands would be:

hg clone http://server/repo
cd repo
# create branch similar to a SVN branch
hg branch my-branch
echo "new" > new.txt
hg add
hg ci -m "new file"
# switch back to 'default' branch
hg update default
hg merge my-branch
# revise working copy before commit, or re-try merge
# hg resolve --all
hg ci -m "merged branch"

So, it's very similar for this example. But don't you need to add a --reintegrate flag if you want to merge your branch several times? It's mentioned here http://svnbook.red-bean.com/en/1.7/svn.branchmerge.basicmerging.html#svn.bra... that "Once a --reintegrate merge is done from branch to trunk, the branch is no longer usable for further work.". Neither Git nor Mercurial has such a flag, and both let you keep working with your branches after merging them (in any direction).

Could you try repeating the above commands with a rename in your branch and an edit in trunk? This is a full example (from http://stackoverflow.com/a/2486662/110204):

cd /tmp
rm -rf svn-repo svn-checkout
svnadmin create svn-repo
svn checkout file:///tmp/svn-repo svn-checkout
cd svn-checkout
mkdir trunk branches
echo 'Goodbye, World!' > trunk/hello.txt
svn add trunk branches
svn commit -m 'Initial import.'
svn copy '^/trunk' '^/branches/rename' -m 'Create branch.'
svn switch '^/trunk' .
echo 'Hello, World!' > hello.txt
svn commit -m 'Update on trunk.'
svn switch '^/branches/rename' .
svn rename hello.txt hello.en.txt
svn commit -m 'Rename on branch.'
svn switch '^/trunk' .
svn merge '^/branches/rename'

I get a tree conflict from the merge:

--- Merging differences between repository URLs into '.':
   A hello.en.txt
   C hello.txt
Summary of conflicts:
  Tree conflicts: 1

All of Bazaar, Mercurial, and Git agree that the edit of 'hello.txt' should be merged into 'hello.en.txt' and that there's no conflict here. This merge bug has been there since Subversion 1.5, and it isn't fixed in version 1.6.17 (the most recent packaged version for Debian).
It's also mentioned in version 1.7 of the SVN book: http://svnbook.red-bean.com/en/1.7/svn.branchmerge.advanced.html#svn.branchm... -- Martin Geisler Mercurial links: http://mercurial.ch/
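For comparison, Martin's rename-plus-edit scenario can be replayed in git (a sketch using the same file names as the SVN example above). Git's merge applies rename detection, so the trunk edit lands in the renamed file without a tree conflict:

```shell
# Sketch: the rename/edit scenario from the SVN example, in git. The
# merge detects the rename and carries the edit into hello.en.txt.
set -e
dir=$(mktemp -d) && cd "$dir"
git init -q repo && cd repo
git config user.email dev@example.com && git config user.name Dev
main=$(git symbolic-ref --short HEAD)

echo 'Goodbye, World!' > hello.txt
git add hello.txt && git commit -qm 'Initial import.'
git checkout -qb rename
git mv hello.txt hello.en.txt
git commit -qm 'Rename on branch.'
git checkout -q "$main"
echo 'Hello, World!' > hello.txt
git commit -qam 'Update on trunk.'
git merge -q -m 'Merge rename branch.' rename   # no tree conflict here
cat hello.en.txt                                # the trunk edit followed the rename
```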

Hi! Thanks very much for your feedback! Both of you.

On 28.03.12 22:49, Martin Geisler wrote:
So, it's very similar for this example.
Yes, so to me there is no significant difference in the commands that create and merge branches. In the commands, that is! You provided an example where a rename leads to a conflict in svn, but not in hg or git. That's a drawback of svn, of course. On the other hand, it is a reported bug and likely to be fixed (I wonder why this hasn't been fixed already; v1.5 has been out for a while.)
But don't you need to add a --reintegrate flag if you want to merge your branch several times? It's mentioned here [snip] Neither Git nor Mercurial has such a flag, and both let you keep working with your branches after merging them (in any direction).
I do not need the "--reintegrate" flag. svn 1.6 keeps track of merge information, and even before that I knew when the branch was created. I've been working with branches in svn for some time and only lately it occurred to me what the workflow using "--reintegrate" is. I guess this is easier in hg or git. So we will need a more complex example, I think. Using Subversion 1.7.4:

# setup:
cd /tmp
svnadmin create testrepo
svn mkdir file:///tmp/testrepo/{trunk,branches} -m "default dirs"
svn co file:///tmp/testrepo/trunk workingcopy
cd workingcopy
# create branch:
svn cp ^/trunk ^/branches/my-branch -m "branch"
# edit branch:
svn switch ^/branches/my-branch
echo "new" > new.txt
svn add new.txt
svn ci -m "new file"
# switch back to trunk and merge:
svn switch ^/trunk
svn merge ^/branches/my-branch
# revise working copy before commit, or revert and try again
# svn revert -R .
svn ci -m "merged branch"
# REPEATING:
svn switch ^/branches/my-branch
echo "further" >> new.txt
svn ci -m "added content"
# merging:
svn switch ^/trunk
# optional inspection:
# svn mergeinfo --show-revs=eligible ^/branches/my-branch
svn merge ^/branches/my-branch
# optional inspection:
# svn pg svn:mergeinfo .
svn ci -m "merged branch again"

Running the above as a script takes some time:

real 0m10.433s
user 0m0.160s
sys  0m0.253s
Could you try repeating the above commands with a rename in your branch and an edit in trunk?
I already tried your example before, and yes, I got a conflict. And I was not able to work around it.

BTW, as I understand git: the working copy contains a hidden directory that stores all of the repository data, and the checkout is placed at top level. Is there a way to check out multiple branches at the same time?

Frank

Frank Birbacher <bloodymir.crap@gmx.net> writes:
Hi!
Thanks very much for your feedback! Both of you.
Am 28.03.12 22:49, schrieb Martin Geisler:
So, it's very similar for this example.
Yes, so to me there is no significant difference in the commands that create and merge branches.
Agreed, the differences are just syntax! Btw, I consider SVN a super centralized version control system, so this is not surprising.
In the commands that is! You provided an example where a rename leads to a conflict in svn, but not in hg or git. That's a drawback on svn of course. On the other hand it is a reported bug and likely to be fixed (I wonder why this hasn't been fixed already, v1.5 has already been a while on stage.)
I know Boost is C++ and not, say, Java, so maybe you don't run into this very often? When I talk to people using Java it sounds like they're moving files around all day :) So it's really important for them that they can merge a branch back to trunk even when files have been renamed.
But don't you need to add a --reintegrate flag if you want to merge your branch several times? It's mentioned here [snip] Neither Git nor Mercurial has has such a flag and both let you keep working with your branches after merging them (in any direction).
I do not need the "--reintegrate" flag. svn 1.6 keeps track of merge information, and even before that I knew when the branch was created. I've been working with branches in svn for some time and only lately it occurred to me what the workflow using "--reintegrate" is.
I had to re-read this section: http://svnbook.red-bean.com/en/1.7/svn-book.html#svn.branchemerge.basicmergi... It points out that the --reintegrate flag is critical for reintegrating changes from a branch back into trunk. They are talking about a scenario where you've continuously kept the branch up to date with changes from trunk and now want to merge back:

trunk:   a --- b --- c --- d --- e
          \           \
branch:    r --- s --- t --- u

Because of the t revision, you cannot just replay the r:u changes on top of e: the u revision already contains some of the changes that are in e (the b and c changes).
I guess this is easier in hg or git.
Yes, in those systems you would do a final merge from the mainline into your branch:

default:  a --- b --- c --- d --- e
           \           \           \
branch:     r --- s --- t --- u --- v

Merging back to default is now a no-op! Technically you try to do a merge between e and v. You find their common ancestor (e!) and you now have a degenerate three-way merge where the state in v wins every time. So you can create

default:  a --- b --- c --- d --- e --- f
           \           \           \   /
branch:     r --- s --- t --- u --- v

where the files in f look exactly like they do in v. There have often been a few changes on default since the last branch synchronization, so you really start with

default:  a --- b --- c --- d --- e --- f --- g
           \           \           \
branch:     r --- s --- t --- u --- v

and merge g and v. Here the ancestor is e (close by!) so you only have to consider changes made in f and g. The final state is then:

default:  a --- b --- c --- d --- e --- f --- g --- h
           \           \           \               /
branch:     r --- s --- t --- u --- v ------------'

Three-way merges are *symmetric* in Git and Mercurial, so it doesn't matter which way you merge -- you get the same amount of conflicts.
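The "final merge from mainline into the branch" step can be demonstrated with git (a hedged sketch, invented names): after the sync merge, merging the branch back into the mainline is a plain fast-forward.

```shell
# Sketch of the diagrams above: after a final sync merge from mainline
# into the branch, merging the branch back fast-forwards. Names made up.
set -e
dir=$(mktemp -d) && cd "$dir"
git init -q repo && cd repo
git config user.email dev@example.com && git config user.name Dev
main=$(git symbolic-ref --short HEAD)

echo a > a.txt && git add a.txt && git commit -qm "a"
git checkout -qb branch
echo r > r.txt && git add r.txt && git commit -qm "r"
git checkout -q "$main"
echo e > e.txt && git add e.txt && git commit -qm "e"

git checkout -q branch
git merge -q -m "sync from mainline" "$main"   # this is v in the diagram
git checkout -q "$main"
git merge -q branch        # mainline simply fast-forwards to v
git log --oneline
```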
Could you try repeating the above commands with a rename in your branch and an edit in trunk?
I already tried your example before, and yes, I got a conflict. And I was not able to work around this.
BTW, as I understand git: the working copy contains a hidden directory that stores all of the repository data. And the checkout will be placed at top-level. Is there a way to checkout multiple branches at the same time?
Yes, you can do that with both systems. With Mercurial you make a new clone based on an existing local repository:

hg clone my-repo my-second-repo
cd my-second-repo
hg update my-favorite-branch

That gives you two independent working copies. By making a local clone you avoid downloading anything again. Furthermore, a local clone will make *hardlinks* between the files in the .hg/ directory. This means that both clones share the disk space: you only pay for creating a new working copy. So you end up with two checkouts, both with the full history inside their .hg/ directories, but the disk space is shared, so the overhead is very low.

With SVN you would have to make a new 'svn checkout' -- or I guess you can copy an existing checkout with 'cp' and then 'svn switch'? That way you avoid downloading the files that aren't affected by the switch.

Notice a fundamental difference in design here: Mercurial (and Git) have branches. Subversion doesn't: http://svnbook.red-bean.com/en/1.7/svn-book.html#svn.branchmerge.using.conce... Instead, SVN has a cheap server-side copy mechanism, and SVN allows you to check out a single subdirectory at a time. SVN also allows you to merge changes made in a subdirectory into another subdirectory. These features let you "emulate" branches and tags, but they are not first-class citizens in the system. This in turn allows SVN to represent a richer history than Git and Mercurial. That is, I can do

svn cp ^/trunk/foo/bar ^/tags/bar-1.0 -m "branch"

to "tag" a random subdirectory. That operation doesn't make any sense in the other systems: there a tag references a commit and that's that. Depending on your viewpoint, you can say that Git and Mercurial model the history in a cleaner way. You can also say that they lack a crucial feature :) -- Martin Geisler, aragost Trifork, Professional Mercurial support, http://www.aragost.com/mercurial/
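Git behaves much the same way: cloning from a local path hardlinks the object store where the filesystem allows it, so a second working copy on a different branch is cheap. A hedged sketch with invented names:

```shell
# Sketch: a local git clone hardlinks .git/objects where possible, so a
# second checkout on a different branch costs little. Names made up.
set -e
dir=$(mktemp -d) && cd "$dir"
git init -q repo
git -C repo config user.email dev@example.com
git -C repo config user.name Dev
( cd repo
  main=$(git symbolic-ref --short HEAD)
  echo a > a.txt && git add a.txt && git commit -qm "a"
  git checkout -qb feature
  echo f > f.txt && git add f.txt && git commit -qm "f"
  git checkout -q "$main" )

git clone -q repo second           # local path: objects are hardlinked
git -C second checkout -q feature  # 'second' now has the feature branch
ls repo second                     # two independent working copies
```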

Hi! Thank you all for the thorough explanations. I really enjoy the feedback here.

On 29.03.12 10:12, Martin Geisler wrote:
It points out that the --reintegrate flag is critical for reintegrating changes from a branch back into trunk. They are talking about a scenario where you've continuously kept the branch up to date with changes from trunk and now want to merge back:
Correct.
trunk:   a --- b --- c --- d --- e
          \           \
branch:    r --- s --- t --- u
Because of the t revision, you cannot just replay the r:u changes on top of e: the u revision already contains some of the changes that are in e (the b and c changes).
My approach with svn without --reintegrate is: merge t into the trunk and use --record-only. This way the files stay unchanged, but the metadata (mergeinfo) will be updated to reflect that trunk now contains t. This is somewhat awkward, but in the end it enables a merge of the branch into the trunk. svn will then by itself patch r, s, and u into the trunk. So no three-way merge here, but (maybe) a series of diffs that will get applied. The --reintegrate option will instead employ a three-way merge.

I see how a merge of all of trunk into the branch just before merging the branch into trunk will help to reduce conflicts. So I consider --reintegrate now. Maybe this is the point where git and hg have better handling of merges. What will happen in the above example with git or hg when merging the branch into trunk? Do you have to do a final merge from mainline into the branch? What will happen if you skip this step?

I'm asking because I might want to cherry-pick changes from either side and merge them into the other: some changes from trunk into the branch, some others from the branch into trunk, and at the end merge the whole branch into trunk. How is that supported?
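Cherry-picking plus a later full merge is directly supported in git. A hedged sketch with invented names: a single fix is copied to the mainline early, and the later merge of the whole branch still goes through cleanly, because the duplicated change resolves to identical content.

```shell
# Sketch: cherry-pick one commit from a branch into mainline, then
# merge the whole branch later. Names are made up.
set -e
dir=$(mktemp -d) && cd "$dir"
git init -q repo && cd repo
git config user.email dev@example.com && git config user.name Dev
main=$(git symbolic-ref --short HEAD)

echo base > a.txt && git add a.txt && git commit -qm "base"
git checkout -qb branch
echo fix > fix.txt && git add fix.txt && git commit -qm "urgent fix"
echo feat > feat.txt && git add feat.txt && git commit -qm "feature"

git checkout -q "$main"
git cherry-pick branch~1                  # copy just the fix to mainline
git merge -q -m "merge branch" branch     # later: the rest comes over too
git log --oneline
```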
That gives you two independent working copies.
By making a local clone you avoid downloading anything again. Furthermore, a local clone will make *hardlinks* between the files in the .hg/ directory. This means that both clones share the disk space: you only pay for creating a new working copy.
Is that supported on Windows as well?
With SVN you would have to make a new 'svn checkout' -- or I guess you can copy an existing checkout with 'cp' and then 'svn switch'? That way you avoid downloading the files that aren't affected by the switch.
Correct. And you will have to pay for duplicate files. SVN will keep a pristine copy of all files in its hidden directory. So every working copy will have its own set of pristine files, no hardlinks. With the pristine files you can view the current changes (svn diff) or revert files without contact to the repo.
Notice a fundamental difference in design here: Mercurial (and Git) have branches. Subversion don't:
http://svnbook.red-bean.com/en/1.7/svn-book.html#svn.branchmerge.using.conce...
Instead, SVN has a cheap server-side copy mechanism and SVN allows you to checkout a single subdirectory at a time. SVN also allows you to merge changes made in a subdirectory into another subdirectory. These features let you "emulate" branches and tags, but they are not first-class citizens in the system.
Yes, I always thought the emulation was an advantage of svn, because you don't have to learn another concept. Just copying directories to create branches allows you to employ whatever organization of branches and tags you like: create /branches/releases to hold release branches if you like, create /users/myusername to supply everyone with their own sandbox, or create /proj1/trunk and /proj2/trunk in the same repo.
This in turn allows SVN to represent a richer history than Git and Mercurial. That is, I can do
svn cp ^/trunk/foo/bar ^/tags/bar-1.0 -m "branch"
to "tag" a random subdirectory. That operation doesn't make any sense in the other systems: there a tag references a commit and that's that. Depending on your viewpoint, you can say that Git and Mercurial models the history in a more clean way. You can also say that they lack a crucial feature :)
At least they disallow committing to tags by design. With svn handling of branches and tags is pure convention. Sometimes people don't adhere to the convention. Frank

Frank Birbacher <bloodymir.crap@gmx.net> writes:
Hi!
Thank you all for the thorough explanations. I really enjoy the feedback here.
As you can probably tell, I enjoy talking about this too :)
On 29.03.12 10:12, Martin Geisler wrote:
It points out that the --reintegrate flag is critical for reintegrating changes from a branch back into trunk. They are talking about a scenario where you've continuously kept the branch up to date with changes from trunk and now want to merge back:
Correct.
trunk:  a --- b --- c --- d --- e
         \           \
branch:   r --- s --- t --- u
Because of the t revision, you cannot just replay the r:u changes on top of e: the u revision already contains some of the changes that are in e (the b and c changes).
My approach with svn without --reintegrate is: merge t into the trunk and use --record-only. This way the files stay unchanged, but the metadata (mergeinfo) will be updated to reflect that trunk now contains t. This is somewhat awkward, but in the end it enables a merge of the branch into the trunk. svn will then by itself patch r, s, and u into the trunk. So no three-way merge here, but (maybe) a series of diffs that will get applied. The --reintegrate option will instead employ a three-way merge.
I see how a merge of all of trunk into the branch just before merging the branch into trunk will help to reduce conflicts. So I consider --reintegrate now. Maybe this is the point where git and hg have better handling of merges. What will happen in the above example with git or hg when merging the branch into trunk? Do you have to do a final merge from mainline into the branch?
When you merge u into e, you start a three-way merge with:

    ancestor: c
    local:    e
    remote:   u

Files from these three snapshots are compared with the normal three-way merge logic: a hunk that has only been changed in one way from c to e or from c to u is copied to the result. If a hunk has been changed in different ways it's a conflict -- you have to resolve this in your merge tool of choice.
What will happen if you skip this step?
If you don't merge, then the branches remain diverged.
I'm asking because I might want to cherry pick changes from either side and merge them into the other: some changes from trunk into the branch some other from the branch into trunk and at the end merge the whole branch into trunk. How is that supported?
When cherry-picking you're copying changes from one branch onto another. Let's say we have two long-running branches like this:

default: ... a --- b --- c --- d --- e
            /             /
stable:  ... --- x ----- y

New features go onto the default branch and bugfixes go into the stable branch. The stable branch is always a *subset* of the default branch since the default branch has more features plus the bugfixes that we continuously merge in from the stable branch. If a bugfix ends up on the default branch by mistake, then we can cherry pick it onto the stable branch. Let's say c is such a bugfix. We run

    hg update stable   # checkout stable branch (y) in working copy
    hg transplant c    # re-apply b to c delta on top of y

This gives us

default: ... a --- b --- c --- d --- e
            /             /
stable:  ... --- x ----- y --- z

The diff between y and z is like the diff between b and c. We then merge stable into default again so that stable is a subset of default:

default: ... a --- b --- c --- d --- e --- f
            /             /               /
stable:  ... --- x ----- y ------------- z

This merge is a no-op since a three-way merge doesn't care if a change has been copied into both branches. That is, the merge sees a hunk that changed from 'old' to 'new' in both branches. The merge result is naturally the 'new' hunk and there's no conflict.

Blocking changes is harder because we always use three-way merges instead of re-playing patches. If you know that you don't need a particular changeset again, then you can back it out. This is just a way of applying the reverse patch from that changeset.

                +x    +y    -x
    a --- b --- c --- d --- f

Here you can think of "+x" as meaning insert a line with "x" and "-x" as meaning remove the line. The buggy changeset c is backed out and this just means that you apply "-x" on top of d. The "y" line is still part of f. Since three-way merges only consider the final states of the branches, this can be used to block a changeset.
That gives you two independent working copies.
By making a local clone you avoid downloading anything again. Furthermore, a local clone will make *hardlinks* between the files in the .hg/ directory. This means that both clones share the disk space: you only pay for creating a new working copy.
Is that supported on Windows as well?
Yeah -- I was surprised too :-) NTFS supports hardlinks and has done so for more than a decade. But people still come and ask me about this when I give a Mercurial talk :)
With SVN you would have to make a new 'svn checkout' -- or I guess you can copy an existing checkout with 'cp' and then 'svn switch'? That way you avoid downloading the files that aren't affected by the switch.
Correct. And you will have to pay for duplicate files. SVN will keep a pristine copy of all files in its hidden directory. So every working copy will have its own set of pristine files, no hardlinks. With the pristine files you can view the current changes (svn diff) or revert files without contact to the repo.
Indeed. I measured the space taken up by the OpenOffice Mercurial repository: the working copy is 2.0 GB and the .hg/ folder is 2.3 GB. This means that you pay a 15% overhead for storing all 270,000 changesets locally -- compared to storing just one pristine copy like SVN does. The delta compression is amazingly efficient!
Notice a fundamental difference in design here: Mercurial (and Git) have branches. Subversion doesn't:
http://svnbook.red-bean.com/en/1.7/svn-book.html#svn.branchmerge.using.conce...
Instead, SVN has a cheap server-side copy mechanism and SVN allows you to checkout a single subdirectory at a time. SVN also allows you to merge changes made in a subdirectory into another subdirectory. These features let you "emulate" branches and tags, but they are not first-class citizens in the system.
Yes, I always thought the emulation was an advantage of svn because you don't have to learn another concept. Just copying directories to create branches allows you to employ whatever organization of branches and tags you like: create /branches/releases to hold release branches if you like, create /users/myusername to supply everyone with their own sandbox, or create /proj1/trunk and /proj2/trunk in the same repo.
I think it's very clever that you can use a cheap server-side copy mechanism for this -- it gives you some extra freedom. -- Martin Geisler Mercurial links: http://mercurial.ch/

Hi! On 29.03.12 22:18, Martin Geisler wrote:
Blocking changes is harder because we always use three-way merges instead of re-playing patches. If you know that you don't need a particular changeset again, then you can back it out. This is just a way of applying the reverse patch from that changeset.
            +x    +y    -x
a --- b --- c --- d --- f
Ok, this means removing the changes from a branch. But what about the following: consider a stable and a development branch, just as in your example. Bugfixes go to stable and features go to development. Once a week someone merges the stable branch into dev to bring all bugfixes into dev. Now someone fixes a bug on stable which shall be fixed differently on dev, here shown as z and z':

dev:    ... a --- b --- c --- d --- z'
           /             /
stable: ... --- x ----- y --- z

The z and z' are logically the same fix, but syntactically they are different. With svn you could block z from being merged into dev (effective whenever the merge happens in the future). With a three-way merge this seems not easily possible. A merge from stable will reproduce the z in dev (the ancestor is currently y). So you will have to produce a new common ancestor of dev and stable right on the spot, meaning to do a merge and somehow remove z from it.

You might ask why z and z' can be different in the first place. Well, it may be that the coding on dev has changed due to new library functions, new compiler capabilities, or a new design of something.
Yeah -- I was surprised too :-) NTFS supports hardlinks and has done so for more than a decade. But people still come and ask me about this when I give a Mercurial talk :)
:)
This means that you pay a 15% overhead for storing all 270,000 changesets locally -- compared to storing just one pristine copy like SVN does. The delta compression is amazingly efficient!
Wow, that's really cool. Frank

Frank Birbacher <bloodymir.crap@gmx.net> writes:
Hi!
On 29.03.12 22:18, Martin Geisler wrote:
Blocking changes is harder because we always use three-way merges instead of re-playing patches. If you know that you don't need a particular changeset again, then you can back it out. This is just a way of applying the reverse patch from that changeset.
            +x    +y    -x
a --- b --- c --- d --- f
Ok, this means removing the changes from a branch. But what about the following: consider a stable and a development branch, just as in your example. Bugfixes go to stable and features go to development. Once a week someone merges the stable branch into dev to bring all bugfixes into dev. Now someone fixes a bug on stable which shall be fixed differently on dev, here shown as z and z':
dev:    ... a --- b --- c --- d --- z'
           /             /
stable: ... --- x ----- y --- z
The z and z' are logically the same fix, but syntactically they are different. With svn you could block z from being merged into dev (effective whenever the merge happens in the future). With a three-way merge this seems not easily possible. A merge from stable will reproduce the z in dev (the ancestor is currently y). So you will have to produce a new common ancestor of dev and stable right on the spot, meaning to do a merge and somehow remove z from it.
You might ask why z and z' can be different in the first place. Well, it may be that the coding on dev has changed due to new library functions, new compiler capabilities, or a new design of something.
Yeah, this situation is not uncommon with long-lived branches where a bugfix in version 1.x looks quite different when you forward-port it to the 2.x and 3.x series. With Git or Mercurial you do the merge of z' and z and then resolve the conflicts in favour of z'. There's no direct way to block a changeset from "flowing" into a given branch -- since changesets don't "flow" anywhere when you merge. With a simple scenario like the above it's not a big problem: you merge stable into dev after every one or two changes on stable. So you can do the merge, revert back to dev and then port the bugfix by hand:

    hg update dev
    hg merge stable
    hg revert --all --rev dev
    # now do the bugfix by hand
    hg commit -m "merge with stable, hand-ported bugfix #123"

The advantage of doing such a "dummy merge" where you throw away all changes from the other branch is that you record the merge in the history. So future three-way merges will not re-merge this change. -- Martin Geisler Mercurial links: http://mercurial.ch/

Hi! On 01.04.12 12:09, Martin Geisler wrote:
With Git or Mercurial you do the merge of z' and z and then resolve the conflicts in favour of z'. There's no direct way to block a changeset from "flowing" into a given branch -- since changesets don't "flow" anywhere when you merge.
With a simple scenario like the above it's not a big problem: you merge stable into dev after every one or two changes on stable. So you can do the merge, revert back to dev and then port the bugfix by hand:
This means the person doing the fix has to do a merge, too. That means everyone on the team has to be educated to do proper merges, right? Ok, merging should be part of daily work, especially with hg/git, but this is not true for development teams in general.
The advantage of doing such a "dummy merge" where you throw away all changes from the other branch is that you record the merge in the history. So future three-way merges will not re-merge this change.
Correct. But it is a somewhat more complex workflow compared to a svn blocking merge. So a disadvantage of git/hg compared to svn. A rather strong one IMHO, but maybe not for boost development. Frank

Frank Birbacher <bloodymir.crap@gmx.net> writes: Hi again :)
On 01.04.12 12:09, Martin Geisler wrote:
With Git or Mercurial you do the merge of z' and z and then resolve the conflicts in favour of z'. There's no direct way to block a changeset from "flowing" into a given branch -- since changesets don't "flow" anywhere when you merge.
With a simple scenario like the above it's not a big problem: you merge stable into dev after every one or two changes on stable. So you can do the merge, revert back to dev and then port the bugfix by hand:
This means the person to do the fix has to do a merge, too.
Not necessarily: I can commit a bugfix to stable without being the one who merges stable into dev. In practice, I'm probably the one who is in the best position to do the merge since I understand my bugfix and so I can decide how to apply it on dev. But I can delay the merge or ask someone else to do it instead.
This means to educate everyone on the team to do proper merges, right? Ok, merging should be part of daily work, especially with hg/git, but this is not true for development teams in general.
If you're using branches (with any system) then I think you should educate the entire team about them. But you're right that people can use a CVCS for years without knowing about branches.

That's a big change with DVCS: branches are first-class concepts in the system and people use them every day. This means that people stop being afraid of merges. But even with a CVCS people are normally not afraid of merges: svn update is *merging* your changes into the latest changes on the server. Even svn commit is doing a merge -- this time it's a server-side merge between files you touched and files I touched. With a DVCS those merges are explicit and so people are more aware of them. But if you could do 'svn update' and 'svn commit' before, then you can also run 'hg merge' or 'git pull' -- it's the same thing.

Because people learn about branches and merges with a DVCS they suddenly realize that they can pull off all sorts of "advanced" strategies like keeping long-term bugfix branches around. They are not really advanced, since it's just more of the same everyday commands that people are used to at that point: 'hg update default', 'hg merge stable'.
The advantage of doing such a "dummy merge" where you throw away all changes from the other branch is that you record the merge in the history. So future three-way merges will not re-merge this change.
Correct. But it is a somewhat more complex workflow compared to a svn blocking merge. So a disadvantage of git/hg compared to svn. A rather strong one IMHO, but maybe not for boost development.
I don't know enough about how you organize the development to comment on that. I think you should try it out on a subsystem first and see if this is a problem. -- Martin Geisler Mercurial links: http://mercurial.ch/

Hi! On 02.04.12 10:49, Martin Geisler wrote:
Frank Birbacher <bloodymir.crap@gmx.net> writes:
This means the person to do the fix has to do a merge, too.
Not necessarily: I can commit a bugfix to stable without being the one who merges stable into dev.
In practice, I'm probably the one who is in the best position to do the merge since I understand my bugfix and so I can decide how to apply it on dev. But I can delay the merge or ask someone else to do it instead.
Well, if there is one guy doing the merge of stable into dev and everyone on the team would send an email request stating which changes not to merge, then this guy will have a hard time. In svn the blocking merge is recorded in the system. So whenever anyone merges stable into dev, svn will know which changes not to merge. How does hg/git help in communicating this? Imagine a branch where many developers do bugfixes and someone will once a week merge things into dev. How shall he know which changes to skip? I can hardly imagine how such a workflow would be feasible with hg/git. Frank

Frank Birbacher <bloodymir.crap@gmx.net> writes:
Hi!
On 02.04.12 10:49, Martin Geisler wrote:
Frank Birbacher <bloodymir.crap@gmx.net> writes:
This means the person to do the fix has to do a merge, too.
Not necessarily: I can commit a bugfix to stable without being the one who merges stable into dev.
In practice, I'm probably the one who is in the best position to do the merge since I understand my bugfix and so I can decide how to apply it on dev. But I can delay the merge or ask someone else to do it instead.
Well, if there is one guy to do the merge of stable into dev and everyone on the team would send an email request that states which changes not to merge then this guy will have a hard time.
In svn the blocking merge is recorded in the system. So whenever anyone merges stable into dev, svn will know which changes not to merge. How does hg/git help in communicating this? Imagine a branch where many developers do bugfixes and someone will once a week merge things into dev. How shall he know which changes to skip?
You cannot skip changes -- you really must merge them from stable into dev (really 'default' when using Mercurial). What you can do is to merge-and-ignore:

    hg update default
    hg merge stable
    hg revert --all --rev default

The last step reverts all files to how they looked on default. You can then re-implement the bugfix the right way and commit the merge. In my experience, there aren't that many bugfix commits and/or the branches haven't drifted that far away from each other. The whole workflow obviously builds on the idea that you can normally merge stable into default and benefit from the merge, i.e., that the branches are close enough for this to make sense so that you don't get enormous merge conflicts on every stable->default merge. We've used this workflow for several years in Mercurial itself and it's very smooth. You can read a bit about it here: http://mercurial.selenic.com/wiki/StandardBranching

I'll be on vacation for a week, so I'll probably reply very slowly, if at all. You're welcome to ask about this on mercurial@selenic.com where many other guys will be able to talk about this for hours :) -- Martin Geisler Mercurial links: http://mercurial.ch/

on Mon Apr 02 2012, Martin Geisler <mg-AT-aragost.com> wrote:
In svn the blocking merge is recorded in the system. So whenever anyone will merge the stable into dev svn will know which changes not to merge. How does hg/git help in communicating this? Imagine a branch where many developers do bugfixes and someone will once a week merge things into dev. How shall he know which changes to skip?
You cannot skip changes -- you really must merge them from stable into dev (really 'default' when using Mercurial). What you can do is to merge-and-ignore:
hg update default
hg merge stable
hg revert --all --rev default
For completeness, in Git this is spelled

    git merge -s ours stable

-- Dave Abrahams BoostPro Computing http://www.boostpro.com

Frank Birbacher wrote:
[...] So we will need a more complex example, I think.
using Subversion 1.7.4:
# setup:
cd /tmp
svnadmin create testrepo
svn mkdir file:///tmp/testrepo/{trunk,branches} -m "default dirs"
svn co file:///tmp/testrepo/trunk workingcopy
cd workingcopy
# create branch:
svn cp ^/trunk ^/branches/my-branch -m "branch"
# edit branch:
svn switch ^/branches/my-branch
echo "new" > new.txt
svn add new.txt
svn ci -m "new file"
# switch back to trunk and merge:
svn switch ^/trunk
svn merge ^/branches/my-branch
# revise working copy before commit, or revert and try again
# svn revert -R .
svn ci -m "merged branch"
# REPEATING:
svn switch ^/branches/my-branch
echo "further" >> new.txt
svn ci -m "added content"
# merging:
svn switch ^/trunk
# optional inspection:
# svn mergeinfo --show-revs=eligible ^/branches/my-branch
svn merge ^/branches/my-branch
# optional inspection:
# svn pg svn:mergeinfo .
svn ci -m "merged branch again"
Running the above as a script takes some time:

real    0m10.433s
user    0m0.160s
sys     0m0.253s
First I want to point out that such a microbenchmark is probably not very reliable, in part because this is a purely local repository. That said, I think it's incredible that svn takes ten seconds to perform such trivial operations on such a tiny local test repository. Perhaps you should repeat your measurement to make sure this wasn't an outlier. For the sake of completeness I've still written what I believe to be the closest git equivalent, which you can find at the bottom of this email. Again it takes the same number of commands but with a larger fraction of them in the initial setup (suggesting that the real work will ultimately take fewer commands in git), and again fewer keystrokes per line on average. Time on an iMac from 2007:

real    0m0.319s
user    0m0.045s
sys     0m0.124s

This is fairly consistent over multiple runs.
BTW, as I understand git: the working copy contains a hidden directory that stores all of the repository data. And the checkout will be placed at top-level. Is there a way to checkout multiple branches at the same time?
Yes, as Martin explained that can be done quite conveniently with a second working copy. -Julian

-----
# setup:
# cd /tmp
mkdir testrepo
cd testrepo
git init
# creating standard branches (branches don't get created without content)
# (note: co and ci here are common aliases for checkout and commit)
git co -b master
echo "old" > old.txt
git add old.txt
git ci -m "first commit"
git co -b develop
# create and switch to a branch:
git co -b branch
# edit branch:
echo "new" > new.txt
git add new.txt
git ci -m "new file"
# switch back to develop and merge+commit:
git co develop
git merge branch
# insert hypothetical conflict resolution here
# REPEATING:
git co branch
echo "further" >> new.txt
git ci -am "added content"
# switch and merge+commit:
git co develop
git merge branch
# inspection: git status, git log, etcetera

On Thu, Mar 29, 2012 at 11:41 AM, Julian Gonggrijp <j.gonggrijp@gmail.com> wrote:
Yes, as Martin explained that can be done quite conveniently with a second working copy.
You can also have multiple branches in one working copy. Git calls that concept "tracked branches". You can have as many branches in a (local) repository as you want and some of them, if not all, can be tracking branches, which means that when they are created, they point to a remote branch in the repo you've cloned from (aka 'origin') or indeed any other, and pull in changes from there. Cheers, Stephan

Sorry, I have to filibuster myself here... On Thu, Mar 29, 2012 at 1:57 PM, Stephan Menzel <stephan.menzel@gmail.com> wrote:
You can also have multiple branches in one working copy. Git calls that concept "tracked branches". You can have as many branches in a (local) repository as you want and some of them, if not all, can be tracking branches, which means when they created, they point to a remote branch in the repo you've cloned from (aka 'origin') or indeed any other, and pull in changes from there.
The reason why I bring that up is because in all this discussion here, there's a lot of comparing the handling of branches in SVN and git and what is easier and whatnot. This is not really a sensible thing to do, as the very concept of 'branches', despite equal naming, is in fact very different in git and svn. So different, that direct comparison does not make sense in my opinion, as the above example shows. Cheers, Stephan

Why is that bad? At my work we are using svn and I have like 8 branches (not all of them are currently active, I confess.) I put my branches not into /branches, but into /users/fbirbacher. This way no one has to care about them.
Because once you push, it's public, and you cannot rewrite public history. That means you cannot fix all the "oops" commits before merging with production.
With git/hg, when you make a mistake, you simply cancel your local merge and redo it until you get it right, then you push.
This suggests, first, that mistakes you make in svn cannot be repaired, and second, that you will spot every mistake in git/hg before you push. As I understand it, once you push to a public repo and then discover a mistake, your mistakes will be just as visible as with svn. I agree there is a chance to find some errors before publishing, but with svn I easily spot errors in commands I run on my working copy before committing.
You're right. I forgot that merges are local with svn and that the result isn't committed yet. I was a bit heated about the discussion and went emotional, which is a bad thing to do. Anyway, what I was really after is the inability to fix "oops" commits before they go public. To be honest I think all the pros/cons that had to be made have been made now, and eventually it's the people who really work on boost that should decide, because they're the first affected. Philippe

On Wed, Mar 21, 2012 at 3:56 AM, Joel de Guzman <joel@boost-consulting.com> wrote:
I'm not a Git, or DVCS, fanatic. I'll just use whatever tool is required to get the job done. I'm OK with SVN. It works. If you read my previous comments on this topic (from last year when this was heavily discussed), you'll see that I question the Git move. To me, and I mentioned this before: "A Good Craftsman Never Blames His Tools". I know a very good luthier who crafts world-class guitars using only a pocket knife. And I take that to heart with crafting code as well. I find it funny when people blame SVN, the C++ compiler, etc, etc, for inadequacies in order to flaunt these new shiny tools (Git, Java in the 90s, or name-your-new-compiler-here).
A good craftsman doesn't blame his tools, he blames himself for picking the wrong tool. ;) You say SVN works. For you. Or not. Others say it doesn't work as well as they'd like. For them. Numerous reasons have been given in this and other threads. -- Olaf

On 3/21/12 5:43 PM, Olaf van der Spek wrote:
On Wed, Mar 21, 2012 at 3:56 AM, Joel de Guzman <joel@boost-consulting.com> wrote:
I'm not a Git, or DVCS, fanatic. I'll just use whatever tool is required to get the job done. I'm OK with SVN. It works. If you read my previous comments on this topic (from last year when this was heavily discussed), you'll see that I question the Git move. To me, and I mentioned this before: "A Good Craftsman Never Blames His Tools". I know a very good luthier who crafts world-class guitars using only a pocket knife. And I take that to heart with crafting code as well. I find it funny when people blame SVN, the C++ compiler, etc, etc, for inadequacies in order to flaunt these new shiny tools (Git, Java in the 90s, or name-your-new-compiler-here).
A good craftsman doesn't blame his tools, he blames himself for picking the wrong tool. ;) You say SVN works. For you. Or not. Others say it doesn't work as well as they'd like. For them. Numerous reasons have been given in this and other threads.
LOL :-) Did you even read the rest of my post? Regards, -- Joel de Guzman http://www.boostpro.com http://boost-spirit.com

Edward Diener <eldiener@tropicsoft.com> writes:
On 3/20/2012 7:03 AM, Julian Gonggrijp wrote: ... snip
Well, allow me to present some fair reasoning to you.
With regard to git versus svn: I think enough fair reasons have been given why git (or a DVCS in general) is better than svn. I'm not going to repeat those arguments here.
I have never heard a single technical argument, in all the endless mentions of Git among the people riding that bandwagon, why Git is better than SVN, or even why any DVCS is better than a centralized SCCS. I consider this whole move to Git and/or DVCS among "hip" programmers little more than a move to conform with what others are doing and feel "cool".
I'm trying to sell Mercurial consulting, so I've also been searching for such an argument -- I need some concrete benefits to show to clients!

One very concrete and important difference between Mercurial and Subversion is support for renames while merging. SVN 1.5 to 1.7 has a bug that makes it difficult to merge a branch with renamed files: http://stackoverflow.com/a/2486662/110204

You might say "That doesn't really have anything to do with DVCS" and you would be right! There's no reason why a centralized tool couldn't be super smart about merging. I think it's an indirect effect: a DVCS *must* be good at merging since there are lots of branches. Centralized tools can get away with having bugs here and there since there are fewer branches.

As I see it, the main effects of a DVCS (Git or Mercurial, doesn't matter much here) are:

* very fast access to data -- it's right there on your harddrive

* private branches -- you can work offline, don't have to share immediately

Those are the immediate consequences of the D in DVCS. But what I like to stress are the secondary effects:

* more useful history: since you have fast access to the history, people tend to use it more. Things like the bisect command in Git and Mercurial are a good example: it lets you do a binary search on your history to see when a bug was introduced. You could certainly do this with a centralized tool, but it's more expensive since you have to checkout data from a central server.

* smaller commits: when commit takes a fraction of a second, people tend to commit more often. This makes the commits focused: there will be one commit with a fix to a comment, another commit with a new feature, and another commit with an update to a test. Smaller commits are easier to review and this leads to *better code*.

* better commits: when you don't have to share your commit with the rest of the world, you suddenly have a chance to refine it.
Both Git and Mercurial allow you to edit your local commits before pushing them to a public repository. This means fewer followup commits where you actually add the new file you intended to commit before... I hope this helps a bit. -- Martin Geisler Professional Mercurial support by aragost Trifork http://www.aragost.com/mercurial/

Hi,
I have never heard a single technical argument, in all the endless mentions of Git among the people riding that bandwagon, why Git is better than SVN, or even why any DVCS is better than a centralized SCCS.
OK, I've been using rcs, cvs, vss, svn, mercurial and git. All except vss were a real step forward at their time.

- rcs simply gave me the ability to track changes for myself - I don't remember the time before rcs, because I was too young then. I committed often, because it was rather cheap to do so, and the first time I needed to revert a change (which happened to me more often in those days than now, but it is still not over yet), I really began to love it.

- cvs gave me the ability not only to track my own changes, but also to collaborate with others who even worked on other computers. When the repository was local, I committed more often than when the repository was remote. Working remotely worked well, but slower. So an update or commit was more expensive. That also was the time when I learned the term "merging hell". Everybody said you should commit as often as possible, but no one did it as often as with rcs.

- let's not talk about vss, I don't find enough friendly words for it. The best thing about it is that, as far as I can see, no one uses it any more.

- with time I of course also noticed the glitches in cvs, like e.g. untracked directories and the fact that it gets slower with an increasing history - just like rcs, etc. That was when svn kicked in. It solved quite a few of those problems and I really thought this was cvs done right. Actually svn still is the best central VCS that I have seen.

Then some people told me about DVCS and I thought - hey, I am always online when I work, so what the heck shall I do with that. I thought SVN did the job well, which actually was true. Then this submarine project came along. One of the managers in the company I was working for at the time became adventurous and asked us to try to write a new software to replace the existing one. We did not have an svn server and not even a spare machine where we could have installed one. So we simply took the first DVCS that came along. That was mercurial.
I did not want to go back to svn afterwards, because mercurial lets me work almost as efficiently as with rcs, but with all the advantages of collaboration. We did not need a central server, because as a team of two we could exchange changes directly. The colleague I was working with actually wanted to go back to svn, because he was using eclipse and found that the mercurial plugin for eclipse was in a very unstable state at that time. We were not able to explore all the features of mercurial before the project became an official one and we switched back to the company's central svn.

With git I have worked mostly at home on my own projects. It gives me all the advantages I have learned from mercurial. For an additional backup I simply push to another machine. I clone locally to experiment without cluttering my main repository with hundreds of old branches. I commit often, I experiment a lot. And I can do all of that on the train, at my parents', when taking a break from hiking up in the Swiss mountains, etc. So I also noticed that for private projects I am not always online.

There was also a project where we planned to use git professionally - mercurial would also have been OK, but there was more git experience in the team. We tried to establish a branch-based development process. What the business guys see as a "change" usually is an eventually quite huge set of changes to the developer. So every business change would become a branch. Whenever a change was mature enough, we would merge it with the next release branch and rebase all the others that were still in work. For the next release we only had to take the current state of the next release branch. That project stayed with the company's central svn due to a management decision, and the branch-based development process never made it into production, though we had shown the whole process to work.
Theoretically you can do the same with svn as well, and I have seen something similar based on cvs (that information actually was the input that made us think about it). The bad side is that neither of them is really a big king when it comes to merging. Especially the ability to rebase a branch with git makes this approach very comfortable and helps a lot to prevent being trapped in merging hell.

Now, those are more organizational advantages of DVCS, not technical ones. One technical advantage I noticed with git over svn (I don't remember if mercurial has that as well) is that it also tracks merges. So I can see in the history that a specific branch has already been merged with another one. With the branch-based development process I have described above, this feature is very useful.

In the end, when you ask me: I could live with svn, but I'd prefer a DVCS. Which one, I don't care. I have a bit more experience with git, but that is not an issue. I'd also learn to use bazaar, monotone, et al. Christof -- okunah gmbh Software nach Maß Werner-Haas-Str. 8 www.okunah.de 86153 Augsburg cd@okunah.de Registergericht Augsburg Geschäftsführer Augsburg HRB 21896 Christof Donat UStID: DE 248 815 055
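The branch-per-change workflow with merge tracking described above can be sketched in a throwaway repository (a hedged illustration, not taken from the original posts; the branch names "next-release" and "change-1234" are made up):

```shell
# Sketch of the branch-per-change workflow: one business change = one
# branch, merged into the release branch when mature. Throwaway repo.
set -e
dir=$(mktemp -d)
cd "$dir"
git init -q
git config user.email "you@example.com"
git config user.name "You"
echo base > file.txt
git add file.txt
git commit -qm "initial commit"
git branch next-release
git checkout -qb change-1234            # one business change = one branch
echo feature >> file.txt
git commit -qam "implement change 1234"
git checkout -q next-release
git merge --no-ff -m "merge change 1234" change-1234
# Git records the merge commit itself, so the history shows that the
# change branch has already been merged:
git log --oneline --merges
```

With `--no-ff` the merge is recorded as its own commit even where a fast-forward would be possible, which is what makes the "has this branch been merged yet?" question answerable from the history alone.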

on Tue Mar 20 2012, Edward Diener <eldiener-AT-tropicsoft.com> wrote:
On 3/20/2012 7:03 AM, Julian Gonggrijp wrote: ... snip
Well, allow me to present some fair reasoning to you.
With regard to git versus svn: I think enough fair reasons have been given why git (or a DVCS in general) is better than svn. I'm not going to repeat those arguments here.
I have never heard a single technical argument, in all the endless mentions of Git among the people riding that bandwagon, why Git is better than SVN, or even why any DVCS is better than a centralized SCCS. I consider this whole move to Git and/or DVCS among "hip" programmers little more than a move to conform with what others are doing and feel "cool".
Wait for Beman's talk at BoostCon/C++Now (http://cppnow.org/session/moving-boost-to-git/). Beman is among the last people I'd accuse of being preoccupied with what's "hip" and "cool"
I am perfectly willing to read well-chosen technical arguments but not from people already sold on one side or the other. But I really despair of anyone being able to present such arguments in the atmosphere created by Git fanatics and DVCS fanatics. The only thing I have gotten from all this is "I've tried it, I like it, and therefore its superior".
Feel free, anyone, to point me to a purely technical discussion, article, whatnot, explaining the practical reasons why using a DVCS, or Git, is more productive and more pleasurable than using a centralized SCCS like Subversion.
A purely technical discussion is unlikely to be able to give you much insight into pleasure. The biologists are still trying to plumb the depths of that one :-) -- Dave Abrahams BoostPro Computing http://www.boostpro.com

On 3/21/2012 8:18 AM, Dave Abrahams wrote:
on Tue Mar 20 2012, Edward Diener<eldiener-AT-tropicsoft.com> wrote:
On 3/20/2012 7:03 AM, Julian Gonggrijp wrote: ... snip
Well, allow me to present some fair reasoning to you.
With regard to git versus svn: I think enough fair reasons have been given why git (or a DVCS in general) is better than svn. I'm not going to repeat those arguments here.
I have never heard a single technical argument, in all the endless mentions of Git among the people riding that bandwagon, why Git is better than SVN, or even why any DVCS is better than a centralized SCCS. I consider this whole move to Git and/or DVCS among "hip" programmers little more than a move to conform with what others are doing and feel "cool".
Wait for Beman's talk at BoostCon/C++Now (http://cppnow.org/session/moving-boost-to-git/). Beman is among the last people I'd accuse of being preoccupied with what's "hip" and "cool"
I will be glad to listen to Beman's talk when it appears online.
I am perfectly willing to read well-chosen technical arguments but not from people already sold on one side or the other. But I really despair of anyone being able to present such arguments in the atmosphere created by Git fanatics and DVCS fanatics. The only thing I have gotten from all this is "I've tried it, I like it, and therefore its superior".
Feel free, anyone, to point me to a purely technical discussion, article, whatnot, explaining the practical reasons why using a DVCS, or Git, is more productive and more pleasurable than using a centralized SCCS like Subversion.
A purely technical discussion is unlikely to be able to give you much insight into pleasure. The biologists are still trying to plumb the depths of that one :-)
If someone technically savvy asks you what the advantages of programming with C++ are, do you answer that it gives you pleasure, or do you enumerate the technical advantages which make it easier to use in programming tasks? All I have been asking for is some good technical arguments for why using a DVCS is easier, more flexible, and superior to using a centralized SCCS. What I keep getting back, unfortunately, is a philosophy, or a personal preference, or a try-it-you'll-like-it answer. I will be looking forward to understanding the technical arguments from Beman's talk.

On 21 March 2012 23:11, Edward Diener <eldiener@tropicsoft.com> wrote:
All I have been asking for is some good technical arguments for why using a DVCS is easier, more flexible, and superior to using a centralized SCCS. What I keep getting back, unfortunately, is a philosophy, or a personal preference, or a try-it-you'll-like-it answer. I will be looking forward to understanding the technical arguments from Beman's talk.
It really isn't that easy to explain, with lots of aspects (not just distributed vs. centralized). These might help: http://www.ericsink.com/vcbe/html/dvcs_advantages.html http://www.ericsink.com/vcbe/html/dvcs_weaknesses.html It should be noted that these chapters are from a book which is partly promoting another(!) new DVCS, so the author is perhaps a little biased, but they look pretty fair to me.

http://www.ericsink.com/vcbe/html/dvcs_advantages.html http://www.ericsink.com/vcbe/html/dvcs_weaknesses.html
IMHO this was the most constructive post so far. Thanks a lot. Very interesting information for both side indeed. Julien

on Wed Mar 21 2012, Edward Diener <eldiener-AT-tropicsoft.com> wrote:
Feel free, anyone, to point me to a purely technical discussion, article, whatnot, explaining the practical reasons why using a DVCS, or Git, is more productive and more *pleasurable* than using a centralized SCCS like Subversion.
A purely technical discussion is unlikely to be able to give you much insight into pleasure. The biologists are still trying to plumb the depths of that one :-)
If someone technically savvy asks you what the advantages of programming with C++ are, do you answer that it gives you pleasure or do you enumerate the technical advantages which make it easier to use in programming tasks ?
All I have been asking for is some good technical arguments by which using a DVCS is easier, more flexible, superior than using a centralized SCCS.
No, actually you mentioned pleasure. -- Dave Abrahams BoostPro Computing http://www.boostpro.com

Julian Gonggrijp <j.gonggrijp@gmail.com> writes: Hi, This discussion was mentioned on the Mercurial mailinglist. I'm a Mercurial developer, but let me start by saying that you will be very happy with both tools. DVCS is a great step up from SVN and people rarely look back.
FWIW, I am the last person who will oppose such a change. But currently, noone presented a fair reasoning in favor for git,
Well, allow me to present some fair reasoning to you.
With regard to git versus svn: I think enough fair reasons have been given why git (or a DVCS in general) is better than svn. I'm not going to repeat those arguments here.
With regard to git versus mercurial: given that it's probably a good idea to switch to a DVCS, git and mercurial seem to be the primary candidates. I think everyone in this thread should be more willing to admit that they're close competitors. In many ways they're about equally "good", and when they aren't the differences are quite moot:
- mercurial has native support for Windows, but git is also fairly well supported on Windows and seems to be rapidly improving;
Yes, both tools work on Windows.
- git allows you to edit history while mercurial doesn't, but which you like better is a matter of preference;
This statement is *false*. I'm not sure where you read that. Both tools let you modify the history, and with the same consequences. Editing history really means creating a new history and throwing away the old one, and both support this. There is a non-technical difference, though: Mercurial has a *bias* towards immutable history in its command set. Technically it can throw away your history just like Git can.
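As a minimal sketch of what "editing history" means in practice (not from the original post; file names and messages are made up), here is the git side of it; Mercurial offers equivalents through its extensions:

```shell
# "Editing" the last commit really creates a new commit and discards
# the old one; the history afterwards contains no trace of the draft.
set -e
dir=$(mktemp -d)
cd "$dir"
git init -q
git config user.email "you@example.com"
git config user.name "You"
echo draft > notes.txt
git add notes.txt
git commit -qm "first draft"
echo fixup >> notes.txt
git add notes.txt
git commit -q --amend -m "first draft, revised"   # replaces the previous commit
git log --oneline                                 # history shows a single commit
```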
- git seems to be more "powerful" and less susceptible to errors, while mercurial is said to have better documentation -- while this doesn't make either objectively better than the other in the first place, they're also both catching up on their weaker side;
I fully believe that Git and Mercurial are equally powerful. Their internal concepts are very, very similar. You can map between the tools in a nearly loss-less way (see the hg-git extension). To me, arguing that Git is more powerful since it exposes more commands by default ('git rebase -i' is the typical example) is like arguing that Perl is more powerful than Python since Perl has a built-in regex operator -- we all know that both languages are Turing complete.
- they are built with very different architectures (many executables written in C versus a monolithic program in Python), but in the end both work well enough and both seem extensible enough for most purposes.
Both are great tools and both will work well for a big and complex project like Boost. Mercurial has some new features which aren't found in other systems:

- Revision sets: a query language that lets you select commits. An example from [1] is

  hg log -r "1.3::1.5 and keyword(bug) and file('hgext/*')"

  This syntax can be used with all commands that take a -r flag.

- File sets: similar to revsets, but for selecting files. An example from [2] would be

  hg revert "set:copied() and binary() and size('>1M')"

  I think formulating the equivalent query with find(1) would be hard.

- Heavy-weight branches: Mercurial has named branches which are built into the changesets. This lets you track exactly where a branch started and where it ended. These branches are very *different* from branches in Git: they are permanent and global. In this way, named branches are much more similar to branches in CVS and SVN. The closest equivalent of Git's branch model is called bookmarks in Mercurial. See the glossary[3].

[1]: http://www.selenic.com/mercurial/hg.1.html#revset
[2]: http://www.selenic.com/mercurial/hg.1.html#fileset
[3]: http://www.selenic.com/mercurial/hg.1.html#glossary

-- Martin Geisler Professional Mercurial support by aragost Trifork http://www.aragost.com/mercurial/

The two major surveys I know contradict this.
http://www.eclipse.org/org/community_survey/Eclipse_Survey_2011_Report.pdf, page 16
http://blogs.forrester.com/application_development/2010/01/forrester-databyte-developer-scm-tool-adoption-and-use.html
Contradict what?
Well, it contradicts your claim that 'Git is winning in the marketplace', which is total nonsense if you look at the surveys (SVN 50% vs. GIT 13% 'marketshare').
If you read the thread carefully, you'll see I was talking about the DVCS marketplace (in fact, just about Git vs Mercurial), where SVN is not a contender.
Ahh, so it's already a done deal to switch Boost to a DVCS? I was not aware of this. Interesting...
Please tone down the 'tude, friend.
The fact that you were referring to DVCS only was not obvious to me, even after rereading the thread from a-z. Regards Hartmut --------------- http://boost-spirit.com http://stellar.cct.lsu.edu

on Mon Mar 19 2012, Daryle Walker <darylew-AT-hotmail.com> wrote:
Git has a competitor called Mercurial? If we're moving to a Distributed-VCS, should we go to Mercurial instead of Git?
IMO, no. There are several reasons, but the main one is that Git is winning in the marketplace. -- Dave Abrahams BoostPro Computing http://www.boostpro.com

On 3/19/2012 6:15 PM, Dave Abrahams wrote:
on Mon Mar 19 2012, Daryle Walker <darylew-AT-hotmail.com> wrote:
Git has a competitor called Mercurial? If we're moving to a Distributed-VCS, should we go to Mercurial instead of Git?
IMO, no. There are several reasons, but the main one is that Git is winning in the marketplace.
What reasons exactly? "Git is the most powerful versioning system today", "is increasingly popular", "Git has built-in support for more advanced features", "is more popular in the open-source world", "Git is winning in the marketplace"? Sounds more like propaganda, and it isn't even convincing.

On 19/03/2012, at 18:24, Sergiu Dotenco wrote:
On 3/19/2012 6:15 PM, Dave Abrahams wrote:
on Mon Mar 19 2012, Daryle Walker <darylew-AT-hotmail.com> wrote:
Git has a competitor called Mercurial? If we're moving to a Distributed-VCS, should we go to Mercurial instead of Git?
IMO, no. There are several reasons, but the main one is that Git is winning in the marketplace.
What reasons exactly? "Git is the most powerful versioning system today", "is increasingly popular", "Git has built-in support for more advanced features", "is more popular in the open-source world", "Git is winning in the marketplace"? Sounds more like a propaganda, which isn't even convincing.
The community around git completely overshadows that of any other DVCS. This is not propaganda but a fact. Popularity is the winning decision factor. Can you convince us why not?

On 3/19/2012 7:46 PM, Bruno Santos wrote:
On 19/03/2012, at 18:24, Sergiu Dotenco wrote:
On 3/19/2012 6:15 PM, Dave Abrahams wrote:
on Mon Mar 19 2012, Daryle Walker <darylew-AT-hotmail.com> wrote:
Git has a competitor called Mercurial? If we're moving to a Distributed-VCS, should we go to Mercurial instead of Git?
IMO, no. There are several reasons, but the main one is that Git is winning in the marketplace.
What reasons exactly? "Git is the most powerful versioning system today", "is increasingly popular", "Git has built-in support for more advanced features", "is more popular in the open-source world", "Git is winning in the marketplace"? Sounds more like a propaganda, which isn't even convincing.
The community around git completely overshadows any other DCVS. This is not propaganda but a fact. Popularity is the winning decision factor. Can you convince us why not?
The alleged fact is probably a fact only if you mean the Linux (kernel) community. Besides, why is popularity important considering that both version control systems are comparable?

on Mon Mar 19 2012, Sergiu Dotenco <sergiu.dotenco-AT-gmail.com> wrote:
On 3/19/2012 7:46 PM, Bruno Santos wrote:
The community around git completely overshadows any other DCVS. This is not propaganda but a fact. Popularity is the winning decision factor. Can you convince us why not?
The alleged fact is probably a fact only iff you mean the Linux (kernel) community. Besides, why is the popularity important considering that both version control systems are comparable?
More attention from the community, more support, more tools work with it, more money behind it, more people will be familiar with it in the long run, etc., etc. -- Dave Abrahams BoostPro Computing http://www.boostpro.com

On 3/19/2012 9:34 PM, Dave Abrahams wrote:
on Mon Mar 19 2012, Sergiu Dotenco <sergiu.dotenco-AT-gmail.com> wrote:
On 3/19/2012 7:46 PM, Bruno Santos wrote:
The community around git completely overshadows any other DCVS. This is not propaganda but a fact. Popularity is the winning decision factor. Can you convince us why not?
The alleged fact is probably a fact only iff you mean the Linux (kernel) community. Besides, why is the popularity important considering that both version control systems are comparable?
More attention from the community, more support, more tools work with it, more money behind it, more people will be familiar with it in the long run, etc., etc.
So, this is the reason why Git developers treat Windows as a second class citizen, i.e., there's no official Windows support? Also, the last time I checked, TortoiseGit looked like a major hack compared to TortoiseHg.

On 3/19/2012 11:23 PM, Sergiu Dotenco wrote:
On 3/19/2012 9:34 PM, Dave Abrahams wrote:
on Mon Mar 19 2012, Sergiu Dotenco <sergiu.dotenco-AT-gmail.com> wrote:
On 3/19/2012 7:46 PM, Bruno Santos wrote:
The community around git completely overshadows any other DCVS. This is not propaganda but a fact. Popularity is the winning decision factor. Can you convince us why not?
The alleged fact is probably a fact only iff you mean the Linux (kernel) community. Besides, why is the popularity important considering that both version control systems are comparable?
More attention from the community, more support, more tools work with it, more money behind it, more people will be familiar with it in the long run, etc., etc.
So, this is the reason why Git developers treat Windows as a second class citizen, i.e., there's no official Windows support? Also, the last time I checked, TortoiseGit looked like a major hack compared to TortoiseHg.
TortoiseHg is also available for Gnome/Nautilus. I'm not aware of similar Git tools for Linux though.

On Mon, 19 Mar 2012 16:34:06 -0400 Dave Abrahams <dave@boostpro.com> wrote:
More attention from the community, more support, more tools work with it, more money behind it, more people will be familiar with it in the long run, etc., etc.
Both hg and git have large enough communities, so in this particular case popularity alone cannot be the deciding factor. Wrt support and tools, you're unlikely to notice much difference due to diminishing returns.

Dave Abrahams <dave@boostpro.com> writes:
More attention from the community, more support, more tools work with it, more money behind it, more people will be familiar with it in the long run, etc., etc.
Example: compare the number of commercial tools for working with Git to those available for working with Mercurial. On OS X alone I can use SourceTree, Tower, GitHub, GitBox, GitDiary, QuickHub, Octopus - and that's only the commercial tools! By contrast, if I search for "mercurial" in the AppStore, I can only use SourceTree (which works with both). -- John Wiegley BoostPro Computing, Inc. http://www.boostpro.com

On 3/19/2012 6:22 PM, John Wiegley wrote:
Dave Abrahams<dave@boostpro.com> writes:
More attention from the community, more support, more tools work with it, more money behind it, more people will be familiar with it in the long run, etc., etc.
Example: compare the number of commercial tools for working with Git to those available for working with Mercurial. On OS X alone I can use SourceTree, Tower, GitHub, GitBox, GitDiary, QuickHub, Octopus - and that's only the commercial tools!
The argument could also be made that the number of alternate Git tools available speaks to the lack of good tools for it, as people keep trying to make the "better" tool for dealing with it. Not sure how you searched, but mercurial has a page listing some of the tools available for it <http://mercurial.selenic.com/wiki/OtherTools>. Seems like a rich set of external tools to me.
By contrast, if I search for "mercurial" in the AppStore, I can only use SourceTree (which works with both).
I don't see how searching in a mobile device application store is relevant. Did you mean something else? -- -- Grafik - Don't Assume Anything -- Redshift Software, Inc. - http://redshift-software.com -- rrivera/acm.org (msn) - grafik/redshift-software.com -- 102708583/icq - grafikrobot/aim,yahoo,skype,efnet,gmail

on Tue Mar 20 2012, Rene Rivera <grafikrobot-AT-gmail.com> wrote:
By contrast, if I search for "mercurial" in the AppStore, I can only use SourceTree (which works with both).
I don't see how searching in a mobile device application store is relevant. Did you mean something else?
Apple's AppStore application for MacOS is for downloading MacOS applications. If you want to download iOS apps you need to use iTunes. Makes perfect sense, I know... ;-) -- Dave Abrahams BoostPro Computing http://www.boostpro.com

On 3/20/2012 3:34 AM, Dave Abrahams wrote:
on Tue Mar 20 2012, Rene Rivera<grafikrobot-AT-gmail.com> wrote:
By contrast, if I search for "mercurial" in the AppStore, I can only use SourceTree (which works with both).
I don't see how searching in a mobile device application store is relevant. Did you mean something else?
Apple's AppStore application for MacOS is for downloading MacOS applications. If you want to download iOS apps you need to use iTunes. Makes perfect sense, I know... ;-)
As a person having published games in the iPhone AppStore, I know that difference. But the iPhone AppStore came first, hence you can understand my confusion: I naturally assigned the reference to the original instead, as most people "in the know" refer to the latecomer as the "Mac AppStore". -- -- Grafik - Don't Assume Anything -- Redshift Software, Inc. - http://redshift-software.com -- rrivera/acm.org (msn) - grafik/redshift-software.com -- 102708583/icq - grafikrobot/aim,yahoo,skype,efnet,gmail

On 19/03/2012, at 19:15, Sergiu Dotenco wrote:
On 3/19/2012 7:46 PM, Bruno Santos wrote:
On 19/03/2012, at 18:24, Sergiu Dotenco wrote:
On 3/19/2012 6:15 PM, Dave Abrahams wrote:
on Mon Mar 19 2012, Daryle Walker <darylew-AT-hotmail.com> wrote:
Git has a competitor called Mercurial? If we're moving to a Distributed-VCS, should we go to Mercurial instead of Git?
IMO, no. There are several reasons, but the main one is that Git is winning in the marketplace.
What reasons exactly? "Git is the most powerful versioning system today", "is increasingly popular", "Git has built-in support for more advanced features", "is more popular in the open-source world", "Git is winning in the marketplace"? Sounds more like a propaganda, which isn't even convincing.
The community around git completely overshadows any other DCVS. This is not propaganda but a fact. Popularity is the winning decision factor. Can you convince us why not?
The alleged fact is probably a fact only iff you mean the Linux (kernel) community. Besides, why is the popularity important considering that both version control systems are comparable?
Only the Linux community? are you serious?

On 20.03.2012 02:10, Bruno Santos wrote:
On 19/03/2012, at 19:15, Sergiu Dotenco wrote:
The alleged fact is probably a fact only iff you mean the Linux (kernel) community. Besides, why is the popularity important considering that both version control systems are comparable?
Only the Linux community? are you serious?
You bet I'm serious, unless you want to argue whether the surveys you refer to (?) are representative.

Hi,
IMO, no. There are several reasons, but the main one is that Git is winning in the marketplace.
What reasons exactly?
I was using mercurial in a project a while ago. I as a command line user was pretty happy with it, but our eclipse users were not. That may have changed, since my experience is not really up to date. I have also worked on a git project, but there we did not have eclipse users, so I don't know how well the eclipse integration works. I could live happily with either of the two systems. From my gut feeling I tend to prefer git a little bit. Christof -- okunah gmbh Software nach Maß Werner-Haas-Str. 8 www.okunah.de 86153 Augsburg cd@okunah.de Registergericht Augsburg Geschäftsführer Augsburg HRB 21896 Christof Donat UStID: DE 248 815 055

I was using mercurial in a project a while ago. I as a command line user was pretty happy with it, but our eclipse users were not. That may have changed, since my experience is not really up to date. I have also worked on a git project, but there we did not have eclipse users, so I don't know how well the eclipse integration works.
There is now an eclipse plugin called eGit (http://www.eclipse.org/egit/) that integrates git support into eclipse. In fact, one of the major eclipse plugins (CDT, the C++ Development Tools) has recently transitioned from using CVS to using eGit. Regards, Nate

Hi,
I was using mercurial in a project a while ago. I as a command line user was pretty happy with it, but our eclipse users were not. That may have changed, since my experience is not really up to date. I have also worked on a git project, but there we did not have eclipse users, so I don't know how well the eclipse integration works.
There is now an eclipse plugin called eGit (http://www.eclipse.org/egit/)
Yes, back then there was also a mercurial plugin, which I don't think has gone away since. It is not about the existence of such a plugin but about its quality. Maybe some eclipse users around here (and of course those using other IDEs) can test either plugin and report on the quality they experience. For me as a CLI user, git as well as mercurial is fine. Christof -- okunah gmbh Software nach Maß Werner-Haas-Str. 8 www.okunah.de 86153 Augsburg cd@okunah.de Registergericht Augsburg Geschäftsführer Augsburg HRB 21896 Christof Donat UStID: DE 248 815 055

I was using mercurial in a project a while ago. I as a command line user was pretty happy with it, but our eclipse users were not. That may have changed, since my experience is not really up to date. I have also worked on a git project, but there we did not have eclipse users, so I don't know how well the eclipse integration works.
There is now an eclipse plugin called eGit (http://www.eclipse.org/egit/)
Yes, back then there was also a mercurial plugin, which I don't think has gone now. It is not about the existence of such a plugin but about its quality. Maybe some eclipse users around here (and of course those using other IDEs) can test either plugin and report about the quality they experience.
This is why I mentioned that a major eclipse project (CDT) has adopted eGit. I think that's an indication of good quality. Regards, Nate

on Mon Mar 19 2012, Daryle Walker <darylew-AT-hotmail.com> wrote:
Git has a competitor called Mercurial? If we're moving to a Distributed-VCS, should we go to Mercurial instead of Git?
IMO, no. There are several reasons, but the main one is that Git is winning in the marketplace.
You're kidding right? If not - let's switch Boost to Java - it's way more popular! Regards Hartmut --------------- http://boost-spirit.com http://stellar.cct.lsu.edu

Hartmut Kaiser wrote:
Dave Abrahams wrote:
IMO, no. There are several reasons, but the main one is that Git is winning in the marketplace.
You're kidding right? If not - let's switch Boost to Java - it's way more popular!
As a first step, boost could provide Java bindings similar to the existing python bindings. Anyone? Regards, Thomas

IMO, no. There are several reasons, but the main one is that Git is winning in the marketplace.
You're kidding right? If not - let's switch Boost to Java - it's way more popular!
As a first step, boost could provide Java bindings similar to the existing python bindings. Anyone?
What purpose would that serve? A lot of Boost's functionality is covered by the Java standard library, and for the rest, the concepts don't map well to Java (try writing Java bindings to MPL...). Regards, Nate

Nathan Ridge wrote:
You're kidding right? If not - let's switch Boost to Java - it's way more popular!
As a first step, boost could provide Java bindings similar to the existing python bindings. Anyone?
What purpose would that serve? A lot of Boost's functionality is covered by the Java standard library, and for the rest, the concepts don't map well to Java (try writing Java bindings to MPL...).
But wouldn't your arguments apply to the existing python bindings in the same way? Anyway, that suggestion wasn't meant too seriously. (More precisely, I know too little about this subject to be able to judge whether this suggestion could make sense or not.) Regards, Thomas

Nathan Ridge wrote:
You're kidding right? If not - let's switch Boost to Java - it's way more popular!
As a first step, boost could provide Java bindings similar to the existing python bindings. Anyone?
What purpose would that serve? A lot of Boost's functionality is covered by the Java standard library, and for the rest, the concepts don't map well to Java (try writing Java bindings to MPL...).
But wouldn't your arguments apply to the existing python bindings in the same way?
Anyway, that suggestion wasn't meant too seriously. (More precisely, I know too little about this subject to be able to judge whether this suggestion could make sense or not.)
I confess, I do not know anything about the existing python bindings. I'm aware of a library called Boost.Python, but my understanding is that this is a library for creating python bindings to any C++ library, rather than being python bindings for Boost itself. Could you provide a link to the existing python bindings? Thanks, Nate

on Tue Mar 20 2012, Nathan Ridge <zeratul976-AT-hotmail.com> wrote:
Nathan Ridge wrote:
You're kidding right? If not - let's switch Boost to Java - it's way more popular!
As a first step, boost could provide Java bindings similar to the
existing python bindings. Anyone?
What purpose would that serve? A lot of Boost's functionality is covered by the Java standard library, and for the rest, the concepts don't map well to Java (try writing Java bindings to MPL...).
But wouldn't your arguments apply to the existing python bindings in the same way?
Anyway, that suggestion wasn't meant too seriously. (More precisely, I know too little about this subject to be able to judge whether this suggestion could make sense or not.)
I confess, I do not know anything about the existing python bindings. I'm aware of a library called Boost.Python, but my understanding is that this is a library for creating python bindings to any C++ library, rather than being python bindings for Boost itself. Could you provide a link to the existing python bindings?
Several Boost libraries have Python bindings in Boost using Boost.Python. IIRC, MPI, MultiArray, and Graph are examples. -- Dave Abrahams BoostPro Computing http://www.boostpro.com

On 19/03/12 20:31, Hartmut Kaiser wrote:
on Mon Mar 19 2012, Daryle Walker<darylew-AT-hotmail.com> wrote:
Git has a competitor called Mercurial? If we're moving to a Distributed-VCS, should we go to Mercurial instead of Git?
IMO, no. There are several reasons, but the main one is that Git is winning in the marketplace.
You're kidding right? If not - let's switch Boost to Java - it's way more popular!
Using Java makes sense if Java is a suitable tool for your purposes. Clearly that's not the case for Boost. When it comes to version control systems, however, both Mercurial and git are suitable; I think choosing based on which is more popular is a good idea. I certainly don't want Boost to end up using a tool that few people are familiar with. Boost.Build is bad enough in that respect.

on Mon Mar 19 2012, "Hartmut Kaiser" <hartmut.kaiser-AT-gmail.com> wrote:
on Mon Mar 19 2012, Daryle Walker <darylew-AT-hotmail.com> wrote:
Git has a competitor called Mercurial? If we're moving to a Distributed-VCS, should we go to Mercurial instead of Git?
IMO, no. There are several reasons, but the main one is that Git is winning in the marketplace.
You're kidding right?
No.
If not - let's switch Boost to Java - it's way more popular!
Is switching to Java compatible with Boost's goals in some way that I haven't yet considered? -- Dave Abrahams BoostPro Computing http://www.boostpro.com

On 03/19/2012 10:02 AM, Daryle Walker wrote:
Git has a competitor called Mercurial? If we're moving to a Distributed-VCS, should we go to Mercurial instead of Git? They're kind-of like CVS vs. Subversion, except I think they came up in parallel. (While Subversion was designed as an updated CVS.) I think Git was made up of a bunch of script hacks, while Mercurial was a regimented single program. I don't have a preference, but I want to make sure we consider the rival options.
My organization recently switched from SVN to git. Pretty much everyone was very happy with the result. We also considered mercurial, and we all agreed that choosing between git and mercurial wasn't nearly as important as choosing either of them instead of svn. That said, there are differences, and I'm glad we picked git. I think the market- and mind-share arguments are real, and in git's favor. Anecdotally at least, it's interesting that bitbucket - which started as an all-mercurial hosting site - has more recently added git support: http://blog.bitbucket.org/2009/04/01/announcing-git-support/ And I think it's also telling that while git's branching model - one of the most important aspects of a VCS - is basically the same one it has always had, mercurial's branching model has evolved towards the git model (mercurial bookmarks == git branches). So I think there's circumstantial evidence that git is a better design in some ways. Mercurial is superficially more similar to svn, but given that they're both architecturally very different from svn, I think that could just as easily be seen as an advantage for git. Jim Bosch

On 19/03/12 21:12, Jim Bosch wrote:
On 03/19/2012 10:02 AM, Daryle Walker wrote:
Git has a competitor called Mercurial? If we're moving to a Distributed-VCS, should we go to Mercurial instead of Git? They're kind-of like CVS vs. Subversion, except I think they came up in parallel. (While Subversion was designed as an updated CVS.) I think Git was made up of a bunch of script hacks, while Mercurial was a regimented single program. I don't have a preference, but I want to make sure we consider the rival options.
My organization recently switched from SVN to git. Pretty much everyone was very happy with the result. We also considered mercurial, and we all agreed that choosing between git and mercurial wasn't nearly as important as choosing either of them instead of svn.
I've had numerous problems with git, including getting my local git repo into a state where it would neither push to nor pull from the remote repo. On the other hand, I've had no problems with Mercurial, even though I've used it on more projects, with more branching and merging. In one case, I was having such difficulty with git that I used hg-git to import my git repo into mercurial, so I could deal with the branches and merges in a sane fashion, then exported back to git. All my problems basically boil down to one thing though: the user interface (command line) to git doesn't map cleanly to the way I think about stuff, or the operations I wish to do, whereas the user interface for mercurial does. For me, mercurial is intuitive, whereas git is not, in a big way. Technically, they're on a par, and there are additional scripts and plugins for git that help with the UI. As for mindshare and marketplace, there is widespread support and usage of both, and both are used by major projects. Given the choice, I'd go for Mercurial every time, but either is better than subversion. Anthony -- Author of C++ Concurrency in Action http://www.stdthread.co.uk/book/ just::thread C++11 thread library http://www.stdthread.co.uk Just Software Solutions Ltd http://www.justsoftwaresolutions.co.uk 15 Carrallack Mews, St Just, Cornwall, TR19 7UL, UK. Company No. 5478976

on Mon Mar 19 2012, Anthony Williams <anthony.ajw-AT-gmail.com> wrote:
I've had numerous problems with git, including getting my local git repo into a state where it would neither push to nor pull from the remote repo. On the other hand, I've had no problems with Mercurial, even though I've used it on more projects, with more branching and merging.
In one case, I was having such difficulty with git that I used hg-git to import my git repo into mercurial, so I could deal with the branches and merges in a sane fashion, then exported back to git.
All my problems basically boil down to one thing though: the user interface (command line) to git doesn't map cleanly to the way I think about stuff, or the operations I wish to do, whereas the user interface for mercurial does. For me, mercurial is intuitive, whereas git is not, in a big way.
But for every story like that, there's an opposite one from the other community. For example, I find Mercurial's branch model completely insane. Multiple heads on a branch? What on earth were they thinking?! So on one project I used git-Hg to make the transition in the other direction. But seriously, if I thought Hg was winning in the DVCS marketplace I would choose it over Git, even though I find it difficult to use and ugly to think about. That's easy for me to say, I know. I'm just lucky that I perceive the marketplace winner to be the tool I like better. Oh, and please don't think me a Git zealot. There are some things about the design I quite disagree with, and the UI certainly can be harder to grasp than necessary. More than a little, that's the community's fault for not explaining Git well. But that situation is improving... -- Dave Abrahams BoostPro Computing http://www.boostpro.com

Hi, May I ask: - what is the current state of git concerning error reporting? Last time I checked, git still gave no report when something went wrong, and that was a major concern to me. - what is the current state of git concerning Windows support? As someone pointed out, there are no official efforts to make it easily portable to Windows, and the tools are problematic on Windows. I don't care whether boost goes with git or mercurial, but AFAIK, git's error reporting and Windows support are far worse than its (hard to grasp) UI. I hope these have been fixed since I last checked (in the middle of last year). Joël Lamotte

On 03/20/2012 05:21 AM, Klaim - Joël Lamotte wrote:
Hi,
May I ask:
- what is the current state of git concerning error reporting? Last time I checked, git still gave no report when something went wrong, and that was a major concern to me.
I have no idea; what kind of missing error reporting are you referring to?
- what is the current state of git concerning Windows support? As someone pointed out, there are no official efforts to make it easily portable to Windows, and the tools are problematic on Windows.
If it helps anything, we have been using Git on Windows in a small team, but on a large codebase, for several years. No real problems, and I must say it is a huge improvement in usability over ClearCase, our legacy system, which has quite complete GUI and Windows support. Some of the developers use TortoiseGit; I use it occasionally. Mostly I use the git support in emacs, the command line, and the built-in gitk and git-gui. On emacs I have settled on egg.el, which works on all platforms and supports most of my tasks. So even if there are alternatives I have not tried, I have no need to look further. I would not be concerned with Windows support for Git. Why there is still no official release for Windows does, however, puzzle me a bit. Is there any official statement on why the git Windows ports are released labeled as previews?
I don't care whether boost goes with git or mercurial, but AFAIK, git's error reporting and Windows support are far worse than its (hard to grasp) UI. I hope these have been fixed since I last checked (in the middle of last year).
Well, if I had to choose, I would go for Git, without really having any strong opinion against Hg. One of my main reasons is that even though Git has been solid all along for us, it has improved a lot in its weaker aspects over the last few years. I strongly feel that any remaining problems that should concern boost, if any, will be fixed sooner rather than later. -- Bjørn

on Tue Mar 20 2012, Bjørn Roald <bjorn-AT-4roald.org> wrote:
On emacs I have settled on egg.el, which works on all platforms and supports most of my tasks. So even if there are alternatives I have not tried, I have no need to look further.
IMO you absolutely *must* try Magit if you are an emacs user. Oh, wow, egg calls itself a "clone of Marius' excellent magit." According to http://alexott.net/en/writings/emacs-vcs/EmacsGit.html#sec18 it adds Windows portability to magit, so that's something interesting. I wish the differences were more clearly laid out. Well, according to https://github.com/bogolisk/egg/commits/master, egg hasn't been changed in 3 years, while https://github.com/magit/magit/commits/master shows that magit is still quite actively maintained.
Well, if I had to choose, I would go for Git, without really having any strong opinion against Hg. One of my main reasons is that even though Git has been solid all along for us, it has improved a lot in its weaker aspects over the last few years. I strongly feel that any remaining problems that should concern boost, if any, will be fixed sooner rather than later.
And this rate of improvement is in no small measure due to Git's popularity. It's a self-reinforcing cycle. -- Dave Abrahams BoostPro Computing http://www.boostpro.com

On Tue, Mar 20, 2012 at 10:13 AM, Dave Abrahams <dave@boostpro.com> wrote:
Well, according to https://github.com/bogolisk/egg/commits/master, egg hasn't been changed in 3 years, while https://github.com/magit/magit/commits/master shows that magit is still quite actively maintained.
FYI, The "up to date" egg page is at https://github.com/byplayer/egg/commits/master Philippe

I must admit I haven't read every comment on this topic, but I think the initial question is just another dead-end question, like vi vs. Emacs. The scope is reduced here to Mercurial vs. Git, but why? There are so many more alternatives to these two tools: Bazaar, Veracity, Monotone, Fossil... Why not consider and argue over these tools as well? The answer is very simple: nobody will ever be able to make an objective argument in favor of one of these tools (or at least one that everybody agrees on). There will always be some people preferring one over the other, and giving very valid points in its favor. I think this is just a choice that has to be made, and that can't be made in an objective way. In my opinion, what matters here is not how hard the tool is to use, because none of them are hard to use (seriously, it's just a matter of getting used to it), and because this will depend on individuals, on how hard people try to understand how the tool works, and on what tools they were used to before (coming from svn or from p4, etc.). The only thing that really matters is how easy it is for developers (old, or potentially new to Boost) to find information, help, or training about the tool, and how easily it integrates with any system. This is what tool popularity and market share reflect (2,300k results for "Git Version Control" vs. 600k for "Mercurial Version Control" on Google). Regarding new developers, I would give my point of view as being part of that category: I'm not a Boost contributor, but I like to check out open-source projects' sources and build them from scratch. Having Boost on Mercurial would certainly be a pain, because I don't have Mercurial installed and I'm already tired of having to install yet another piece of software just to fetch Boost sources.
Of all the open-source projects I'm following, none use Mercurial, and more than half use Git; therefore I've got Git installed and I know how to use it (and I've got SVN for the other half). If Mercurial were more used and popular than Git, I would have it installed instead, and I would have learned to use it. That's not the case. My 2¢. -- Beren Minor

on Tue Mar 20 2012, Beren Minor <beren.minor+boost-AT-gmail.com> wrote:
I must admit I haven't read every comment on this topic, but I think the initial question is just another dead-end question, like vi vs. Emacs.
The scope is reduced here to Mercurial vs. Git, but why? There are so many more alternatives to these two tools: Bazaar, Veracity, Monotone, Fossil... Why not consider and argue over these tools as well?
The answer is very simple: nobody will ever be able to make an objective argument in favor of one of these tools (or at least one that everybody agrees on). There will always be some people preferring one over the other, and giving very valid points in its favor.
That's right. As Eric wrote, it comes down (at least in part) to who's willing to do the work. So far, those people chose Git.
I think this is just a choice that has to be made, and that can't be made in an objective way.
In my opinion, what matters here is not how hard the tool is to use, because none of them are hard to use (seriously, it's just a matter of getting used to it), and because this will depend on individuals, on how hard people try to understand how the tool works, and on what tools they were used to before (coming from svn or from p4, etc.). The only thing that really matters is how easy it is for developers (old, or potentially new to Boost) to find information, help, or training about the tool, and how easily it integrates with any system. This is what tool popularity and market share reflect (2,300k results for "Git Version Control" vs. 600k for "Mercurial Version Control" on Google).
Also right. Of course, as I'm sure you'll acknowledge, different things matter to other people.
Regarding new developers, I would give my point of view as being part of that category: I'm not a Boost contributor, but I like to check out open-source projects' sources and build them from scratch. Having Boost on Mercurial would certainly be a pain, because I don't have Mercurial installed and I'm already tired of having to install yet another piece of software just to fetch Boost sources. Of all the open-source projects I'm following, none use Mercurial, and more than half use Git; therefore I've got Git installed and I know how to use it (and I've got SVN for the other half). If Mercurial were more used and popular than Git, I would have it installed instead, and I would have learned to use it. That's not the case.
Spot-on again. -- Dave Abrahams BoostPro Computing http://www.boostpro.com

On 20.03.2012 10:56, Beren Minor wrote:
I think this is just a choice that has to be made, and that can't be made in an objective way. In my opinion, what matters here is not how hard the tool is to use, because none of them are hard to use (seriously, it's just a matter of getting used to it), and because this will depend on individuals, on how hard people try to understand how the tool works, and on what tools they were used to before (coming from svn or from p4, etc.). The only thing that really matters is how easy it is for developers (old, or potentially new to Boost) to find information, help, or training about the tool, and how easily it integrates with any system. This is what tool popularity and market share reflect (2,300k results for "Git Version Control" vs. 600k for "Mercurial Version Control" on Google).
The interpretation of Google hits as a popularity measure is very bold. The results you mention could also suggest that Git users are more likely to require additional support. In previous comments, which you haven't read, it has been pointed out that perceived market share does not correlate with how well a tool integrates with a system. This is especially true when comparing Git and Mercurial.

On Tue, Mar 20, 2012 at 11:33 AM, Sergiu Dotenco <sergiu.dotenco@gmail.com> wrote:
On 20.03.2012 10:56, Beren Minor wrote: The interpretation of Google hits as a popularity measure is very bold. The results you mention could also suggest that Git users are more likely to require additional support. In previous comments, which you haven't read, it has been pointed out that perceived market share does not correlate with how well a tool integrates with a system. This is especially true when comparing Git and Mercurial.
Let's compare open-source projects on Ohloh instead of Google, then, if you like it more: http://www.ohloh.net/repositories/compare. But that's not what I wanted to point out. As I said, I won't be able to provide any argument that convinces everyone anyway. -- Beren Minor

Beren Minor <beren.minor+boost@gmail.com> writes:
On Tue, Mar 20, 2012 at 11:33 AM, Sergiu Dotenco <sergiu.dotenco@gmail.com> wrote:
On 20.03.2012 10:56, Beren Minor wrote: The interpretation of Google hits as a popularity measure is very bold. The results you mention could also suggest that Git users are more likely to require additional support. In previous comments, which you haven't read, it has been pointed out that perceived market share does not correlate with how well a tool integrates with a system. This is especially true when comparing Git and Mercurial.
Let's compare open-source projects on Ohloh instead of Google, then, if you like it more: http://www.ohloh.net/repositories/compare. But that's not what I wanted to point out. As I said, I won't be able to provide any argument that convinces everyone anyway.
Ohloh is a particularly bad statistic to use: I think they've crawled GitHub but not Bitbucket and Launchpad (there are many more Bazaar projects than the ones Ohloh lists, and there are more Mercurial projects than Bazaar projects). -- Martin Geisler aragost Trifork -- Professional Mercurial support http://www.aragost.com/mercurial/

on Tue Mar 20 2012, Philippe Vaucher <philippe.vaucher-AT-gmail.com> wrote:
On Tue, Mar 20, 2012 at 10:13 AM, Dave Abrahams <dave@boostpro.com> wrote:
Well, according to https://github.com/bogolisk/egg/commits/master, egg hasn't been changed in 3 years, while https://github.com/magit/magit/commits/master shows that magit is still quite actively maintained.
FYI, The "up to date" egg page is at https://github.com/byplayer/egg/commits/master
Oh. Wow, well it's prettier than Magit in some ways, I'll give it that. It's too bad the projects have diverged so much, though. My Magit instincts don't translate very well to egg, and the keybindings are no longer identical. -- Dave Abrahams BoostPro Computing http://www.boostpro.com

FYI, The "up to date" egg page is at https://github.com/byplayer/egg/commits/master
Oh. Wow, well it's prettier than Magit in some ways, I'll give it that. It's too bad the projects have diverged so much, though. My Magit instincts don't translate very well to egg, and the keybindings are no longer identical.
I have trouble deciding between the two. Egg has a nice recapitulation of the available keybindings, but magit seems a bit more popular... also it's hard to find the differences between the two projects. Well, we're offtopic now :) Philippe

- what is the current state of git concerning windows support? As someone pointed out, there is no official efforts to make it easily portable to windows and tools are problematic on windows.
I use git on Windows, Linux and OS X on a regular basis, and Windows support is fine. MSysGit combined with TortoiseGit makes it easy to use for beginners, though sometimes git's performance on Windows is a bit slow. Linux lacks a good TortoiseGit equivalent that integrates with the file explorer, IMHO, but this is improving with RabbitVCS and GitG. Philippe

on Tue Mar 20 2012, Klaim - Joël Lamotte <mjklaim-AT-gmail.com> wrote:
Hi,
May I ask:
- what is the current state of git concerning error reporting? Last time I checked, git still gave no report when something went wrong, and that was a major concern to me.
I don't know. A more specific question would help---"when something went wrong" leaves a lot of room for interpretation. But then, you could probably run a quick experiment yourself and get a clear answer quickly.
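A quick experiment along those lines is easy to script. This is only a sketch in a hypothetical throwaway repository; the exact wording of the message varies between git versions:

```shell
#!/bin/sh
# Throwaway repository purely for the experiment; nothing here touches
# any real project.
repo=$(mktemp -d)
git init -q "$repo"
cd "$repo"

# Ask git to pull from a remote that was never configured.  git reports
# the failure on stderr *and* exits non-zero, so scripts can react to it.
if git pull origin master 2>err.log; then
    echo "unexpected: the pull succeeded"
else
    echo "git exited with status $? and said:"
    cat err.log
fi
```

The non-zero exit status is the part automation cares about; the stderr message is what an interactive user sees.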
- what is the current state of git concerning Windows support? As someone pointed out, there are no official efforts to make it easily portable to Windows, and the tools are problematic on Windows.
http://help.github.com/win-set-up-git/ It's well supported, and support is not going away.
I don't care whether boost goes with git or mercurial, but AFAIK, git's error reporting and Windows support are far worse than its (hard to grasp) UI. I hope these have been fixed since I last checked (in the middle of last year).
I personally doubt much has changed in those areas since the middle of last year. But then, I never had problems with either of those things. -- Dave Abrahams BoostPro Computing http://www.boostpro.com

On 20/03/12 02:04, Dave Abrahams wrote:
on Mon Mar 19 2012, Anthony Williams<anthony.ajw-AT-gmail.com> wrote:
I've had numerous problems with git, including getting my local git repo into a state where it would neither push to nor pull from the remote repo. On the other hand, I've had no problems with Mercurial, even though I've used it on more projects, with more branching and merging.
In one case, I was having such difficulty with git that I used hg-git to import my git repo into mercurial, so I could deal with the branches and merges in a sane fashion, then exported back to git.
All my problems basically boil down to one thing though: the user interface (command line) to git doesn't map cleanly to the way I think about stuff, or the operations I wish to do, whereas the user interface for mercurial does. For me, mercurial is intuitive, whereas git is not, in a big way.
But for every story like that, there's an opposite one from the other community. For example, I find Mercurial's branch model completely insane. Multiple heads on a branch? What on earth were they thinking?! So on one project I used git-Hg to make the transition in the other direction.
Totally agreed. I was just sharing my experience. I find git unintuitive. YMMV, and apparently it does. I actually find the "multiple heads" thing quite intuitive! Anyway, as I said in the paragraph you skipped: git is better than subversion, so I'd rather use git than not change to a DVCS. Anthony -- Author of C++ Concurrency in Action http://www.stdthread.co.uk/book/ just::thread C++11 thread library http://www.stdthread.co.uk Just Software Solutions Ltd http://www.justsoftwaresolutions.co.uk 15 Carrallack Mews, St Just, Cornwall, TR19 7UL, UK. Company No. 5478976

On 3/19/2012 7:02 AM, Daryle Walker wrote:
Git has a competitor called Mercurial? If we're moving to a Distributed-VCS, should we go to Mercurial instead of Git? They're kind-of like CVS vs. Subversion, except I think they came up in parallel. (While Subversion was designed as an updated CVS.) I think Git was made up of a bunch of script hacks, while Mercurial was a regimented single program. I don't have a preference, but I want to make sure we consider the rival options. Daryle W.
As with everything in open source, it comes down to: who is willing and able to do the work? If nobody advocates for Mercurial *and* is willing to do the work to make it happen, then it won't happen. FWIW, I sympathize with the folks complaining about git's complicated interface/mental model and with its poor Windows support. I've never used Mercurial. If it's simpler to use and has solid Windows support, those are two strong arguments in its favor. But again, someone needs to step up to the plate, and AFAICT nobody has. -- Eric Niebler BoostPro Computing http://www.boostpro.com

On 19/03/2012, at 22:49, Eric Niebler wrote:
On 3/19/2012 7:02 AM, Daryle Walker wrote:
Git has a competitor called Mercurial? If we're moving to a Distributed-VCS, should we go to Mercurial instead of Git? They're kind-of like CVS vs. Subversion, except I think they came up in parallel. (While Subversion was designed as an updated CVS.) I think Git was made up of a bunch of script hacks, while Mercurial was a regimented single program. I don't have a preference, but I want to make sure we consider the rival options. Daryle W.
As with everything in open source, it comes down to: who is willing and able to do the work? If nobody advocates for Mercurial *and* is willing to do the work to make it happen, then it won't happen.
FWIW, I sympathize with the folks complaining about git's complicated interface/mental model and with its poor Windows support. I've never used Mercurial. If it's simpler to use and has solid Windows support, those are two strong arguments in its favor. But again, someone needs to step up to the plate, and AFAICT nobody has.
I don't think mercurial is simpler to use. It just makes it harder to edit history, which is only advantageous for someone completely clueless about it.

On 3/20/2012 2:41 AM, Bruno Santos wrote:
On 19/03/2012, at 22:49, Eric Niebler wrote:
On 3/19/2012 7:02 AM, Daryle Walker wrote:
Git has a competitor called Mercurial? If we're moving to a Distributed-VCS, should we go to Mercurial instead of Git? They're kind-of like CVS vs. Subversion, except I think they came up in parallel. (While Subversion was designed as an updated CVS.) I think Git was made up of a bunch of script hacks, while Mercurial was a regimented single program. I don't have a preference, but I want to make sure we consider the rival options. Daryle W.
As with everything in open source, it comes down to: who is willing and able to do the work? If nobody advocates for Mercurial *and* is willing to do the work to make it happen, then it won't happen.
FWIW, I sympathize with the folks complaining about git's complicated interface/mental model and with its poor Windows support. I've never used Mercurial. If it's simpler to use and has solid Windows support, those are two strong arguments in its favor. But again, someone needs to step up to the plate, and AFAICT nobody has.
I don't think mercurial is simpler to use. It just makes it harder to edit history, which is only advantageous for someone completely clueless about it.
You think? How about sticking to the facts? Moreover, why would you even want to edit already-shared history? It seems there are many more clueless Git users who are not able to handle the tool in the first place.

On 20/03/2012, at 07:47, Sergiu Dotenco wrote:
On 3/20/2012 2:41 AM, Bruno Santos wrote:
On 19/03/2012, at 22:49, Eric Niebler wrote:
On 3/19/2012 7:02 AM, Daryle Walker wrote:
Git has a competitor called Mercurial? If we're moving to a Distributed-VCS, should we go to Mercurial instead of Git? They're kind-of like CVS vs. Subversion, except I think they came up in parallel. (While Subversion was designed as an updated CVS.) I think Git was made up of a bunch of script hacks, while Mercurial was a regimented single program. I don't have a preference, but I want to make sure we consider the rival options. Daryle W.
As with everything in open source, it comes down to: who is willing and able to do the work? If nobody advocates for Mercurial *and* is willing to do the work to make it happen, then it won't happen.
FWIW, I sympathize with the folks complaining about git's complicated interface/mental model and with its poor Windows support. I've never used Mercurial. If it's simpler to use and has solid Windows support, those are two strong arguments in its favor. But again, someone needs to step up to the plate, and AFAICT nobody has.
I don't think mercurial is simpler to use. It just makes it harder to edit history, which is only advantageous for someone completely clueless about it.
You think? How about sticking to the facts? Moreover, why would you even want to edit already-shared history? It seems there are many more clueless Git users who are not able to handle the tool in the first place.
I wasn't referring to Mercurial; I was referring to the history. If you're well aware of how history works, you don't mess things up the way Anthony described. I don't want to edit shared history; I want to edit, or even delete, what's not shared. The fact is, Mercurial resembles svn too much. I never appreciated svn; I always regarded it as a very poor tool, and I hated it when working in teams. When git came out, we finally had something that was really nice: branching and merging became useful and amazing. The branching model in Mercurial is very poor; the multiple-heads concept is just stupid. I like to treat branches as individual entities, and the worst part is that Mercurial forces its model on you and gives you no other choice. Why would I want to use a tool that forces such idiocies on me? It becomes really frustrating as you become a more advanced user. The Mercurial mentality reminds me of the mentality behind managed languages.

On 20.03.2012 12:01, Bruno Santos wrote:
I wasn't referring to Mercurial; I was referring to the history. If you're well aware of how history works, you don't mess things up the way Anthony described. I don't want to edit shared history; I want to edit, or even delete, what's not shared.
The fact is, Mercurial resembles svn too much. [...]
This statement is not only wrong but actually demonstrates that you have absolutely no idea what you are talking about.

On 20/03/2012, at 12:59, Sergiu Dotenco wrote:
On 20.03.2012 12:01, Bruno Santos wrote:
I wasn't referring to Mercurial; I was referring to the history. If you're well aware of how history works, you don't mess things up the way Anthony described. I don't want to edit shared history; I want to edit, or even delete, what's not shared.
The fact is, Mercurial resembles svn too much. [...]
This statement is not only wrong but actually demonstrates that you have absolutely no idea what you are talking about.
There is no point in arguing with someone who only dismisses other people's arguments and, to that end, picks out a specific sentence and disregards the rest, taking it out of context. You have yet to provide an argument or opinion in favor of Mercurial. So unless you have something to say about Mercurial, there is no point in this discussion; I have better things to do.

On 20.03.2012 14:56, Bruno Santos wrote:
On 20/03/2012, at 12:59, Sergiu Dotenco wrote:
On 20.03.2012 12:01, Bruno Santos wrote:
I wasn't referring to Mercurial; I was referring to the history. If you're well aware of how history works, you don't mess things up the way Anthony described. I don't want to edit shared history; I want to edit, or even delete, what's not shared.
The fact is, Mercurial resembles svn too much. [...]
This statement is not only wrong but actually demonstrates that you have absolutely no idea what you are talking about.
There is no point in arguing with someone who only dismisses other people's arguments, and to that end picks a specific sentence and disregards the rest to take it out of context.
You didn't provide any valid arguments, only self-proclaimed facts. Unless I'm missing something, there's also no specific context.
You have yet to provide an argument/opinion in favor of mercurial. So unless you have something to say about mercurial there is no point in this discussion; I have better things to do.

on Tue Mar 20 2012, Sergiu Dotenco <sergiu.dotenco-AT-gmail.com> wrote:
On 20.03.2012 12:01, Bruno Santos wrote:
I wasn't referring to mercurial, I was referring to the history. If you are well aware of how history works, you don't mess up things like Anthony exemplified. I don't want to edit shared history, I want to edit what's not shared, or even delete it.
The fact is mercurial resembles svn too much. [...]
This statement is not only wrong but actually demonstrates that you have absolutely no idea what you are talking about.
People, please. I don't mean to pick on Sergiu specifically---it's just an example---but this discussion has become needlessly heated. Please exercise some restraint. -- Dave Abrahams BoostPro Computing http://www.boostpro.com

On 20/03/2012 07:47, Sergiu Dotenco wrote:
On 3/20/2012 2:41 AM, Bruno Santos wrote: ... [BK cut here]
I don't think mercurial is simpler to use. It just makes it harder to edit history, which is only advantageous for someone completely clueless about it.
You think? How about sticking to the facts? Moreover, why would you even want to edit already shared history? It seems there are many more clueless Git users who are not able to handle the tool in the first place.
"already shared" is implied and unnecessary. If you remove this bit, editing history in git starts to make perfect sense. When you want history to be readable and logical to other contributors, you will likely want to use "git rebase -i" to tidy up or roll up your *local* commits *before* you share them with others. It is your private repository and private changes, until you share it. This enables a tight private iteration loop while keeping the noise off the public repository. E.g. you can commit a small change, run a test, commit more changes, run more tests, and eventually find out that the first change had a fatal bug. Edit the first commit, add the necessary comment, rinse and repeat as necessary. When done and tested, roll up your commits and share them with others. Just an example of style really; the important point is that your development style will not create unnecessary commits in the shared repository. Well, at least this is my experience from using git, and it seems to work well for my (very distributed) team. B.
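The roll-up step of this workflow can be sketched as a small shell session. This is a minimal sketch only: the repository, file names, and commit messages are invented, and GIT_SEQUENCE_EDITOR is used purely to script what one would normally type by hand in the "git rebase -i" todo list.

```shell
#!/bin/sh
# Sketch of the private-iteration workflow: make noisy local commits,
# then roll them up with "git rebase -i" before sharing.
set -e
cd "$(mktemp -d)"
git init -q demo && cd demo
git config user.email you@example.com
git config user.name "You"

echo "first draft" > feature.txt
git add feature.txt
git commit -qm "add feature"            # small change

echo "fix fatal bug" >> feature.txt
git commit -qam "oops, fix fatal bug"   # noise we don't want to share

# Roll the fix into the first commit. GIT_SEQUENCE_EDITOR edits the
# rebase todo list non-interactively: line 2's "pick" becomes "fixup",
# exactly as one would do by hand in "git rebase -i --root".
GIT_SEQUENCE_EDITOR='sed -i "2s/^pick/fixup/"' git rebase -i --root

git log --oneline   # a single tidy commit remains
```

After the rebase the "oops" commit is gone and only the polished commit would ever reach a shared repository.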

On 20.03.2012 12:18, Bronek Kozicki wrote:
On 20/03/2012 07:47, Sergiu Dotenco wrote:
On 3/20/2012 2:41 AM, Bruno Santos wrote: ... [BK cut here]
I don't think mercurial is simpler to use. It just makes it harder to edit history, which is only advantageous for someone completely clueless about it.
You think? How about sticking to the facts? Moreover, why would you even want to edit already shared history? It seems there are many more clueless Git users who are not able to handle the tool in the first place.
"already shared" is implied and unnecessary. If you remove this bit, editing history in git starts to make perfect sense.
When you want history to be readable and logical to other contributors, you will likely want to use "git rebase -i" to tidy up or roll up your *local* commits *before* you share them with others. It is your private repository and private changes, until you share it.
This enables a tight private iteration loop while keeping the noise off the public repository. E.g. you can commit a small change, run a test, commit more changes, run more tests, and eventually find out that the first change had a fatal bug. Edit the first commit, add the necessary comment, rinse and repeat as necessary. When done and tested, roll up your commits and share them with others.
Just an example of style really; the important point is that your development style will not create unnecessary commits in the shared repository. Well, at least this is my experience from using git, and it seems to work well for my (very distributed) team.
Everything you described works in Mercurial as well, probably much better.

on Tue Mar 20 2012, Sergiu Dotenco <sergiu.dotenco-AT-gmail.com> wrote:
When you want history to be readable and logical to other contributors, you will likely want to use "git rebase -i" to tidy up or roll up your *local* commits *before* you share them with others. It is your private repository and private changes, until you share it.
This enables a tight private iteration loop while keeping the noise off the public repository. E.g. you can commit a small change, run a test, commit more changes, run more tests, and eventually find out that the first change had a fatal bug. Edit the first commit, add the necessary comment, rinse and repeat as necessary. When done and tested, roll up your commits and share them with others.
Just an example of style really; the important point is that your development style will not create unnecessary commits in the shared repository. Well, at least this is my experience from using git, and it seems to work well for my (very distributed) team.
Everything you described works in Mercurial as well, probably much better.
For what it's worth, I found history rewriting to be quite a bit more difficult in Mercurial than in Git. I don't know why; it may be that I never learned the magic incantation that made it easy. Like I said, these stories exist in both directions. -- Dave Abrahams BoostPro Computing http://www.boostpro.com

On 03/20/2012 04:12 PM, Dave Abrahams wrote:
on Tue Mar 20 2012, Sergiu Dotenco<sergiu.dotenco-AT-gmail.com> wrote:
When you want history to be readable and logical to other contributors, you will likely want to use "git rebase -i" to tidy up or roll up your *local* commits *before* you share them with others. It is your private repository and private changes, until you share it.
This enables a tight private iteration loop while keeping the noise off the public repository. E.g. you can commit a small change, run a test, commit more changes, run more tests, and eventually find out that the first change had a fatal bug. Edit the first commit, add the necessary comment, rinse and repeat as necessary. When done and tested, roll up your commits and share them with others.
Just an example of style really; the important point is that your development style will not create unnecessary commits in the shared repository. Well, at least this is my experience from using git, and it seems to work well for my (very distributed) team.
Everything you described works in Mercurial as well, probably much better.
For what it's worth, I found history rewriting to be quite a bit more difficult in Mercurial than in Git. I don't know why; it may be that I never learned the magic incantation that made it easy. Like I said, these stories exist in both directions.
I am curious now ... I get the feeling that history rewriting is one of git's killer features. Can someone enlighten me what the fuss is about? What is the use case?

Thomas Heller wrote:
On 03/20/2012 04:12 PM, Dave Abrahams wrote:
on Tue Mar 20 2012, Sergiu Dotenco<sergiu.dotenco-AT-gmail.com> wrote:
When you want history to be readable and logical to other contributors, you will likely want to use "git rebase -i" to tidy up or roll up your *local* commits *before* you share them with others. It is your private repository and private changes, until you share it.
This enables a tight private iteration loop while keeping the noise off the public repository. E.g. you can commit a small change, run a test, commit more changes, run more tests, and eventually find out that the first change had a fatal bug. Edit the first commit, add the necessary comment, rinse and repeat as necessary. When done and tested, roll up your commits and share them with others.
Just an example of style really; the important point is that your development style will not create unnecessary commits in the shared repository. Well, at least this is my experience from using git, and it seems to work well for my (very distributed) team.
Everything you described works in Mercurial as well, probably much better.
For what it's worth, I found history rewriting to be quite a bit more difficult in Mercurial than in Git. I don't know why; it may be that I never learned the magic incantation that made it easy. Like I said, these stories exist in both directions.
I am curious now ... I get the feeling that history rewriting is one of git's killer features. Can someone enlighten me what the fuss is about? What is the use case?
This is one of the discriminators between git and hg. git encourages the use of rebase (at least, it does socially). hg discourages it (but it is possible). There are arguments that rebase is dangerous, and that a different workflow is preferable.

I am curious now ... I get the feeling that history rewriting is one of git's killer features. Can someone enlighten me what the fuss is about? What is the use case?
http://sethrobertson.github.com/GitBestPractices/#sausage Basically, it allows developers to polish their commits before pushing, so the repository history looks very clean. No more "oops commits". Very often in projects, even with the best intentions, you eventually make a mistake and realise that this commit should have been appended to a previous (unpushed) one, or that the commit message is inaccurate, or whatever. The ability to rewrite those commits without fear of losing changes (if you know the basics of getting out of trouble with git: reflog+reset) produces nice repository histories where it's much easier to pick up on a new project afterwards. Philippe
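The reflog+reset safety net mentioned here can be demonstrated in a few commands. A sketch only, with invented repository and file names: even after a hard reset, the "lost" commit is still reachable through HEAD's reflog.

```shell
#!/bin/sh
# Sketch of recovering a "lost" commit via the reflog.
set -e
cd "$(mktemp -d)"
git init -q demo && cd demo
git config user.email you@example.com
git config user.name "You"

echo one > file.txt && git add file.txt && git commit -qm "keep me"
echo two >> file.txt && git commit -qam "accidentally discarded"

git reset -q --hard HEAD~1        # oops: the second commit seems gone

# The reflog records where HEAD was before the reset ...
git reflog -n 2
# ... so we can simply move back to it.
git reset -q --hard "HEAD@{1}"

git log --oneline                 # both commits are back
```

Until a rewritten commit is garbage-collected it stays recoverable this way, which is why history rewriting on unpushed work is far less scary than it sounds.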

on Tue Mar 20 2012, Thomas Heller <thom.heller-AT-googlemail.com> wrote:
On 03/20/2012 04:12 PM, Dave Abrahams wrote:
on Tue Mar 20 2012, Sergiu Dotenco<sergiu.dotenco-AT-gmail.com> wrote:
When you want history to be readable and logical to other contributors, you will likely want to use "git rebase -i" to tidy up or roll up your
*local* commits *before* you share them with others. It is your private repository and private changes, until you share it.
This enables a tight private iteration loop while keeping the noise off the public repository. E.g. you can commit a small change, run a test, commit more changes, run more tests, and eventually find out that the first change had a fatal bug. Edit the first commit, add the necessary comment, rinse and repeat as necessary. When done and tested, roll up your commits and share them with others.
Just an example of style really; the important point is that your development style will not create unnecessary commits in the shared repository. Well, at least this is my experience from using git, and it seems to work well for my (very distributed) team.
Everything you described works in Mercurial as well, probably much better.
For what it's worth, I found history rewriting to be quite a bit more difficult in Mercurial than in Git. I don't know why; it may be that I never learned the magic incantation that made it easy. Like I said, these stories exist in both directions.
I am curious now ... I get the feeling that history rewriting is one of git's killer features. Can someone enlighten me what the fuss is about? What is the use case?
It's primarily about presenting a logical series of changes to the world without exposing distracting hiccups like fixes for typos and bugs, decisions to try a different approach in one part of your new material, etc. Rebasing in particular (a special case of history rewriting) is useful for removing merge commits, presenting a linear history, and making it easier for upstream maintainers to evaluate your changes. After rebasing your series of changes on the latest upstream work (during which process you resolve any conflicts that arise) you have incorporated upstream work and are left with a clean set of changes on top of it. I believe sophisticated branch management tools like topgit may rely on this capability, but I admit that I'm not 100% sure. -- Dave Abrahams BoostPro Computing http://www.boostpro.com
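The rebase-onto-upstream step Dave describes can be sketched in a short session. Branch, file, and commit names below are invented for illustration; the point is that after the rebase the feature branch is a clean, linear set of changes on top of the latest upstream tip, with no merge commit.

```shell
#!/bin/sh
# Sketch of rebasing a feature branch onto the latest upstream work.
set -e
cd "$(mktemp -d)"
git init -q demo && cd demo
git config user.email you@example.com
git config user.name "You"
trunk=$(git symbolic-ref --short HEAD)   # default branch name (varies)

echo base > base.txt && git add base.txt && git commit -qm "upstream: base"
git branch feature                       # feature starts from "base"

# Upstream moves on while we work on the feature branch.
echo more >> base.txt && git commit -qam "upstream: new work"

git checkout -q feature
echo mine > feature.txt && git add feature.txt && git commit -qm "feature: my change"

# Replay "feature: my change" on top of the new upstream tip; any
# conflicts would be resolved here, once, during the replay.
git rebase -q "$trunk"

git log --oneline --graph   # linear: base -> new work -> my change
```

A maintainer reading the rebased branch sees only the feature commits applied to current upstream, which is what makes the changes easy to review.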

On 20/03/12 14:06, Sergiu Dotenco wrote:
Everything you described works in Mercurial as well, probably much better.
It is a relatively new Mercurial feature, was originally an experimental extension, and was added for parity with Git. I'd rather choose the software that was designed for the good ideas than the one that copied them. Also, "probably much better" is the kind of self-proclaimed fact you were criticising in other parts of the thread.

On 3/22/2012 4:02 PM, Mathias Gaunard wrote:
On 20/03/12 14:06, Sergiu Dotenco wrote:
Everything you described works in Mercurial as well, probably much better.
It is a relatively new Mercurial feature, was originally an experimental extension, and was added for parity with Git.
What kind of experimental Mercurial extension are you talking about?
I'd rather choose the software that was designed for the good ideas than the one that copied them.
Also "probably much better" is the kind of self-proclaimed facts you were criticising in other parts of the thread.
Except that I never declared that a fact. It's simply my opinion on this matter which is also pretty obvious, isn't it?

Mathias Gaunard <mathias.gaunard@ens-lyon.org> writes:
On 20/03/12 14:06, Sergiu Dotenco wrote:
Everything you described works in Mercurial as well, probably much better.
It is a relatively new Mercurial feature, was originally an experimental extension, and was added for parity with Git.
I think you're misunderstanding the concept of extensions in Mercurial. Mercurial never ships "experimental" extensions -- we ship optional and potentially dangerous functionality in extensions. We have a safe core set of commands and delegate functionality to extensions when it's either not something everybody would need (like integration with bugzilla) or when it's potentially dangerous (history rewriting is dangerous if you mess around with published history). All extensions are fully supported and covered by the same quality standards we use for the core code.
I'd rather choose the software that was designed for the good ideas than the one that copied them.
That's silly -- Mercurial learned from Git. I think that's a good thing and not something to sneeze at. Both Git and Mercurial were heavily influenced by Monotone, so I guess you should really be using that? We've had some good ideas of our own in Mercurial: Git borrowed the bundle command from Mercurial. The revset and fileset languages are unique to Mercurial, and I think they're quite innovative. -- Martin Geisler Mercurial links: http://mercurial.ch/
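For what it's worth, the bundle command Martin mentions can be sketched as below: it packs history into a single file that can be carried offline and cloned from like a read-only remote. A sketch only; repository and file names are invented.

```shell
#!/bin/sh
# Sketch of "git bundle": pack history into one portable file.
set -e
cd "$(mktemp -d)"
git init -q demo && cd demo
git config user.email you@example.com
git config user.name "You"
branch=$(git symbolic-ref --short HEAD)  # default branch name (varies)

echo hello > file.txt && git add file.txt && git commit -qm "initial commit"

# Pack HEAD and the current branch into a single file ...
git bundle create ../repo.bundle HEAD "$branch"

# ... and clone from that file as if it were a remote.
cd .. && git clone -q repo.bundle from-bundle
cd from-bundle && git log --oneline
```

The bundle file can be fetched from as well as cloned, so it works for shipping incremental history between machines with no network link.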
participants (48)
- Anthony Williams
- Barend Gehrels
- Beren Minor
- Bjørn Roald
- Brian Schrom
- Bronek Kozicki
- Bruno Santos
- Bryce Lelbach
- Christof Donat
- Christopher Jefferson
- dag@cray.com
- Daniel James
- Daryle Walker
- Dave Abrahams
- David Bergman
- Edward Diener
- Eric Niebler
- Frank Birbacher
- Gottlob Frege
- greened@obbligato.org
- Hartmut Kaiser
- Jim Bosch
- Joel de Guzman
- John Wiegley
- Julian Gonggrijp
- Julien Nitard
- Klaim - Joël Lamotte
- Lars Viklund
- Mark Borgerding
- Martin Geisler
- Mathias Gaunard
- Nathan Ridge
- Neal Becker
- Olaf van der Spek
- Oliver Kowalke
- Oliver Kullmann
- Paul A. Bristow
- Philippe Vaucher
- Rafaël Fourquet
- Rene Rivera
- Sergey Popov
- Sergiu Dotenco
- Stephan Menzel
- Steven Watanabe
- Thijs (M.A.) van den Berg
- Thomas Heller
- Thomas Klimpel
- Topher Cooper