Edward, all, greetings --
Edward Diener
What do you mean by "incremental checkins"? If I use SVN I can make as many changes locally as I want.
Hopefully others have clarified this point for us, but to be perfectly clear: what I mean by "incremental checkins" is that I can make changes on my local machine *and save each change in my local history*. When I push these out to the world at large, I can either keep the history the way it is, or condense multiple "bite-sized" check-ins into a single "plate-sized" feature addition.

E.g., if I make changes A, B, C, and D locally, I would commit *locally* between each change. If it turns out that commit B was incorrect or sub-optimal, I can use my local repo to do the equivalent of:

    revert D
    revert C
    revert B
    apply C
    apply D

As others have pointed out, there is a huge difference between "local changes" and "tracking local commits"; git and hg both provide the latter.
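For the curious, the revert/apply dance above collapses to a single rebase in git. Here's a minimal sketch of the A/B/C/D scenario, assuming only that git is on your PATH; the identity and file names are made up for the demonstration:

```shell
# Sketch of the A/B/C/D scenario above; assumes git is installed.
set -e
tmp=$(mktemp -d) && cd "$tmp"
git init -q
git config user.email tony@example.com   # hypothetical identity, local only
git config user.name  Tony
for c in A B C D; do
    echo "change $c" > "$c.txt"
    git add "$c.txt"
    git commit -qm "change $c"           # each change is a *local* commit
done
# Drop commit B but keep C and D -- the "revert D, revert C, revert B,
# apply C, apply D" sequence is one operation: replay everything after B
# (i.e. C and D) onto the commit before B (i.e. A).
git rebase -q --onto HEAD~3 HEAD~2
git log --oneline                        # history is now A, C, D
```

None of this touches the network; only the final, cleaned-up history ever needs to be pushed.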
"More distributed" [w.r.t. data backup] means nothing to me. Someone really needs to justify this distributed development idea with something more than "its distributed so it must be good".
[This is not meant in any way to disparage the current boost infrastructure maintainers; when I ask a question, please read it as "how would I fill out a form" and not "I assume they haven't taken care of this".]

What's the backup plan for the boost SVN repository? Who maintains it? What is the replication factor? Are there offsite backups? How often is restore capability verified? Are all backups checksummed?

With distributed repositories, every developer has a complete copy of the entire (public) history of the project, as well as any local changes they have made. Verification of backup/restore capability comes for free, because it is exercised by the exact same operations that everyday development requires. In both git and hg, all content is implicitly checksummed (in git, content is addressed primarily by its SHA1).

(This isn't quite as ballsy as Linus pointing out that he gets away with just uploading tarballs, with his backup being taken care of by the many thousands that download his releases...)
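To illustrate the implicit checksumming: a git blob's id is just the SHA1 of a short header plus the file content, so identical content always gets the same name and any corruption changes the id. A quick sketch, assuming git and a coreutils `sha1sum` are on your PATH (the digests printed by the two commands are identical):

```shell
# A blob id is sha1("blob <size>\0<content>"); "hello\n" is 6 bytes.
printf 'blob 6\0hello\n' | sha1sum
# git computes the very same 40-hex-digit id for that content:
echo 'hello' | git hash-object --stdin
```

Since every object is named by its own checksum, simply reading the repository verifies it; a corrupted object can't masquerade under its original id.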
Please explain "cheaper and lighter weight" ?
Please note, this might be down to my inexperience, but I've found that the only effective way to work on a "private branch" in SVN is to check out the branch I want to modify into a separate directory.

As an example of SVN making things painful: I'm working on a minor fix inside GCC. I had to check out the trunk (1.8 GiB on disk, no idea how much over the network). Then I checked out the current release tag, another 1.8 GiB of network traffic.

Compare with a project that is distributed via Mercurial. In this case, I have the trunk and two private branches, each for a different feature. The original checkout of trunk was about as expensive as for SVN; after that, though, I could do a local "clone" to get my private feature-development branches.
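The local-clone trick looks like this. I've sketched it with git (whose `clone <local-path>` behaves much like hg's local clone, hardlinking the object store instead of copying it); the repo and file names are invented for the example:

```shell
# Sketch: one "trunk" plus cheap private copies; assumes git on PATH.
set -e
tmp=$(mktemp -d) && cd "$tmp"
git init -q trunk
cd trunk
git config user.email tony@example.com   # hypothetical identity, local only
git config user.name  Tony
echo 'hello' > file.txt
git add file.txt && git commit -qm 'initial import'
cd ..
# Local clones: no network traffic, and the object store is
# hardlinked rather than copied, so each one is nearly free.
git clone -q trunk feature-a
git clone -q trunk feature-b
ls       # trunk plus two full, independent working branches
```

Each clone is a complete repository, so work in `feature-a` can't disturb `feature-b` or the trunk, and you pay the expensive network checkout exactly once.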
It is all this rhetoric that really bothers me from developers on the Git bandwagon. I would love to see real technical proof.
My apologies if my original post came across as propaganda; I was just trying to communicate what I have found to be the distinct real-world advantages of using a DVCS (git or hg) over a centralized VCS (SVN).

(Granted, I suppose it's possible to run a *local* SVN server that one could use to do much of the same work as indicated above. I have no idea how painful that might be, though, and since the current leading DVCSs already solve the problem for me, I'm disinclined to find out.)

And while my sympathies are primarily with git, I have worked (and still do) with projects that use hg, and I find both vastly more pleasant than svn. I even contributed a trivial doc patch to hg; I found the learning curve, the tool usage, and the community response all incredibly pleasant.

Best regards,
Tony