Respecting a project's toolchain decisions (was Re: [context] new version - support for Win64)

On Fri, Dec 17, 2010 at 9:34 PM, Lars Viklund <zao@acc.umu.se> wrote:
Maybe you can adapt to the conventions of the real world, or use git-svn?
On Fri, Dec 17, 2010 at 22:19, Dean Michael Berris <mikhailberis@gmail.com> wrote:
Which real world are you talking about? The one that I live in, where nobody uses SVN anymore and everyone instead uses Git for large open source development projects? :)
Am 18.12.2010 09:47, schrieb Scott McMurray:
Probably the "large commercial software company" world where the source control is so bad that branches get merged once every two weeks at best, and all checkins are blocked for over 24 hours when they do, so multiple releases get worked on in the same branch just to avoid the merging headaches.
On Sat, Dec 18, 2010 at 03:46:30PM +0100, Oliver Kowalke wrote:
I think we should stop this talk - if desired you can start a new thread with this topic.
I apologise if the term "real world" was misunderstood, but I'm rather tired of people coming in from the outside, implying or demanding that a project should change its version control, build system or favourite bikeshed colour, just because they are used to something else (possibly "better") from elsewhere.

As for version control, what does it matter if Boost uses Subversion, when you as a DVCS user can trivially use git-svn [1] to interop against the repository (in this case, the sandbox)? You get to use your favourite toy without affecting the existing infrastructure in any way. In the end, the version control you choose is rather tangential. As long as it's sufficiently competent (which Subversion, in my eyes, is), you'll survive.

Of course, you may propose constructive criticism and suggest migration plans to other toolchains, with good arguments for why this is a good thing. See the mythical 'Ryppl' project, which aims to componentise Boost into a pile of Git repositories and some magical combination of scripts and CMake, aimed at letting you track exactly the versions of components you need.

Remember that no tool is isolated. Changing from Subversion to <whatever> would result in many changes propagating to how test runners are set up, rewriting of commit hooks, modifying Trac if possible (although the SVN functionality is disabled there for now), and requiring adaptation of any entity out there that uses Boost's repositories in any way, including externals, build scripts, CI environments, etc. And of course, the mirrored Subversion repositories would have to stay up for years, as you can't really 'flag day' such things. -- Lars Viklund | zao@acc.umu.se

Sorry about the long hiatus; I meant to get back to this email, as there are some points raised that need to be addressed. That said, please see below. On Sun, Dec 19, 2010 at 12:43 AM, Lars Viklund <zao@acc.umu.se> wrote:
On Fri, Dec 17, 2010 at 9:34 PM, Lars Viklund <zao@acc.umu.se> wrote:
Maybe you can adapt to the conventions of the real world, or use git-svn?
On Fri, Dec 17, 2010 at 22:19, Dean Michael Berris <mikhailberis@gmail.com> wrote:
Which real world are you talking about? The one that I live in, where nobody uses SVN anymore and everyone instead uses Git for large open source development projects? :)
Am 18.12.2010 09:47, schrieb Scott McMurray:
Probably the "large commercial software company" world where the source control is so bad that branches get merged once every two weeks at best, and all checkins are blocked for over 24 hours when they do, so multiple releases get worked on in the same branch just to avoid the merging headaches.
On Sat, Dec 18, 2010 at 03:46:30PM +0100, Oliver Kowalke wrote:
I think we should stop this talk - if desired you can start a new thread with this topic.
I apologise if the term "real world" was misunderstood, but I'm rather tired of when people come in from the outside, implying or demanding that a project should change their version control, build system or favourite bikeshed colour, just because they are used to something else (possibly "better") from elsewhere.
Actually, I don't know if you've been around enough to say who's coming from the outside. Oliver and the others pushing for Git in Boost aren't from the outside -- they've been contributing countless man-hours testing, patching, and helping maintain the Boost C++ Libraries "from the inside". And whether the current system is sufficient is not a decision for some committee or some handful of users to make.
As for version control, what does it matter if Boost uses Subversion, when you as a DVCS user can trivially use git-svn [1] to interop against the repository (in this case, the sandbox). You get to use your favourite toy, without affecting the existing infrastructure in any way.
Yes, it matters. Let me state a few reasons why:

1. Precisely because Subversion is a non-distributed configuration management system, the process of getting changes in and innovating is slowed down by the bottleneck that is the centralized source code management system.

2. Making potential contributors to Boost deal with Subversion from the outside through a hack like git-svn is just a Bad Idea. If a library being developed for Boost has to go to the Sandbox, then developing it in a collaborative manner becomes a lot harder. I've already pointed out the reasons for this in another thread, pleading to get Boost development out of a centralized system and into a more distributed one.

3. Because of the central management that Subversion promotes, libraries developed by other people and meant to be integrated into the Boost sources will have trouble moving their history into the Boost Subversion system -- nearly impossible if you think about it -- as opposed to the way Git or Mercurial allow history merging/archiving to be achieved. This means Subversion actually works against the project rather than with it.
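[The history-import point in 3 can be sketched with plain git, entirely locally. A minimal sketch, assuming git >= 2.9; all repository and file names here are made up, and a real import would additionally graft the library under a `libs/<name>/` prefix with `git subtree` or `git read-tree`, which is omitted for brevity:]

```shell
set -e
tmp=$(mktemp -d); cd "$tmp"

# Stand-in for an externally developed library with its own history.
git init -q mylib
( cd mylib
  git config user.email dev@example.org; git config user.name Dev
  echo 'int answer() { return 42; }' > mylib.hpp
  git add .; git commit -qm 'mylib: initial import' )

# Stand-in for the umbrella project.
git init -q boost; cd boost
git config user.email dev@example.org; git config user.name Dev
echo boost > README; git add .; git commit -qm 'boost: initial'

# Pull the library's entire history into the umbrella repository --
# the external commits become reachable from the umbrella history.
git fetch -q ../mylib
git merge -q --allow-unrelated-histories -m 'Merge mylib history' FETCH_HEAD

git log --oneline | grep -c 'mylib: initial import'   # -> 1
```

Subversion has no comparable operation: an `svn import` of the library's working copy would flatten its entire development history into a single revision.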
In the end, the version control you choose is rather tangential. As long as it's sufficiently competent (which Subversion in my eyes is), you'll survive.
I think you haven't been looking at -- or are ignoring -- the problems that Boost is already having when it comes to making the development effort more scalable.
Of course, you may propose constructive criticism and suggest migration plans to other toolchains, with good arguments for why this is a good thing. See the mythical 'Ryppl' project, which aims to componentise Boost into a pile of Git repositories and some magical combination of scripts and CMake, aimed at letting you track exactly the versions of components you need.
Well, it's not mythical -- it's there, and the Boost libraries have pretty much been broken up already. The CMake migration is taking a while, and the only reason for that is that there isn't enough help going into the CMake effort.
Remember that no tool is isolated. Changing from Subversion to <whatever> would result in many changes propagating to how test runners are set up, rewriting of commit hooks, modifying Trac (if possible) (although the SVN functionality is disabled there for now), requiring adaptation of any entity out there that use Boost's repositories in any way, including externals, build scripts, CI environments, etc.
Well, see, all these things you mention are really tangential to the issue of whether you're using Subversion or Git.

Trac can be (and I think should be) abandoned for something that better reflects the workflow Boost would want to encourage, and that performs better on the machine available to it. If the solution were hosted for Boost, so much the better.

Migration is always going to be an issue, but it's a mechanical issue in reality. People just have to decide to do it, and then do it. The commit hooks can be ported (quite easily, if I may say so myself): http://www.kernel.org/pub/software/scm/git/docs/githooks.html -- if there were really enough momentum towards getting Boost from Subversion to Git. The regression test runners could very well just change the commands they use in the script -- instead of checking out, you'd clone, and instead of updating, you'd pull.

All these things you mention are artificially made to look "hard", because it's all a matter of migration really. The "hard" part is accepting that there are better solutions out there already.
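[The checkout-to-clone mapping described above can be demonstrated entirely locally. A minimal sketch; the repository layout and file names are made up, and a real test runner would of course do much more than read one file:]

```shell
set -e
tmp=$(mktemp -d); cd "$tmp"

# Stand-in for the upstream Boost repository.
git init -q upstream
( cd upstream
  git config user.email dev@example.org; git config user.name Dev
  echo 'rev1' > version.txt; git add .; git commit -qm 'rev1' )

# First run of a test runner: what was 'svn checkout URL boost' becomes:
git clone -q "$tmp/upstream" boost

# Upstream moves on...
( cd upstream; echo 'rev2' > version.txt; git commit -qam 'rev2' )

# ...and on subsequent runs, 'svn update' inside the checkout becomes:
( cd boost; git pull -q )

cat boost/version.txt   # -> rev2
```

The rest of the runner script is untouched; only the two fetch-the-sources commands change.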
And of course, having to have the mirrored Subversion repositories up for years, as you can't really 'flag day' such things.
I don't see why it can't be flag-day'ed. Linux made the move from a proprietary system to Git at the drop of a hat; I don't see why Boost won't be able to do the same. And if people really wanted to get it via Subversion for some other reason, someone else can definitely mirror the changes from Git to the Subversion repository. Of course I'd say "good luck" to that effort, and maybe people who are still stuck with Subversion deserve the pain of having to deal with it anyway. ;) Happy Holidays everyone! :) -- Dean Michael Berris about.me/deanberris

Dean Michael Berris wrote:
Trac can be (and I think, should be) abandoned for something that reflects better the workflow that Boost would want to encourage and that performs better on the machine that is available to it. If the solution is hosted for Boost then I would say it would be better.
I for one am very pleased with Trac. I find it very helpful as it is. Robert Ramey

On Mon, Dec 27, 2010 at 12:17 AM, Robert Ramey <ramey@rrsd.com> wrote:
Dean Michael Berris wrote:
Trac can be (and I think, should be) abandoned for something that reflects better the workflow that Boost would want to encourage and that performs better on the machine that is available to it. If the solution is hosted for Boost then I would say it would be better.
I for one am very pleased with Trac. I find it very helpful as it is.
Right, but if you had something else in place that gave you what Trac gives *and* was faster, more pleasant to use, and had a lot more good things going for it -- like being hosted somewhere other than the Boost server -- then maybe you'd like whatever that is too? :D -- Dean Michael Berris about.me/deanberris

Dean Michael Berris wrote:
On Mon, Dec 27, 2010 at 12:17 AM, Robert Ramey <ramey@rrsd.com> wrote:
Dean Michael Berris wrote:
Trac can be (and I think, should be) abandoned for something that reflects better the workflow that Boost would want to encourage and that performs better on the machine that is available to it. If the solution is hosted for Boost then I would say it would be better.
I for one am very pleased with Trac. I find it very helpful as it is.
Right, but if you had something else in place that gave you what Trac gives *and* is faster, more pleasant to use, and had a lot more good things going for it -- like it being hosted not by the Boost server -- then maybe you'd like whatever that is too? :D
Would you mind suggesting a specific project management tool, as well as hosting thereof, and migration scripts? I'm sorry to be blunt, but it has been a long thread, and so far, I don't see any specific engineering suggestions being made, or specific problems listed -- rather, this seems to be a general talk about how good git is in solving Linu{x,s}' problems. - Volodya

On Mon, Dec 27, 2010 at 1:12 PM, Vladimir Prus <vladimir@codesourcery.com> wrote:
Dean Michael Berris wrote:
Right, but if you had something else in place that gave you what Trac gives *and* is faster, more pleasant to use, and had a lot more good things going for it -- like it being hosted not by the Boost server -- then maybe you'd like whatever that is too? :D
Would you mind suggesting a specific project management tool, as well as hosting thereof, and migration scripts?
Not at all. JIRA > Trac, and there are importers that take Trac's CSV exports into JIRA.
I'm sorry to be blunt, but it has been a long thread, and so far, I don't see any specific engineering suggestions being made, or specific problems listed -- rather, this seems to be a general talk about how good git is in solving Linu{x,s}' problems.
Well, the original post wasn't about any specific engineering steps, so don't expect it to get to anything like that. But since you asked, here's what I'm thinking:

1. Move Boost away from Subversion and use Git instead -- have each library be a separate Git repository, follow the model that Qt and the Linux kernel follow, and have the maintainers develop their libraries at their own pace. Release managers then pull from the different repositories and work with a team to stabilize a release that is supported as the de-facto Boost release.

2. Use JIRA instead of Trac for better performance and a saner UI/UX for issue tracking and/or project management.

3. Set up a community process for choosing which libraries make it into the Boost distribution, which ones are dropped, whether there are multiple Boost distributions and/or mixes, etc.

4. Change the review process from a rigidly scheduled submission->review->inclusion pipeline to one that is less rigid and more fluid. Libraries developed for Boost, similar to the stuff that's in the sandbox now, are developed on their own following Boost code guidelines and library structure, play well with the build system (whether it's Boost.Build or CMake), and are phased into the Boost distribution following the community process mentioned in 3. A review can happen anytime, and only every so often a vote happens to indicate whether a certain library is up to Boost standards, and that there's a commitment to maintaining the library in case it does get baked into the Boost distribution.

There's a more concrete proposal in there somewhere that I think I have to write down for everyone to comment on, but I'll work on that at a later time when I feel I have enough to write down. That said, please take the above as a "preview" of what my proposal should eventually look like. HTH -- Dean Michael Berris about.me/deanberris
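[The release-manager workflow in point 1 -- pulling pinned revisions of many per-library repositories into one distribution -- can be sketched locally. A minimal sketch; the library names, tags, and the manifest format are all hypothetical, chosen only for illustration:]

```shell
set -e
tmp=$(mktemp -d); cd "$tmp"

# Two stand-in per-library repositories, each releasing at its own pace.
for lib in spirit proto; do
  git init -q "$lib"
  ( cd "$lib"
    git config user.email dev@example.org; git config user.name Dev
    echo "$lib 1.0" > "$lib.hpp"; git add .; git commit -qm "$lib 1.0"
    git tag v1.0 )
done

# A release manifest pinning each library to an exact released tag.
printf 'spirit v1.0\nproto v1.0\n' > manifest.txt

# The release manager assembles the distribution from the pins.
mkdir dist
while read -r lib tag; do
  git clone -q --branch "$tag" "$tmp/$lib" "dist/$lib" 2>/dev/null
done < manifest.txt

ls dist   # lists the assembled libraries: proto, spirit
```

The point of the manifest is that stabilization work happens against immutable, maintainer-blessed tags rather than against whatever happens to be at the head of a shared trunk.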

Dean Michael Berris wrote:
On Mon, Dec 27, 2010 at 1:12 PM, Vladimir Prus <vladimir@codesourcery.com> wrote:
Dean Michael Berris wrote:
Right, but if you had something else in place that gave you what Trac gives *and* is faster, more pleasant to use, and had a lot more good things going for it -- like it being hosted not by the Boost server -- then maybe you'd like whatever that is too? :D
Would you mind suggesting a specific project management tool, as well as hosting thereof, and migration scripts?
Not at all. JIRA > Trac, and there are CSV importers from Trac CSV exports to JIRA.
Great. Could you please start a separate thread suggesting the move to Jira? Please be sure to specify, in detail:

- Why you think it is better
- Where it will be hosted
- The results of a trial Trac->Jira conversion (preferably with a live server showing the results of same)

I am sure that if you post such a proposal, you might get some real discussion. Thanks, Volodya
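[A trial conversion would start from Trac's CSV ticket export. A minimal sketch of the header remapping step, on a synthetic one-ticket sample; the JIRA-side column names are assumptions, not the actual field names any particular JIRA importer requires, and real exports carry many more columns:]

```shell
set -e
tmp=$(mktemp -d); cd "$tmp"

# Synthetic stand-in for a Trac CSV ticket export.
printf 'id,summary,owner,status\n' > trac.csv
printf '1234,zlib build failure,ramey,new\n' >> trac.csv

# Remap the header row to (assumed) importer field names;
# the data rows pass through untouched.
awk 'NR==1 {print "Issue ID,Summary,Assignee,Status"; next} {print}' \
  trac.csv > jira.csv

head -n1 jira.csv   # -> Issue ID,Summary,Assignee,Status
```

The hard parts of a real migration -- attachments, comment history, and user-account mapping -- are not covered by a column rename and would need dedicated tooling.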

Dean Michael Berris wrote:
1. Move Boost away from Subversion and let's use Git instead -- have each library be a separate Git repository, follow the model that Qt and the Linux follow, and have the maintainers develop their libraries at their own pace. Release managers then pull from the different repositories and work along with a team to stabilize a release that is supported as the de-facto Boost release.
So, instead of having each maintainer merge/push his changes to the release branch when he feels like it, you suggest that release managers ask the maintainers of 70 (or is that 100 already?) libraries what revision can be pulled into the release? That seems likely to create scalability problems of its own.
3. Set up a community process for choosing which libraries make it into the Boost distribution, which ones are dropped, whether there are multiple Boost distributions and/or mixes, etc.
4. Change the review process instead from a submission->review->inclusion process that's rigidly scheduled to one that is less rigid and is more fluid.
I would suggest you post separately about those proposals. I think that the current review process is actually good. It does not prevent anybody from using a proposed library in practice and provide real-world feedback. However, it encourages relatively deep look -- something that might not happen during production use. - Volodya

On 12/27/2010 09:05 PM, Vladimir Prus wrote:
Dean Michael Berris wrote:
1. Move Boost away from Subversion and let's use Git instead -- have each library be a separate Git repository, follow the model that Qt and the Linux follow, and have the maintainers develop their libraries at their own pace. Release managers then pull from the different repositories and work along with a team to stabilize a release that is supported as the de-facto Boost release.

So, instead of having each maintainer merge/push his changes to the release branch when he feels like that, you suggest that release managers ask maintainers of 70 (or is that 100 already) libraries what revision can be pulled to the release?
Actually, I think there are multiple models of using git that can allow integrators to automate this, along with models that allow sharing the workload among multiple trusted integrators. However, the idea of pulling changes to integrate is mainly used to give release managers and integrators full control, by preventing arbitrary pushes while they attempt to do work. In some projects and organizations this is perceived as important. Given a Boost migration to Git, and the fact that Git can be used with both push and pull models, there is really no need to change current Boost policy unless it helps the release process. -- Bjørn

2010/12/28 Bjørn Roald <bjorn@4roald.org>:
On 12/27/2010 09:05 PM, Vladimir Prus wrote:
Dean Michael Berris wrote:
1. Move Boost away from Subversion and let's use Git instead -- have each library be a separate Git repository, follow the model that Qt and the Linux follow, and have the maintainers develop their libraries at their own pace. Release managers then pull from the different repositories and work along with a team to stabilize a release that is supported as the de-facto Boost release.
So, instead of having each maintainer merge/push his changes to the release branch when he feels like that, you suggest that release managers ask maintainers of 70 (or is that 100 already) libraries what revision can be pulled to the release?
Actually, I think there are multiple models of using git that can allow integrators to automate this, along with models that allow sharing the workload among multiple trusted integrators. However, the idea of pulling changes to integrate is mainly used to give release managers and integrators full control, by preventing arbitrary pushes while they attempt to do work. In some projects and organizations this is perceived as important. Given a Boost migration to Git, and the fact that Git can be used with both push and pull models, there is really no need to change current Boost policy unless it helps the release process.
True, but if you don't change the policy and process (which is really the reason the project isn't scaling in a manner suitable for "explosive" growth), then using a better tool only addresses part of the problem. Of course, just changing to Git can be done easily without changing the policy and process -- changing those usually takes longer than porting an SVN repository to Git. ;) -- Dean Michael Berris about.me/deanberris

On Tue, Dec 28, 2010 at 4:05 AM, Vladimir Prus <vladimir@codesourcery.com> wrote:
Dean Michael Berris wrote:
1. Move Boost away from Subversion and let's use Git instead -- have each library be a separate Git repository, follow the model that Qt and the Linux follow, and have the maintainers develop their libraries at their own pace. Release managers then pull from the different repositories and work along with a team to stabilize a release that is supported as the de-facto Boost release.
So, instead of having each maintainer merge/push his changes to the release branch when he feels like it, you suggest that release managers ask the maintainers of 70 (or is that 100 already?) libraries what revision can be pulled into the release? That seems likely to create scalability problems of its own.
Instead of looking at it in a binary either-or way, look at it differently. Maintainers get to develop their libraries at their own pace, allowing every library to cultivate a community of users and developers. I can totally see, for example, a library like Spirit growing and maintaining its own community outside the one Boost community, and releasing versions of Spirit at the pace that community likes. Of course, Spirit will have its dependencies on libraries like Proto and MPL, each of which can release versions on which Spirit can explicitly depend.

What this allows is for someone to maintain a globbing-together of the requisite Boost libraries that deal exclusively with, for example, template metaprogramming (maybe Boost.Meta): a packaging of different versions of these libraries into a single distribution, maintained by people who aren't necessarily part of the different projects. Then you can imagine a Boost distribution that just deals with ranges and the algorithms that operate on them. Then there might be a Boost distribution that deals with network stuff, etc. This allows Boost to grow in a scalable manner, and have the development of each library scale according to the community built around it.

In the end, what you have are multiple Boost libraries, each developed in the manner optimal for its community, and Boost distributions built up from publicly released component libraries. Think of how Linux distributions work -- each piece of the project is separate, and there are people who work on just bringing these things together into a single coherent packaging. This is how I'd like to think about making the Boost development effort more scalable: instead of building just one Boost distribution, you can build many that contain different libraries.
3. Set up a community process for choosing which libraries make it into the Boost distribution, which ones are dropped, whether there are multiple Boost distributions and/or mixes, etc.
4. Change the review process instead from a submission->review->inclusion process that's rigidly scheduled to one that is less rigid and is more fluid.
I would suggest you post separately about those proposals. I think that the current review process is actually good. It does not prevent anybody from using a proposed library in practice and provide real-world feedback. However, it encourages relatively deep look -- something that might not happen during production use.
Sure, that's the plan -- I'll really write up a proposal with more detail and concrete steps. What I wrote earlier was a high-level view of the plan, which until now is still brewing in my head. ;)

About the review process: the problem with the time limit, as I see it, is that the amount of work required to thoroughly look at a library usually doesn't fit in one week. And the really deep looks require quite a bit of discussion to clarify points and make sure that the reviewers and the library author(s) get to respond to questions and/or gather feedback regarding the implementation. By making the review process a collaborative development process instead of an "I'm finished, is it good enough?" thing, you can involve more people and encourage community building around your library.

Having an incubation project, like the Sandbox, and providing a means of continuously improving and developing a library -- there's already a number of libraries on the UnderConstruction wiki page which may be wanting some love -- and encouraging people to work together to get libraries up to a point deemed "Boost ready" should be better than reserving judgment on whether a library should be accepted into Boost on a week's (or two weeks') worth of reviews. Also, deciding whether a library should be accepted into Boost should more or less just be a matter of a vote, because if you really felt differently about how a certain library should be developed, or if you wanted to actually influence its development, it's all just a matter of forking the repository, making your changes, submitting them "upstream", and letting the community decide whether the change is worth taking in.
This means more collaboration, yielding libraries that the community of developers and users really wants to have, rather than building a library "alone" for a long time and then hoping that in the one or two weeks your library is under review, people will actually care about it. I should really write all this down into an actionable proposal. ;) HTH -- Dean Michael Berris about.me/deanberris

On 12/27/2010 11:42 PM, Dean Michael Berris wrote:
On Tue, Dec 28, 2010 at 4:05 AM, Vladimir Prus
I would suggest you post separately about those proposals. I think that the current review process is actually good. It does not prevent anybody from using a proposed library in practice and provide real-world feedback. However, it encourages relatively deep look -- something that might not happen during production use.
Sure, that's the plan -- I'd really write up a proposal that has more detail and concrete steps to take. What I wrote earlier was a high level view of the plan, which until now is still brewing in my head. ;)
About the review process, the problem with the time limit that I see is the amount of work required to thoroughly look at a library usually doesn't fit in one week. And then the really deeper looks require quite a bit of discussion to clarify points and make sure that the reviewer and the library author(s) get to respond to questions and/or gather feedback regarding the implementation. By making the review process more of a collaborative development process instead of an "I'm finished, is it good enough?" thing, you can involve more people and encourage community building around your library.
I agree with you that the time limit for most reviews is too narrow. It barely leaves time for someone to investigate a library and write a good review. I believe any review should last a month or more. At the same time, I do not see why more than one review cannot go on at a time. If each review lasted a month minimum, perhaps as long as two months, but a number of reviews were going on at the same time, then prospective Boost libraries would not languish in the queue so long.

I do not, however, see reviews as a collaborative development process. I dislike your notion of software development as a community process. Software design is almost always an individual conception, and no amount of community involvement is going to change that. Of course a developer can be influenced by the comments of others about the particulars of a software library. But I can never believe that a community of people can effectively design a software library, no matter what proof you may want to bring from other environments like Linux and other open source projects.

At Tue, 28 Dec 2010 09:02:04 -0500, Edward Diener wrote:
Software design is almost always an individual conception and no amount of community involvement is going to change that. Of course a developer can be influenced by the comments of others about the particulars of a software library. But I can never believe that a community of people can effectively design a software library no matter what proof you may want to try to bring from other environments like Linux and other open source projects.
Good designs can be the product of a small community. I designed the Boost.Iterator library with two other people. I believe there are at least five people who could be considered designers of the Spirit library, probably more. In that case there's one visionary leader, but IIUC, design work is actively solicited and used. -- Dave Abrahams BoostPro Computing http://www.boostpro.com

On 12/28/2010 9:48 PM, Dave Abrahams wrote:
At Tue, 28 Dec 2010 09:02:04 -0500, Edward Diener wrote:
Software design is almost always an individual conception and no amount of community involvement is going to change that. Of course a developer can be influenced by the comments of others about the particulars of a software library. But I can never believe that a community of people can effectively design a software library no matter what proof you may want to try to bring from other environments like Linux and other open source projects.
Good designs can be the product of a small community. I designed the Boost.Iterator library with two other people. I believe there are at least five people who could be considered designers of the Spirit library, probably more. In that case there's one visionary leader, but IIUC, design work is actively solicited and used.
I do not disagree that more than one person can design a library. But whoever does so needs to understand the ideas and implementations involved to a great degree. I still believe that the smaller the number of people working out a really good design to accomplish a programming task in a reasonable amount of time, the better, and I still believe that 1 is the ideal number. I do recognize that a large library may require more than one designer, largely based on the wealth of functionality desired. But usually, with a small group of designers, as you point out, there is one leader whose impetus is the reason for the existence of the library, and this ensures that the basic design ideas are adhered to without others pulling in different directions. I do realize that a design can be flawed in some ways, or can be improved in ways the original designer may not foresee, and therefore it is advantageous to hear from others in a software community. But I adhere to the belief that design by committee, or design by community, produces mediocre and barely usable software the great majority of the time. I feel the same way about any creative endeavor in life.

On Tue, Dec 28, 2010 at 10:02 PM, Edward Diener <eldiener@tropicsoft.com> wrote:
On 12/27/2010 11:42 PM, Dean Michael Berris wrote:
About the review process, the problem with the time limit that I see is the amount of work required to thoroughly look at a library usually doesn't fit in one week. And then the really deeper looks require quite a bit of discussion to clarify points and make sure that the reviewer and the library author(s) get to respond to questions and/or gather feedback regarding the implementation. By making the review process more of a collaborative development process instead of an "I'm finished, is it good enough?" thing, you can involve more people and encourage community building around your library.
I agree with you that the time limit for most reviews is too narrow. It barely leaves time for someone to investigate a library and write a good review. I believe any review should last a month or more. At the same time I do not see why more than one review can not go on at any time. If each review lasted a month minimum, perhaps as long as two months, but a number of reviews were going on at the same time, then possible Boost libraries would not languish in the queue so long.
+1. Actually, I'd +2 if you said a review should be open until the library gets into the main distribution. And even after that, reviewing the quality of the library should be ongoing and shouldn't stop at the point of inclusion into Boost. ;)
I do not however see reviews as a collaborative development process. I dislike your notion of software development as a community process. Software design is almost always an individual conception and no amount of community involvement is going to change that. Of course a developer can be influenced by the comments of others about the particulars of a software library. But I can never believe that a community of people can effectively design a software library no matter what proof you may want to try to bring from other environments like Linux and other open source projects.
Okay, it might not convince you so I won't try too hard. Two projects come to mind: WebKit and Qt. Also, I've been in too many different situations where the collaborative method is the only one that works to think otherwise. ;) -- Dean Michael Berris about.me/deanberris

At Wed, 29 Dec 2010 10:56:19 +0800, Dean Michael Berris wrote:
Actually, I'd +2 if you said a review should be open until the library gets into the main distribution. And even after that, reviewing the quality of the library should be on-going and shouldn't stop at the point of inclusion into Boost. ;)
_That_ is a really cool idea. -- Dave Abrahams BoostPro Computing http://www.boostpro.com

On Wed, Dec 29, 2010 at 4:00 AM, Dave Abrahams <dave@boostpro.com> wrote:
At Wed, 29 Dec 2010 10:56:19 +0800, Dean Michael Berris wrote:
Actually, I'd +2 if you said a review should be open until the library gets into the main distribution. And even after that, reviewing the quality of the library should be on-going and shouldn't stop at the point of inclusion into Boost. ;)
_That_ is a really cool idea.
I liked it too. How many times have I wanted to review a library but couldn't find any time in the two or three weeks it was being reviewed! Sometimes these libraries are in the review queue for much longer than that, and a review could've been possible before or even after.
-- Dave Abrahams BoostPro Computing http://www.boostpro.com
Regards, -- Felipe Magno de Almeida

Felipe Magno de Almeida wrote:
On Wed, Dec 29, 2010 at 4:00 AM, Dave Abrahams <dave@boostpro.com> wrote:
At Wed, 29 Dec 2010 10:56:19 +0800, Dean Michael Berris wrote:
Actually, I'd +2 if you said a review should be open until the library gets into the main distribution. And even after that, reviewing the quality of the library should be on-going and shouldn't stop at the point of inclusion into Boost. ;)
_That_ is a really cool idea.
I liked it too. How many times have I wanted to review a library but couldn't find any time in the two or three weeks it was being reviewed! Sometimes these libraries are in the review queue for much longer than that, and a review could've been possible before or even after.
There's nothing stopping you from submitting reviews for any of the pending libraries right now. The library author may make all of the suggested changes before the review period begins, so you may wish to submit an amended review -- perhaps as a reply to your original review -- that indicates your satisfaction with the changes. In the worst case, when the official review is announced, or during the review period, you'll need to pass along a link to your early review so the review manager will know of it. (If there is no review manager now, presumably the author will keep track of your review anyway.) _____ Rob Stewart robert.stewart@sig.com Software Engineer, Core Software using std::disclaimer; Susquehanna International Group, LLP http://www.sig.com

Stewart, Robert wrote:
Felipe Magno de Almeida wrote:
On Wed, Dec 29, 2010 at 4:00 AM, Dave Abrahams <dave@boostpro.com> wrote:
At Wed, 29 Dec 2010 10:56:19 +0800, Dean Michael Berris wrote:
Actually, I'd +2 if you said a review should be open until the library gets into the main distribution. And even after that, reviewing the quality of the library should be on-going and shouldn't stop at the point of inclusion into Boost. ;)
_That_ is a really cool idea.
I had a lot to say about the review process at BoostCon 2010. It seemed pretty similar to the ideas touched upon in the thread and seemed generally well received. I haven't seen any efforts to implement any of the ideas yet though. http://www.rrsd.com/software_development/boost/BoostCon2010/index.htm Robert Ramey

At Wed, 29 Dec 2010 09:58:15 -0800, Robert Ramey wrote:
I had a lot to say about the review process at BoostCon 2010. It seemed pretty similar to the ideas touched upon in the thread and seemed generally well received. I haven't seen any efforts to implement any of the ideas yet though.
http://www.rrsd.com/software_development/boost/BoostCon2010/index.htm
You might recall that I mentioned at the time it's the person with the vision (that'd be you) who usually has to do the implementation, if it's going to be successful. -- Dave Abrahams BoostPro Computing http://www.boostpro.com

At Wed, 29 Dec 2010 08:58:52 -0500, Stewart, Robert wrote:
Felipe Magno de Almeida wrote:
On Wed, Dec 29, 2010 at 4:00 AM, Dave Abrahams <dave@boostpro.com> wrote:
At Wed, 29 Dec 2010 10:56:19 +0800, Dean Michael Berris wrote:
Actually, I'd +2 if you said a review should be open until the library gets into the main distribution. And even after that, reviewing the quality of the library should be on-going and shouldn't stop at the point of inclusion into Boost. ;)
_That_ is a really cool idea.
I liked it too. How many times have I wanted to review a library but couldn't find any time in the two or three weeks it was being reviewed! Sometimes these libraries are in the review queue for much longer than that, and a review could've been possible before or even after.
There's nothing stopping you from submitting reviews for any of the pending libraries right now.
Yeah, but there's nothing encouraging it either. It would be cool to have a system that made it more rewarding to write reviews of Boost libraries, in such a way that reviews would continue after the review period. Of course, that's mostly social engineering and someone would have to figure out how to accomplish it :-) Maybe if the reviews were more carefully archived and somehow viewable separately from everything else, that'd be a first step. Just thinking out loud, now. -- Dave Abrahams BoostPro Computing http://www.boostpro.com

On 12/29/2010 3:30 PM, Dave Abrahams wrote:
At Wed, 29 Dec 2010 08:58:52 -0500, Stewart, Robert wrote:
Felipe Magno de Almeida wrote:
On Wed, Dec 29, 2010 at 4:00 AM, Dave Abrahams <dave@boostpro.com> wrote:
At Wed, 29 Dec 2010 10:56:19 +0800, Dean Michael Berris wrote:
Actually, I'd +2 if you said a review should be open until the library gets into the main distribution. And even after that, reviewing the quality of the library should be on-going and shouldn't stop at the point of inclusion into Boost. ;)
_That_ is a really cool idea.
I liked it too. How many times have I wanted to review a library but couldn't find any time in the two or three weeks it was being reviewed! Sometimes these libraries are in the review queue for much longer than that, and a review could've been possible before or even after.
There's nothing stopping you from submitting reviews for any of the pending libraries right now.
Yeah, but there's nothing encouraging it either. It would be cool to have a system that made it more rewarding to write reviews of Boost libraries, in such a way that reviews would continue after the review period. Of course, that's mostly social engineering and someone would have to figure out how to accomplish it :-)
Maybe if the reviews were more carefully archived and somehow viewable separately from everything else, that'd be a first step. Just thinking out loud, now.
Well... this is actually a solved social network problem. The obvious way to handle this is to post reviews to a web site in addition to the list -- organized by libraries, of course. The reviews would be available long-term and linked from the libraries listing (and the library itself). Making it so people can vote on reviews, and hence meta-vote on libraries, might accomplish the social aspect. The immediate choice would be to structure it like Stack Overflow, so people have some social-competition impetus to do numerous quality reviews. -- -- Grafik - Don't Assume Anything -- Redshift Software, Inc. - http://redshift-software.com -- rrivera/acm.org (msn) - grafik/redshift-software.com -- 102708583/icq - grafikrobot/aim,yahoo,skype,efnet,gmail

On Thu, Dec 30, 2010 at 5:56 AM, Rene Rivera <grafikrobot@gmail.com> wrote:
On 12/29/2010 3:30 PM, Dave Abrahams wrote:
At Wed, 29 Dec 2010 08:58:52 -0500, Stewart, Robert wrote:
Yeah, but there's nothing encouraging it either. It would be cool to have a system that made it more rewarding to write reviews of Boost libraries, in such a way that reviews would continue after the review period. Of course, that's mostly social engineering and someone would have to figure out how to accomplish it :-)
Maybe if the reviews were more carefully archived and somehow viewable separately from everything else, that'd be a first step. Just thinking out loud, now.
Well... this is actually a solved social network problem. The obvious way to handle this is to post reviews to a web site in addition to the list -- organized by libraries, of course. The reviews would be available long-term and linked from the libraries listing (and the library itself). Making it so people can vote on reviews, and hence meta-vote on libraries, might accomplish the social aspect. The immediate choice would be to structure it like Stack Overflow, so people have some social-competition impetus to do numerous quality reviews.
Interesting thought. I think there's something to this meta-voting thing. A structure similar to Stack Overflow's would definitely put the game mechanics into it to make it at least a little more rewarding. -- Dean Michael Berris about.me/deanberris

On 12/28/2010 9:56 PM, Dean Michael Berris wrote:
On Tue, Dec 28, 2010 at 10:02 PM, Edward Diener<eldiener@tropicsoft.com> wrote:
On 12/27/2010 11:42 PM, Dean Michael Berris wrote:
About the review process, the problem with the time limit that I see is that the amount of work required to thoroughly look at a library usually doesn't fit in one week. And then the really deep looks require quite a bit of discussion to clarify points and make sure that the reviewer and the library author(s) get to respond to questions and/or gather feedback regarding the implementation. By making the review process more of a collaborative development process instead of an "I'm finished, is it good enough?" thing, you can involve more people and encourage community building around your library.
I agree with you that the time limit for most reviews is too narrow. It barely leaves time for someone to investigate a library and write a good review. I believe any review should last a month or more. At the same time I do not see why more than one review cannot go on at any time. If each review lasted a month minimum, perhaps as long as two months, but a number of reviews were going on at the same time, then prospective Boost libraries would not languish in the queue so long.
+1
Actually, I'd +2 if you said a review should be open until the library gets into the main distribution. And even after that, reviewing the quality of the library should be on-going and shouldn't stop at the point of inclusion into Boost. ;)
If a review, as sometimes happens, determines that a library does not qualify for Boost but might if some things were improved, I would agree with you that a review could be ongoing if the developer of the library is committed to making changes that would improve it. But I do not see the advantage of keeping reviews open until the library is accepted, especially with libraries deemed not of a high enough quality for Boost. OTOH nothing keeps a developer from making changes in his library and re-submitting it for inclusion into Boost again. As far as determining the quality of a library on an ongoing basis, anybody can currently suggest changes and new features. But I do not believe the developer should have to meet those new specs. OTOH I see nothing wrong with someone forking the library on his own and producing a second, very similar implementation with features he may deem necessary added, updated, or changed, and submitting that to Boost. This has already been done in some cases, such as signals and signals2, so why shouldn't someone feel that it can be done with another library?

Prelude: The world needs ubiquitous Internet service on the cheap, because there are parts of the world where the Internet isn't as accessible. Or... I should really stop engaging in discussions around holiday times so that I don't miss out on the conversation. ;) That said, please see in-lined below. On Wed, Dec 29, 2010 at 2:41 PM, Edward Diener <eldiener@tropicsoft.com> wrote:
On 12/28/2010 9:56 PM, Dean Michael Berris wrote:
Actually, I'd +2 if you said a review should be open until the library gets into the main distribution. And even after that, reviewing the quality of the library should be on-going and shouldn't stop at the point of inclusion into Boost. ;)
If a review, as sometimes happens, determines that a library does not qualify for Boost but might if some things were improved, I would agree with you that a review could be ongoing if the developer of the library is committed to making changes that would improve it. But I do not see the advantage of keeping reviews open until the library is accepted, especially with libraries deemed not of a high enough quality for Boost. OTOH nothing keeps a developer from making changes in his library and re-submitting it for inclusion into Boost again.
Well, there's the rub. Take what I said in the context of collaborative development instead of the current way of getting libraries into Boost. My issue with the status quo is that the barrier to entry for a library (and thus a contributor) is really high, especially because of this practice of not letting others pitch in on the work that goes into writing a library. As a case in point, I started the implementation of a bloom filter -- it's in the sandbox now, it's functional, and some people have already contributed to the discussion on the development of the library. Back then there were people suggesting that the implementation do this, or the interface be like that, etc. -- I pretty much dropped the development of that library mostly because I found that the suggestions would have been better in the form of patches. It is way easier to tell someone "you should do it another way" than to actually show what should be done with code -- and at the time I was interested in finishing the implementation, there were too many people saying "do this" or "do that" instead of sending me patches. Now what would have been a better process IMO would be something like the following:

1. Someone (in this case, I) shows the community that "hey, I have this idea for a library that would be cool to include in Boost" -- this means it's on GitHub and people can actually get it.
2. Those that want to contribute, instead of just expressing interest, actually fork the repo and implement their contributions in their own repositories; they either submit their changes to me so we can all be co-authors of the library, or just gut it and tell everyone "here's a better way of doing it", based on what I've already done (or not).
3. While the library is being developed, a review can be posted by anyone at any given time, which counts as a personal vote for/against the inclusion of the library.
4. Once there are enough "yes" votes, the library can be scheduled for inclusion by the release managers -- this means release managers would typically either pull, or use something like a git submodule, to manage the libraries to be packaged up.

On-going development of the library can follow that model, and the "review" becomes more a regular part of the daily things that happen on the developers' mailing list. The "management" of the review could be as simple as setting up a WordPress poll or something similar to get an actual "vote" from the members of the community -- not in an anonymous manner, of course. This process is nothing like the status quo, and is actually a more encouraging model that allows people to get involved with minimal effort required.
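The submodule-based packaging mentioned above (release managers either pulling or pinning libraries as git submodules) can be sketched with plain Git. This is only an illustration of the mechanism under discussion; the repository names and paths are hypothetical, not real Boost infrastructure:

```shell
set -e
mkdir -p /tmp/boost-demo && cd /tmp/boost-demo

# Stand-in for a library author's own repository (hypothetical name):
git init -q bloom_filter
( cd bloom_filter \
  && echo "// library code" > bloom.hpp \
  && git add bloom.hpp \
  && git -c user.name=demo -c user.email=demo@example.com commit -qm "initial import" )

# The release "superproject" pins an exact, reviewed commit of each library:
git init -q release
cd release
git -c protocol.file.allow=always submodule add /tmp/boost-demo/bloom_filter libs/bloom_filter
git -c user.name=demo -c user.email=demo@example.com commit -qm "ship bloom_filter at reviewed revision"

# Shows the exact commit the distribution would package:
git submodule status libs/bloom_filter
```

Bumping a library to a newer reviewed revision would then be just a pull inside `libs/bloom_filter` followed by a superproject commit recording the new gitlink.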
As far as determining the quality of a library on an ongoing basis, anybody can currently suggest changes and new features. But I do not believe the developer should have to meet those new specs. OTOH I see nothing wrong with someone forking the library on his own and producing a second, very similar implementation with features he may deem necessary added, updated, or changed, and submitting that to Boost. This has already been done in some cases, such as signals and signals2, so why should not someone feel that it can be done elsewhere with another library.
Sure, but that doesn't make the process collaborative -- which is actually my main "beef" with the current way things are going. And even if someone were to re-write a signals implementation, there's no need to fork it as a separate project; it could very well just be an evolution of the implementation, with the contribution coming in as part of the normal process. Then the release managers just determine whether to get a certain version of the signals implementation from one repo, or another version from another repo. I hope this makes sense. :) PS. If you think about how stock market indexes work, there's only a handful of people who choose which listed stocks get to be part of an index. The S&P 500 is managed by Standard & Poor's, which rates stocks according to their performance, suitability, capitalization, etc. and just lists which of these stocks are part of the index. Boost can follow this model and have release managers -- on behalf of a larger community -- actually picking libraries which become part of the main distribution, while libraries that want to be listed follow a process similar to what the SEC requires of companies that want to list on the NYSE or NASDAQ (of course with less red tape and bureaucracy ;)). That's the crux of the model I've wanted to convey, but apparently I needed to sleep on it a little more to get that analogy out. :D -- Dean Michael Berris about.me/deanberris

On 1/1/2011 4:20 AM, Dean Michael Berris wrote:
Prelude: The world needs ubiquitous Internet service on the cheap, because there are parts of the world where the Internet isn't as accessible. Or... I should really stop engaging in discussions around holiday times so that I don't miss out on the conversation. ;) That said, please see in-lined below.
On Wed, Dec 29, 2010 at 2:41 PM, Edward Diener<eldiener@tropicsoft.com> wrote:
On 12/28/2010 9:56 PM, Dean Michael Berris wrote:
Actually, I'd +2 if you said a review should be open until the library gets into the main distribution. And even after that, reviewing the quality of the library should be on-going and shouldn't stop at the point of inclusion into Boost. ;)
If a review, as sometimes happens, determines that a library does not qualify for Boost but might if some things were improved, I would agree with you that a review could be ongoing if the developer of the library is committed to making changes that would improve it. But I do not see the advantage of keeping reviews open until the library is accepted, especially with libraries deemed not of a high enough quality for Boost. OTOH nothing keeps a developer from making changes in his library and re-submitting it for inclusion into Boost again.
Well, there's the rub.
Take what I said in the context of collaborative development instead of the current way of getting libraries into Boost. My issue with the status quo is that the barrier to entry for a library (and thus a contributor) is really high, especially because of this practice of not letting others pitch in on the work that goes into writing a library.
As a case in point, I started the implementation of a bloom filter -- it's in the sandbox now, it's functional, and some people have already contributed to the discussion on the development of the library. Back then there were people suggesting that the implementation do this, or the interface be like that, etc. -- I pretty much dropped the development of that library mostly because I found that the suggestions would have been better in the form of patches. It is way easier to tell someone "you should do it another way" than to actually show what should be done with code -- and at the time I was interested in finishing the implementation, there were too many people saying "do this" or "do that" instead of sending me patches.
Now what would have been a better process IMO would be something like the following:
1. Someone (in this case, I) shows the community that "hey, I have this idea for a library that would be cool to include in Boost" -- this means it's on GitHub and people can actually get it.
2. Those that want to contribute, instead of just expressing interest, actually fork the repo and implement their contributions in their own repositories; they either submit their changes to me so we can all be co-authors of the library, or just gut it and tell everyone "here's a better way of doing it", based on what I've already done (or not).
3. While the library is being developed, a review can be posted by anyone at any given time, which counts as a personal vote for/against the inclusion of the library.
4. Once there are enough "yes" votes, the library can be scheduled for inclusion by the release managers -- this means release managers would typically either pull, or use something like a git submodule, to manage the libraries to be packaged up.
On-going development of the library can follow that model, and the "review" becomes more a regular part of the daily things that happen on the developers' mailing list. The "management" of the review could be as simple as setting up a WordPress poll or something similar to get an actual "vote" from the members of the community -- not in an anonymous manner, of course.
This process is nothing like the status quo, and is actually a more encouraging model that allows people to get involved with minimal effort required.
You cannot fail to understand that your idea of collaborative development of your own library is not everybody's way of working. Do not try to force that idea on everybody else.

On Sat, Jan 1, 2011 at 9:50 PM, Edward Diener <eldiener@tropicsoft.com> wrote:
On 1/1/2011 4:20 AM, Dean Michael Berris wrote:
Well, there's the rub.
Take what I said in the context of collaborative development instead of the current way of getting libraries into Boost. My issue with the status quo is that the barrier to entry for a library (and thus a contributor) is really high, especially because of this practice of not letting others pitch in on the work that goes into writing a library.
[snip example]
This process is nothing like the status quo, and is actually a more encouraging model that allows people to get involved with minimal effort required.
You cannot fail to understand that your idea of collaborative development of your own library is not everybody's way of working. Do not try to force that idea on everybody else.
Right, so what is the other idea of collaborative development? Notice that in the example process I posted, there was the chance for me to actually say "no, I don't like this patch". Also, under the same example process you can still follow the "status quo" process: just don't announce your library for inclusion in Boost until you feel it's ready, and don't accept patches once you've made the announcement. That example process allows for the status quo process *and* a more collaborative process to happen. But what's happening at the moment is that people who *like* the collaborative process *aren't* supported explicitly by the current Boost process. This means the current Boost process is imposing upon me the way to develop a library that I want to develop and am already developing. Note that the goal is to lower the barrier to entry, whereas the current process introduces a lot of barriers. I'm not about to impose anything on anybody -- I was under the impression that decisions like these would be a community matter. In that case, I don't see why I shouldn't ask everybody else to change the way they do things because, well, the current process is asking me to change the way *I* do things. ;) HTH -- Dean Michael Berris about.me/deanberris

On Jan 1, 2011, at 5:12 AM, Dean Michael Berris <mikhailberis@gmail.com> wrote:
But what's happening at the moment is that people who *like* the collaborative process *aren't* supported explicitly by the current Boost process. This means the current Boost process is imposing upon me the way to develop a library that I want to develop and am already developing.
I don't see how Boost fails to support collaboration. Anyone is free to use Git (or whatever) and whatever community resources they want. Nearly everything I've done within Boost has been a collaboration. -- BoostPro Computing * http://boostpro.com [Sent from coveted but awkward mobile device]

On Sun, Jan 2, 2011 at 12:20 AM, Dave Abrahams <dave@boostpro.com> wrote:
On Jan 1, 2011, at 5:12 AM, Dean Michael Berris <mikhailberis@gmail.com> wrote:
But what's happening at the moment is that people who *like* the collaborative process *aren't* supported explicitly by the current Boost process. This means the current Boost process is imposing upon me the way to develop a library that I want to develop and am already developing.
I don't see how boost fails to support collaboration. Anyone is free to use Git (or whatever) and whatever community resources they want. Nearly everything I've done within boost has been a collaboration.
Yes, true, but the context of what I was saying is the case where a library being developed for eventual inclusion in Boost is met with the barrier that is the sandbox, and with the review process that happens only later in the library's development. Of course a library can be developed outside of the Boost process first, and then jump into the process when it's deemed "ready" for review -- this is basically what I'm going through with cpp-netlib as well. The point I was trying to address, though, is that the current process for getting a library into Boost seems to start at the "the library is ready now" stage instead of the "here's a good idea, let's work on it" stage. Without continuous review -- reviewing a library until it gets "baked right" and made part of the main distribution -- being explicitly supported by the current process (i.e., supporting multiple concurrent libraries under construction by multiple teams), and with the reviews being as short and time-constrained as they are, the peer review (which is supposed to be the scalable part) becomes the bottleneck. The current process forces me to think in a way that makes library development/design an individual process rather than an open, collaborative one. Here's where the implicit and explicit nature of the process comes in, and I think that needs to be addressed. I hope that makes more sense at least. ;) -- Dean Michael Berris about.me/deanberris

On 1/1/2011 9:12 AM, Dean Michael Berris wrote:
On Sat, Jan 1, 2011 at 9:50 PM, Edward Diener<eldiener@tropicsoft.com> wrote:
On 1/1/2011 4:20 AM, Dean Michael Berris wrote:
Well, there's the rub.
Take what I said in the context of collaborative development instead of the current way of getting libraries into Boost. My issue with the status quo is that the barrier to entry for a library (and thus a contributor) is really high, especially because of this practice of not letting others pitch in on the work that goes into writing a library.
[snip example]
This process is nothing like the status quo, and is actually a more encouraging model that allows people to get involved with minimal effort required.
You cannot fail to understand that your idea of collaborative development of your own library is not everybody's way of working. Do not try to force that idea on everybody else.
Right, so what is the other idea of collaborative development?
Notice that in the example process I posted, there was the chance for me to actually say "no, I don't like this patch". Also, under the same example process you can still follow the "status quo" process: just don't announce your library for inclusion in Boost until you feel it's ready, and don't accept patches once you've made the announcement. That example process allows for the status quo process *and* a more collaborative process to happen.
But what's happening at the moment is that people who *like* the collaborative process *aren't* supported explicitly by the current Boost process. This means the current Boost process is imposing upon me the way to develop a library that I want to develop and am already developing.
Note that the goal is to lower the barrier to entry, whereas the current process introduces a lot of barriers.
I'm not about to impose anything on anybody -- I was under the impression that decisions like these would be a community matter. In that case, I don't see why I shouldn't try to ask everybody else to change the way they do things because, well, the current process is asking me to change the way *I* do things. ;)
You still do not get it. So here it is as plainly as I can make it.

As a developer potentially creating for Boost, I am interested in creating libraries myself, documenting and testing them, then offering them to others to use and comment on. If I feel the comments will improve a library, I will improve it. If others do not like the library they will not use it, or not want it to be part of Boost, or they will create their own library to do what I have done. I may work on a library with a very few other people if that suits me, but I feel absolutely no impetus to work in a collaborative environment with a group of people, each one of which can contribute to the library in their own way.

Shocking, isn't it? There are still a few souls like me who are not going to create software as a community effort, but individually, as my own design and effort to do something as best I can. That does not mean I would not ask for or accept help from others, or give help to others if I can. Nor does it mean that I am not appreciative of the knowledge of others and what they may have to tell me to improve my efforts. But it does mean that I have almost no interest in initially creating any piece of software as a community effort.

With all that said, my view of community effort is much less harsh than it may seem. Of course developers should be free, if they have the desire, to contribute their effort to a library which already exists, as well as to work on a library together from the start if they like. But you should stop proselytizing for this as the be-all and end-all of all software development in Boost, as if everything done must be by some community of programmers in some sort of collaborative effort.

On Sun, Jan 2, 2011 at 8:50 AM, Edward Diener <eldiener@tropicsoft.com> wrote:
On 1/1/2011 9:12 AM, Dean Michael Berris wrote:
I'm not about to impose anything on anybody -- I was under the impression that decisions like these would be a community matter. In that case, I don't see why I shouldn't try to ask everybody else to change the way they do things because, well, the current process is asking me to change the way *I* do things. ;)
You still do not get it.
I think I do get your point, except that what I'm saying doesn't seem to be making much sense -- or at least not conveying the fact that I get your point. :D I was actually agreeing with you when I said that the current process will be supported by the "extended" version I'm proposing. ;)
So here it is as plain as I can make it. As a developer potentially creating for Boost I am interested in creating libraries myself, documenting and testing them, then offering them to others to use and comment on. If I feel like the comments will improve the library, I will improve it. If others do not like the library they will not use it, or not want it to be part of Boost, or they will create their own library to do what I have done. I may work on a library with a very few other people if that suits me, but I feel absolutely no impetus to work in a collaborative environment with a group of people, each one of which can contribute to the library in their own way.
Sure, and the expanded process I'm proposing *doesn't* preclude this. The current process actually prescribes what you're proposing, which isn't necessarily wrong, except that IMO -- and this is the whole point of the discussion -- it doesn't scale, which is the problem I wanted to address. You could do exactly what you describe in the expanded process and it wouldn't be a problem. But if the expanded process also *explicitly* accounted for another supported means of getting libraries from the review queue into the main distribution -- to address scalability, continued maintenance, and a lowered barrier to entry for potential developers -- then I don't see why that would be a bad thing. It's all a matter of tweaking the process to allow for a more collaborative way of developing libraries, not discouraging any other way (the current way or the way you highlight). The irony is that a more welcoming and collaborative process also embraces a non-collaborative process as the degenerate case. :)
Shocking, isn't it. There are still a few souls like me that are not going to create software as a community effort of people, but individually as my own design and effort to do something as best as I can. That does not mean I would not ask or accept help from others or give help to others if I can. Nor does it mean that I am not appreciative of the knowledge of others and what they may have to tell me to improve my efforts. But it does mean that I have almost no interest in initially creating any piece of software as a community effort.
Not shocking at all, and I account for that in the process I describe. The only difference between a more collaborative process and the process you describe is that in your case, you just don't put the same premium on the collaborative aspect. Your community can be a community of 1. ;)
With all that said my view of community effort is much less harsh than it may seem. Of course developers should be free if they have the desire to contribute their effort to a library which already exists, as well as work on a library initially together if they like. But you should stop proselytizing for this as the end-all and be-all of all software development in Boost, as if everything done must be some community of programmers in some sort of collaborative effort.
Hmmm... I don't think I should stop "proselytizing", mainly because the point of starting the discussion is that I hope to at least convince people that a different way might make sense for Boost. I didn't say it was the end-all and be-all; I was mainly putting out ideas for how to lower the barrier to entry for potential contributors. If I don't convince you, then that's alright by me. Although, I say again, what you describe would be perfectly fine in the process I was describing a few emails ago. Now if others -- like me -- would like to do it in a more collaborative manner than you describe, I don't see why the Boost policy or guidelines shouldn't explicitly allow or at least encourage that as a means to scale the development/maintenance/evolution effort and ensure that Boost keeps going as an open source project that people want to contribute to. HTH -- Dean Michael Berris about.me/deanberris

Hi Dean, I happen to agree with what you are saying about there not being much support for collaborative development before review (for people who want that), and IMO git does sound neat, but -
On-going development of the library can follow that model and the "review" becomes more a regular part of daily things that happen on the developer's mailing list. The "management" of the review could be as simple as setting up a Wordpress Poll or something similar to get an actual "vote" from the members of the community -- not in an anonymous manner of course.
You are undervaluing what a review manager does here. A big part is to moderate the discussion, find points of agreement, and work out compromises on individual points - as well as deciding where there is not going to be agreement at all. A poll would certainly not cover this. It is more like consensus-building than voting. I could imagine tools that would help with this (e.g. email re-threading), but I haven't seen any yet. In a way a review manager is an advocate for the library who is not as ego-burdened as the author(s).
Sure, but that doesn't make the process collaborative -- which is actually my main "beef" with the current way things are going. And, even if someone were to re-write a signals implementation, there's no need to actually fork it as a separate project as it could very well just be an evolution of the implementation and just get the contribution in as part of the normal process. Then, the release managers just make a determination of whether to actually get a certain version of the signals implementation from one repo, or get another from another repo.
This seems to shift a lot of the decision-making to the release managers, who are already overworked. Review managers can better focus on their individual libraries and judge whether the conditions on acceptance were fulfilled. Joachim's proposal for review manager assistants would lighten their workload considerably. Maybe there is a case for maintenance review managers? This is a complex social process, and tools aren't going to make it easy. But they can help people make better judgements, and follow through better. Thank you for raising many interesting ideas, Gordon

On Sun, Jan 2, 2011 at 1:48 AM, Gordon Woodhull <gordon@woodhull.com> wrote:
Hi Dean,
I happen to agree with what you are saying about there not being much support for collaborative development before review (for people who want that), and IMO git does sound neat, but -
On-going development of the library can follow that model and the "review" becomes more a regular part of daily things that happen on the developer's mailing list. The "management" of the review could be as simple as setting up a Wordpress Poll or something similar to get an actual "vote" from the members of the community -- not in an anonymous manner of course.
You are undervaluing what a review manager does here. A big part is to moderate the discussion, find points of agreement, and work out compromises on individual points - as well as deciding where there is not going to be agreement at all. A poll would certainly not cover this. It is more like consensus-building than voting. I could imagine tools that would help with this (e.g. email re-threading), but I haven't seen any yet.
Right, I missed that part completely. I guess a poll wouldn't work as well. How about if the review were posted as Trac/Redmine/Jira issues on the projects instead, where the review manager can curate the discussion accordingly? Progress could be tracked in a wiki, which eventually becomes a "living record" of the actual review process -- meaning, especially in Trac, issues can be linked to from the wiki, so they can be referenced, cross-referenced, and collected accordingly. Comments in the issues should typically stay "on-topic", so moderating the discussion on issues would be a part of the normal chores for a review manager (or managers).
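[Editor's note: Trac's wiki syntax already supports the kind of cross-referencing described here. A hypothetical review page might look like the following sketch; the ticket numbers and page name are invented for illustration.]

```
= Review of Boost.Example =
Open design questions:
 * #101 -- naming of the main entry point
 * ticket:102 -- exception-safety guarantees of the insertion functions
Resolved items are collected on [wiki:ExampleReviewSummary the summary page].
```

`#101` and `ticket:102` both render as links to the corresponding tickets, and `[wiki:PageName label]` links between wiki pages, so a review manager could curate the "living record" with no extra tooling.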
In a way a review manager is an advocate for the library who is not as ego-burdened as the author(s).
+1 to that. I agree completely.
Sure, but that doesn't make the process collaborative -- which is actually my main "beef" with the current way things are going. And, even if someone were to re-write a signals implementation, there's no need to actually fork it as a separate project as it could very well just be an evolution of the implementation and just get the contribution in as part of the normal process. Then, the release managers just make a determination of whether to actually get a certain version of the signals implementation from one repo, or get another from another repo.
This seems to shift a lot of the decision-making to the release managers, who are already overworked. Review managers can better focus on their individual libraries and judge whether the conditions on acceptance were fulfilled. Joachim's proposal for review manager assistants would lighten their workload considerably.
Well, not entirely on the release managers, really. More on the community of trusted developers alongside the release managers. The logic goes this way:

Given:
- there are Release Managers whose main purpose is to make sure that the upcoming release is in a shippable state
- there are Trusted Developers working on the integration and stabilization effort on releases
- there is a list of libraries that are considered part of the Boost distribution

Then:
- a release will consist of snapshots of releases from individual libraries, packaged as a whole
- there may be patches made on the official distribution as part of the stabilization effort across Boost distribution releases, which should be submitted back "upstream" to the individual libraries
- the "developer community" around a particular library (I put that in quotes because it can very well be one person) then manages the development of that library and all patches made to it

So in that situation there are two places where a potential contributor can get involved: the evolution of an existing library (for example, signals), or the stabilization of releases, gaining status as a "trusted developer" through the web-of-trust system (as alluded to by the GPG key-signing mechanism, etc.). You lower the barrier in two ways:

1. You allow more chances for people to contribute in a less obtrusive manner. There is no need to get permission for commit access to the sandbox/trunk to get started with contributing.
2. You allow individual libraries to grow communities and/or evolve/mature independently.

Note that the status quo would be a subset of this larger scheme, which means people can keep working on a single Boost repository (be it Git or Subversion), and libraries can still evolve outside of Boost, getting integrated into the Boost repository later.
There's nothing really being removed in the process if you look at it, the current process would still be supported in this expanded process to lower the barrier to entry.
Maybe there is a case for maintenance review managers?
I haven't gotten that far yet though. ;) I usually just call that the community, which is more fluid -- there's no need to put a label on a role that pretty much any Boost contributor already fills by submitting patches and making sure the libraries they use are maintained accordingly. :D
This is a complex social process, and tools aren't going to make it easy. But they can help people make better judgements, and follow through better.
Definitely. Although I think the tools you use increase the upper bound on the productivity of a group in general. All other things being equal, better tools usually give you a better advantage. ;)
Thank you for raising many interesting ideas,
You're welcome, and thank you for the thoughtful response as well. :D -- Dean Michael Berris about.me/deanberris

On 1/1/2011 5:20 PM, Dean Michael Berris wrote:
On Wed, Dec 29, 2010 at 2:41 PM, Edward Diener<eldiener@tropicsoft.com> wrote:
As far as determining the quality of a library on an ongoing basis, anybody can currently suggest changes and new features. But I do not believe the developer should have to meet those new specs. OTOH I see nothing wrong with someone forking the library on his own and producing a second, very similar implementation with features he may deem necessary added, updated, or changed, and submitting that to Boost. This has already been done in some cases, such as signals and signals2, so why should not someone feel that it can be done elsewhere with another library.
Sure, but that doesn't make the process collaborative -- which is actually my main "beef" with the current way things are going. And, even if someone were to re-write a signals implementation, there's no need to actually fork it as a separate project as it could very well just be an evolution of the implementation and just get the contribution in as part of the normal process. Then, the release managers just make a determination of whether to actually get a certain version of the signals implementation from one repo, or get another from another repo.
Dean, you have very good points. We all want to have better processes in place and we all agree that there are various aspects of boost process that can be improved. However, in my opinion, there's nothing wrong with the collaborative environment in Boost. I've been collaborating with other Boost-izens for almost a decade now and that extends long before I got my first library into Boost. In my experience, collaboration started as soon as I introduced my library into Boost in the all-too-familiar "is anyone interested" fashion. Pre-boost collaboration happened at SourceForge for Spirit. Today, that may very well be github. I fail to see any problem with that. Git may be the next greatest tool out there. N-years down the line, it may be Wow-xxx or Geez-yyy or whatever, which will give us M more features. Then, one very bright individual will advocate the latest "greatest" tool proclaiming that he can't live without the features of Geez-yyy and that the current tools are broken and that if you begin to use Geez-yyy, you don't want to go back. Do we see a pattern emerging? I think it's an effect of the gadget generation. As for us, Spirit folks, we are quite happy with whatever tools Boost provides for us. We will continue to collaborate and innovate regardless of the tools. Remember: A bad craftsman always blames his tools. That said, I want to end this with a constructive note. I do think that you have very valid points and suggestions. My suggestion: focus on improving the process without focusing too much on the tools. As much as possible, take advantage of whatever tools are set in place. ***Prefer evolution instead of revolution.*** Regards, -- Joel de Guzman http://www.boostpro.com http://spirit.sf.net

On Sun, Jan 2, 2011 at 9:38 AM, Joel de Guzman <joel@boost-consulting.com> wrote:
On 1/1/2011 5:20 PM, Dean Michael Berris wrote:
Sure, but that doesn't make the process collaborative -- which is actually my main "beef" with the current way things are going. And, even if someone were to re-write a signals implementation, there's no need to actually fork it as a separate project as it could very well just be an evolution of the implementation and just get the contribution in as part of the normal process. Then, the release managers just make a determination of whether to actually get a certain version of the signals implementation from one repo, or get another from another repo.
Dean, you have very good points. We all want to have better processes in place and we all agree that there are various aspects of boost process that can be improved.
Thanks. :)
However, in my opinion, there's nothing wrong with the collaborative environment in Boost. I've been collaborating with other Boost-izens for almost a decade now and that extends long before I got my first library into Boost. In my experience, collaboration started as soon as I introduced my library into Boost in the all-too-familiar "is anyone interested" fashion. Pre-boost collaboration happened at SourceForge for Spirit. Today, that may very well be github. I fail to see any problem with that.
True, there are a lot of people already collaborating, and the environment is actually good -- for people already within the process, i.e. those who have Subversion access to the sandbox, are active on the mailing list, and are comfortable with creating patches and submitting them through Trac. Also, just for the record, I have no issue with the way collaboration is happening now. What I do have an issue with is the realization that the current process doesn't scale, that the barrier to entry is pretty high even for those most determined to contribute, and that the tools we currently use don't allow for a scalable means of accommodating more contributors and more effective collaboration. The idea is to tweak the current process so that collaboration can happen in a scalable fashion compared to what's happening now. The prerequisite for that would be making the contribution process easier -- either to the libraries under construction, or to those already in the distribution.
Git may be the next greatest tool out there. N-years down the line, it may be Wow-xxx or Geez-yyy or whatever, which will give us M more features. Then, one very bright individual will advocate the latest "greatest" tool proclaiming that he can't live without the features of Geez-yyy and that the current tools are broken and that if you begin to use Geez-yyy, you don't want to go back.
Do we see a pattern emerging? I think it's an effect of the gadget generation. As for us, Spirit folks, we are quite happy with whatever tools Boost provides for us. We will continue to collaborate and innovate regardless of the tools.
I for one would like to think that I'm not part of that gadget generation you refer to. ;-) Kidding aside, I see that the Spirit development effort has an effective means of releasing versions of Spirit that are "pulled in" to (or in this case, merged into) the Boost release. I also see that Spirit has a process of its own and a community of its own of users and developers. The same goes for Asio, which AFAIK has its own mailing list and a different development timeline. This is really the point I was trying to make: to encourage people to do the same, but in a slightly different (and arguably more scalable) paradigm. Instead of there being just one Boost library project where the chaos of innovating libraries and building consensus on which libraries to include in the distribution happens in a single place (centralized), there may very well be multiple library projects that have their own development pace, with a means of pulling a distribution together in a non-obtrusive and "seamless" manner (decentralized).
Remember: A bad craftsman always blames his tools.
Yes, and a good craftsman knows when the tools he uses aren't enough for the project at hand. ;-)
That said, I want to end this with a constructive note. I do think that you have very valid points and suggestions. My suggestion: focus on improving the process without focusing too much on the tools. As much as possible, take advantage of whatever tools are set in place.
***Prefer evolution instead of revolution.***
Thanks, I'll try to think of a way to make the decentralized development of Boost libraries happen with tools that work in a different paradigm. At the moment, the best model I know of for a scalable open source project in terms of development, evolution, and lowered barrier to entry for contributors is the one the Linux kernel and the Linux distribution projects follow. In these settings, the decentralized system is the one that works and where collaboration and active involvement is encouraged (and nurtured). Thanks again Joel. :) -- Dean Michael Berris about.me/deanberris

Am 27.12.2010 21:05, schrieb Vladimir Prus:
Dean Michael Berris wrote:
4. Change the review process instead from a submission->review->inclusion process that's rigidly scheduled to one that is less rigid and is more fluid.
I think that the current review process is actually good.
How many libs are in the review queue, and how long have they been waiting for a review (my libs have been waiting for more than a year)? The review process is very slow and could be much faster (at least for me). Oliver

Oliver Kowalke wrote:
Am 27.12.2010 21:05, schrieb Vladimir Prus:
Dean Michael Berris wrote:
4. Change the review process instead from a submission->review->inclusion process that's rigidly scheduled to one that is less rigid and is more fluid.
I think that the current review process is actually good.
How many libs are in the review queue, and how long have they been waiting for a review (my libs have been waiting for more than a year)? The review process is very slow and could be much faster (at least for me).
- Yes, it could. I get the impression that the review process is not actively driven -- in particular, I'm sure that if past review managers were contacted and asked whether they would be willing to review something again, we'd have quite a few slots in the schedule filled.
- Given that somebody still has to decide to include your library in the official release, you still depend on an active 'somebody'.
- Even now, nothing prevents you from publishing your library for anybody to try.
Am I missing something? Thanks, Volodya

-----Original Message----- From: boost-bounces@lists.boost.org [mailto:boost-bounces@lists.boost.org] On Behalf Of Vladimir Prus Sent: Tuesday, December 28, 2010 11:40 AM To: boost@lists.boost.org Subject: Re: [boost] Respecting a projects toolchain decisions (was Re: [context] new version - support for Win64)
Oliver Kowalke wrote:
Am 27.12.2010 21:05, schrieb Vladimir Prus:
Dean Michael Berris wrote:
4. Change the review process instead from a submission->review->inclusion process that's rigidly scheduled to one that is less rigid and more fluid.
I think that the current review process is actually good.
How many libs are in the review queue, and how long have they been waiting for a review (my libs have been waiting for more than a year)? The review process is very slow and could be much faster (at least for me).
- Yes, it could. I get the impression that the review process is not actively driven -- in particular, I'm sure that if past review managers were contacted and asked whether they would be willing to review something again, we'd have quite a few slots in the schedule filled.
- Given that somebody still has to decide to include your library in the official release, you still depend on an active 'somebody'.
- Even now, nothing prevents you from publishing your library for anybody to try.
Am I missing something?
I feel that there are big barriers to getting a 'user base' for libraries (for me, a very important part of the review process -- for the users will smoke out defects in both design and implementation and will contribute to the formal review process).
1 Not-yet-reviewed libraries are not in quite the right format (folders etc.) to be easily added to one's main Boost tree. (John Maddock has comments on how easy it is to use SVN for this, but I think many people need help/documentation on doing this.) <aside> the difference between CVS and SVN was mainly the decent user interface? </aside>
2 As I've said before, I believe we need a process change to package 'ready for review' libraries differently. Perhaps we might require a few sponsors (probably users) to give a library this status? And set up a new SVN tree 'Boost-Review', like the sandbox, but better ordered?
3 Not-yet-reviewed libraries are not in a (separate) SourceForge download. This makes them seem not 'kosher'.
Paul --- Paul A. Bristow, Prizet Farmhouse, Kendal LA8 8AB UK +44 1539 561830 07714330204 pbristow@hetp.u-net.com

Oliver Kowalke-2 wrote:
Am 27.12.2010 21:05, schrieb Vladimir Prus:
Dean Michael Berris wrote:
4. Change the review process instead from a submission->review->inclusion process that's rigidly scheduled to one that is less rigid and is more fluid.
I think that the current review process is actually good.
How many libs are in the review queue, and how long have they been waiting for a review (my libs have been waiting for more than a year)?
Oliver, your first version of ThreadPool could have been reviewed more than a year ago. Now that all your libs depend on Boost.Atomic, which is not yet in the review schedule, your libs are blocked by dependencies and cannot be reviewed -- or am I missing something?
The review process is very slow and could be much faster (at least for me).
I don't think the review process is slow. The major issue for most of them is that the review manager is missing. I would like to know how many libs in the list are really ready for review. Best, Vicente

On Tue, 28 Dec 2010 04:00:50 -0800 (PST) Vicente Botet <vicente.botet@wanadoo.fr> wrote:
The review process is very slow and could be much faster (at least for me).
I don't think the review process is slow. The major issue for most of them is that the review manager is missing.
I would like to know how many libs in the list are really ready for review.
XInt is ready, and has been since somewhere around July. I haven't been looking for a review manager very aggressively as yet (busy with other things), but I plan to in the very near future. -- Chad Nelson Oak Circle Software, Inc. * * *

At Tue, 28 Dec 2010 04:00:50 -0800 (PST), Vicente Botet wrote:
I don't think the review process is slow. The major issue for most of them is that the review manager is missing.
That's precisely what Joachim's proposal at BoostCon addressed. -- Dave Abrahams BoostPro Computing http://www.boostpro.com

At Tue, 28 Dec 2010 11:05:35 +0100, Oliver Kowalke wrote:
Am 27.12.2010 21:05, schrieb Vladimir Prus:
Dean Michael Berris wrote:
4. Change the review process instead from a submission->review->inclusion process that's rigidly scheduled to one that is less rigid and is more fluid.
I think that the current review process is actually good.
How many libs are in the review queue, and how long have they been waiting for a review (my libs have been waiting for more than a year)? The review process is very slow and could be much faster (at least for me).
At BoostCon, Joachim Faulhaber presented some great ideas for improving the review process, that IMO would keep its positive aspects while allowing the pace of review to increase. I keep hoping he'll make a case for those changes here, and help to implement them. -- Dave Abrahams BoostPro Computing http://www.boostpro.com

On 12/26/2010 8:31 AM, Dean Michael Berris wrote:
On Sun, Dec 19, 2010 at 12:43 AM, Lars Viklund<zao@acc.umu.se> wrote:
On Fri, Dec 17, 2010 at 9:34 PM, Lars Viklund<zao@acc.umu.se> wrote: On Fri, Dec 17, 2010 at 22:19, Dean Michael Berris <mikhailberis@gmail.com> wrote: Am 18.12.2010 09:47, schrieb Scott McMurray: The choice of whether the current system is sufficient is not made by some committee or some handful of users that get to decide whether the system is sufficient or otherwise.
Well, yes and no. Ultimately the choice is made by the Boost moderators and the people ponying up the server and personnel resources. There may be some consensus building on the dev list, but AFAIR how to serve the Boost sources is not a "community" choice.
In the end, the version control you choose is rather tangential. As long as it's sufficiently competent (which Subversion in my eyes is), you'll survive.
I think you haven't been looking at -- or are ignoring -- the problems that Boost is already having when it comes to making the development effort more scalable.
I have mentioned in the past that the real problems Boost has have nothing to do with the tools, but rather with the organization and process.
Of course, you may propose constructive criticism and suggest migration plans to other toolchains, with good arguments for why this is a good thing. See the mythical 'Ryppl' project, which aims to componentise Boost into a pile of Git repositories and some magical combination of scripts and CMake, aimed at letting you track exactly the versions of components you need.
Well, it's not mythical -- it's there, and the Boost libraries have pretty much been broken up already. The CMake migration is taking a while, and the only reason for that is that there isn't enough help going into the CMake effort.
The fact that "there aren't enough people" to make a cmake version possible should be an indication that it should be reconsidered. If it's not possible for *one* person, working part time, to create and maintain the build system, it's already failed.
Remember that no tool is isolated. Changing from Subversion to <whatever> would result in many changes propagating to how test runners are set up, rewriting of commit hooks, modifying Trac (if possible) (although the SVN functionality is disabled there for now), requiring adaptation of any entity out there that use Boost's repositories in any way, including externals, build scripts, CI environments, etc.
Well, see, all these things you mention are really tangential to the issue of whether you're using Subversion or Git.
Trac can be (and I think should be) abandoned for something that better reflects the workflow Boost would want to encourage, and that performs better on the machine available to it. If the solution is hosted for Boost, so much the better. Migration is always going to be an issue, but in reality it's a mechanical one. People just have to decide to do it, and then do it.
Well, that last part is your problem! It's not that people have to decide to do it... it's that people have to demonstrate it's possible, with actual use cases. For example, the CMake effort tried to make a build system equivalent to BBv2, and it did not entirely succeed in having the same features. The same applies to any system you might think of replacing. As a present example, I'm working on replacing the test reporting system of Boost. But you don't see me trying to convince anyone a priori to switch to it or to devote resources to it. When I'm done with it, I'll show it to the community. And if I'm lucky I'll convince enough people that it's worth switching (shouldn't be hard in this domain though ;-). And the moderators, testers, and others devoting their personal resources will decide to switch.
The commit hooks can be ported (quite easily if I may say so myself): http://www.kernel.org/pub/software/scm/git/docs/githooks.html if there was really enough momentum towards getting Boost from Subversion to Git. The regression test runners could very well just change the commands they use in the script -- instead of checking out, you'd clone, and instead of updating, you'd pull.
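[Editor's note: the command substitution Dean describes can be sketched as a tiny shell helper a test runner might use. The function name, directory layout, and repository URLs below are invented for illustration, not Boost's real endpoints.]

```shell
#!/bin/sh
# Hypothetical test-runner fragment: compute the fetch command for the
# configured VCS. Checkout/update under SVN maps to clone/pull under Git;
# everything else in the runner stays the same.
SVN_URL="https://svn.example.org/boost/trunk"
GIT_URL="https://git.example.org/boost.git"

fetch_cmd() {
    # $1 = vcs name, $2 = working-copy directory
    case "$1" in
        git)
            # first run: clone; subsequent runs: pull
            if [ -d "$2/.git" ]; then echo "git pull"
            else echo "git clone $GIT_URL $2"; fi ;;
        svn)
            if [ -d "$2/.svn" ]; then echo "svn update $2"
            else echo "svn checkout $SVN_URL $2"; fi ;;
    esac
}

# The real runner would eval the result; here we only print it.
fetch_cmd git boost
fetch_cmd svn boost
```

The point of the sketch: the VCS choice is isolated to one function, which is why the runner changes are mechanical rather than architectural.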
All these things you mention are artificially made to look "hard" because it's all a matter of migration really. The "hard" part is accepting that there are better solutions out there already.
Awesome... Please show us a working Git+Trac (or equivalent flavor of the software you are proposing) version of Boost with all the history and Trac tickets ported over, i.e. with a working migration plan, and I'll consider it. -- -- Grafik - Don't Assume Anything -- Redshift Software, Inc. - http://redshift-software.com -- rrivera/acm.org (msn) - grafik/redshift-software.com -- 102708583/icq - grafikrobot/aim,yahoo,skype,efnet,gmail

On 12/26/2010 11:17 PM, Rene Rivera wrote:
The fact that "there aren't enough people" to make a cmake version possible should be an indication that it should be reconsidered. If it's not possible for *one* person, working part time, to create and maintain the build system, it's already failed.
I don't think there is even one person actively working on it currently, and there hasn't been for some time. We've all been distracted with other things, including the guys from Kitware. IMO, we desperately need a modularized-Boost-CMake-build "guy". Volunteers would be appreciated. -- Eric Niebler BoostPro Computing http://www.boostpro.com

Am Sonntag, den 26.12.2010, 23:35 -0500 schrieb Eric Niebler:
On 12/26/2010 11:17 PM, Rene Rivera wrote:
The fact that "there aren't enough people" to make a cmake version possible should be an indication that it should be reconsidered. If it's not possible for *one* person, working part time, to create and maintain the build system, it's already failed.
I don't think there is even one person actively working on it currently, and there hasn't been for some time. We've all been distracted with other things, including the guys from Kitware. IMO, we desperately need a modularized-Boost-CMake-build "guy". Volunteers would be appreciated.
What exactly is the problem with CMake? Could you describe the role of this "CMake-guy" in more detail? cheers, Daniel

On 12/27/2010 6:12 AM, Daniel Pfeifer wrote:
Am Sonntag, den 26.12.2010, 23:35 -0500 schrieb Eric Niebler:
On 12/26/2010 11:17 PM, Rene Rivera wrote:
The fact that "there aren't enough people" to make a cmake version possible should be an indication that it should be reconsidered. If it's not possible for *one* person, working part time, to create and maintain the build system, it's already failed.
I don't think there is even one person actively working on it currently, and there hasn't been for some time. We've all been distracted with other things, including the guys from Kitware. IMO, we desperately need a modularized-Boost-CMake-build "guy". Volunteers would be appreciated.
What exactly is the problem with CMake?
No problem at all.
Could you describe the role of this "CMake-guy" in more detail?
There are several facets to this, and the first job is in figuring out how far to go. 1) There already was CMake support in Boost but it was removed because it was unmaintained. A "CMake-guy" could simply assume the responsibility of maintaining this separate build system across all of Boost. In this scenario, we leave the existing testing and release procedures in place. 2) A more ambitious plan is to modularize Boost, put each library into its own git repository, and port *that* to CMake. I was working on that until I got busy with other things. If someone were interested in running with this, I could help them get up to speed. The benefit of this would be a very nimble and flexible Boost library development ecosystem. The idea is very appealing to me. 3) Ryppl: this project had (2) as a sub-goal, but added a top-level tool that took meta-data about project dependencies (in git repositories), downloaded, built, installed, and tested everything. It was to be based on git, CMake, and python packaging support. We got it to the point where it could download, build, install and uninstall packages and resolve some simple dependencies. It got hung up on resolving more complicated dependencies, which is a hard problem in the general case. It's doable, but in the end, we just didn't have enough free time. In all cases, there is also the issue of migration, and how it affects testing, release procedures, trac, the website, etc. etc. It's acceptable to simply say it doesn't affect them at all and simply ship CMake as an optional alternate build system. But a "CMake guy" would have to commit long-term to maintaining it. -- Eric Niebler BoostPro Computing http://www.boostpro.com

On Mon, Dec 27, 2010 at 10:55 AM, Eric Niebler <eric@boostpro.com> wrote:
3) Ryppl: this project had (2) as a sub-goal, but added a top-level tool that took meta-data about project dependencies (in git repositories), downloaded, built, installed, and tested everything. It was to be based on git, CMake, and python packaging support. We got it to the point where it could download, build, install and uninstall packages and resolve some simple dependencies. It got hung up on resolving more complicated dependencies, which is a hard problem in the general case. It's doable, but in the end, we just didn't have enough free time.
I still intend to produce useful results on this front in the next 120 days. -- Dave Abrahams BoostPro Computing http://www.boostpro.com

On 12/27/2010 4:38 PM, Dave Abrahams wrote:
I still intend to produce useful results on this front in the next 120 days.
I am trying to use and install RYPPL and am having a problem. I followed the instructions found at 'http://github.com/ryppl/ryppl'. When trying to run self_test.py, I get the error that there is no module named distutils2, which I did install successfully. Besides wanting a fix for this problem, is there any documentation or tutorial from the end user's perspective? An example of how to create a new dll project, another dll project that depends upon the first, and finally an executable project that depends upon them both would be great. The problem is I am not an experienced Python programmer and as such don't know the best practices. To compound matters, most web documentation covers pip and distutils from the Python programmer's perspective and not for a C++ programmer. Consequently users like me need help from you C++ and Python veterans using these individual and combined technologies. P.S. I have tried using Maven successfully for C++ but what it doesn't provide [well] is the distribution of source. While it can build source and handle dependencies, it distributes the deliverable results by default. Maybe the previous example could show using pip and distutils with C++ and then how adding RYPPL ties them all together.
--------------------------------------------------------------------------
-------------------- Results of installing Distutils2 --------------------
--------------------------------------------------------------------------
C:\Python27\Scripts\pip install Distutils2 --upgrade
Downloading/unpacking Distutils2
  Downloading Distutils2-1.0a3.tar.gz (878Kb): 878Kb downloaded
  Running setup.py egg_info for package Distutils2
Installing collected packages: Distutils2
  Found existing installation: Distutils2 1.0a3
    Uninstalling Distutils2:
      Successfully uninstalled Distutils2
  Running setup.py install for Distutils2
Successfully installed Distutils2
Cleaning up...
-----------------------------------------------------------------------------------------------
-------------------- Results of trying to run test ryppl\test\self_test.py --------------------
-----------------------------------------------------------------------------------------------
C:\Python27\python.exe self_test.py
Checking for installed prerequisites in PATH: git ... cmake ... ok
Cleaning ... ok
Preparing test environment ...
pip install virtualenv ... ok
pip install --no-index -f http://pypi.python.org/packages/source/n/nose/ nose ... ok
pip install scripttest>=1.0.4 ... ok
ok
E
======================================================================
ERROR: Failure: ImportError (No module named distutils2)
----------------------------------------------------------------------
Traceback (most recent call last):
  File "c:\docume~1\jwater~1.que\locals~1\temp\tmpsoe7ht-ryppl_self_test\lib\site-packages\nose\loader.py", line 390, in loadTestsFromName
    addr.filename, addr.module)
  File "c:\docume~1\jwater~1.que\locals~1\temp\tmpsoe7ht-ryppl_self_test\lib\site-packages\nose\importer.py", line 39, in importFromPath
    return self.importFromDir(dir_path, fqname)
  File "c:\docume~1\jwater~1.que\locals~1\temp\tmpsoe7ht-ryppl_self_test\lib\site-packages\nose\importer.py", line 86, in importFromDir
    mod = load_module(part_fqname, fh, filename, desc)
  File "D:\Documents and Settings\JWATERLOO\My Documents\My Downloads\ryppl.org\Source Code\ryppl\test\test_basic.py", line 16, in <module>
    import distutils2
ImportError: No module named distutils2
----------------------------------------------------------------------
Ran 1 test in 0.140s

FAILED (errors=1)
Cleaning ...
ok
Traceback (most recent call last):
  File "self_test.py", line 130, in <module>
    main( sys.argv[1:] )
  File "self_test.py", line 121, in main
    run( *test_cmd )
  File "self_test.py", line 12, in run
    check_call(args)
  File "C:\Python27\lib\subprocess.py", line 504, in check_call
    raise CalledProcessError(retcode, cmd)
subprocess.CalledProcessError: Command '(Path(u'c:\\docume~1\\jwater~1.que\\locals~1\\temp\\tmpsoe7ht-ryppl_self_test\\Scripts\\nosetests.EXE'), '--exe', '-w', Path(u'D:\\Documents and Settings\\JWATERLOO\\My Documents\\My Downloads\\ryppl.org\\Source Code\\ryppl\\test'))' returned non-zero exit status 1

Jarrad Waterloo wrote:
On 12/27/2010 4:38 PM, Dave Abrahams wrote:
I still intend to produce useful results on this front in the next 120 days.
I am trying to use and install RYPPL and am having a problem.
I'd imagine the discussion of using ryppl belongs to its mailing list. This thread is already too crowded. - Volodya

On 12/28/2010 10:10 AM, Vladimir Prus wrote:
I am trying to use and install RYPPL and am having a problem. I'd imagine the discussion of using ryppl belongs to its mailing list. This thread is already too crowded.
Where is the join page of the RYPPL mailing list? Thank You!

On 12/28/2010 10:16 AM, Jarrad Waterloo wrote:
On 12/28/2010 10:10 AM, Vladimir Prus wrote:
I am trying to use and install RYPPL and am having a problem. I'd imagine the discussion of using ryppl belongs to its mailing list. This thread is already too crowded.
Where is the join page of the RYPPL mailing list? Thank You!
http://groups.google.com/group/ryppl-dev -- Eric Niebler BoostPro Computing http://www.boostpro.com

At Mon, 27 Dec 2010 12:12:30 +0100, Daniel Pfeifer wrote:
Could you describe the role of this "CMake-guy" in more detail?
* Marcus Hanwell of Kitware did a bunch of work on CMake builds for modularized boost, but we never got to the point that *everything* built, installed, and passed the tests. I'm not up-to-speed on the specifics of which parts worked and which didn't, but Eric filed some bug reports someplace. He also prepared a dashboard (http://my.cdash.org/index.php?project=Ryppl) More details at http://groups.google.com/group/ryppl-dev/browse_thread/thread/bd42cdc422f9c2... * Denis Arnaud has been maintaining a (non-modularized) Boost-CMake distribution; it's being used for packaging boost for one of the Linux distros, IIUC. see https://groups.google.com/group/ryppl-dev/browse_thread/thread/4e2ffe397d03e... for details. * Someone who really knows CMake needs to pull these efforts together and get them into solid working condition. * Ideally we'd be able to compare the results from the current Boost testing matrix with those generated for the new system -- Dave Abrahams BoostPro Computing http://www.boostpro.com

On Mon, Dec 27, 2010 at 12:17 PM, Rene Rivera <grafikrobot@gmail.com> wrote:
On 12/26/2010 8:31 AM, Dean Michael Berris wrote:
The choice of whether the current system is sufficient is not made by some committee or some handful of users that get to decide whether the system is sufficient or otherwise.
Well, yes, and no.. Ultimately the choice is made by the Boost moderators and the people ponying up the server and personnel resources. They may be some consensus building on the dev list but AFAIR how to serve the Boost sources is not a "community" choice.
Ah, right. I completely missed that part. I guess this should be written down somewhere, so that poor souls like me who are under the delusion that Boost is a community project can be pointed to it.
I think you haven't been looking at -- or are ignoring -- the problems that Boost is already having when it comes to making the development effort more scalable.
I have mentioned in the past that the real problems Boost has have nothing to do with the tools, but instead with the organization and process.
Okay, fine. So I guess I should say that Boost's problem is that the leaders of the project -- the people mentioned above -- make the choices that potential contributors don't really want to abide by, and have a process in place that's not conducive to a faster contribution and/or more scalable innovation pace. Since the tools suggest and support a centralized development model, how do you suggest you put in place an organization and process that isn't centralized? This is not a rhetorical question, I really want to know.
Well, it's not mythical -- it's there, and the Boost Libraries have pretty much been broken up already. The CMake migration is taking a while and the only reason for that is there isn't enough help going into the CMake effort.
The fact that "there aren't enough people" to make a cmake version possible should be an indication that it should be reconsidered. If it's not possible for *one* person, working part time, to create and maintain the build system, it's already failed.
Well, it's not entirely true. There's one Boost port that uses CMake that is done by one person part-time. That effort I think is on-going for Debian packaging (or something to that effect). He's on the Ryppl list too IIRC. What's not going swimmingly well is the part where Boost libraries get broken up into individual Git repositories each with their own CMake build systems for tests and what not, and then having a means of globbing them together when you pull in the release. It's not entirely impossible, it's just that this high-level kung fu with CMake to make this happen "seamlessly" is, well, high-level kung fu -- similar to the same kind of Kung Fu required to make the same happen with Boost.Build/Boost.Jam. Now, there's a fork in the road which the Ryppl folks are wanting to ponder which direction to go. One path takes Boost down to making it depend entirely on CMake for the dependency and discovery process -- something that will require quite some investment into writing CMake scripts. It's not entirely rocket science, it's just a lot of work to do. And, given the obvious resistance by some (or maybe everyone?) on the main Boost developers mailing list into moving away from Boost.Build/Boost.Jam to CMake as the build system for Boost, this path might not be the best path to take if only for the politics of the matter. The other path takes Ryppl down the full-blown-glue that takes in these disparate Boost libraries which have been broken up into multiple Git repositories, adds metadata then uses the individual CMake files that are in these repositories. This path also deals with the smart dependency management that is oh so fascinatingly close to impossible to solve optimally. That's another can of worms that has its own issues. I've already said a lot already on this, but really this discussion is better done in a different thread, which should really be a Ryppl update that shouldn't really be done by me. :D
Well, see, all these things you mention are really tangential to the issue of whether you're using Subversion or Git.
Trac can be (and I think, should be) abandoned for something that reflects better the workflow that Boost would want to encourage and that performs better on the machine that is available to it. If the solution is hosted for Boost then I would say it would be better. Migration is always going to be an issue, but it's a mechanical issue in reality. People just have to decide to do it, and then do it.
Well, that last part is your problem! It's not that people have to decide to do it.. It's that people have to demonstrate it's possible with actual use cases.
Eh? Are you seriously saying that you haven't seen any other workflow besides the one that is already in place that will work for Boost?
For example, the CMake effort tried to make a build system equivalent to BBv2, and it did not entirely succeed in having the same features. The same applies to any system you might think of replacing.
Okay... So replacing Subversion with Git is going to be an issue because... Git supports all the things that Subversion supports and is a distributed version control system to boot? I don't see the logic in that.
As a present example.. I'm working on replacing the test reporting system of Boost. But you don't see me trying to convince anyone a priori to switch to it or to devote resources to it. When I'm done with it, I'll show it to the community. And if I'm lucky I'll convince enough people that it's worth switching, shouldn't be hard in this domain though ;-) And the moderators, testers, and others devoting their personal resources will decide to switch.
Cool. Now though, imagine when you're done with replacing the test reporting system and instead of the community getting in what they think it should have and also be able to help out in the effort, you slave through that on your own and in the end other people think what you've done is insufficient. Because you've hidden this work from the community, you're not giving the community a chance to help you out even in a little way by looking at what you're trying to accomplish and maybe seeing things differently from your vision of the solution. Maybe it's not your style, but I think this is precisely the reason why the Boost library development process isn't as community friendly as the other open source projects are. Because there's this "I'll go do it my way, and then show it to everyone when I'm done" attitude, the opportunity for collaboration is lost except in the very end when it's almost too late to make any changes. At any rate, I still look forward to a cooler way of seeing the regression test results. :D
The commit hooks can be ported (quite easily if I may say so myself): http://www.kernel.org/pub/software/scm/git/docs/githooks.html if there was really enough momentum towards getting Boost from Subversion to Git. The regression test runners could very well just change the commands they use in the script -- instead of checking out, you'd clone, and instead of updating, you'd pull.
All these things you mention are artificially made to look "hard" because it's all a matter of migration really. The "hard" part is accepting that there are better solutions out there already.
Awesome... Please show us a working Git+Trac (or equivalent flavor of software you are proposing) of Boost with all the history and trac tickets ported over, i.e. with a working migration plan, and I'll consider it.
That's too easy. First, I can't port all the Boost tickets yet, but it's a matter of scripting the move of the tickets from Trac to GitHub issues. If you don't like how GitHub does issue tracking (which is really simple, with a tagging system akin to Gmail labels) then we can use a JIRA installation -- it's free for Open Source projects to use, can be hosted on pretty much any machine, and there are importers for different issue tracking systems, like Trac. I recently got my hands on a Redmine installation and that's really darn cool looking. Migrating the tickets over would be a matter of writing the scripts to make that happen. If people think it's worth doing then I might spend an afternoon writing the Python/Ruby scripts to make that migration happen. Maybe people with more Python/Ruby kung fu can make that happen faster. It might just be me though thinking that Boost might benefit from greater community involvement and encouraging collaborative development over the current system. If that's the case, then I pretty much give up on that, and maybe try my luck again at convincing people on the list to give Git and JIRA a chance next year. ;) -- Dean Michael Berris about.me/deanberris

On 12/27/2010 8:49 AM, Dean Michael Berris wrote:
On Mon, Dec 27, 2010 at 12:17 PM, Rene Rivera<grafikrobot@gmail.com> wrote:
On 12/26/2010 8:31 AM, Dean Michael Berris wrote:
Since the tools suggest and support a centralized development model, how do you suggest you put in place an organization and process that isn't centralized? This is not a rhetorical question, I really want to know.
Well, I'd first ask if we want a decentralized organization and/or process. Or the question I really want to ask... Does a decentralized org and/or process have any advantages over the open Guild process we've been discussing?
Trac can be (and I think, should be) abandoned for something that reflects better the workflow that Boost would want to encourage and that performs better on the machine that is available to it. If the solution is hosted for Boost then I would say it would be better. Migration is always going to be an issue, but it's a mechanical issue in reality. People just have to decide to do it, and then do it.
Well, that last past is your problem! It's not that people have to decide to do it.. It's that people have to demonstrate it's possible with actual use cases.
Eh?
Are you seriously saying that you haven't seen any other workflow besides the one that is already in place that will work for Boost?
No. I'm saying that the only way to know if a work-flow works is to try it (or some reasonable approximation to that). At worst, but optimal in effect, it means someone has to follow the new work-flow within Boost as a real use case.
For example the Cmake effort tried to make a build system equivalent to BBv2, and it did not entirely succeed in having the same features. The same applies to any system you might think of replacing.
Okay... So replacing Subversion with Git is going to be an issue because... Git supports all the things that Subversion supports and is a distributed version control system to boot? I don't see the logic in that.
I never said it would be a problem to switch to Git. I did say no one has demonstrated it can be fully done. Say we switch to git.. How will a tester that doesn't have access to networking except for web, and possibly ftp, and more importantly doesn't have git on the testing machine, achieve pulling the sources and posting the results? What extra requirements are there for testers, users, authors, review managers, release managers, etc.? How do their jobs change? Or maybe you've already mentioned all that, and I just missed it :-\
As a present example.. I'm working on replacing the test reporting system of Boost. But you don't see me trying to convince anyone a priory to switch to it or to devote resources to it. When I'm done with it, I'll show it to the community. And if I'm lucky I'll convince enough people that it's worth switching, shouldn't be hard in this domain though ;-) And the moderators, testers, and others devoting their personal resources will decide to switch.
Cool.
Now though, imagine when you're done with replacing the test reporting system and instead of the community getting in what they think it should have and also be able to help out in the effort, you slave through that on your own and in the end other people think what you've done is insufficient.
No problem.. Wouldn't be the first time ;-)
Because you've hidden this work from the community, you're not giving the community a chance to help you out even in a little way by looking at what you're trying to accomplish and maybe seeing things differently from your vision of the solution.
Practically what I've found out is that there's plenty of vision and no follow through.
Maybe it's not your style,
In this case it's because I have ulterior motives outside of Boost. I.e. it's not an open-source project I'm working on.
but I think this is precisely the reason why the Boost library development process isn't as community friendly as the other open source projects are.
Perhaps, but it's also what's made it successful in other ways. So a key question is how to get more community involvement without throwing away the parts that work.
Because there's this "I'll go do it my way, and then show it to everyone when I'm done" attitude, the opportunity for collaboration is lost except in the very end when it's almost too late to make any changes.
It's closer to "We'll go do it our way, and then show it to everyone when we have something". But I think the start of the process is a minor part of the broken picture. The process of submission is as community driven as it can get. It's the process after acceptance and inclusion that is really broken at the moment.
At any rate, I still look forward to a cooler way of seeing the regression test results. :D
And I look forward to showing it.
It might just be me though thinking that Boost might benefit from greater community involvement and encouraging collaborative development over the current system. If that's the case, then I pretty much give up on that, maybe try much luck again at convincing people in the list to maybe give Git and JIRA a chance next year. ;)
It's not just you that's thinking about it, as evidenced by the various community discussions recently. Just remember that you are dealing with a very skeptical, stubborn crowd here ;-) -- -- Grafik - Don't Assume Anything -- Redshift Software, Inc. - http://redshift-software.com -- rrivera/acm.org (msn) - grafik/redshift-software.com -- 102708583/icq - grafikrobot/aim,yahoo,skype,efnet,gmail

On 10-12-28 12:30 AM, Rene Rivera wrote:
Does a decentralized org and/or process have any advantages over the open Guild process we've been discussing?
Sorry if I have not kept up with "open Guild" but it sounds to me like a key component is one requires permission and the other doesn't. Over the years, I have accumulated dozens of patches in various places and while I probably submitted some through Trac, if I could just fork a repo in an organized manner without asking anyone's permission, I could keep my patches in one place (irony?) and in sync with some ease. This benefits Boost because then my patches are not in multiple repositories behind different corporate firewalls, but somewhere accessible to others. Not that they are anything spectacular, mind you! I think you *might* enable more contributions as a benefit of a decentralized process just based on my experience. -- Sohail Somani -- iBlog : http://uint32t.blogspot.com iTweet: http://twitter.com/somanisoftware iCode : http://bitbucket.org/cheez

Sorry if I have not kept up with "open Guild" but it sounds to me like a key component is one requires permission and the other doesn't.
Over the years, I have accumulated dozens of patches in various places and while I probably submitted some through Trac, if I could just fork a repo in an organized manner without asking anyone's permission, I could keep my patches in one place (irony?) and in sync with some ease. This benefits Boost because then my patches are not in multiple repositories behind different corporate firewalls, but somewhere accessible to others. Not that they are anything spectacular, mind you!
I think you *might* enable more contributions as a benefit of a decentralized process just based on my experience.
Well maybe, but how will developers ever know about these patches unless you tell them about them (presumably via an issue tracker)? Come to that, can't you do this right now with git-svn? Cheers, John.

On 10-12-28 12:54 PM, John Maddock wrote:
I think you *might* enable more contributions as a benefit of a decentralized process just based on my experience.
Well maybe, but how will developers ever know about these patches unless you tell them about them (presumably via an issue tracker)?
Well, yes I'm not saying that people will flock to my fork. That same process would still continue. But if you work with multiple sites, you end up having local patches for each.
Come to that, can't you do this right now with git-svn?
Apparently: https://github.com/ryppl/boost-svn That's good enough for me. -- Sohail Somani -- iBlog : http://uint32t.blogspot.com iTweet: http://twitter.com/somanisoftware iCode : http://bitbucket.org/cheez

Come to that, can't you do this right now with git-svn? Apparently: https://github.com/ryppl/boost-svn That's good enough for me.
Thanks for the github link. I'd like to better manage our somewhat hacked boost fork. git seems to me a better approach than svn in terms of having a rather ad-hoc arrangement such as:

boost svn -> github.com/boost-svn -> mycompany.com/boost <-> myproject/boost
    |                                                            |
    <------------------------------------------------------------

- Nigel

At Mon, 27 Dec 2010 23:30:20 -0600, Rene Rivera wrote:
On 12/27/2010 8:49 AM, Dean Michael Berris wrote:
On Mon, Dec 27, 2010 at 12:17 PM, Rene Rivera<grafikrobot@gmail.com> wrote:
On 12/26/2010 8:31 AM, Dean Michael Berris wrote:
Since the tools suggest and support a centralized development model, how do you suggest you put in place an organization and process that isn't centralized? This is not a rhetorical question, I really want to know.
Well, I'd first ask if we want a decentralized organization and/or process. Or the question I really want to ask... Does a decentralized org and/or process have any advantages over the open Guild process we've been discussing?
The guild is a great idea. We should definitely do it. However, some percentage of people will not want to be part of that structure. The question is, how seamless is the transition into being part of the Guild, or part of Boost itself, when they decide to change? Git makes that work really well. SVN, less so. FWIW.
Are you seriously saying that you haven't seen any other workflow besides the one that is already in place that will work for Boost?
No. I'm saying that the only way to know if a work-flow works is to try it (or some reasonable approximation to that). At worst, but optimal in effect, it means someone has to follow the new work-flow within Boost as a real use case.
Yep.
For example the Cmake effort tried to make a build system equivalent to BBv2, and it did not entirely succeed in having the same features. The same applies to any system you might think of replacing.
Okay... So replacing Subversion with Git is going to be an issue because... Git supports all the things that Subversion supports and is a distributed version control system to boot? I don't see the logic in that.
I never said it would be a problem to switch to Git. I did say no one has demonstrated it can be fully done. Say we switch to git.. How will a tester that doesn't have access to networking except for web, and possibly ftp, and more importantly doesn't have git on the testing machine, achieve pulling the sources and posting the results?
Easy: github provides links that download tarballs :-)
What extra requirements are there for testers, users, authors, review managers, release managers, etc.? How do their jobs change?
Important questions.
but I think this is precisely the reason why the Boost library development process isn't as community friendly as the other open source projects are.
Perhaps, but it's also what's made it successful in other ways. So a key question is how to get more community involvement without throwing away the parts that work.
Yep.
Because there's this "I'll go do it my way, and then show it to everyone when I'm done" attitude, the opportunity for collaboration is lost except in the very end when it's almost too late to make any changes.
It's closer to "We'll go do it our way, and then show it to everyone when we have something". But I think the start of the process is a minor part of the broken picture. The process of submission is as community driven as it can get. It's the process after acceptance and inclusion that is really broken at the moment.
+1
At any rate, I still look forward to a cooler way of seeing the regression test results. :D
And I look forward to showing it.
+1 -- Dave Abrahams BoostPro Computing http://www.boostpro.com

Dean Michael Berris wrote:
As for version control, what does it matter if Boost uses Subversion, when you as a DVCS user can trivially use git-svn [1] to interop against the repository (in this case, the sandbox). You get to use your favourite toy, without affecting the existing infrastructure in any way.
Yes, it matters. Let me state a few reasons why:
1. Precisely because it's Subversion, a non-distributed configuration management system, the process of getting changes in and innovating is slowed down by the bottleneck that is the centralized source code management system.
Would you please give a concrete example where innovation in a library that you personally are contributing to is slowed down by centralized version control?
2. Having potential contributors to Boost deal with Subversion from the outside, through a hack like git-svn, is just a Bad Idea. If a library being developed to become part of Boost has to go into the Sandbox, then developing it in a collaborative manner becomes a lot harder. I've already pointed out the reasons for this in another thread, pleading to get Boost development out of a centralized system and into a more distributed one.
Have you ever used git-svn in practice? I use it daily, and it's not entirely clear whether git or git-svn is the worse "hack".
3. Because of the central management that Subversion promotes, libraries developed by other people and meant to be integrated into the Boost sources will have trouble moving their history into the Boost Subversion system -- nearly impossible if you think about it.
You *really* should use git-svn. It's trivial to push any line of history to any branch on any subversion server.
Of course, you may propose constructive criticism and suggest migration plans to other toolchains, with good arguments for why this is a good thing. See the mythical 'Ryppl' project, which aims to componentise Boost into a pile of Git repositories and some magical combination of scripts and CMake, aimed at letting you track exactly the versions of components you need.
Well, it's not mythical -- it's there, and the Boost libraries have pretty much been broken up already. The CMake migration is taking a while, and the only reason for that is that there isn't enough help going into the CMake effort.
Actually, not quite. The primary reason is that while CMake was originally touted as a turn-key solution, turning the key did not work -- so much so that a Kitware engineer had to be brought in to fix issues, and apparently hasn't finished yet. - Volodya

On Mon, Dec 27, 2010 at 1:27 PM, Vladimir Prus <vladimir@codesourcery.com> wrote:
Dean Michael Berris wrote:
As for version control, what does it matter if Boost uses Subversion, when you as a DVCS user can trivially use git-svn [1] to interop against the repository (in this case, the sandbox). You get to use your favourite toy, without affecting the existing infrastructure in any way.
Yes, it matters. Let me state a few reasons why:
1. Precisely because it's Subversion, a non-distributed configuration management system, the process of getting changes in and innovating is slowed down by the bottleneck that is the centralized source code management system.
Would you please give a concrete example where innovation in a library that you personally are contributing to is slowed down by centralized version control?
Boost.Pool is one, and the other was Boost.Iterators. With Boost.Pool, there's no maintainer around (that I know of) who's checking patches submitted to it and making sure the changes make it into the releases. With Boost.Iterators, it took a while to get a patch for an additional iterator implementation into trunk -- and I'm not even sure if my patch has made it into the release yet. And these are just the libraries I've tried contributing to. I'm sure there are other libraries out there, like Boost.Serialization, where the port to Boost.Spirit's Qi got bogged down because Bryce couldn't get commit access early enough to be able to make changes to the Boost.Serialization parser implementation.
2. Having potential contributors to Boost deal with Subversion from the outside, through a hack like git-svn, is just a Bad Idea. If a library being developed to become part of Boost has to go into the Sandbox, then developing it in a collaborative manner becomes a lot harder. I've already pointed out the reasons for this in another thread, pleading to get Boost development out of a centralized system and into a more distributed one.
Have you ever used git-svn in practice? I use it daily, and it's not entirely clear whether git or git-svn is the worse "hack".
Yes -- and if you use Git like you use SVN, then you won't have problems. But if, like me, you have a lot of small changes checked into Git, multiple branches, and multiple integration points from different branches (and repositories), then you'll see how git-svn is more pain to deal with than it's worth.
3. Because of the central management that Subversion promotes, libraries developed by other people and meant to be integrated into the Boost sources will have trouble moving their history into the Boost Subversion system -- nearly impossible if you think about it.
You *really* should use git-svn. It's trivial to push any line of history to any branch on any subversion server.
No, sorry. If you have a ton of changes in SVN repository SVN-A, and a ton of changes in SVN repository SVN-B, then merging the histories of SVN-A and SVN-B turns into gobbledygook. Sure, you can graft histories together, but it's a bad hack and you don't really achieve what you want in the end.
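To make the grafting point concrete: within Git itself, tying two unrelated histories together is mechanical (which says nothing about round-tripping the result back through git-svn, the pain point described above). A minimal local sketch, with made-up repositories and file contents:

```shell
# Minimal sketch: merging two unrelated Git histories locally.
# Repository names and contents are made up for illustration.
set -e
work=$(mktemp -d)
cd "$work"

# Create two independent repositories with separate root commits.
git init -q repo-a
(cd repo-a &&
 git config user.email a@example.com && git config user.name "Dev A" &&
 echo a > a.txt && git add a.txt && git commit -qm "A: initial commit")

git init -q repo-b
(cd repo-b &&
 git config user.email b@example.com && git config user.name "Dev B" &&
 echo b > b.txt && git add b.txt && git commit -qm "B: initial commit")

# Pull repo-b's whole history into repo-a and tie the two roots together.
cd repo-a
git fetch -q ../repo-b HEAD
git merge -q --allow-unrelated-histories -m "Merge history of repo-b" FETCH_HEAD
git log --oneline   # shows commits from both original repositories
```

The resulting history has two root commits joined by a merge; whether that counts as "what you want" for a Boost library import is exactly the judgment call being argued here.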
Well, it's not mythical -- it's there, and the Boost libraries have pretty much been broken up already. The CMake migration is taking a while, and the only reason for that is that there isn't enough help going into the CMake effort.
Actually, not quite. The primary reason is that while CMake was originally touted as a turn-key solution, turning the key did not work -- so much so that a Kitware engineer had to be brought in to fix issues, and apparently hasn't finished yet.
Well, I'm not sure if you've followed the latest discussions on the Ryppl ML, but basically what has already happened is: Boost has been broken up into multiple Git repositories, sync'ed with Subversion. The issue now is getting the individual CMake files in the Git repositories to register themselves with the bigger Boost distribution CMake configuration. CMake wasn't originally made to handle this specific use case, and making it happen is what's taking a while. The other issue has to do with Ryppl's dependency management, but that's another can of worms in itself. Anyway, really, CMake has been a joy to deal with, except where you have to do something out of the ordinary. What the modularized Boost effort is facing is really something unique that I don't think Boost.Build is able to solve either. Of course you're welcome to prove me wrong on that. :) -- Dean Michael Berris about.me/deanberris

Dean Michael Berris wrote on Monday, December 27, 2010 On Mon, Dec 27, 2010 at 1:27 PM, Vladimir Prus <vladimir@codesourcery.com> wrote:
Would you please give a concrete example where innovation in a library that you personally are contributing to is slowed down by centralized version control?
Boost.Pool is one, and the other was Boost.Iterators. With Boost.Pool, there's no maintainer around (that I know of) who's checking patches submitted to it and making sure the changes make it into the releases. With Boost.Iterators, it took a while to get a patch for an additional iterator implementation into trunk -- and I'm not even sure if my patch has made it into the release yet.
It's not clear to me that these problems have anything to do with Subversion... regardless of which vcs is used, *someone* is going to be the designated maintainer/owner of the library, and if you can't get them to include your patches, your patches won't be in the Boost release, right? In that case you'd need to do the same thing you would do under the current system... get yourself named maintainer/owner and then do it yourself. Am I missing something there? It seems to me that there's no shortage of innovation going on in Boost... I've seen many fine libraries released over the years. My observation is that there's some shortage of people who want to do the laborious work... review manager, release manager, BoostCon organizer, etc. My hat's off to the folks that do the heavy lifting there, and it seems reasonable that they should have a larger voice on how that process works. Erik ---------------------------------------------------------------------- This message w/attachments (message) is intended solely for the use of the intended recipient(s) and may contain information that is privileged, confidential or proprietary. If you are not an intended recipient, please notify the sender, and then please delete and destroy all copies and attachments, and be advised that any review or dissemination of, or the taking of any action in reliance on, the information contained in or attached to this message is prohibited. Unless specifically indicated, this message is not an offer to sell or a solicitation of any investment products or other financial product or service, an official confirmation of any transaction, or an official statement of Sender. Subject to applicable law, Sender may intercept, monitor, review and retain e-communications (EC) traveling through its networks/systems and may produce any such EC to regulators, law enforcement, in litigation and as required by law. 
The laws of the country of each sender/recipient may impact the handling of EC, and EC may be archived, supervised and produced in countries other than the country in which you are located. This message cannot be guaranteed to be secure or free of errors or viruses. References to "Sender" are references to any subsidiary of Bank of America Corporation. Securities and Insurance Products: * Are Not FDIC Insured * Are Not Bank Guaranteed * May Lose Value * Are Not a Bank Deposit * Are Not a Condition to Any Banking Service or Activity * Are Not Insured by Any Federal Government Agency. Attachments that are part of this EC may have additional important disclosures and disclaimers, which you should read. This message is subject to terms available at the following link: http://www.bankofamerica.com/emaildisclaimer. By messaging with Sender you consent to the foregoing.

As far as I understand, and tell me if I'm wrong, the point Dean is making is that using a decentralised source control tool would allow anyone to just clone the original library, make changes, put the modified library online, and let anyone get "just the change-sets" he made, maybe along with more change-sets from different people who have fixed different things in the library. If the maintainer of the library wants to get those change-sets into the authoritative repository, that's good; but as that takes time, letting people freely expose changes in a better form than patch files in a bug-tracker will help users quickly get fixes and enhancements that are not yet reviewed by maintainers. It would also let people propose change-sets to those making non-authoritative changes, without first having to wait for their changes to make it into the authoritative repository. It's only about exposing change-sets, in fact. Am I correct? Joel On Mon, Dec 27, 2010 at 16:39, Nelson, Erik - 2 <erik.l.nelson@bankofamerica.com> wrote:
Dean Michael Berris wrote on Monday, December 27, 2010
On Mon, Dec 27, 2010 at 1:27 PM, Vladimir Prus <vladimir@codesourcery.com> wrote:
Would you please give a concrete example where innovation in a library that you personally are contributing to is slowed down by centralized version control?
Boost.Pool is one, and the other was Boost.Iterators. With Boost.Pool, there's no maintainer around (that I know of) who's checking patches submitted to it and making sure the changes make it into the releases. With Boost.Iterators, it took a while to get a patch for an additional iterator implementation into trunk -- and I'm not even sure if my patch has made it into the release yet.
It's not clear to me that these problems have anything to do with Subversion... regardless of which vcs is used, *someone* is going to be the designated maintainer/owner of the library, and if you can't get them to include your patches, your patches won't be in the Boost release, right?
In that case you'd need to do the same thing you would do under the current system... get yourself named maintainer/owner and then do it yourself.
Am I missing something there?
It seems to me that there's no shortage of innovation going on in Boost... I've seen many fine libraries released over the years. My observation is that there's some shortage of people who want to do the laborious work... review manager, release manager, BoostCon organizer, etc. My hat's off to the folks that do the heavy lifting there, and it seems reasonable that they should have a larger voice on how that process works.
Erik

On Mon, Dec 27, 2010 at 11:39 PM, Nelson, Erik - 2 <erik.l.nelson@bankofamerica.com> wrote:
Dean Michael Berris wrote on Monday, December 27, 2010
On Mon, Dec 27, 2010 at 1:27 PM, Vladimir Prus <vladimir@codesourcery.com> wrote:
Would you please give a concrete example where innovation in a library that you personally are contributing to is slowed down by centralized version control?
Boost.Pool is one, and the other was Boost.Iterators. With Boost.Pool, there's no maintainer around (that I know of) who's checking patches submitted to it and making sure the changes make it into the releases. With Boost.Iterators, it took a while to get a patch for an additional iterator implementation into trunk -- and I'm not even sure if my patch has made it into the release yet.
It's not clear to me that these problems have anything to do with Subversion... regardless of which vcs is used, *someone* is going to be the designated maintainer/owner of the library, and if you can't get them to include your patches, your patches won't be in the Boost release, right?
Right, but then as I mention in a different thread, the maintainer/owner of the library can be MIA, and then I can ask the release managers and/or someone else who can pull the changes into their repository and shepherd the changes in. People (or in this case, I) can keep innovating and then the changes can get into the release in a less centralized manner -- which is the whole point of a decentralized system.
In that case you'd need to do the same thing you would do under the current system... get yourself named maintainer/owner and then do it yourself.
Well, it's really not that easy -- becoming maintainer of something is a lot more baggage than just being that guy who made changes and had them pulled into the main release. I should know, as the maintainer of cpp-netlib at the moment. ;)
Am I missing something there?
Maybe just the background information expressed in a different thread, which I kind of assumed would be read along the same lines as this one -- which may be my mistake. :D
It seems to me that there's no shortage of innovation going on in Boost... I've seen many fine libraries released over the years. My observation is that there's some shortage of people who want to do the laborious work... review manager, release manager, BoostCon organizer, etc. My hat's off to the folks that do the heavy lifting there, and it seems reasonable that they should have a larger voice on how that process works.
Sure, if you think having 20+ libraries in the review queue, lots of patches waiting around for years in Trac, and then seeing some of your efforts either ignored or forgotten is a good enough pace of innovation... then I can say yeah, there's good innovation going on. ;) But seriously, it's not about a lack of innovation; it's that the pace and means apparently do not scale. The suggestions being raised (by me) are meant to reduce the amount of work and lower the barrier to entry for potential contributors/maintainers of Boost libraries, both present and future. -- Dean Michael Berris about.me/deanberris

Dean Michael Berris wrote on Monday, December 27, 2010 10:58 AM
On Mon, Dec 27, 2010 at 11:39 PM, Nelson, Erik - 2 <erik.l.nelson@bankofamerica.com> wrote:
Dean Michael Berris wrote on Monday, December 27, 2010
Boost.Pool is one, and the other was Boost.Iterators. With Boost.Pool, there's no maintainer around (that I know of) who's checking patches submitted to it and making sure the changes make it into the releases. With Boost.Iterators, it took a while to get a patch for an additional iterator implementation into trunk -- and I'm not even sure if my patch has made it into the release yet.
It's not clear to me that these problems have anything to do with Subversion... regardless of which vcs is used, *someone* is going to be the designated maintainer/owner of the library, and if you can't get them to include your patches, your patches won't be in the Boost release, right?
Right, but then as I mention in a different thread, the maintainer/owner of the library can be MIA, and then I can ask the release managers and/or someone else who can pull the changes into their repository and shepherd the changes in. People (or in this case, I) can keep innovating and then the changes can get into the release in a less centralized manner -- which is the whole point of a decentralized system.
It seems to me you could write almost the same thing using SVN-speak... "if the owner is MIA, I can send the patch to the release manager who can apply the patch to SVN and shepherd the changes in. That way I can keep innovating on my local working copy and the change can get into the release on its own schedule" Anyway, it seemed to me that the issue with patches for Boost.Pool and Boost.Iterators wasn't any inability to supply the patches to the 'official' place... it was an inability to get the attention of those who had the power to get the patches into 'official' Boost. Git won't change that. Erik

Right, but then as I mention in a different thread, the maintainer/owner of the library can be MIA, and then I can ask the release managers and/or someone else who can pull the changes into their repository and shepherd the changes in. People (or in this case, I) can keep innovating and then the changes can get into the release in a less centralized manner -- which is the whole point of a decentralized system.
It seems to me you could write almost the same thing using SVN-speak...
"if the owner is MIA, I can send the patch to the release manager who can apply the patch to SVN and shepherd the changes in. That way I can keep innovating on my local working copy and the change can get into the release on its own schedule"
Anyway, it seemed to me that the issue with patches for Boost.Pool and Boost.Iterators wasn't any inability to supply the patches to the 'official' place... it was an inability to get the attention of those who had the power to get the patches into 'official' Boost. Git won't change that.
I'm mostly staying out of this... but that sounds about right to me... at least we have a centralized place to put patches/bug reports/feature requests, what we really need is more folks to process them.

Further, in order to process patches into the "official" release, we really need "that one guy" who just knows lib X inside out and can look at a patch and just know whether it's going to be OK or not. I'd say about 3/4 of the patches I get are spot on, and frankly I never fail to be impressed by the quality of a lot of them. But there's about a quarter that are either downright dangerous, or at least will cause more trouble down the line if applied. Often these problem patches aren't obviously problems; it's just that the person supplying them quite clearly doesn't have the time to really get to know the library inside out, so they can be supplied by perfectly trustworthy individuals. Hell, I may have submitted a few myself! Too many patches like that, and the whole thing can snowball, making it much harder to fix things properly down the road.

On the issue of library maintainers... we don't actually need permanent full-time maintainers for a lot of the older stuff... all they may need is a good spruce-up every 5 years or so, with maybe the odd patch applied here and there in between if there's a new compiler that causes issues. Maybe as well as bug sprints, we should organize a few "hit squads" to go after an older library, rejuvenate it, and then move on to the next target. In fact, if folks think that might be a good idea, I might be moved to try and organize something...

Just my 2c, John.

On Tue, Dec 28, 2010 at 2:14 AM, John Maddock <boost.regex@virgin.net> wrote:
Anyway, it seemed to me that the issue with patches for Boost.Pool and Boost.Iterators wasn't any inability to supply the patches to the 'official' place... it was an inability to get the attention of those who had the power to get the patches into 'official' Boost. Git won't change that.
I'm mostly staying out of this... but that sounds about right to me... at least we have a centralized place to put patches/bug reports/feature requests, what we really need is more folks to process them.
Right. There's a different way of looking at it too: "At least we have a centralized place to put garbage in and let it rot; what we really need is more folks to segregate the trash and manage one big pile." To me at least, that centralized place to gather patches/bug reports/feature requests isn't actually a good thing. Let me state a few reasons why I think this:

1. The signal-to-noise ratio can be hard to keep up, especially if you have a lot of ground to cover. Consider how you (or any other maintainer, for example) would want to manage your part of the 1000 tickets that are all in the same pile. Sure, you can query it many different ways, but wouldn't it be way easier to just look at 100 issues just for Boost.Regex than to spend time looking at 1000 issues that might be relevant to Boost.Regex?

2. It's harder to divide and conquer if you start from the top down. Let me qualify that a little: if you start with one big-ass pile of dung, the stink is much harder to overcome than if you processed the input as it comes in and segregated bottom-up (no pun intended). If you had one place where issues for Boost.Regex get tracked, where discussion around Boost.Regex gets documented, where design decisions are hashed out, and where documentation is ultimately developed, then your progress in dealing with Boost.Regex wouldn't hamper the progress and development of other libraries not dependent on Boost.Regex. This means issues for Boost.Proto don't get piled into the same pile where Boost.Regex issues will be piled on. Processing the issues as they come in would be way easier to manage than starting with one pile containing both.

3. I'm not sure how much the "single point of failure" comes into play, but centralized anything means that if that one thing goes down, then everything fails. I don't think I need to stress that point any more than I have to. ;)
Further, in order to process patches into the "official" release, we really need "that one guy" who just knows lib X inside out and can look at a patch and just know whether it's going to be OK or not. I'd say about 3/4 of the patches I get are spot on, and frankly I never fail to be impressed by the quality of a lot of them. But there's about a quarter that are either downright dangerous, or at least will cause more trouble down the line if applied. Often these problem patches aren't obviously problems; it's just that the person supplying them quite clearly doesn't have the time to really get to know the library inside out, so they can be supplied by perfectly trustworthy individuals. Hell, I may have submitted a few myself! Too many patches like that, and the whole thing can snowball, making it much harder to fix things properly down the road.
Unfortunately, insisting on "that one guy" being there and doing what you describe as essentially maintenance is actually part of the reason why the development model doesn't scale, IMHO. Trusting people, and empowering them to just muck around with things and then ask for their changes -- which still need to be reviewed anyway -- to get baked in and shepherded by trusted people (note: not just "that one guy"), lowers the barrier to entry for contributors. It's a better problem to have 10x more contributors than 10x more issues. Because sending patches around is brittle and a nightmare to manage, using tools that make it easier should be a welcome development, I imagine. That said, consider the case where you have 5 trusted people who you know understand the Boost.Regex internals as well as you do or better, and 10 people who implement new features and make changes to the implementation -- would you rather deal with the changes of those 10 people yourself, or would you welcome the word of any of the 5 trusted people to apply, on your behalf, changes that any of the 10 make? In current parlance, the 5 trusted people would be called "co-maintainers" and the 10 "potential contributors"; once any of the 10 have their changes pulled in, they become "contributors". Apply this logic to every Boost library that's actively maintained (and maybe not so actively maintained), and you have a wider contribution base -- maybe some people would call it a more scalable model, but I don't want to jump the gun on that yet. ;)
On the issue of library maintainers.... we don't actually need permanent full time maintainers for a lot of the older stuff... all they may need is a good spruce up every 5 years or so, with maybe the odd patch applied here and there in-between if there's a new compiler that causes issues. Maybe as well as bug sprints, we should organize a few "hit squads" to go after an older library, rejuvenate it and then move on to the next target. In fact if folks think that might be a good idea, I might be moved to try and organize something....
Sure, that's a thought, but it's a band-aid, short-term solution. The bug sprints are a good idea, but I don't get why a bug sprint can't last a whole year and be an on-going effort. Bug sprints and hit squads (ninja clans, strike teams, etc.) are short-term, non-sustainable answers to the issue of open source maintenance. I for one, as a potential contributor, would like to be encouraged to dive into the code, get some changes submitted, and see that there are people who actually care. With the current process and system in place, I don't feel it's a conducive environment for potential contributors, simply because of the barriers to entry. Although I agree that it weeds out people who aren't serious about contributing, I don't see it encouraging more people to become contributors either.
Just my 2c, John.
Definitely worth much more IMO. :) Thanks John. -- Dean Michael Berris about.me/deanberris

Dean Michael Berris wrote:
1. The signal-to-noise ratio can be hard to keep up, especially if you have a lot of ground to cover. Consider how you (or any other maintainer, for example) would want to manage your part of the 1000 tickets that are all in the same pile. Sure, you can query it many different ways, but wouldn't it be way easier to just look at 100 issues just for Boost.Regex than to spend time looking at 1000 issues that might be relevant to Boost.Regex?
Michael, you seem to be making very strange points here. I have a saved query in Trac for all the components that I maintain, bookmarked in my browser; once a week I press Alt-F2, type "Boost Trac", and examine those tickets without seeing anything I don't care about. Is there a reason you think this approach won't work for everybody? Like, is there a web browser that lacks bookmark functionality? - Volodya
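For readers who haven't used Trac: the saved query described above is just a bookmarkable URL built from Trac's query module. A hypothetical example filtering open tickets for a single component (the component name here is made up for illustration; Boost's Trac lived under svn.boost.org) might look like:

```
https://svn.boost.org/trac/boost/query?status=new&status=assigned&status=reopened&component=regex
```

Repeated `status=` parameters OR the values together, so this shows everything not yet closed for that one component.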

On Tue, Dec 28, 2010 at 2:50 PM, Vladimir Prus <vladimir@codesourcery.com> wrote:
Dean Michael Berris wrote:
1. The signal-to-noise ratio can be hard to keep up, especially if you have a lot of ground to cover. Consider how you (or any other maintainer, for example) would want to manage your part of the 1000 tickets that are all in the same pile. Sure, you can query it many different ways, but wouldn't it be way easier to just look at 100 issues just for Boost.Regex than to spend time looking at 1000 issues that might be relevant to Boost.Regex?
Michael,
you seem to be making very strange points here. I have a saved query in Trac for all the components that I maintain, which is bookmarked in my browser, and once a week, I press Alt-F2, type "Boost Trac", and examine those tickets without seeing anything I don't care about. Is there a reason you think this approach won't work for everybody? Like, is there a web browser that lacks bookmarks functionality?
Sure, but the whole point that you have a central place to query the information is what's broken -- especially if you have to resort to these "hacks" just to filter out what's important for you. Imagine if you had one issue tracker per Boost library. Then you don't have to worry about crafting the queries to get the relevant information in the first place. And then it's going to be easier to develop milestones per library than creating one big milestone and having one giant release. You can then have different workflows per Boost library depending on what the developers of the library are comfortable with. The idea that you have a single place for everything and "just one way" to do things is really what I'm having an issue with. Sure, we can set up standards that people follow already -- especially when it comes to code, license, etc. -- but asking everyone to follow the same development pace and congregate on one single repository and issue tracker sounds more like the "cathedral" model than the "bazaar" model. I for one like the bazaar. ;) HTH -- Dean Michael Berris about.me/deanberris

Dean Michael Berris wrote:
On Tue, Dec 28, 2010 at 2:50 PM, Vladimir Prus <vladimir@codesourcery.com> wrote:
Dean Michael Berris wrote:
1. The signal/noise ratio can be hard to keep down especially if you have a lot of ground to cover. Consider how you (or any other maintainer for example) would want to manage a part of the 1000 tickets that are all in the same pile. Sure you can query it many different ways, but wouldn't it be way easier to just look at 100 issues just for Boost.Regex than it is to spend some time looking at 1000 issues that might be relevant to Boost.Regex?
Michael,
you seem to be making very strange points here. I have a saved query in Trac for all the components that I maintain, which is bookmarked in my browser, and once a week, I press Alt-F2, type "Boost Trac", and examine those tickets without seeing anything I don't care about. Is there a reason you think this approach won't work for everybody? Like, is there a web browser that lacks bookmarks functionality?
Sure, but the whole point that you have a central place to query the information is what's broken -- especially if you have to resort to these "hacks" just to filter out what's important for you.
Imagine if you had one issue tracker per Boost library. Then you don't have to worry about crafting the queries to get the relevant information in the first place. And then it's going to be easier to develop milestones per library than creating one big milestone and having one giant release. You can then have different workflows per Boost library depending on what the developers of the library are comfortable with.
I'd be very much against the idea of checking 4 different sites as opposed to 1 -- at least, until I have 4 pairs of hands to work on 4 different projects at the same time. In fact, I so much dislike having to check N different issue trackers that I have a student working on a tool to present issues from different trackers in a single UI. However, until that project is done, I'd much rather not have things split up unnecessarily. Even Linux has a single bug tracker, you know. - Volodya

On Tue, Dec 28, 2010 at 7:34 PM, Vladimir Prus <vladimir@codesourcery.com> wrote:
Dean Michael Berris wrote:
Imagine if you had one issue tracker per Boost library. Then you don't have to worry about crafting the queries to get the relevant information in the first place. And then it's going to be easier to develop milestones per library than creating one big milestone and having one giant release. You can then have different workflows per Boost library depending on what the developers of the library are comfortable with.
I'd be very much against the idea of checking 4 different sites as opposed to 1 -- at least, until I have 4 pairs of hands to work on 4 different projects at the same time.
I'm not talking about having 4 different sites for one library -- I'm saying, for each Boost library there should be one issue tracker. This means if you need to check anything regarding that library, then you go to exactly one place. I think part of this line of reasoning from me assumes that Boost isn't just one library, but actually many libraries distributed as a single downloadable glob. I think that has to change for any significant progress to be made on the Boost front.
In fact, I so much dislike having to check N different issue trackers that I have a student working on a tool to present issues from different trackers in a single UI. However, until that project is done, I'd much rather not have things split up unnecessarily.
But what I've been pointing out is that it has come to the point where it's now necessary to break Boost up into multiple projects, each one building a community of users/developers that maintain a specific part.
Even Linux has a single bug tracker, you know.
But Linux is just one kernel. ;) Imagine globbing together the issues of glibc, gcc, and the Linux kernel into one issue tracker, just because they're all part of the LSB -- that's what's happening in Boost now IMO, which is not scalable. -- Dean Michael Berris about.me/deanberris

Dean Michael Berris wrote:
On Tue, Dec 28, 2010 at 7:34 PM, Vladimir Prus <vladimir@codesourcery.com> wrote:
Dean Michael Berris wrote:
Imagine if you had one issue tracker per Boost library. Then you don't have to worry about crafting the queries to get the relevant information in the first place. And then it's going to be easier to develop milestones per library than creating one big milestone and having one giant release. You can then have different workflows per Boost library depending on what the developers of the library are comfortable with.
I'd be very much against the idea of checking 4 different sites as opposed to 1 -- at least, until I have 4 pairs of hands to work on 4 different projects at the same time.
I'm not talking about having 4 different sites for one library -- I'm saying, for each Boost library there should be one issue tracker.
No, I mean that I'll be opposed to visiting 4 different issue trackers for 4 different components I maintain. I want to see a single list of issues on my plate, so that I can prioritise them together.
In fact, I so much dislike having to check N different issue trackers that I have a student working on a tool to present issues from different trackers in a single UI. However, until that project is done, I'd much rather not have things split up unnecessarily.
But what I've been pointing out is that it has come to the point where it's now necessary to break Boost up into multiple projects, each one building a community of users/developers that maintain a specific part.
I don't think you have proven this, yet. Your proposed split will only cause pain for me.
Even Linux has a single bug tracker, you know.
But Linux is just one kernel. ;) Imagine globbing together the issues of glibc, gcc, and the Linux kernel into one issue tracker, just because they're all part of the LSB -- that's what's happening in Boost now IMO, which is not scalable.
glibc, gcc, gdb and binutils all live in the same issue tracker. - Volodya

No, I mean that I'll be opposed to visiting 4 different issue trackers for 4 different components I maintain. I want to see a single list of issues on my plate, so that I can prioritise them together.
Isn't it more an issue-tracking problem? TRAC is well known to be able to manage only 1 project (that will change with coming versions). I've used Redmine to compare with TRAC; it allows managing hierarchies of projects and having cross-project ticket requests too. I'm not advocating for a change to Redmine, just that TRAC could be updated (later, not now) to manage libraries as different projects (that can share the same repo or not). I'm saying this because I know that the component field alone isn't powerful enough when it comes to managing different modules or libraries inside the same organisation. It's better used to say whether the ticket is about documentation, for example. That said, I'm talking about features not available yet in TRAC, and I don't know exactly the cost of moving a big tracker database to another one, so ignore my humble comment as you wish :) On Tue, Dec 28, 2010 at 14:41, Vladimir Prus <vladimir@codesourcery.com> wrote:
No, I mean that I'll be opposed to visiting 4 different issue trackers for 4 different components I maintain. I want to see a single list of issues on my plate, so that I can prioritise them together.

Klaim wrote:
No, I mean that I'll be opposed to visiting 4 different issue trackers for 4 different components I maintain. I want to see a single list of issues on my plate, so that I can prioritise them together.
Isn't it more an issue-tracking problem? TRAC is well known to be able to manage only 1 project (that will change with coming versions). I've used Redmine to compare with TRAC; it allows managing hierarchies of projects and having cross-project ticket requests too. I'm not advocating for a change to Redmine, just that TRAC could be updated (later, not now) to manage libraries as different projects (that can share the same repo or not). I'm saying this because I know that the component field alone isn't powerful enough when it comes to managing different modules or libraries inside the same organisation. It's better used to say whether the ticket is about documentation, for example. That said, I'm talking about features not available yet in TRAC, and I don't know exactly the cost of moving a big tracker database to another one, so ignore my humble comment as you wish :)
I have heard positive comments about Redmine, too, exactly concerning its ability to work with multiple projects better than Trac. However, I don't have practical experience with it, so whether it is enough better than Trac to justify a migration is something we should determine when/if Dean or someone else posts a specific proposal to move from Trac. - Volodya

If you have any interest in testing it without having to find a server host and install it yourself (it's not as easy as they say; even TRAC is simpler...), I can create accounts on my "private" installation (on a publicly available server) that is mostly set up for experimenting and comparing with TRAC (so you can play with it without worrying about side effects on other projects inside it). Just tell me (and others too) if you want access to play with it. I'm wishing to help improve Boost any way I can. :) On Tue, Dec 28, 2010 at 15:08, Vladimir Prus <vladimir@codesourcery.com> wrote:
Klaim wrote:
No, I mean that I'll be opposed to visiting 4 different issue trackers for 4 different components I maintain. I want to see a single list of issues on my plate, so that I can prioritise them together.
Isn't it more an issue-tracking problem? TRAC is well known to be able to manage only 1 project (that will change with coming versions). I've used Redmine to compare with TRAC; it allows managing hierarchies of projects and having cross-project ticket requests too. I'm not advocating for a change to Redmine, just that TRAC could be updated (later, not now) to manage libraries as different projects (that can share the same repo or not). I'm saying this because I know that the component field alone isn't powerful enough when it comes to managing different modules or libraries inside the same organisation. It's better used to say whether the ticket is about documentation, for example. That said, I'm talking about features not available yet in TRAC, and I don't know exactly the cost of moving a big tracker database to another one, so ignore my humble comment as you wish :)
I have heard positive comments about Redmine, too, exactly concerning its ability to work with multiple projects better than Trac. However, I don't have practical experience with it, so whether it is enough better than Trac to justify a migration is something we should determine when/if Dean or someone else posts a specific proposal to move from Trac.
- Volodya
_______________________________________________ Unsubscribe & other changes: http://lists.boost.org/mailman/listinfo.cgi/boost

(Maybe this should be moved to another thread.) About moving from TRAC, I'm not the most experienced here, but my personal research into bug tracking tools came to this:
1. TRAC is at the moment well thought out for big libraries (the implementors look really experienced and think a lot before coding), better than Redmine, which is still young (in my view at least).
2. TRAC will have the major feature everyone asks for in coming releases: multiple-project management. Redmine has had it from the start. The TRAC schedule is always (very) late, so it depends on when you want what.
3. Redmine is really powerful on the user-experience side: it's easier to do anything than in TRAC. However, some improvements in TRAC make me think enhancements are going on that might help it get back to the Redmine level. But...
4. ...Redmine evolves clearly faster than TRAC. Lots of releases per year, almost all easy to update to. Even if Redmine lacks some features I'd like to see in both TRAC and Redmine, it has a better chance of implementing them first.
5. I've posted tickets on the Redmine project saying what I think is wrong in it. I did the same in TRAC too. I can provide links if you want, but I'm not sure my experience is really worth much, so let me know.
6. I've worked with Jira before. It's good too, but I think the request interface is as bad as the TRAC one (that might be personal). Also, there doesn't seem to be any way to set up a specific ticket workflow (which is vital to adapt the tool to your specific team organisation), but I'm not sure about that (I didn't have access to the server where our JIRA was hosted to see how it was configured).
7. A lot of people (including the OGRE library's lead developer) moved from TRAC to Redmine with success. I thought of doing the same for my bigger project, but some things in Redmine make me think that I should stay with TRAC for the moment.
So my current thinking is that TRAC is good enough if you already have a project in it, would be even better if the next release comes not too late (because it is almost always too late), and it's a good idea to watch Redmine evolve, because it might quickly get to the point of being the de facto TRAC killer. That said, if Boost does split up its libraries (however that is done), that would imply having multiple projects in TRAC. My 2 cents on this specific point. On Tue, Dec 28, 2010 at 15:08, Vladimir Prus <vladimir@codesourcery.com> wrote:
Klaim wrote:
No, I mean that I'll be opposed to visiting 4 different issue trackers for 4 different components I maintain. I want to see a single list of issues on my plate, so that I can prioritise them together.
Isn't it more an issue-tracking problem? TRAC is well known to be able to manage only 1 project (that will change with coming versions). I've used Redmine to compare with TRAC; it allows managing hierarchies of projects and having cross-project ticket requests too. I'm not advocating for a change to Redmine, just that TRAC could be updated (later, not now) to manage libraries as different projects (that can share the same repo or not). I'm saying this because I know that the component field alone isn't powerful enough when it comes to managing different modules or libraries inside the same organisation. It's better used to say whether the ticket is about documentation, for example. That said, I'm talking about features not available yet in TRAC, and I don't know exactly the cost of moving a big tracker database to another one, so ignore my humble comment as you wish :)
I have heard positive comments about Redmine, too, exactly concerning its ability to work with multiple projects better than Trac. However, I don't have practical experience with it, so whether it is enough better than Trac to justify a migration is something we should determine when/if Dean or someone else posts a specific proposal to move from Trac.
- Volodya

At Tue, 28 Dec 2010 15:02:07 +0100, Klaim wrote:
No, I mean that I'll be opposed to visiting 4 different issue trackers for 4 different components I maintain. I want to see a single list of issues on my plate, so that I can prioritise them together.
Isn't it more an issue-tracking problem? TRAC is well known to be able to manage only 1 project (that will change with coming versions)
No, it won't. Do you know how old that feature request is? 7 years (http://trac.edgewall.org/ticket/130)! A few years ago I gave up waiting for the Trac people to handle it and authored an extension to do that: http://trac.edgewall.org/wiki/TracMultipleProjects/ComprehensiveSolution and couldn't maintain it. Very happily using Redmine now. -- Dave Abrahams BoostPro Computing http://www.boostpro.com

I know about the 7-year existence of the ticket, but if you follow closely the (very) recent evolution of TRAC, or simply look at the features of the (late) coming version: http://trac.edgewall.org/wiki/TracDev/ReleaseNotes/0.13 The discussion about how it will be achieved is there: http://trac.edgewall.org/wiki/TracDev/Proposals/MultipleProject The related tasks planned are there: http://trac.edgewall.org/wiki/MultipleProjectSupport You see that it's planned (at least). It was not planned before, as you pointed out correctly. However, I agree that you can't count on it at the moment nor in the near future, so if multi-project management becomes a requirement, TRAC can't be kept, and Redmine and Jira become the only good alternatives. I personally prefer Redmine but would still like it to be more mature in some aspects. On Wed, Dec 29, 2010 at 03:15, Dave Abrahams <dave@boostpro.com> wrote:
At Tue, 28 Dec 2010 15:02:07 +0100, Klaim wrote:
No, I mean that I'll be opposed to visiting 4 different issue trackers for 4 different components I maintain. I want to see a single list of issues on my plate, so that I can prioritise them together.
Isn't it more an issue-tracking problem? TRAC is well known to be able to manage only 1 project (that will change with coming versions)
No, it won't. Do you know how old that feature request is? 7 years (http://trac.edgewall.org/ticket/130)! A few years ago I gave up waiting for the Trac people to handle it and authored an extension to do that: http://trac.edgewall.org/wiki/TracMultipleProjects/ComprehensiveSolution and couldn't maintain it. Very happily using Redmine now.
-- Dave Abrahams BoostPro Computing http://www.boostpro.com

Klaim, please don't top post. See http://www.boost.org/community/policy.html#quoting That said, please see my response in-lined below: On Wed, Dec 29, 2010 at 7:49 PM, Klaim <mjklaim@gmail.com> wrote:
I know about the 7-year existence of the ticket, but if you follow closely the (very) recent evolution of TRAC, or simply look at the features of the (late) coming version: http://trac.edgewall.org/wiki/TracDev/ReleaseNotes/0.13 The discussion about how it will be achieved is there: http://trac.edgewall.org/wiki/TracDev/Proposals/MultipleProject The related tasks planned are there: http://trac.edgewall.org/wiki/MultipleProjectSupport You see that it's planned (at least). It was not planned before, as you pointed out correctly.
Sorry, but I myself loved Trac when it came out, as it looked better than Bugzilla. I was stuck in a place where Bugzilla was sacred and the thought of moving to something better didn't even get discussed, as it was one of those "taboo" issues. Anyway, I'm glad they stuck with Bugzilla, because even Bugzilla seems to be evolving much faster/better than Trac. A project doesn't inspire confidence if there isn't an active community of users and developers dedicated to improving the product. Even if Trac 0.13 does deliver on its promises and does do things in a better way, it would take time for me to even consider using it again.
However, I agree that you can't count on it at the moment nor in the near future, so if multi-project management becomes a requirement, TRAC can't be kept, and Redmine and Jira become the only good alternatives. I personally prefer Redmine but would still like it to be more mature in some aspects.
I'm a +1 for either Redmine or Jira, but mostly Jira because of the community aspects it introduces. With Jira you can let members of the community vote up a certain issue that's raised. This makes it much easier to see which issues actually have a higher impact based on community feedback, and to plan which ones to address. I'm positive Redmine has a similar mechanism, but I'd be happy moving to either one of them as long as it's away from Trac. ;) -- Dean Michael Berris about.me/deanberris

Dean Michael Berris wrote on Tuesday, December 28, 2010 7:05 AM
I'm not talking about having 4 different sites for one library -- I'm saying, for each Boost library there should be one issue tracker. This means if you need to check anything regarding that library, then you go to exactly one place.
It sounds to me like Vladimir goes to exactly one place right now. Erik

On Tue, Dec 28, 2010 at 10:23 PM, Nelson, Erik - 2 <erik.l.nelson@bankofamerica.com> wrote:
Dean Michael Berris wrote on Tuesday, December 28, 2010 7:05 AM
I'm not talking about having 4 different sites for one library -- I'm saying, for each Boost library there should be one issue tracker. This means if you need to check anything regarding that library, then you go to exactly one place.
It sounds to me like Vladimir goes to exactly one place right now.
Well, if you look at it the same way I do, in Github I only ever go to one place too. I can follow the repositories/projects I'm interested in, and find the appropriate issues that I care about accordingly too. -- Dean Michael Berris about.me/deanberris

At Tue, 28 Dec 2010 14:34:23 +0300, Vladimir Prus wrote:
In fact, I so much dislike having to check N different issue trackers that I have a student working on a tool to present issues from different trackers in a single UI.
Oh, so cool. I'm rooting for you. -- Dave Abrahams BoostPro Computing http://www.boostpro.com

AMDG On 12/28/2010 3:34 AM, Vladimir Prus wrote:
Dean Michael Berris wrote:
Imagine if you had one issue tracker per Boost library. Then you don't have to worry about crafting the queries to get the relevant information in the first place. And then it's going to be easier to develop milestones per library than creating one big milestone and having one giant release. You can then have different workflows per Boost library depending on what the developers of the library are comfortable with. I'd be very much against the idea of checking 4 different sites as opposed to 1 -- at least, until I have 4 pairs of hands to work on 4 different projects at the same time. In fact, I so much dislike having to check N different issue trackers that I have a student working on a tool to present issues from different trackers in a single UI. However, until that project is done, I'd much rather not have things split up unnecessarily.
I very strongly agree. I periodically go through every open ticket in trac. Having separate issue trackers would make this practically impossible. In Christ, Steven Watanabe

On Thu, Dec 30, 2010 at 2:09 PM, Steven Watanabe <watanabesj@gmail.com> wrote:
AMDG
On 12/28/2010 3:34 AM, Vladimir Prus wrote:
I'd be very much against the idea of checking 4 different sites as opposed to 1 -- at least, until I have 4 pairs of hands to work on 4 different projects at the same time. In fact, I so much dislike having to check N different issue trackers that I have a student working on a tool to present issues from different trackers in a single UI. However, until that project is done, I'd much rather not have things split up unnecessarily.
I very strongly agree. I periodically go through every open ticket in trac. Having separate issue trackers would make this practically impossible.
Interesting point. Definitely well taken. I just wonder though how that practice of "periodically going through every open ticket in Trac" scales. Is it just me who doesn't do this and who would rather focus on libraries that actually mean more to me? -- Dean Michael Berris about.me/deanberris

AMDG On 1/1/2011 1:41 AM, Dean Michael Berris wrote:
On Thu, Dec 30, 2010 at 2:09 PM, Steven Watanabe<watanabesj@gmail.com> wrote:
I very strongly agree. I periodically go through every open ticket in trac. Having separate issue trackers would make this practically impossible. Interesting point. Definitely well taken.
I just wonder though how that practice of "periodically going through every open ticket in Trac" scales.
I don't know. It hasn't been a problem up to 1000 tickets. I think I could still handle it up to a few times larger. Anyway I don't do this all the time, so it's okay if it takes a while.
Is it just me who doesn't do this and who would rather focus on libraries that actually mean more to me?
I expect that most people are like you in this respect. I do think it is important for someone to do it, though, so that critical and/or trivial tickets against less well maintained libraries don't slip through the cracks. In Christ, Steven Watanabe

On Sat, Jan 1, 2011 at 7:17 PM, Steven Watanabe <watanabesj@gmail.com> wrote:
AMDG
On 1/1/2011 1:41 AM, Dean Michael Berris wrote:
I just wonder though how that practice of "periodically going through every open ticket in Trac" scales.
I don't know. It hasn't been a problem up to 1000 tickets. I think I could still handle it up to a few times larger. Anyway I don't do this all the time, so it's okay if it takes a while.
Steven, it might be easier for you to change the global TRAC configuration to send a mail to you at every change. It is at least a complement to what you're doing. /$
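[Editorial aside: the Trac side of Henrik's suggestion is a small piece of trac.ini configuration. A sketch, assuming a 0.11/0.12-era Trac; the address is a placeholder, and option names should be checked against the installed version's documentation:]

```ini
[notification]
smtp_enabled = true
# this address is copied on every ticket creation/change notification
smtp_always_cc = steven@example.com
always_notify_owner = true
always_notify_reporter = true
```

In practice, subscribing to the boost-bugs list achieves much the same thing without touching the server configuration.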

AMDG On 1/1/2011 10:43 AM, Henrik Sundberg wrote:
On Sat, Jan 1, 2011 at 7:17 PM, Steven Watanabe<watanabesj@gmail.com> wrote:
On 1/1/2011 1:41 AM, Dean Michael Berris wrote:
I just wonder though how that practice of "periodically going through every open ticket in Trac" scales. I don't know. It hasn't been a problem up to 1000 tickets. I think I could still handle it up to a few times larger. Anyway I don't do this all the time, so it's okay if it takes a while. Steven, it might be easier for you to change the global TRAC configuration to send a mail to you at every change. It is at least a complement to what you're doing.
Yep. I'm already subscribed to boost-bugs In Christ, Steven Watanabe

On Jan 1, 2011, at 10:47 AM, Steven Watanabe wrote:
On 1/1/2011 10:43 AM, Henrik Sundberg wrote:
On Sat, Jan 1, 2011 at 7:17 PM, Steven Watanabe<watanabesj@gmail.com> wrote:
On 1/1/2011 1:41 AM, Dean Michael Berris wrote:
I just wonder though how that practice of "periodically going through every open ticket in Trac" scales. I don't know. It hasn't been a problem up to 1000 tickets. I think I could still handle it up to a few times larger. Anyway I don't do this all the time, so it's okay if it takes a while. Steven, it might be easier for you to change the global TRAC configuration to send a mail to you at every change. It is at least a complement to what you're doing.
Yep. I'm already subscribed to boost-bugs
As are 33 other people, fwiw. -- Marshall

At Tue, 28 Dec 2010 19:25:52 +0800, Dean Michael Berris wrote:
Sure, but the whole point that you have a central place to query the information is what's broken -- especially if you have to resort to these "hacks" just to filter out what's important for you.
Imagine if you had one issue tracker per Boost library. Then you don't have to worry about crafting the queries to get the relevant information in the first place.
I disagree with you on this one. Even if I had every project in a separate tracker, I would still want something to aggregate the issues so I could prioritize and look in one place for all the things that are relevant to me... see Redmine, for example. -- Dave Abrahams BoostPro Computing http://www.boostpro.com

On Wed, Dec 29, 2010 at 10:08 AM, Dave Abrahams <dave@boostpro.com> wrote:
At Tue, 28 Dec 2010 19:25:52 +0800, Dean Michael Berris wrote:
Sure, but the whole point that you have a central place to query the information is what's broken -- especially if you have to resort to these "hacks" just to filter out what's important for you.
Imagine if you had one issue tracker per Boost library. Then you don't have to worry about crafting the queries to get the relevant information in the first place.
I disagree with you on this one. Even if I had every project in a separate tracker, I would still want something to aggregate the issues so I could prioritize and look in one place for all the things that are relevant to me... see Redmine, for example.
Right, I guess I never really had that issue, as I generally want to be in a certain "mode" when I'm looking through issues. GitHub makes that really simple, since I only get notifications on issues I'm involved in -- ones where I either commented or posted myself -- and then I can go look at the issues relevant to the library I'm working on from that library's issue tracker. I think this is also the reason why I don't like the Trac approach, where all the issues just get piled up in the same container and you have to filter them out actively. Maybe it's just the way I work that's different from everyone else's preferred way of working on things? -- Dean Michael Berris about.me/deanberris

AMDG On 1/1/2011 1:33 AM, Dean Michael Berris wrote:
Right, I guess I never really had that issue as I generally want to be in a certain "mode" when I'm looking through issues.
GitHub makes that really simple since I only get notifications on issues I'm involved in -- ones where I either commented or posted myself -- and then I can go look at the issues relevant to the library I'm working on from that library's issue tracker.
I think this is also the reason why I don't like the Trac approach where all the issues just get piled up in the same container and you have to filter it out actively.
Maybe it's just the way I work that's different from everyone else's preferred way of working on things?
From my point of view, it doesn't really matter whether everything is in one place and I can filter out what I'm not interested in, or everything is separate and I can somehow aggregate it all. I want both, but I don't really care how I get them, nor how the data is actually stored. In Christ, Steven Watanabe

On Sun, Jan 2, 2011 at 2:02 AM, Steven Watanabe <watanabesj@gmail.com> wrote:
From my point of view, it doesn't really matter whether everything is in one place and I can filter out what I'm not interested in, or everything is separate and I can somehow aggregate it all. I want both, but I don't really care how I get them, nor how the data is actually stored.
Okay, I agree having both would not be a bad thing. :) -- Dean Michael Berris about.me/deanberris

Anyway, it seemed to me that the issue with patches for Boost.Pool and Boost.Iterators wasn't any inability to supply the patches to the 'official' place... it was an inability to get the attention of those who had the power to get the patches into 'official' Boost. Git won't change that.
I'm mostly staying out of this... but that sounds about right to me... at least we have a centralized place to put patches/bug reports/feature requests, what we really need is more folks to process them.
Right. There's a different way of looking at it too:
"At least we have a centralized place to put garbage in and let it rot, what we really need is more folks to segregate the trash and manage one big pile"
To me at least, that centralized place to gather patches/bug reports/feature requests isn't actually a good thing. Let me state a few reasons why I think this:
1. The signal-to-noise ratio can be hard to keep up, especially if you have a lot of ground to cover. Consider how you (or any other maintainer, for example) would want to manage a part of the 1000 tickets that are all in the same pile. Sure, you can query it in many different ways, but wouldn't it be way easier to look at just 100 issues for Boost.Regex than to spend time looking at 1000 issues that might be relevant to Boost.Regex?
I only ever look at those issues that are relevant to me, currently not quite down to single figures, but close ;-) and that covers all of config, regex, math, type_traits and tr1... It's also not uncommon for issues to either affect multiple libraries, or to need to be reassigned from one library to another; the current system makes that trivial - albeit I do wish that Trac had an easier way to get from folks' real names to their SVN login names (we should probably have insisted folks use their real name for this).
2. It's harder to divide and conquer if you start from the top down. Let me qualify that a little: if you start with one big-ass pile of dung, the stink is much harder to overcome than if you processed the input as it came in and segregated it bottom-up (no pun intended). If you had one place where issues for Boost.Regex get tracked, where discussion around Boost.Regex gets documented, where design decisions are hashed out, and where documentation is ultimately developed, then your progress in dealing with Boost.Regex shouldn't hamper the progress and development of other libraries not dependent on Boost.Regex. This means issues for Boost.Proto don't get piled into the same heap as Boost.Regex issues. Processing the issues as they come in would be way easier than starting with one pile containing both.
I'm not sure I follow, I do process issues as they come in, and there is one place for regex discussions - right here with [regex] in the title - or on any Trac ticket assigned to me.
3. I'm not sure how the "single point of failure" comes into play, but with centralized anything, when that one thing goes down, everything fails. I don't think I need to stress that point any more than I have to. ;)
OK you win on that one ;-)
Further, in order to process patches into the "official" release, we really need "that one guy" who just knows lib X inside out and can look at a patch and just know whether it's going to be OK or not. I'd say about 3/4 of the patches I get are spot on, and frankly I never fail to be impressed by the quality of a lot of them. But there's about a quarter that are either downright dangerous, or at least will cause more trouble down the line if applied. Often these problem patches aren't obviously problems; it's just that the person supplying them quite clearly doesn't have the time to really get to know the library inside out, so they can be supplied by perfectly trustworthy individuals. Hell, I may have submitted a few myself! Too many patches like that, and the whole thing can snowball, making it much harder to fix things properly down the road.
Unfortunately, insisting on "that one guy" being there and doing what you describe as essentially maintenance is actually part of the reason why the development model doesn't scale IMHO.
Being able to trust people and empowering them to just muck around with things, and then asking for changes -- which still need to be reviewed anyway -- to get baked in and shepherded by trusted people (note, not just "that one guy"), lowers the barrier to entry for contributors. It's actually a better problem to have 10x more contributors than to have 10x more issues. Because sending patches around is brittle and a nightmare to manage, using tools that make it easier should be a welcome development, I imagine.
Who's doing the reviewing? IMO it's back to that "one guy" again - OK, there may actually be more than one person who can do that (as there may be now) - but there's still that bottleneck there IMO.
That said, consider the case where you have 5 trusted people who you know already know the Boost.Regex internals as well as you do or better, and then 10 people who implement new features and make changes to the implementation -- would you rather be the one to deal with the changes of these 10 people, or would you welcome the word of any of the 5 trusted people to apply changes that any of the 10 people make on your behalf? Essentially, in the current parlance, these 5 trusted people would normally be called "co-maintainers", and the 10 people would be called "potential contributors"; if any of the 10 potential contributors have their changes pulled in, they become "contributors".
IMO there's nothing stopping folks now from getting involved like that, and there are a few very welcome folks around here who have their fingers in multiple pies and can help out with any of them. But IMO the central issue is getting the volunteers in the first place.
On the issue of library maintainers.... we don't actually need permanent full time maintainers for a lot of the older stuff... all they may need is a good spruce up every 5 years or so, with maybe the odd patch applied here and there in-between if there's a new compiler that causes issues. Maybe as well as bug sprints, we should organize a few "hit squads" to go after an older library, rejuvenate it and then move on to the next target. In fact if folks think that might be a good idea, I might be moved to try and organize something....
Sure, that's a thought, but that's thinking with a band-aid short-term solution. The bug sprints are a good idea, but I don't get why a bug sprint can't last one whole year and be an ongoing effort. Bug sprints and hit squads (ninja clans, strike teams, etc.) are short-term, non-sustainable solutions to the issue of open source maintenance.
The reason it doesn't last all year is simply not getting enough volunteers to run things IMO.
I, for one, as a potential contributor, would like to be encouraged to dive into the code, get some changes submitted, and see that there are people who actually care. With the current process and system in place, I don't feel it's a conducive environment for potential contributors, simply because of the barriers to entry.
I still don't see how changing toolsets helps this - right now you can SVN copy a single library into the sandbox *or any other SVN repo of your choice* - it took me about 2 minutes flat to do this for a section of Boost.Math that I wanted to work on this month - and then away you go, edit away to your heart's content, and submit the final result when you're ready. I accept that Git may be a lot easier *for folks that already use it*, just as SVN is easier for those of us old timers that have been using that for a while. Of course I didn't see the need to change from CVS to SVN either, so you can see where I'm coming from.... :-0 John.
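[Editor's note: John's two-minute workflow can be sketched with plain `svn` commands. The sketch below uses a throwaway local repository created with `svnadmin` so it is self-contained; in practice the URLs would point at the Boost Subversion server, and all paths and names here are illustrative.]

```shell
# Create a scratch repository standing in for svn.boost.org (illustrative).
REPO="$(mktemp -d)/repo"
svnadmin create "$REPO"
svn mkdir -q -m "layout" "file://$REPO/trunk" "file://$REPO/sandbox"
svn mkdir -q -m "add a library" "file://$REPO/trunk/math"

# The cheap server-side copy: branch one library into the sandbox.
svn copy -q -m "start experimental work on Boost.Math" \
    "file://$REPO/trunk/math" "file://$REPO/sandbox/math-work"

# Check out the copy, edit away, and submit the final result as a patch.
svn checkout -q "file://$REPO/sandbox/math-work" wc
echo "// experimental change" > wc/experiment.hpp
svn add -q wc/experiment.hpp
(cd wc && svn diff > ../math-changes.patch)
```

An `svn copy` between URLs is a constant-time operation on the server, which is why branching a single library this way is quick regardless of the library's size.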

On Tue, Dec 28, 2010 at 5:33 PM, John Maddock <boost.regex@virgin.net> wrote:
1. The signal-to-noise ratio can be hard to keep up, especially if you have a lot of ground to cover. Consider how you (or any other maintainer, for example) would want to manage a part of the 1000 tickets that are all in the same pile. Sure, you can query it in many different ways, but wouldn't it be way easier to look at just 100 issues for Boost.Regex than to spend time looking at 1000 issues that might be relevant to Boost.Regex?
I only ever look at those issues that are relevant to me, currently not quite down to single figures, but close ;-) and that covers all of config, regex, math, type_traits and tr1...
It's also not uncommon for issues to either affect multiple libraries, or to need to be reassigned from one library to another; the current system makes that trivial - albeit I do wish that Trac had an easier way to get from folks' real names to their SVN login names (we should probably have insisted folks use their real name for this).
Notwithstanding the issue with real names -- that gets solved partially by a GPG web-of-trust system where you associate keys with people with real names and/or unique email addresses -- re-assigning tickets is a bad practice IMO. The reason is simple: train the users/developers to file the issue in the correct issue tracker; if they file it wrong, it gets closed with an explanation of what to do correctly. For someone wanting to contribute, hacking up a Trac query to get issues that only pertain to Boost.Regex is too much to ask. That kind of barrier is the kind that I want to be able to remove -- if you want to see the issues related to Boost.Regex that need fixing, there should just be one place where you find everything there is to find about Boost.Regex. No fumbling about with Trac queries just to filter out the noise from other projects that a potential contributor would rather not deal with.
2. It's harder to divide and conquer if you start from the top down. Let me qualify that a little: if you start with one big-ass pile of dung, the stink is much harder to overcome than if you processed the input as it came in and segregated it bottom-up (no pun intended). If you had one place where issues for Boost.Regex get tracked, where discussion around Boost.Regex gets documented, where design decisions are hashed out, and where documentation is ultimately developed, then your progress in dealing with Boost.Regex shouldn't hamper the progress and development of other libraries not dependent on Boost.Regex. This means issues for Boost.Proto don't get piled into the same heap as Boost.Regex issues. Processing the issues as they come in would be way easier than starting with one pile containing both.
I'm not sure I follow, I do process issues as they come in, and there is one place for regex discussions - right here with [regex] in the title - or on any Trac ticket assigned to me.
Right. Is it just me who thinks that the Trac UI is needlessly complicated when filing issues? I use GitHub's issue tracker and it gives me two fields: the title of the issue, and the comment that comes with the title (which serves as a longer description of the issue). If you want to send me a patch on GitHub, you ought to fork the repository, make changes to your own fork, then ask me to pull. I can review the changes right there and then with a few quick steps merge your changes into my repository. No attaching files, tickets, etc. If you wanted to show some code to show how to reproduce an issue, you either put the code in-line or link in a Gist. It's really that simple over there. As opposed to Trac which has 15 (?) fields to fill out just to file an issue or start a conversation around a feature request, etc. ;)
3. I'm not sure how the "single point of failure" comes into play, but with centralized anything, when that one thing goes down, everything fails. I don't think I need to stress that point any more than I have to. ;)
OK you win on that one ;-)
;-)
Being able to trust people and empowering them to just muck around with things, and then asking for changes -- which still need to be reviewed anyway -- to get baked in and shepherded by trusted people (note, not just "that one guy"), lowers the barrier to entry for contributors. It's actually a better problem to have 10x more contributors than to have 10x more issues. Because sending patches around is brittle and a nightmare to manage, using tools that make it easier should be a welcome development, I imagine.
Who's doing the reviewing? IMO it's back to that "one guy" again - OK, there may actually be more than one person who can do that (as there may be now) - but there's still that bottleneck there IMO.
No, the process of reviewing can be done by anybody, really. I can choose to merge in changes you make on your fork in case you have a nifty implementation that I would like to either build upon or shepherd into getting someone else to pull it from me. You can work around whether that "one guy" is actually there, or whether he bothers to look at the pull requests that make it into his queue. A trusted group can then pick up the slack when "that one guy" isn't there. In the end, whom the release managers choose to pull changes from is no longer a matter of who the maintainer of a library is, but rather who they (or the community) think they should pull from. This will largely revolve around whom they trust *and* who does what well enough to have their changes pulled in. Pure meritocracy +1, lower barrier to entry +1.
That said, consider the case where you have 5 trusted people who you know already know the Boost.Regex internals as well as you do or better, and then 10 people who implement new features and make changes to the implementation -- would you rather be the one to deal with the changes of these 10 people, or would you welcome the word of any of the 5 trusted people to apply changes that any of the 10 people make on your behalf? Essentially, in the current parlance, these 5 trusted people would normally be called "co-maintainers", and the 10 people would be called "potential contributors"; if any of the 10 potential contributors have their changes pulled in, they become "contributors".
IMO there's nothing stopping folks now from getting involved like that, and there are a few very welcome folks around here who have their fingers in multiple pies and can help out with any of them. But IMO the central issue is getting the volunteers in the first place.
What's really stopping folks now is the high barrier to entry for potential contributors. Just getting sandbox access -- asking permission -- is hard enough, as opposed to clicking a button that says "fork". That "asking permission" part is what's stopping many people from even trying to contribute. That additional mental step of having to ask for permission to make changes is really a non-starter for most potential contributors. Then there's also the issue of being called a "maintainer". Labels matter, and putting that label on someone conveys some sort of authority, which some people really wouldn't like. Instead of being an encouraging factor, it becomes a discouraging one. Also, just like in real life, earning someone else's trust is hard enough; making it harder doesn't encourage more people to try to earn others' trust. In the current scheme of things, to gain the other maintainers' and release managers' trust, you're going to have to make it into the club by submitting a full-blown library that gets reviewed and accepted -- there's no second tier of contributors who just want to help out by submitting patches and earn trust that way. Maybe the Guild is a potential way of getting more interested contributors into the fold, but it's still a top-down approach to solving the issue IMO.
Sure, that's a thought, but that's thinking with a band-aid short-term solution. The bug sprints are a good idea, but I don't get why a bug sprint can't last one whole year and be an ongoing effort. Bug sprints and hit squads (ninja clans, strike teams, etc.) are short-term, non-sustainable solutions to the issue of open source maintenance.
The reason it doesn't last all year is simply not getting enough volunteers to run things IMO.
Actually, the fact that there has to be a sprint to address the issues in a focused manner is a little disturbing to me. I like participating in things like the bug sprint, and maybe the occasional hackathon. But unfortunately the issues being addressed are symptoms of a larger problem:
1. There are already a lot of issues raised, and the current maintainers of the libraries either don't have time to address them or aren't interested in addressing them. Either way, they're MIA, and getting someone else to fill that role is not the solution either -- because that person can later be MIA too, and development/maintenance halts again as a result.
2. Because of the high barrier to entry for potential contributors, coupled with the high potential for maintainers to be MIA for various reasons, the issues that get ignored or remain unaddressed contribute to the larger hurdle of improving or maintaining Boost library quality. More issues means more work needs to be done, and a high barrier doesn't help with allowing others to do that work immediately.
3. The bug sprint is a short-term solution to stop the bleeding; it has to be augmented with a larger, more sustainable effort to cut down the issues that are being raised or have already been raised. Maybe the guild is a source of potential contributors, but if we don't address the high barrier to entry, we're probably not going to see much uptake on being a member of the guild.
I, for one, as a potential contributor, would like to be encouraged to dive into the code, get some changes submitted, and see that there are people who actually care. With the current process and system in place, I don't feel it's a conducive environment for potential contributors, simply because of the barriers to entry.
I still don't see how changing toolsets helps this - right now you can SVN copy a single library into the sandbox *or any other SVN repo of your choice* - it took me about 2 minutes flat to do this for a section of Boost.Math that I wanted to work on this month - and then away you go, edit away to your heart's content, and submit the final result when you're ready.
Can you svn copy from one SVN repository to another? How do you experiment in SVN -- make tons of branches that you may well forget about later on? How about sending the changes over email, or signing changes to certify that it was really you who made them and not someone who just managed to forge patches on your behalf?
I accept that Git may be a lot easier *for folks that already use it*, just as SVN is easier for those of us old timers that have been using that for a while. Of course I didn't see the need to change from CVS to SVN either, so you can see where I'm coming from.... :-0
Well, the workflow is what's fundamentally different. With Git, everybody has a repository of the code. This means you can muck around with your local repository, make as many changes as you want, pull in changes from other repositories willy-nilly, stabilize the implementation locally, then have others pull from your repository as well. Because with Subversion you have to maintain a single consistent view of the repository at any given time, the cost of making changes is a lot higher than if you had a local repository to work on. It's really hard to describe what the distributed model looks like if you've only ever seen the centralized model. Once you've gone "distributed", though, I'm positive you won't go back. Part of changing the toolset is changing the process as well, which you can only do if your tools allow you to make these changes. If you want a more scalable and decentralized system (actually, I haven't seen a scalable centralized system either; "scalable centralized solution" may well be a misnomer) then you're going to have to change both the tools and the process. HTH
John.
-- Dean Michael Berris about.me/deanberris
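[Editor's note: the distributed model described above can be sketched with throwaway local repositories standing in for hosted ones (all names are illustrative): every participant holds a full copy of the history, commits locally, and pulls from whichever repository they like.]

```shell
# Identity for the throwaway commits below (illustrative).
export GIT_AUTHOR_NAME=demo GIT_AUTHOR_EMAIL=demo@example.com
export GIT_COMMITTER_NAME=demo GIT_COMMITTER_EMAIL=demo@example.com
WORK="$(mktemp -d)"; cd "$WORK"

# An "upstream" repository standing in for the official one.
git init -q upstream
(cd upstream && git commit -q --allow-empty -m "initial history")

# Everybody clones the full history and commits locally, affecting no one.
git clone -q upstream dean
(cd dean && echo "experiment" > notes.txt && git add notes.txt &&
    git commit -q -m "try out an idea")

# A second copy pulls straight from the first -- no central hub required.
git clone -q upstream steven
(cd steven && git pull -q ../dean HEAD)
```

The key point of the sketch is the last line: a repository is a peer, so changes can flow between any two copies directly, without passing through a central server first.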

Dean Michael Berris wrote:
Also, just like in real life, earning someone else's trust is hard enough; making it harder doesn't encourage more people to try to earn others' trust. In the current scheme of things, to gain the other maintainers' and release managers' trust, you're going to have to make it into the club by submitting a full-blown library that gets reviewed and accepted -- there's no second tier of contributors who just want to help out by submitting patches and earn trust that way.
Why do you think so? In the past, full commit access to program_options was given to a person who is not the maintainer of any other library. - Volodya

On 12/28/2010 6:00 AM, Dean Michael Berris wrote:
On Tue, Dec 28, 2010 at 5:33 PM, John Maddock<boost.regex@virgin.net> wrote:
that trivial - albeit I do wish that Trac had an easier way to get from folks' real names to their SVN login names (we should probably have insisted folks use their real name for this).
Assuming people put in their real names we do have a Trac report that does just that <https://svn.boost.org/trac/boost/report/16>.
3. I'm not sure how the "single point of failure" comes into play, but with centralized anything, when that one thing goes down, everything fails. I don't think I need to stress that point any more than I have to. ;)
OK you win on that one ;-)
;-)
Well... Centralized doesn't need to mean single point of failure. And there's plenty of industry practice in making centralized and fault-tolerant services. -- -- Grafik - Don't Assume Anything -- Redshift Software, Inc. - http://redshift-software.com -- rrivera/acm.org (msn) - grafik/redshift-software.com -- 102708583/icq - grafikrobot/aim,yahoo,skype,efnet,gmail

that trivial - albeit I do wish that Trac had an easier way to get from folks' real names to their SVN login names (we should probably have insisted folks use their real name for this).
Assuming people put in their real names we do have a Trac report that does just that <https://svn.boost.org/trac/boost/report/16>.
Well I never did.... thanks! John.

On Tue, Dec 28, 2010 at 11:51 PM, Rene Rivera <grafikrobot@gmail.com> wrote:
On 12/28/2010 6:00 AM, Dean Michael Berris wrote:
On Tue, Dec 28, 2010 at 5:33 PM, John Maddock<boost.regex@virgin.net> wrote:
3. I'm not sure how the "single point of failure" comes into play, but with centralized anything, when that one thing goes down, everything fails. I don't think I need to stress that point any more than I have to. ;)
OK you win on that one ;-)
;-)
Well... Centralized doesn't need to mean single point of failure. And there's plenty of industry practice in making centralized and fault-tolerant services.
Sure, none of which we follow at Boost at the moment IIUC. ;-) I mean, really, when the server hosting the website, the subversion repo, and Trac go down, where do we go? Is there some magical standby machine that's going to kick in as some hot backup which mirrors the whole Boost repo, Trac, and website? -- Dean Michael Berris about.me/deanberris

On 1:59 PM, John Maddock wrote:
Right, but then as I mention in a different thread, the maintainer/owner of the library can be MIA, and then I can ask the release managers and/or someone else who can pull the changes into their repository and shepherd the changes in. People (or in this case, I) can keep innovating and then the changes can get into the release in a less centralized manner -- which is the whole point of a decentralized system.
[...] Anyway, it seemed to me that the issue with patches for Boost.Pool and Boost.Iterators wasn't any inability to supply the patches to the 'official' place... it was an inability to get the attention of those who had the power to get the patches into 'official' Boost. Git won't change that.
I'm mostly staying out of this... but that sounds about right to me... at least we have a centralized place to put patches/bug reports/feature requests, what we really need is more folks to process them.
That's Boost.Guild's concept. See <http://jc-bell.com/contributions/boost-guild>
Further, in order to process patches into the "official" release, we really need "that one guy" who just knows lib X inside out and can look at a patch and just know whether it's going to be OK or not. I'd say about 3/4 of the patches I get are spot on, and frankly I never fail to be impressed by the quality of a lot of them. But there's about a quarter that are either downright dangerous, or at least will cause more trouble down the line if applied. Often these problem patches aren't obviously problems; it's just that the person supplying them quite clearly doesn't have the time to really get to know the library inside out, so they can be supplied by perfectly trustworthy individuals. Hell, I may have submitted a few myself! Too many patches like that, and the whole thing can snowball, making it much harder to fix things properly down the road.
This is the rub. Boost.Guild ticket handlers <http://jc-bell.com/contributions/boost-guild/boost-ticket-handling> seem like the best bet, if new maintainers don't step up.
On the issue of library maintainers.... we don't actually need permanent full time maintainers for a lot of the older stuff... all they may need is a good spruce up every 5 years or so, with maybe the odd patch applied here and there in-between if there's a new compiler that causes issues. Maybe as well as bug sprints, we should organize a few "hit squads" to go after an older library, rejuvenate it and then move on to the next target.
I hadn't thought of roving "hit squads" but I like it. My thinking: some low level of continuous activity by a reasonably large group of people, each doing a little: fixing regressions, closing tickets.
In fact if folks think that might be a good idea, I might be moved to try and organize something....
Please do! Can I interest you in the Guild? Want to refine it? Run it?

On Tue, Dec 28, 2010 at 2:33 PM, Jim Bell <Jim@jc-bell.com> wrote:
On 1:59 PM, John Maddock wrote:
I'm mostly staying out of this... but that sounds about right to me... at least we have a centralized place to put patches/bug reports/feature requests, what we really need is more folks to process them.
That's Boost.Guild's concept. See <http://jc-bell.com/contributions/boost-guild>
+1 for Boost.Guild.
Further, in order to process patches into the "official" release, we really need "that one guy" who just knows lib X inside out and can look at a patch and just know whether it's going to be OK or not. I'd say about 3/4 of the patches I get are spot on, and frankly I never fail to be impressed by the quality of a lot of them. But there's about a quarter that are either downright dangerous, or at least will cause more trouble down the line if applied. Often these problem patches aren't obviously problems; it's just that the person supplying them quite clearly doesn't have the time to really get to know the library inside out, so they can be supplied by perfectly trustworthy individuals. Hell, I may have submitted a few myself! Too many patches like that, and the whole thing can snowball, making it much harder to fix things properly down the road.
This is the rub. Boost.Guild ticket handlers <http://jc-bell.com/contributions/boost-guild/boost-ticket-handling> seem like the best bet, if new maintainers don't step up.
I'd say "ticket handlers" would be another word for "trusted developers". In the system I was thinking of, using Git+GPG to have someone sign the changes/patches would be a good way of marking that they trust a patch and are willing to accept it into their own repository. By signing the changes, a contributor/developer puts his reputation on the line with every patch he accepts into his repository -- and the same goes all the way up to the top. It's pretty hard for me to visualize this bottom-up approach to getting changes up the line; maybe I'll try a diagram at some point if it doesn't make sense yet.
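[Editor's note: the signing idea above can be sketched with Git's built-in GPG support. The block below generates a throwaway key (the names, emails, and repository are all made up for illustration) and makes a commit carrying the reviewer's signature, so anyone can later verify who vouched for the change.]

```shell
# Throwaway GPG home and key -- purely illustrative identities.
export GNUPGHOME="$(mktemp -d)"; chmod 700 "$GNUPGHOME"
gpg --batch --quiet --gen-key <<'EOF'
%no-protection
Key-Type: RSA
Key-Length: 2048
Name-Real: Trusted Reviewer
Name-Email: reviewer@example.com
Expire-Date: 0
%commit
EOF

# A repository where the reviewer lands a contributed patch, signing it.
git init -q signed-repo && cd signed-repo
git config user.name "Trusted Reviewer"
git config user.email reviewer@example.com
git config user.signingkey reviewer@example.com
echo "fix" > fix.txt && git add fix.txt
git commit -q -S -m "apply contributed patch (signed by reviewer)"

# Anyone with the reviewer's public key can check whose reputation
# is on the line for this change.
git verify-commit HEAD
```

In a web-of-trust setup, others would sign the reviewer's key, so verifying the commit also tells you how far you trust the person who accepted the patch.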
On the issue of library maintainers.... we don't actually need permanent full time maintainers for a lot of the older stuff... all they may need is a good spruce up every 5 years or so, with maybe the odd patch applied here and there in-between if there's a new compiler that causes issues. Maybe as well as bug sprints, we should organize a few "hit squads" to go after an older library, rejuvenate it and then move on to the next target.
I hadn't thought of roving "hit squads" but I like it. My thinking: some low level of continuous activity by a reasonably large group of people, each doing a little: fixing regressions, closing tickets.
This is, in other projects, called a developer community -- and they don't need a name like "hit squad" or "guild". ;)
In fact if folks think that might be a good idea, I might be moved to try and organize something....
Please do! Can I interest you in the Guild? Want to refine it? Run it?
I have a reservation about this idea, which I've already expressed in a separate reply on the same thread. That said, if it's really what people in Boost would want to do, then I guess I'll just go along with the flow -- and complain as loudly as I can along the way. ;) -- Dean Michael Berris about.me/deanberris

At Mon, 27 Dec 2010 18:14:25 -0000, John Maddock wrote:
Right, but then as I mention in a different thread, the maintainer/owner of the library can be MIA, and then I can ask the release managers and/or someone else who can pull the changes into their repository and shepherd the changes in. People (or in this case, I) can keep innovating and then the changes can get into the release in a less centralized manner -- which is the whole point of a decentralized system.
It seems to me you could write almost the same thing using SVN-speak...
"if the owner is MIA, I can send the patch to the release manager who can apply the patch to SVN and shepherd the changes in. That way I can keep innovating on my local working copy and the change can get into the release on its own schedule"
Anyway, it seemed to me that the issue with patches for Boost.Pool and Boost.Iterators wasn't any inability to supply the patches to the 'official' place... it was an inability to get the attention of those who had the power to get the patches into 'official' Boost. Git won't change that.
I'm mostly staying out of this... but that sounds about right to me... at least we have a centralized place to put patches/bug reports/feature requests; what we really need is more folks to process them.
I think this is where the "web of trust" comes into play. One problem right now is that for each library there's a single bottleneck through which all changes *must* pass. No release manager is going to integrate changes from an outside contributor without them having been looked over by the library maintainer. Heck, patches don't even get *tested* in a meaningful way until the library maintainer has pulled them in. If patches could be broadly tested, I can easily imagine that a community contributor's level of credibility could rise to a level where her patches would be accepted upstream very easily without intervention.

It's also the case that merging changes and moving them to the release branch is just *way* too labor-intensive to make broad contribution practical.

Take a look at GitHub for example. After you develop changes in your own repo, you can issue a "pull request." Here's a project with a bunch of those pull requests pending: https://github.com/dimitri/el-get/pulls The maintainer can easily review the changes, and the site will tell the maintainer which of the changes are going to merge cleanly. Then he can press a button and the merge is done.

If I had something like that for Boost.Python, I promise you I'd be processing a lot of those patches that are right now idling in Trac. Combined with a testing system that made these changes easy to verify, I think Boost could be very nimble indeed, and it would be fairly easy to give other people the rights to press that "merge the changes" button.
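For concreteness, the fork-and-pull workflow described above boils down to a few commands (the fork owner, branch name, and remote URLs here are made up; GitHub's merge button performs roughly the maintainer-side steps):

```shell
# Contributor: work in a personal fork of the project.
git clone git@github.com:me/el-get.git
cd el-get
git checkout -b my-fix
# ...edit files, git commit...
git push origin my-fix
# ...then open a pull request through the GitHub web UI...

# Maintainer: fetch the contributor's branch, merge it, and publish.
git fetch https://github.com/me/el-get.git my-fix
git merge --no-ff FETCH_HEAD
git push origin master
```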
Further, in order to process patches into the "official" release, we really need "that one guy" who just knows lib X inside out and can look at a patch and just know whether it's going to be OK or not. I'd say about 3/4 of the patches I get are spot on, and frankly I never fail to be impressed by the quality of a lot of them. But there's about a quarter that are either downright dangerous, or at least will cause more trouble down the line if applied. Often these problem patches aren't obviously problems; it's just that the person supplying them quite clearly doesn't have the time to really get to know the library inside out, so they can be supplied by perfectly trustworthy individuals. Hell, I may have submitted a few myself! Too many patches like that, and the whole thing can snowball, making it much harder to fix things properly down the road.
On the issue of library maintainers.... we don't actually need permanent full time maintainers for a lot of the older stuff... all they may need is a good spruce up every 5 years or so, with maybe the odd patch applied here and there in-between if there's a new compiler that causes issues. Maybe as well as bug sprints, we should organize a few "hit squads" to go after an older library, rejuvenate it and then move on to the next target. In fact if folks think that might be a good idea, I might be moved to try and organize something....
Hey, sure, why not? Please feel free to "hit" my libraries ;-) -- Dave Abrahams BoostPro Computing http://www.boostpro.com

On 10-12-27 12:27 AM, Vladimir Prus wrote:
You *really* should use git-svn. It's trivial to push any line of history to any branch on any subversion server.
If you have the time, would you mind outlining your workflow? There is something about the manual that just doesn't work with my brain. -- Sohail Somani -- iBlog : http://uint32t.blogspot.com iTweet: http://twitter.com/somanisoftware iCode : http://bitbucket.org/cheez

Sohail Somani wrote:
On 10-12-27 12:27 AM, Vladimir Prus wrote:
You *really* should use git-svn. It's trivial to push any line of history to any branch on any subversion server.
If you have the time, would you mind outlining your workflow? There is something about the manual that just doesn't work with my brain.
I'd be happy to talk to you about that, but probably offlist, because I use git-svn for non-Boost things. But one thing that I did in the past, and which is relevant to the current discussion, is:
- Mirror an SVN branch with some third-party component and a pile of patches to git.
- Mirror an SVN branch with a later version of the third-party component to git.
- Move the patches from one branch to the other, rearranging them heavily.
- Push the changes to the second branch in SVN.
This was pretty painful, but most of the pain was due to 'git rebase':
- being totally unreliable in some earlier versions
- having rather awkward behaviour in all versions
git-svn, on the other hand, worked just fine. - Volodya
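For those unfamiliar with git-svn, the basic round-trip it enables looks roughly like this (the repository URL is a placeholder; see the git-svn manual for the full option set):

```shell
# Clone an SVN repository (or a single branch of it) as a git repository.
git svn clone https://svn.example.org/repo/trunk myrepo
cd myrepo

# ...edit files, then record changes as ordinary local git commits...
git commit -am "Local change"

# Fetch any new SVN revisions and replay the local commits on top of them.
git svn rebase

# Push each local commit back to the SVN server as a regular SVN revision.
git svn dcommit
```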
participants (28)
-
Bjørn Roald
-
Chad Nelson
-
Daniel Pfeifer
-
Dave Abrahams
-
Dean Michael Berris
-
Edward Diener
-
Eric Niebler
-
Felipe Magno de Almeida
-
Gordon Woodhull
-
Henrik Sundberg
-
Jarrad Waterloo
-
Jim Bell
-
Joel de Guzman
-
John Maddock
-
Klaim
-
Lars Viklund
-
Marshall Clow
-
Nelson, Erik - 2
-
Nigel Stewart
-
Oliver Kowalke
-
Paul A. Bristow
-
Rene Rivera
-
Robert Ramey
-
Sohail Somani
-
Steven Watanabe
-
Stewart, Robert
-
Vicente Botet
-
Vladimir Prus