The problems with Boost development

Hello, in a recent post, Dave listed a few things that he thinks are wrong with Boost development at present, quoting:

I know I'm not the first person to notice that, as Boost has grown, it has become harder and harder to manage, Subversion is getting slow, our issue tracker is full to overflowing, and the release process is a full-time job.

It seems to be important, right now, to discuss whether these problems are real, and which problems are most important. So, I would like to ask that everybody who is somehow involved in *development* -- whether in writing code, triaging bugs, sending patches, or managing things -- list the three most important problems with Boost now. Please keep the items to a sentence or two, so that we can easily collect the problems.

Here's my take:

- Unmaintained components. Many authors are no longer active, and we have no procedures for taking over.
- Reviews that are getting rare and, IMO, less interesting than before.
- Turnaround time of test results.

Thanks, Volodya

On 03/19/2010 04:14 AM, Vladimir Prus wrote:
Hello,
in a recent post, Dave listed a few things that he thinks are wrong with Boost development, at present, quoting:
I know I'm not the first person to notice that, as Boost has grown, it has become harder and harder to manage, Subversion is getting slow, our issue tracker is full to overflowing, and the release process is a full-time job.
It seems to be important, right now, to discuss whether these problems are real, and which problems are most important.
I have to admit that I'm not very fond of this attitude that focuses on tools rather than process. I don't think a switch from Subversion to Git will in itself solve any problem. (In fact, the constant focus on support tools takes attention away from what I consider the real issues, so it hinders progress.) However, the suggested change implies more than a switch of support tools...
So, I would like to ask that everybody who is somehow involved in *development* -- whether in writing code, triaging bugs, sending patches, or managing things -- list the three most important problems with Boost now. Please keep the items to a sentence or two, so that we can easily collect the problems.
Here's my take:
- Unmaintained components. Many authors are no longer active, and we have no procedures for taking over.
I think this is symptomatic of an underlying problem with Boost's mission: more and more components are added to Boost, yet its mission statement, as well as its infrastructure and process, doesn't scale. A couple of months ago some of us suggested a change of mission: for Boost to become something akin to the Apache foundation, i.e. an umbrella organization for (mostly) independent projects. While I don't think Dave's current work addresses that issue directly, it seems it may be a good step in that direction. (A technical step, but technical issues are always the easiest to solve.) I can also see that one of the fallouts of this modularization is that components will live or die with their individual communities and maintainers. They won't be dragged along with a huge monolithic project any longer.
- Reviews that are getting rare and, IMO, less interesting than before.
I'm not sure that this is a problem. If there is enough interest in a new component, enough people will eventually get together to get things done. If things stall, it more often than not implies that there is not enough interest.
- Turnaround time of test results
Definitely true. By componentizing things this will improve, though, since each tested component may rely on well-defined prerequisites, so these don't need to be built each time. It also makes testing much more attractive, since it requires fewer resources. Regards, Stefan -- ...ich hab' noch einen Koffer in Berlin...

Stefan Seefeld wrote:
On 03/19/2010 04:14 AM, Vladimir Prus wrote:
Hello,
in a recent post, Dave listed a few things that he thinks are wrong with Boost development, at present, quoting:
I know I'm not the first person to notice that, as Boost has grown, it has become harder and harder to manage, Subversion is getting slow, our issue tracker is full to overflowing, and the release process is a full-time job.
It seems to be important, right now, to discuss whether these problems are real, and which problems are most important.
I have to admit that I'm not very fond of this attitude that focuses on tools, rather than process. I don't think a switch from Subversion to Git in itself will solve any problem.
I think that before discussing solutions, we need to figure out the set of problems. That's why I'm trying to elicit a specific list of problems in this thread, and have a discussion about solutions afterwards -- whether independently or whenever any solutions are proposed. - Volodya

On 19 March 2010 08:14, Vladimir Prus <ghost@cs.msu.su> wrote:
So, I would like to ask that everybody who is somehow involved in *development* -- whether in writing code, triaging bugs, sending patches, or managing things -- list the three most important problems with Boost now.
I'll reply to this properly later; I just want to say that we should also take into account the difficulty we cause distributions. I was surprised to see us mentioned alongside much higher-profile projects here:

http://www.markshuttleworth.com/archives/290

But I don't think it was a compliment. We didn't pay much attention to this post at the time, but probably should have:

http://article.gmane.org/gmane.comp.lib.boost.devel/196669

Daniel

Daniel James wrote:
On 19 March 2010 08:14, Vladimir Prus <ghost@cs.msu.su> wrote:
So, I would like to ask that everybody who is somehow involved in *development* -- whether in writing code, triaging bugs, sending patches, or managing things -- list the three most important problems with Boost now.
I'll reply to this properly later; I just want to say that we should also take into account the difficulty we cause distributions. I was surprised to see us mentioned alongside much higher-profile projects here:
http://www.markshuttleworth.com/archives/290
But I don't think it was a compliment. We didn't pay much attention to this post at the time, but probably should have:
That is unfortunately a completely different set of problems. On the one hand, we have problems that are perceived on the developing side. On the other hand, we have problems on the using side -- where the lack of any API or ABI stability is surely an important concern -- but solving that concern actually requires more work from developers, and even, I think, more centralization. There's no doubt Debian folks or any other packagers will not be happy about 90 libraries on separate release schedules. - Volodya

On the one hand, we have problems that are perceived on the developing side. On the other hand, we have problems on the using side -- where the lack of any API or ABI stability is surely an important concern -- but solving that concern actually requires more work from developers, and even, I think, more centralization. There's no doubt Debian folks or any other packagers will not be happy about 90 libraries on separate release schedules.
- Volodya
I think this is a very important point. Boost developers are very concerned about the amount of work required to maintain the software, but the question is: should a non-maintained library be part of Boost at all? Such a huge code base may become a real mess when it is so widely deployed.

Let's take a look at a simple example of a bug in UUID:

https://svn.boost.org/trac/boost/ticket/3971

This bug causes IDs that are not so unique to be generated. It has very bad security implications that may result in, for example, guessable session IDs in some software. A bad, very bad bug. A security bug. The solution? Upgrade to Boost 1.43 when it is released (with the bug hopefully fixed).

But let's go to the package maintainers of Linux distributions like Debian:

1. They can't upgrade the Boost version, because many programs depend on a specific Boost version and it can't be transparently upgraded: Boost provides neither ABI nor even API compatibility.
2. Boost does not release any backward-compatible versions with even just security bug fixes.
3. Package maintainers must manually backport all fixes to 1.42, because they have to provide secure software.

That is **bad**, really **bad**. More than that, I am not even sure whether any of the maintainers are aware that this issue should be fixed. And this is not just a distributions issue: many companies stick with specific versions of Boost because an upgrade may cost too much.

Now, generally the bug I pointed at is not so bad: it has a simple fix (fixed in trunk), and it does not even break any compatibility. But:

a) What would happen if such an issue occurred in some unmaintained library?
b) Is there any policy of bug fixes in stable releases?

So there are two major issues I see with current development:

a) Boost releases should be more modular -- a single trunk is probably a bad idea.
b) Boost should support backward compatibility at some level and include some maintenance of existing stable libraries.
Another point I figured out from reading recent library reviews:

**Very few reviewers actually do the code review!** Most of them look into the API usefulness and the documentation of the library, but only a few of them read the actual lines of code. **This is bad.** For a long time I was sure that Boost has high-quality libraries because many eyes look at them and so all bugs vanish... But it seems to be different.

And if we take a look at the situation where:

a) Boost is considered a high-quality set of libraries.
b) Boost is a very common library and every 2nd C++ project uses it.
c) Boost is not backward compatible even at the API level.
d) Boost does not maintain stable releases.

Boost may become a library that exposes you to high risks when you use it.

My $0.02 Artyom
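The session-ID danger Artyom describes can be sketched in a few lines. This is a hypothetical illustration, not the actual Boost.Uuid code: the make_session_id function and the timestamp seed are invented for the example, and the point is language-neutral.

```python
# Hypothetical illustration (not the actual Boost.Uuid code): a "random"
# session ID whose generator is seeded from a predictable value can be
# reproduced by anyone who guesses the seed.
import random

def make_session_id(seed):
    # Seeding from something predictable (e.g. a timestamp or PID)
    # instead of an OS entropy source makes the output deterministic.
    rng = random.Random(seed)
    return "%032x" % rng.getrandbits(128)

# The server generates an ID from a predictable seed, say a Unix
# timestamp...
server_id = make_session_id(1268986462)

# ...so an attacker who guesses the seed reproduces the ID exactly.
attacker_id = make_session_id(1268986462)
assert server_id == attacker_id
```

The fix is the same in any language: seed the generator from an OS entropy source (e.g. /dev/urandom) rather than from anything an attacker can predict.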

Artyom wrote:
On the one hand, we have problems that are perceived on the developing side. On the other hand, we have problems on the using side -- where the lack of any API or ABI stability is surely an important concern -- but solving that concern actually requires more work from developers, and even, I think, more centralization. There's no doubt Debian folks or any other packagers will not be happy about 90 libraries on separate release schedules.
- Volodya
I think this is a very important point. Boost developers are very concerned about the amount of work required to maintain the software, but the question is: should a non-maintained library be part of Boost at all?
Such a huge code base may become a real mess when it is so widely deployed.
Let's take a look on a simple example of bug in UUID:
https://svn.boost.org/trac/boost/ticket/3971
This bug causes IDs that are not so unique to be generated. It has very bad security implications that may result in, for example, guessable session IDs in some software.
A bad, very bad bug. A security bug. The solution? Upgrade to Boost 1.43 when it is released (with the bug hopefully fixed).
But let's go to the package maintainers of Linux distributions like Debian:
1. They can't upgrade the Boost version, because many programs depend on a specific Boost version and it can't be transparently upgraded: Boost provides neither ABI nor even API compatibility.
2. Boost does not release any backward-compatible versions with even just security bug fixes.
3. Package maintainers must manually backport all fixes to 1.42, because they have to provide secure software.
That is **bad**, really **bad**. More than that, I am not even sure whether any of the maintainers are aware that this issue should be fixed.
Well, for 1.41, I created a maintenance branch (/branches/maintenance/1_41), which got some simple but important bugfixes. However, there did not seem to be much interest. The effort to keep a maintenance branch is close to zero, so if anybody wants it, we can surely create one for all future releases.
And this is not just a distributions issue: many companies stick with specific versions of Boost because an upgrade may cost too much.
Now, generally the bug I pointed at is not so bad: it has a simple fix (fixed in trunk), and it does not even break any compatibility.
But:
a) What would happen if such an issue occurred in some unmaintained library?
That is the problem I have also listed. Generally, people won't even notice such an issue.
b) Is there any policy of bug fixes in stable releases?
See above on the maintenance branch. I think we should have a mechanism to deliver critical fixes to users immediately, but no officially accepted mechanism exists.
So there are two major issues I see with current development:
a) Boost releases should be more modular -- a single trunk is probably a bad idea.
b) Boost should support backward compatibility at some level and include some maintenance of existing stable libraries.
Another point I figured out from reading recent library reviews:
**Very few reviewers actually do the code review!** Most of them look into the API usefulness and the documentation of the library, but only a few of them read the actual lines of code.
**This is bad.** For a long time I was sure that Boost has high-quality libraries because many eyes look at them and so all bugs vanish... But it seems to be different.
And if we take a look at the situation where:
a) Boost is considered a high-quality set of libraries.
b) Boost is a very common library and every 2nd C++ project uses it.
c) Boost is not backward compatible even at the API level.
d) Boost does not maintain stable releases.
Thanks, (c) and (d) seem like valid problems indeed. - Volodya

On Fri, Mar 19, 2010 at 06:49:29PM +0300, Vladimir Prus wrote:
Daniel James wrote:
On 19 March 2010 08:14, Vladimir Prus <ghost@cs.msu.su> wrote:
So, I would like to ask that everybody who is somehow involved in *development* -- whether in writing code, triaging bugs, sending patches, or managing things -- list the three most important problems with Boost now.
I'll reply to this properly later; I just want to say that we should also take into account the difficulty we cause distributions. I was surprised to see us mentioned alongside much higher-profile projects here:
http://www.markshuttleworth.com/archives/290
But I don't think it was a compliment. We didn't pay much attention to this post at the time, but probably should have:
That is unfortunately a completely different set of problems. On the one hand, we have problems that are perceived on the developing side. On the other hand, we have problems on the using side -- where the lack of any API or ABI stability is surely an important concern -- but solving that concern actually requires more work from developers, and even, I think, more centralization. There's no doubt Debian folks or any other packagers will not be happy about 90 libraries on separate release schedules.
Actually, Debian currently builds 38 packages from each Boost release: each library that builds a dynamic lib has its own package and a corresponding development package. Moving these to their own release schedules would (assuming many are slower than 4 releases/year) actually be welcomed!

The trouble we face in Debian is that each Boost release brings an interesting new library that someone inevitably wants to use, so we try to package each release of Boost. This, in turn, forces an upgrade on all the other Boost components. The lack of a stable API (never mind ABI) causes a lot of turmoil each time this happens.

My hope, for what it's worth, is that the mature libraries of Boost could be released much more slowly. This could also help with the cadence issue Mark Shuttleworth raises, at least for the mature libraries. Regards, -Steve

On 03/19/2010 09:49 AM, Vladimir Prus wrote:
Daniel James wrote:
I'll reply to this properly later; I just want to say that we should also take into account the difficulty we cause distributions. I was surprised to see us mentioned alongside much higher-profile projects here:
http://www.markshuttleworth.com/archives/290
But I don't think it was a compliment. We didn't pay much attention to this post at the time, but probably should have:
There's no doubt Debian folks or any other packagers will not be happy about 90 libraries on separate release schedules.
They deal with many, many more libraries for Perl, Python, Ruby, Java and others. The major Linux distributions would not bat an eye at 90 libraries. Here are the results from Fedora 12:

$ yum list "perl-*" | grep -E "noarch|x86_64" | wc -l
1481
$ yum list "python-*" | grep -E "noarch|x86_64" | wc -l
423

Most of those Perl modules likely come from CPAN. Apache alone likely provides 90+ libraries for the Java platform.

What the distributors (and users!) want from Boost is some indication of what parts are stable, what parts are under active development, and what are NSFW. My guess is that the distributors might package up 1/2 - 2/3 of Boost, depending on popularity, if the parts were available for individual consumption.

Speaking of Apache, the Apache incubator process would be a great process to adopt for Boost. It gives projects exposure, a chance for major follow-on work to occur, feedback from users, and time for interfaces to settle down before being blessed as full projects. A similar process for Boost might address some of the developer issues and some of the user issues at the same time.

There is no way *Boost* could currently deal with 90 libraries on separate release schedules, given the inter-dependencies that exist between the libraries. Rob

On 26 March 2010 18:24, Rob Riggs <rob@pangalactic.org> wrote:
What the distributors (and users!) want from Boost is some indication of what parts are stable, what parts are under active development, and what are NSFW.
Why are we accepting *any* libraries that are NSFW? I'm assuming the only real division is the stability of the interface. However, the moment you classify the libraries into stable/not stable, the "not stable" ones, now being second-class citizens, will never get enough "real world" experience to improve their interfaces, since we are basically telling people they are too risky to use. No amount of verbiage will change this impression for most people. Ironically, the "not stable" citizens will end up being "stable" because of reduced usage. And if that happens, it could snowball into reducing the number of people willing to develop a library for Boost. -- Nevin Liber <mailto:nevin@eviloverlord.com> (847) 691-1404

A couple of months ago some of us suggested a change of mission: for Boost to become something akin to the Apache foundation, i.e. an umbrella organization for (mostly) independent projects.
... we should also take into account the difficulty we cause distributions.
A potentially interesting model/solution might be for Boost to become Apache-like and also to follow Eclipse's simultaneous release cycle idea (e.g. Eclipse Ganymede, Galileo, etc.). Individual contributing subprojects release on whatever timescale is appropriate for them, but periodically the whole foundation makes a push and releases a cohesive, comparatively well tested, longer-lived collection of the subprojects in a single release. Distributions could better plan around such simultaneous releases. This model has worked extremely well for the open-source and commercial needs of the Eclipse folks.
So, I would like to ask that everybody who is somehow involved in *development* -- whether in writing code, triaging bugs, sending patches, or managing things -- list the three most important problems with Boost now.
My qualifications with respect to formal development are limited, so please take this with a grain of salt. - Rhys

Rhys Ulerich wrote:
A couple of months ago some of us suggested a change of mission: for Boost to become something akin to the Apache foundation, i.e. an umbrella organization for (mostly) independent projects.
... we should also take into account the difficulty we cause distributions.
A potentially interesting model/solution might be for Boost to become Apache-like and also to follow Eclipse's simultaneous release cycle idea (e.g. Eclipse Ganymede, Galileo, etc.). Individual contributing subprojects release on whatever timescale is appropriate for them, but periodically the whole foundation makes a push and releases a cohesive, comparatively well tested, longer-lived collection of the subprojects in a single release. Distributions could better plan around such simultaneous releases.
This model has worked extremely well for the open source and commercial needs of the Eclipse folks.
Yes, this certainly has benefits. But note that separately installable Eclipse components are relatively large -- generally much larger than a Boost library. Also, Eclipse has a relatively simple dependency structure. Together, that makes release engineering less hard. - Volodya

On 03/19/2010 11:14 AM, Vladimir Prus wrote:
It seems to be important, right now, to discuss whether these problems are real, and which problems are most important. So, I would like to ask that everybody who is somehow involved in *development* -- whether in writing code, triaging bugs, sending patches, or managing things -- list the three most important problems with Boost now. Please keep the items to a sentence or two, so that we can easily collect the problems.
I can't call myself an active developer of Boost, but I did create quite a few tickets, some with patches, and applied some of them. So I hope I qualify for the survey. First, here are my top 3:

1. The review procedure is failing to deliver new libraries to the users in a reasonable time frame. Some very important libraries stay in the queue for too long without even having a review manager assigned.
2. The lack of maintenance releases. In a production environment it is often a rule of thumb that the first release is unstable, and the second (third?) security update is suitable for use. Not having such updates at all leaves Boost in a bad situation.
3. The monolithic design limits the development and adoption of Boost. A more modular approach is needed.

I'd like to draw attention to the fact that none of these issues is of an instrumental nature. I recognize the problems with the unsatisfactory performance of Trac and the complex build system of Boost, but to my mind these are of secondary concern. A far more important thing to do is to decide the further course of Boost development. Then the instrumental layer will naturally follow the chosen direction.

Now, some more details on the outlined issues.

1. The review procedure.
========================

The topic comes up rather often, and there are suggestions from different participants, but it seems that nothing changes eventually. Reviews still happen rarely, there are too few review managers, and sometimes reviewers are also lacking. It looks like Boost has grown too big and the core Boost community members just don't have the time to pay attention to new proposals. Another possible reason for this effect could be that interest in Boost is cooling down, but I don't want to believe that. After all, Boost was, is and will be the place for innovative ideas in the world of C++, and losing interest in such a place would mean losing interest in C++ as a whole. Anyway, I think we should identify the reasons for this stagnation.
Of course, the first thing that comes to mind is the lack of time of the volunteers. True, I can barely comprehend the amount of time a review manager must dedicate to a review. This is especially true in the case of big submissions to Boost. This amount of time should be reduced. The following ways of doing that come to mind (some were discussed earlier on the ML, but I'll rehash):

- Introduce a voting mechanism. Voting should be as easy as clicking on a yes/no link or icon on a web page, plus an optional small comment. No ML subscription required. The review manager may take those votes into account as an indication of public interest in and appreciation of the submission.
- Separate the mechanism of posting a full review from the library discussion. It would make it easier to collect the formal reviews without having to read through all the discussion. Perhaps a separate ML for formal reviews would suffice. A web page with a few fields to fill in to post the review would also be very helpful, especially for occasional newcomers.
- Reduce the number of formal questions for the review. IMHO, three questions on design, docs and implementation quality, plus the final yes/no verdict, should suffice for the review. The rest should be optional.
- Provide automated ways of assisting the review, such as scripts for updating the web site for the review (e.g. posting an announcement in the news section, preparing the aforementioned web page for posting reviews, etc.), formal mailings (review is upcoming, review has started, review is in progress, review has finished), and whatever other things are needed. Ideally, I would like to reduce the time needed for such routine things to a minimum.

Another reason is the lack of motivation. I think it is fair to say that people who invest their time and effort into Boost should be rewarded somehow.
I'm not saying that Boost should become commercial software (please, no!), but an appropriate acknowledgement of their efforts should be in place (on the front page of the web site and in the release notes). Perhaps a donation system could be established, so that release and review managers get monetary rewards, too. At the current stage I don't consider reviewers to be rewarded, because the library acceptance itself is a reward for them, as for the interested parties. But the library author is free to acknowledge them on the library credits page.

And the third reason I'd like to outline is the lack of people. The problem is twofold. On one side, the entry barrier for a person into Boost is quite high. One has to be a quite experienced developer to participate in reviews, let alone to be a review manager. While the review manager should be experienced, I'm not sure the requirement is adequate with regard to the reviewers. I think it should be possible for less experienced users to see if the documentation and examples are clear and understandable, while the more advanced developers have more time to evaluate the implementation and the interface of the library.

The other side of the problem is that Boost is rather closed to its community. I don't know how it happens, but in independent news I regularly read of such projects as KDE, GNOME, Qt, the Linux kernel and others, but nearly nothing about Boost, which, I believe, has no less importance in the world of C++. The Boost web site changes rarely -- essentially, the news column only lists recent Boost releases. For an outsider, nothing really happens around Boost, and that's sad. If Boost were more open and advertised in public (perhaps not a good wording, but I can't come up with a better phrase now), I think there would be much more activity in Boost, and during reviews in particular.

To be continued...

Andrey Semashev wrote:
... First, here are my top 3:
1. The review procedure is failing to deliver new libraries to the users in a reasonable time frame. Some very important libraries stay in the queue for too long without even having a review manager assigned.
For clarity - review managers aren't assigned; they volunteer. If no qualified Booster is interested enough in the library to volunteer, then they may not agree with you as to how important the library is.
... 3. The monolithic design limits the development and adoption of Boost. A more modular approach is needed.
This is an idea that seems to be reaching critical mass inside the community. It has come up many times in the past year or two, and as is seen in a different thread, there are now people concerned enough about it that they have built a proposed solution. Why didn't this happen earlier? As far as I can tell, because this is the first time people who were worried about the problem took the additional step of making a potential solution.
...
Now, some more details on the outlined issues.
1. The review procedure. ========================
The topic comes up rather often, there are suggestions from different participants, but it seems that nothing changes eventually. ...
I at least have not seen the consensus of the community that would be required to make a major change in the review policy. Specifically, these discussions are usually dominated by people who have been neither managers nor developers of reviewed libraries. Some (such as you) do not fit that description, but the fractional representation of experienced developers of reviewed libraries, or of review managers, in these discussions is usually small. I think it would be unwise to make major changes in the system without a strong consensus of the most experienced members of the community.
Anyway, I think we should identify the reasons for this stagnation. Of course, the first thing that comes to mind is the lack of time of the volunteers. True, I can barely comprehend the amount of time a review manager must dedicate to a review. This is especially true in the case of big submissions to Boost. This amount of time should be reduced. The following ways of doing that come to mind (some were discussed earlier on the ML, but I'll rehash):
I would point out that the increase of economic pressures around the world and the reduction of time for managers and reviewers to dedicate to the review process were fairly well correlated. There may be a mechanism here worth paying attention to.
- Introduce a voting mechanism. Voting should be as easy as clicking on a yes/no link or icon on a web page, plus an optional small comment. No ML subscription required. The review manager may take those votes into account as an indication of public interest in and appreciation of the submission.
Again for clarity - this appears to be similar to Paul Bristow's and others' suggestion that there be a pre-review approval phase. Is this what you intend?
- Separate the mechanism of posting a full review from the library discussion. It would make it easier to collect the formal reviews without having to read through all the discussion. Perhaps a separate ML for formal reviews would suffice. A web page with a few fields to fill in to post the review would also be very helpful, especially for occasional newcomers.
I'm not sure I understand what you mean here. I see two possibilities.

First, you may mean separating discussion about libraries under review from the rest of the boost developer list discussion. If so, I don't think it will have much effect. It is not usually hard to see which posts are about reviews and which are not. In my experience managing reviews, this has been a non-problem. Other managers may disagree, however.

Second, you may mean separating the review submissions from the discussions that usually grow out of them -- about the library and the decisions that drove its development. If this is your intent, then I strongly disagree. An important part of the role of the manager is to clarify and distill those discussions and decide whether there is some suggestion or requirement for the future development of the library that is a product of the discussions. In a good discussion of the library, by far the most valuable information for composing a good review comes from the posts that are not the formal review postings. It is the place where people who disagree provide their reasons, where examples are composed and discussed, where any consensus that ever forms can be found. Reading those discussions is essential to producing a good review report and a well-reasoned recommendation.
- Reduce the number of formal questions for the review. IMHO, three questions on design, docs and implementation quality, plus the final yes/no verdict, should suffice for the review. The rest should be optional.
In the current process, all of the questions are optional. The provided questions are suggested, but there are always reviews that don't answer them all.
- Provide automated ways of assisting the review, such as scripts for updating the web site for the review (e.g. post an announcement in the news section, prepare the aforementioned web page for posting reviews, etc.), formal mailings (review is upcoming, review has started, review is in progress, review has finished) and whatever other things needed.
The current web site updates are done by the wizards. The review managers have no work to do for them. The notifications to the list take a cumulative total of a few minutes to create and send.
Ideally, I would like to reduce the time needed for such routine things to a minimum.
It could possibly be reduced, but only by a few minutes, since that is all it takes. This is optimizing the part of the program that takes 1% of run time and hoping to get a meaningful reduction in the whole program.
Another reason is the lack of motivation. I think it is fair to say that people who invest their time and effort into Boost should be rewarded somehow. I'm not saying that Boost should become commercial software (please, no!), but appropriate acknowledgement of their efforts should be in place (on the front page of the web site and in the release notes). Perhaps a donation system could be established, so that release and review managers get monetary rewards, too. At the current stage I don't count reviewers among those to be rewarded, because the library acceptance itself is a reward for them, as interested parties. But the library author is free to acknowledge them on the library credits page.
There are a darn lot of people who work to make Boost work. Many of them do so in relative obscurity and are not offended by that. They deserve thanks for their efforts (my heartfelt thanks to all of you; I know many of you do far more and harder work for Boost than I do), and maybe even a beer if you see them at BoostCon. However, listing them on the front page removes the focus from the reason they do the work. They can be part of the "People" page, if they choose, and be acknowledged there. I am personally against trying to funnel money to some subset of the volunteers. Down that path lie endless arguments about who deserves what fraction of the pot.
And the third reason I'd like to outline is the lack of people. The problem is twofold. On one side, the barrier to entry into Boost is quite high. One has to be quite an experienced developer to participate in reviews, let alone to be a review manager. While the review manager should be experienced, I'm not sure that requirement is appropriate for the reviewers. I think it should be possible for less experienced users to check whether the documentation and examples are clear and understandable, while the more advanced developers have more time to evaluate the implementation and the interface of the library.
There is no requirement for reviewers to be highly experienced experts. In fact, I have personally posted exactly the opposite on both the developer list and the user list several times (in my role as a Review Wizard). It is well understood that someone not familiar with the details of Boost, but instead a solid journeyman programmer with an interest in using a library, has a very valuable perspective on the documentation, the interface, and several other aspects of the library. I have a history of encouraging them to contribute to reviews, but I am far from alone in that. Some people who consider contributing to a review are intimidated by the level of the conversation and worry that they will appear ignorant. This is a problem I'm not sure how to solve. But I don't recall any instances of someone being told they can't contribute, and I recall several instances of someone prefacing comments by saying they are new, and being told that the insights of new people are valued.
The other side of the problem is that Boost is rather closed off from the wider community. I don't know how it happens, but in independent news I regularly read about projects such as KDE, GNOME, Qt, the Linux kernel and others, but nearly nothing about Boost, which, I believe, is no less important in the world of C++. The Boost web site changes rarely - essentially, the news column only lists recent Boost releases. For an outsider, nothing really happens around Boost, and that's sad. If Boost were more open and publicized (perhaps not a good wording, but I can't come up with a better phrase now), I think there would be much more activity in Boost, and during reviews in particular.
I agree that finding a way to get a broader cross section of the developer community outside of Boost involved in review discussions would be good for the libraries. However, I should point out that success at this will amplify the problems for the review managers and developers. As an example, your recent Logging Library review produced a few hundred relevant posts, many of them long and thick with technical details. Now imagine what happens if three times as many people are involved. It is unlikely that the post volume will scale linearly; something like quadratic growth is more likely.
To be continued...
And I look forward to it. Please don't conclude from my post that I'm against anything changing in the review process. I'm also not happy about some of the libraries that have languished, and some of the other problems. However, I think a healthy review process is essential for the health of Boost, so I will very critically examine any suggestions put forward and share what I see as potential problems. John Phillips

Just 2 problems from my (rather exterior) point of view:

1/ About reviews. A review often has too many 'little issues' discussed at the same time. Some are interface related, others are implementation related. From the experience of the ggl/boost.geometry review, I think the interface needs to be fixed before the implementation can be discussed.
--> Maybe an 'interface' review before a 'complete' review (meaning a code review) would be more appropriate.
------> To submit to an 'interface' review, the library would need to be already complete though (docs + implementation), in order to make sure that 1/ the proposition is already solid and 2/ technical problems which could impact the interface are already well spotted.
------> I did not do any review of boost.geometry/ggl for many reasons, one of them being that I felt I could not do a real review, given that the documentation (at that time, at least) was not enough for me to fully understand the library.
--> That way, experts in specific domains can focus on the implementation review once the documentation is correct and the interface satisfying. They can also check things from the interface review discussion if needed.
--> I think much time and hassle can be saved that way.

2/ The community seems unsure whether Boost should aim for research or production libraries.
--> It is not the same work.
--> If 'production' is wanted, then:
----> Maybe libraries could be tagged as 'research' or 'production'. 'Production' would mean at least "maintenance releases + API continuity" for 2 years for each version of such a library.
----> Maybe another possibility is to have a 'production branch', which uses only "production versions" of the available libraries (sorry if this seems obvious/trivial...).
--> Personally, I consider Boost to be 'research' libraries. IMHO, developing Boost can be fun as long as it is research.
Production libraries can later be inspired by these works, or be proposed on the personal websites of the authors of the corresponding Boost libraries. Please note that this last point is only my point of view. Just my 2 cents! Best regards, Pierre Morcello

"The most exciting phrase to hear in science, the one that heralds new discoveries, is not 'Eureka!' (I've found it!), but 'That's funny...'" - Isaac Asimov

On 03/20/2010 06:28 AM, John Phillips wrote:
For clarity - review managers aren't assigned, they volunteer.
Right.
If no qualified Booster is interested in the library enough to volunteer, then they may not agree with you as to how important the library is.
I don't think that libraries like Move or Lockfree lack importance, since they come up quite often on this list. But somehow they still linger in the queue without a review manager.
The topic comes up rather often, and there are suggestions from different participants, but it seems that nothing ever changes. ...
I at least have not seen a consensus of the community that would be required to make a major change in the review policy. Specifically, these discussions are usually dominated by people who have been neither managers nor developers of reviewed libraries. Some (such as you) do not fit that description, but the fractional representation of experienced reviewed developers or review managers in these discussions is usually small. I think it would be unwise to make major changes in the system without a strong consensus of the most experienced members of the community.
I think the opinion of newcomers (for lack of a better word) is also valuable, because their concerns show how Boost looks from the outside. These people may not have put much effort into Boost's evolution, but they may have experience with other projects, including open-source ones. We shouldn't ignore that experience. Regarding the point of gaining acceptance from the core Boost members: yes, I fully agree with you. But as you pointed out, those members don't participate actively in these discussions.
I would point out that the increase of economic pressures around the world and the reduction of time for managers and reviewers to dedicate to the review process were fairly well correlated. There may be a mechanism here worth paying attention to.
That's true, although I wouldn't say that the financial crisis is the cause of the current situation. Long waits in the review queue were common long before the crisis took place.
- Introduce the voting mechanism. Voting should be as easy as clicking on a yes/no link or icon on a web page + an optional small comment. No ML subscription required. The review manager may take into account those votes as an indication of public interest and appreciation of the submission.
Again for clarity - This appears to be similar to Paul Bristow's and others' suggestion that there be a pre-review approval phase. Is this what you intend?
Not exactly. I'm not proposing it as a pre-review phase, but as a complement to the review process. Perhaps, I should have written it in connection with the next suggestion about a web page:
A web page with a few fields to fill in to post the review would also be very helpful, especially for the occasional newcomers.
The ultimate goal of this is to make the review process as open and easy as possible. While this way of reviewing a submission does not allow for in-depth discussion, it does give an idea of the overall impression the library makes on its users.
I'm not sure I understand what you mean here. I see two possibilities.
First, you may mean separate discussion about libraries under review from the rest of the boost developer list discussion.
No, I don't think that's necessary. Typically, library-related discussions can be extracted from the rest of messages rather well by email filters.
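As an illustration, a filter of the kind Andrey mentions could be as simple as matching a subject tag. This is a minimal sketch with an invented sample mailbox; the "[review]" tag is an assumed convention for illustration, not something the Boost list actually enforces:

```shell
# Build a tiny sample mbox with invented messages, then extract the
# review-related subjects. The "[review]" subject tag is hypothetical.
cat > sample.mbox <<'EOF'
From list Fri Mar 19 10:00:00 2010
Subject: [review] Boost.Log formal review begins

From list Fri Mar 19 11:00:00 2010
Subject: Compile failure with gcc 4.4.1
EOF

# Keep only the review traffic
grep '^Subject:.*\[review\]' sample.mbox
```

A real mail client's filter (procmail, sieve, or a GUI rule) would apply the same kind of pattern per message rather than over an mbox file.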
Second, you may mean separating the review submission from the discussions about the library and the decisions that drove the development that usually grow out of those submissions.
Yes, this is what I had in mind. Although I tend to think that discussions usually precede the final formal review, rather than follow it.
If this is your intent, then I strongly disagree. An important part of the role of the manager is to clarify and distill those discussions and decide whether there is some suggestion or requirement for the future development of the library that is a product of the discussions. In a good discussion of the library, by far the most valuable information for composing a good review comes from the posts that are not the formal review postings. It is the place where people who disagree provide their reasons, where examples are composed and discussed, where any consensus that ever forms can be found. Reading those discussions is essential to producing a good review report and a well reasoned recommendation.
I understand that this is now most of the work of a review manager, and my intent was to reduce it. In my view, it would be easier for a review manager to have formal reviews containing the essential outcome of the discussion that happened between the reviewers and the author. The discussion, although containing a lot of reasoning, also contains a lot of technical details that may be hard to follow. I think that in many cases such details are less important than higher-level issues, such as the design and interface of the library.
In the current process, all of the questions are optional. The provided questions are suggested, but there are always reviews that don't answer them all.
Hm, I hadn't thought of it that way. Good to know.
- Provide automated ways of assisting the review, such as scripts for updating the web site for the review (e.g. post an announcement in the news section, prepare the aforementioned web page for posting reviews, etc.), formal mailings (review is upcoming, review has started, review is in progress, review has finished) and whatever other things needed.
The current web site updates are done by the wizards. The review managers have no work to do for them.
What I meant by updating a web site is maintaining a web page, accessible from the front page of www.boost.org, that lists all reviews that are coming soon or currently ongoing. Ideally, the voting (or quick review) page I mentioned would be accessible from there. The ongoing review should be accessible in the form of a blog or RSS feed, extracted from all the conversations on the ML. The review-related page should also include an excerpt of the library description and links to online docs and downloads. If the review wizards take care of that, it'll be fine, unless they are overwhelmed by the amount of work.
The notifications to the list take a cumulative total of a few minutes to create and send.
Perhaps. Since I have never managed a review, I have a rather vague understanding of how much time it would take, and which other actions could be optimized.
There are a darn lot of people who work to make Boost work. Many of them do so in relative obscurity and are not offended by that. They deserve thanks for their efforts (my heartfelt thanks to all of you; I know many of you do far more and harder work for Boost than I do), and maybe even a beer if you see them at BoostCon. However, listing them on the front page removes the focus from the reason they do the work. They can be part of the "People" page, if they choose, and be acknowledged there.
I think that at least being acknowledged in the release notes would give good credit to the people involved. But I'm not insisting on anything.
I am personally against trying to funnel money to some subset of the volunteers. Down that path lie endless arguments about who deserves what fraction of the pot.
We could let the users decide which part of Boost deserves the donations. For instance, for each library in the review queue there could be a donation pool for the review manager. The one who manages the review gets the pool. I think a fair approach can be worked out, if needed.
There is no requirement for reviewers to be highly experienced experts.
Formally, true. But:
Some people who consider contributing to a review are intimidated by the level of the conversation and worry that they will appear ignorant. This is a problem I'm not sure how to solve. But I don't recall any instances of someone being told they can't contribute, and I recall several instances of someone prefacing comments by saying they are new, and being told that the insights of new people are valued.
I'd say it's a psychological problem. That is why I want to simplify the review process for the reviewers, in the hope that it will encourage more people to participate and to see the welcoming attitude of the regular members of the community.
I agree that finding a way to get a broader cross section of the developer community outside of Boost involved in review discussions would be good for the libraries. However, I should point out that success at this will amplify the problems for the review managers and developers.
But it could also bring more review managers into the community. Also, an excess of feedback is by far not as bad as a lack of it.

Andrey Semashev wrote:
On 03/20/2010 06:28 AM, John Phillips wrote:
If this is your intent, then I strongly disagree. An important part of the role of the manager is to clarify and distill those discussions and decide whether there is some suggestion or requirement for the future development of the library that is a product of the discussions. In a good discussion of the library, by far the most valuable information for composing a good review comes from the posts that are not the formal review postings. It is the place where people who disagree provide their reasons, where examples are composed and discussed, where any consensus that ever forms can be found. Reading those discussions is essential to producing a good review report and a well reasoned recommendation.
John, this is spot on but not documented in http://www.boost.org/community/reviews.html#Review_Manager. I think it should be part of the Review Manager description.
I understand that this is now most of the work of a review manager, and my intent was to reduce it. In my view, it would be easier for a review manager to have formal reviews containing the essential outcome of the discussion that happened between the reviewers and the author. The discussion, although containing a lot of reasoning, also contains a lot of technical details that may be hard to follow. I think that in many cases such details are less important than higher-level issues, such as the design and interface of the library.
The role of Review Manager doesn't seem onerous to me, and I volunteered to be one. During a review, there is a tremendous upsurge of message traffic to which a Review Manager must pay attention, to be sure. That demands a lot of the Review Manager, but not nearly so much as of the library author, who must address the technical questions and concerns raised.

I think the problem for most reviews, for all involved, is that they are too short. Tiny, well focused libraries don't need more than a couple of weeks, but substantial libraries, like Boost.Log, need more so the discussion can have a more comfortable pace and so more people can spend sufficient time to do a review. (Yes, they can start early, but they don't.)

Any reduction in the information a Review Manager uses to make a decision, particularly given the relatively small number of reviews submitted for most libraries, seems unwise. That's why I asked John to include something like what he wrote above in the description of a Review Manager's responsibilities. It is not uncommon for reviews to be rather short because so much content was discussed previously; the review winds up being a summary rather than sufficiently detailed to provide all that the Review Manager should consider.

_____
Rob Stewart robert.stewart@sig.com
Software Engineer, Core Software using std::disclaimer;
Susquehanna International Group, LLP http://www.sig.com

On 03/20/2010 02:10 AM, Andrey Semashev wrote:
First, here are my top 3:
1. The review procedure is failing to deliver new libraries to the users in a reasonable time frame. Some very important libraries stay in the queue for too long without even having a review manager assigned.
2. The lack of maintenance releases. In production environments it is often a rule of thumb that the first release is unstable, and the second (third?) security update is suitable for use. Not having such updates at all leaves Boost in a bad situation.
3. Monolithic design limits development and adoption of Boost. A more modular approach is needed.
2. The lack of maintenance releases
===================================

Personally, this part has given me the most frustration as a user. It was really hard to convince my coworkers to upgrade to a new Boost release recently, and the main point of argument was the potential instability this upgrade could bring. This is not limited to API changes (which usually don't cause much trouble, when they appear), but is about potential bugs and performance degradation (and those are the real problems that took most of the porting time). I think I'm not alone in this regard.

I understand that Boost is about pulling C++ development forward. But the users should not be forgotten in the process. There are a lot of tickets hanging in Trac; some have been there for years, some have patches and test cases. There are attempts to reduce their number during the "bug fixing runs", but that doesn't radically change the situation.

I think the release scheme should be changed slightly, so that a feature release is always followed by at least one bugfix release. The release schedule may stay the same; the only thing that changes is that every odd release is focused on fixing tickets. No new libraries, no new features, no major rewrites. That policy could be maintained by the release manager, and it doesn't require additional testing resources.

Another thing that would improve Boost support is an easy way to see which critical problems have been identified with a particular Boost release, and their possible solutions (with patches, if present). This information should stay available for all Boost releases that are available for download (e.g., on the release notes page). AFAIK, Trac fails to deliver such information currently. The decision on whether a particular problem is critical enough to be added to this list, and on backporting the fix into previous Boost releases, should be up to the library maintainer. But there should be a rule that in the next maintenance release all (or at least most) such critical problems from the previous release are fixed. When the maintainer has no time to do it, the release manager and the maintainer can grant authority to do that to other developers. The terms on which this help is accepted can be established individually (e.g., only commit after a review, or commit without any review).

There will be a problem with the libraries that are currently unmaintained. This is a very disturbing problem, indeed. I can't suggest a solution off hand, but at least it would seem reasonable to mark such libraries in the release notes or documentation, so that users would be aware that if problems arise with them, they should not expect a quick response. There could also be a call for maintainers for those libraries somewhere on the web site. The donation system I mentioned in my previous email could also build more interest in becoming a maintainer.

3. Monolithic design
====================

This matter has already been discussed, and others have made suggestions, so I'll just express my thoughts on improving things.

First, I agree with those proposing to separate the core libraries from the more specialized and less stable ones. At first the core can be bundled in a single package, but later, if needed, it could be divided further. The main point is that the core does not depend on any other Boost libraries and includes the components that are most commonly used and are very stable. The other libraries that depend on the core are bundled separately. They can have their own versioning and release cycles. Dependencies on other libraries, including, but not limited to, the core are allowed, but it should be stated explicitly in the docs and the release notes which versions of the dependent libraries are required. Ideally, it would be good to have a web service of some kind, where users could select the desired libraries, and the system would suggest also downloading the dependent libraries of the appropriate versions.

Periodically, a complete Boost release should be prepared. It may not happen as often as it does now, and it may be aligned with whatever time frame is best for the packagers. This complete release should include the latest compatible releases of the libraries, bundled in a single package. The web site would not require dramatic changes, except for adding convenience shortcuts for the non-core libraries, such as lib.boost.org or www.boost.org/lib.

Second, the entry procedure for a library under the Boost umbrella should be simpler than for becoming a core library. There is a simple reason for that: the new library is less stable, it has fewer users, and it may still contain rough edges, and this is normal. The library should show that it offers good potential for improving, while offering good functionality in order to be useful. But it doesn't have to be perfect. This may sound like a lower level of requirement than what Boost currently has, but I think it's adequate. On the other hand, core libraries are the beacon that all other developers aim at. The requirements to enter that set of libraries can be much higher, and could also include a certain period of real-world use to prove the library's usefulness.

I admit that the more I think of this part, the more it looks connected to the review system. I even think that dividing libraries into several layers (e.g. gold, silver and bronze), with each layer having the different requirements for entering, could help both the development and the users. It would be easier for the developers and review managers to bring new libraries into the bronze layer, while the users will know which libraries are the most stable and polished ones (specifically, the ones from the gold layer).
Last but not least, I'd like to note that I purposely did not touch the instrumental side of the issue. I hope that Dave's proposal will address that, or will at least make a step toward it.

On Sat, 20 Mar 2010 21:04:10 +0100, Andrey Semashev <andrey.semashev@gmail.com> wrote:
[...]3. Monolithic design. [...]I admit that the more I think of this part, the more it looks connected to the review system. I even think that dividing libraries into several layers (e.g. gold, silver and bronze), with each layer having the different requirements for entering, could help both the development and the users. It would be easier for the developers and review managers to bring new libraries into the bronze layer, while the users will know which libraries are the most stable and polished ones (specifically, the ones from the gold layer).
I also believe that there is not really a problem with the monolithic design. From a deployment point of view it can't be much easier than now: download a ZIP file every three months and run bjam to build and install everything - done (assuming that you have figured out how this process works in detail; but that's not a design issue either; maybe there is just a simple graphical installation wizard missing - then no one would need to care about all those bjam command line options?). Anyway, if I imagine I have to search for components, figure out dependencies and try to find compatible versions, I definitely prefer to download one package with everything, which works out of the box. I might waste space on my hard disk, as I don't need each and every library either. But I'd still prefer that to wasting time trying to set up my personal Boost distribution (I can imagine a perfect web-based tool which does all of this automatically; but this would require development and maintenance effort, too). Boris
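For concreteness, the workflow Boris describes might look like the sketch below. The mirror URL and version number are illustrative placeholders (1.42.0 was the then-current release), and the commands are only recorded to a file rather than executed, since a real build needs the downloaded tree:

```shell
# Dry-run sketch of the monolithic "one archive, one build" workflow.
# Steps are recorded instead of executed; the URL and version are
# placeholders, not authoritative.
: > build_steps.txt
run() { printf '%s\n' "$*" >> build_steps.txt; }

run wget http://downloads.sourceforge.net/boost/boost_1_42_0.tar.gz
run tar xzf boost_1_42_0.tar.gz
run cd boost_1_42_0
run ./bootstrap.sh --prefix=/usr/local   # generates bjam
run ./bjam install                       # builds and installs everything

cat build_steps.txt
```

The point being made is that, whatever the bjam command-line details, the user-facing process is a single download followed by a single build of the whole tree.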

On 20 Mar 2010, at 21:02, Boris Schaeling wrote:
On Sat, 20 Mar 2010 21:04:10 +0100, Andrey Semashev <andrey.semashev@gmail.com> wrote:
[...]3. Monolithic design. [...]I admit that the more I think of this part, the more it looks connected to the review system. I even think that dividing libraries into several layers (e.g. gold, silver and bronze), with each layer having the different requirements for entering, could help both the development and the users. It would be easier for the developers and review managers to bring new libraries into the bronze layer, while the users will know which libraries are the most stable and polished ones (specifically, the ones from the gold layer).
I also believe that there is not really a problem with the monolithic design. From a deployment point of view it can't be much easier than now: download a ZIP file every three months and run bjam to build and install everything - done (assuming that you have figured out how this process works in detail; but that's not a design issue either; maybe there is just a simple graphical installation wizard missing - then no one would need to care about all those bjam command line options?).
The point where that breaks down is where one library is found to have a fatal flaw shortly after a release, you wait 6 months (or whatever) for another version of boost which fixes the bug, but also breaks the API on a half-dozen other libraries you use.

On Sun, 21 Mar 2010 00:57:49 +0100, Christopher Jefferson <chris@bubblescope.net> wrote:
[...]
I also believe that there is not really a problem with the monolithic design. From a deployment point of view it can't be much easier than now: download a ZIP file every three months and run bjam to build and install everything - done (assuming that you have figured out how this process works in detail; but that's not a design issue either; maybe there is just a simple graphical installation wizard missing - then no one would need to care about all those bjam command line options?).
The point where that breaks down is where one library is found to have a fatal flaw shortly after a release, you wait 6 months (or whatever) for another version of boost which fixes the bug, but also breaks the API on a half-dozen other libraries you use.
This is then really a problem with testing if such a disastrous bug slips through? But I think I get your point: as some libraries change faster than others, those changes would be available to users earlier if users didn't need to wait for the next release date? I'm not sure though if this has really been such a problem in the past. I find the release schedule pretty good (some versions I even skipped, as I just didn't have the time at that point to upgrade). Boris

The point where that breaks down is where one library
is found to have a fatal flaw shortly after a release, you wait 6 months (or whatever) for another version of boost which fixes the bug, but also breaks the API on a half-dozen other libraries you use.
This is then really a problem with testing if such a disastrous bug slips through?
I think it is terribly wrong to assume that Boost can release any kind of bug-free library, even free from terrible, critical bugs that make the library useless. This is the programming world. There is no such thing as bug-free software. See the UUID example... Artyom

On Sun, 21 Mar 2010 05:55:14 +0100, Artyom <artyomtnk@yahoo.com> wrote:
The point where that breaks down is where one library
is found to have a fatal flaw shortly after a release, you wait 6 months (or whatever) for another version of boost which fixes the bug, but also breaks the API on a half-dozen other libraries you use.
This is then really a problem with testing if such a disastrous bug slips through?
I think that it is terribly wrong to assume that Boost can release any kind of bug-free library, even one free from terrible, critical bugs that make the library useless. This is the programming world. There is no such thing as bug-free software. See the UUID example...
I would propose then to add a patch system to bjam? If bjam supported patches (in order not to depend on various tools on different operating systems) those who urgently need a fix wouldn't need to wait for the next official release? Boris

Boris Schaeling wrote:
On Sun, 21 Mar 2010 05:55:14 +0100, Artyom <artyomtnk@yahoo.com> wrote:
The point where that breaks down is where one library
is found to have a fatal flaw shortly after a release, you wait 6 months (or whatever) for another version of boost which fixes the bug, but also breaks the API on a half-dozen other libraries you use.
This is then really a problem with testing if such a disastrous bug slips through?
I think that it is terribly wrong to assume that Boost can release any kind of bug-free library, even one free from terrible, critical bugs that make the library useless. This is the programming world. There is no such thing as bug-free software. See the UUID example...
I would propose then to add a patch system to bjam? If bjam supported patches (in order not to depend on various tools on different operating systems) those who urgently need a fix wouldn't need to wait for the next official release?
Why would Boost.Build support patches? The procedure for making a maintenance release is pretty straightforward: 1. "svn merge -c" the fix to the maintenance branch. 2. Create a tarball/zip from the maintenance branch. If there's interest, I can actually do it. [There's the problem that the SF file release system is a bit convoluted and not scriptable, but we can bypass it completely, just like it's done for Boost.Build nightly builds] - Volodya

On Sun, 21 Mar 2010 18:27:42 +0100, Vladimir Prus <ghost@cs.msu.su> wrote:
[...]Why would Boost.Build support patches? The procedure for making a maintenance release is pretty straightforward:
1. "svn merge -c" the fix to the maintenance branch. 2. Create a tarball/zip from the maintenance branch.
Well, it's always pretty straightforward for someone. :) But if everyone has to use bjam anyway to build the libraries, I think there will be less willingness to install and learn another tool only to apply patches? As I'm not asking for a component-based approach either, I have to forward the question: does Vladimir's proposal make sense to those who are looking forward to a Boost.UUID fix and don't want to wait for the next official release? Boris

Christopher Jefferson wrote:
On 20 Mar 2010, at 21:02, Boris Schaeling wrote:
On Sat, 20 Mar 2010 21:04:10 +0100, Andrey Semashev <andrey.semashev@gmail.com> wrote:
[...]3. Monolithic design. [...]I admit that the more I think of this part, the more it looks connected to the review system. I even think that dividing libraries into several layers (e.g. gold, silver and bronze), with each layer having different requirements for entering, could help both the development and the users. It would be easier for the developers and review managers to bring new libraries into the bronze layer, while the users will know which libraries are the most stable and polished ones (specifically, the ones from the gold layer). I also believe that there is not really a problem with the monolithic design. From a deployment point of view it can't be much easier than now: download a ZIP file every three months and run bjam to build and install everything - done (assuming that you have figured out how this process works in detail; but that's not a design issue either; maybe there is just a simple graphical installation wizard missing - then no one would need to care about all those bjam command line options?).
The point where that breaks down is where one library is found to have a fatal flaw shortly after a release, you wait 6 months (or whatever) for another version of boost which fixes the bug, but also breaks the API on a half-dozen other libraries you use.
Is this not a maintenance release problem again? That is, if there's a mechanism to quickly roll a point release when such a flaw is discovered, then it's relatively unimportant how monolithic the design is. - Volodya

On 03/21/2010 08:14 AM, Vladimir Prus wrote:
Christopher Jefferson wrote:
The point where that breaks down is where one library is found to have a fatal flaw shortly after a release, you wait 6 months (or whatever) for another version of boost which
fixes the bug, but also breaks the API on
a half-dozen other libraries you use.
Is this not a maintenance release problem again? That is, if there's a mechanism to quickly roll a point release when such a flaw is discovered, then it's relatively unimportant how monolithic the design is.
It is much easier to ship a maintenance release of a single library than of the whole Boost. Users will be able to compose their local distributions of maintenance releases of libraries they use and thus strive for stability.

Andrey Semashev wrote:
It is much easier to ship a maintenance release of a single library than of the whole Boost. Users will be able to compose their local distributions of maintenance releases of libraries they use and thus strive for stability.
Such a composition is not well tested (or even 'at all tested') and is itself suspect.
I think the proposal to have a monolithic stable core and then satellite libraries that mostly depend only on the core is a good one. If a satellite is becoming a common dependency itself, then that should be a criterion for promoting it to the core. In some respects I think the problem is library maintainers. Boost looks like 'a thing' from outside, but it's actually a collection of separate things that happen to share a namespace and a zip file. I guess that a lot of bugs and patches sit in trac because the maintainer lacks time and everyone else defers to him (or her?). It might be better if an executive team could take more responsibility for addressing such things if the original maintainer cannot keep up. I'm not sure if Apache has the right model - the relationship between APR and its dependents is OK, but commons is a mess.

-----Original Message----- From: boost-bounces@lists.boost.org [mailto:boost-bounces@lists.boost.org] On Behalf Of Christopher Jefferson Sent: Saturday, March 20, 2010 11:58 PM To: boost@lists.boost.org Subject: Re: [boost] The problems with Boost development
The point where that breaks down is where one library is found to have a fatal flaw shortly after a release, you wait 6 months (or whatever) for another version of boost which fixes the bug, but also breaks the API on a half-dozen other libraries you use.
But you are going to have to deal with the broken bits sooner or later :-( Easier to deal with them one at a time? So you wait for next week's/fortnight's release? (Assuming the install process can be made much less painful.) (And I'm not sure the API change rate is as high as you suggest.) Paul --- Paul A. Bristow Prizet Farmhouse Kendal, UK LA8 8AB +44 1539 561830, mobile +44 7714330204 pbristow@hetp.u-net.com

Andrey Semashev wrote:
3. Monolithic design. =====================
This matter has already been discussed, and others have made the suggestions, so I'll just express my thoughts of improving things.
Could you clarify what exactly is the problem with monolithic design? Is that merely that "Boost is too big" and it makes it harder to add it as dependency for a project? - Volodya

On Sat, Mar 20, 2010 at 2:24 PM, Vladimir Prus <ghost@cs.msu.su> wrote:
Andrey Semashev wrote:
3. Monolithic design. =====================
This matter has already been discussed, and others have made the suggestions, so I'll just express my thoughts of improving things.
Could you clarify what exactly is the problem with monolithic design?
The problem is that you can't have just a little bit of it. For example I use only a few pieces of Boost and it bugs me to wait through long SVN updates and sourceforge downloads for things I do not need. Emil Dotchevski Reverge Studios, Inc. http://www.revergestudios.com/reblog/index.php?n=ReCode

Emil Dotchevski wrote:
On Sat, Mar 20, 2010 at 2:24 PM, Vladimir Prus <ghost@cs.msu.su> wrote:
Andrey Semashev wrote:
3. Monolithic design. =====================
This matter has already been discussed, and others have made the suggestions, so I'll just express my thoughts of improving things. Could you clarify what exactly is the problem with monolithic design?
The problem is that you can't have just a little bit of it. For example I use only a few pieces of Boost and it bugs me to wait through long SVN updates and sourceforge downloads for things I do not need.
I see. However, I assume you won't be happy to individually pick from 90 options? What is the right balance? Say, 10 components? - Volodya

On 03/21/2010 12:24 AM, Vladimir Prus wrote:
Andrey Semashev wrote:
3. Monolithic design. =====================
This matter has already been discussed, and others have made the suggestions, so I'll just express my thoughts of improving things.
Could you clarify what exactly is the problem with monolithic design? Is that merely that "Boost is too big" and it makes it harder to add it as dependency for a project?
As a user, I constantly find myself excluding parts of Boost from building. There are many header-only components I also don't use. Overall, my estimate is that I use no more than 50% of the libraries. So I want to be able to exclude the unneeded part. As a developer, I would like to be able to ship releases as often as needed, not necessarily bound to the current release schedule of Boost. I also think that the monolithic design limits the appearance of new libraries in Boost, as it imposes the same quality standards on the well-established libraries and the newly coming ones. Some performance problems of SVN and Trac have been identified by users. I think they resulted from centralized storage of Boost artifacts (which are the source code and tickets). Some have concerns about testing-results turnaround. Since Boost currently has to be tested as a whole, that problem will grow bigger over time. A modular approach would allow each tester to run tests only for one or several libraries and thus produce the results more often.

On 21 March 2010 08:48, Andrey Semashev <andrey.semashev@gmail.com> wrote:
Some performance problems of SVN and Trac have been identified by users. I think they resulted from centralized storage of Boost artifacts (which are the source code and tickets).
We recently reached bug #4000. Mozilla reached bug 400000 in 2007. If Trac can't deal with 4000 bugs, then we need a better bug tracking system. Our branching structure is, as many people have pointed out, very inefficient. But it's required for our testing setup. If we had more flexible testing, then we could do better version control. Daniel

Andrey Semashev wrote:
On 03/21/2010 12:24 AM, Vladimir Prus wrote:
Andrey Semashev wrote:
3. Monolithic design. =====================
This matter has already been discussed, and others have made the suggestions, so I'll just express my thoughts of improving things.
Could you clarify what exactly is the problem with monolithic design? Is that merely that "Boost is too big" and it makes it harder to add it as dependency for a project?
As a user, I constantly find myself excluding parts of Boost from building. There are many header-only components I also don't use. Overall, my estimate is that I use no more than 50% of the libraries. So I want to be able to exclude the unneeded part.
Are you concerned about disk space of sources, disk space of build products (can be easily skipped already), download size, or something else?
As a developer, I would like to be able to ship releases as often as needed, not necessarily bound to the current release schedule of Boost. I also think that the monolithic design limits the appearance of new libraries in Boost, as it imposes the same quality standards on the well-established libraries and the newly coming ones.
Well, it's possible to ship individual releases already, no biggie -- users would just have to rm boost/component and libs/component and unzip the new release on top of that.
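For the archives, Volodya's rm-and-unzip overlay could be sketched like this. This is a hedged, self-contained simulation: the component name (uuid), all paths, and the archive name are made up, and tar stands in for the release zip so the sketch runs anywhere.

```shell
# Self-contained simulation of the "rm and unzip on top" overlay.
# The component name (uuid), paths, and archive are all hypothetical,
# and tar stands in for a release zip so the sketch runs anywhere.
set -e
work=$(mktemp -d)

# A stand-in Boost tree holding an old copy of the component:
mkdir -p "$work/boost_root/boost/uuid" "$work/boost_root/libs/uuid"
echo "old" > "$work/boost_root/boost/uuid/uuid.hpp"

# A stand-in point-release archive containing only the updated component:
mkdir -p "$work/release/boost/uuid" "$work/release/libs/uuid"
echo "new" > "$work/release/boost/uuid/uuid.hpp"
tar -C "$work/release" -cf "$work/uuid-fix.tar" boost libs

# The overlay itself: remove boost/component and libs/component,
# then unpack the new release on top of the existing tree.
rm -rf "$work/boost_root/boost/uuid" "$work/boost_root/libs/uuid"
tar -C "$work/boost_root" -xf "$work/uuid-fix.tar"
cat "$work/boost_root/boost/uuid/uuid.hpp"   # prints: new
```

The same two commands (rm, then unpack on top) would apply against a real Boost tree and a real per-library archive, were such archives published.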
Some performance problems of SVN and Trac have been identified by users. I think they resulted from centralized storage of Boost artifacts (which are the source code and tickets).
I believe they stem from the current server hosting both. For example, KDE is a couple of orders of magnitude larger than Boost, and I never noticed significant performance problems with SVN.
Some have concerns about testing-results turnaround. Since Boost currently has to be tested as a whole, that problem will grow bigger over time. A modular approach would allow each tester to run tests only for one or several libraries and thus produce the results more often.
OK. - Volodya

On 03/21/2010 03:58 PM, Vladimir Prus wrote:
As a user, I constantly find myself excluding parts of Boost from building. There are many header-only components I also don't use. Overall, my estimate is that I use no more than 50% of the libraries. So I want to be able to exclude the unneeded part.
Are you concerned about disk space of sources, disk space of build products (can be easily skipped already), download size, or something else?
Disk space and the number of files. We use nightly builds of our products, and this includes a complete checkout of our repository, which includes Boost. Checking out a lot of needless files takes time and space.
As a developer, I would like to be able to ship releases as often as needed, not necessarily bound to the current release schedule of Boost. I also think that the monolithic design limits the appearance of new libraries in Boost, as it imposes the same quality standards on the well-established libraries and the newly coming ones.
Well, it's possible to ship individual releases already, no biggie -- users would just have to rm boost/component and libs/component and unzip new release on top of that.
But that's kind of detached from Boost, IIUC. There is no common place for these point releases, and there is no way to determine compatibility between releases of different libraries. I'm not aware of any library in Boost that does that.
Some performance problems of SVN and Trac have been identified by users. I think they resulted from centralized storage of Boost artifacts (which are the source code and tickets).
I believe they stem from the current server hosting both. For example, KDE is a couple of orders of magnitude larger than Boost, and I never noticed significant performance problems with SVN.
Perhaps.

AMDG Andrey Semashev wrote:
Some have concerns about testing-results turnaround. Since Boost currently has to be tested as a whole, that problem will grow bigger over time. A modular approach would allow each tester to run tests only for one or several libraries and thus produce the results more often.
Nothing outside the regression testing system itself prevents testing a single library at a time. The reporting system is not designed to handle it, and there would be a few issues with the Boost.Build tests, since they aren't handled by bjam. In Christ, Steven Watanabe

Steven Watanabe wrote:
AMDG
Andrey Semashev wrote:
Some have concerns about testing-results turnaround. Since Boost currently has to be tested as a whole, that problem will grow bigger over time. A modular approach would allow each tester to run tests only for one or several libraries and thus produce the results more often.
Nothing outside the regression testing system itself prevents testing a single library at a time. The reporting system is not designed to handle it, and there would be a few issues with the Boost.Build tests, since they aren't handled by bjam.
Also, I would like to remind anyone interested that it's very easy to run tests for one library at a time on your own system by invoking library_test.sh (or .bat) from the library's test directory. This works with the current boost/build bjam files etc. and generates a table of test results. You don't have to wait for tests to be run for your platform and the libraries you use. The only thing required is to build one executable whose source is included in boost - BOOSTDIR/tools/regression/src/library_test.. Robert Ramey
In Christ, Steven Watanabe
_______________________________________________ Unsubscribe & other changes: http://lists.boost.org/mailman/listinfo.cgi/boost

Meet me at BoostCon! On Mar 21, 2010, at 12:27 PM, "Robert Ramey" <ramey@rrsd.com> wrote:
Also, I would like to remind anyone interested that it's very easy to run tests for one library at a time on your own system by invoking library_test.sh (or .bat) from the library's test directory
What does that script add to a simple invocation of bjam with no arguments, which does the same?

David Abrahams wrote:
Meet me at BoostCon!
On Mar 21, 2010, at 12:27 PM, "Robert Ramey" <ramey@rrsd.com> wrote:
Also, I would like to remind anyone interested that it's very easy to run tests for one library at a time on your own system by invoking library_test.sh (or .bat) from the library's test directory
What does that script add to a simple invocation of bjam with no arguments, which does the same?
It IS the same - except that it creates an easily viewable HTML table which has all the tests and all the platforms and all the build combinations (debug/release, static/dynamic build, etc). Also, all test failures are linked to another page which shows the test/build failures. I feel like I'm missing something really dumb, as I can't figure out how other authors run all the tests in their particular libraries prior to checking in, or how users verify that any particular library works in their environment without some sort of tool such as this. Maybe they just do them by hand one by one? Or maybe they're just adding to their app without running the tests? Or? It's a mystery to me. OT - I know I've complained about bjam - but for me, along with this table generator, it has worked well. Robert Ramey

AMDG Robert Ramey wrote:
I feel like I'm missing something really dumb, as I can't figure out how other authors run all the tests in their particular libraries prior to checking in
I personally use bjam directly.
or how users verify that any particular library works in their environment without some sort of tool such as this.
I would guess that most users don't run the tests.
Maybe they just do them by hand one by one? Or maybe they're just adding to their app without running the tests? Or? It's a mystery to me.
In Christ, Steven Watanabe

Steven Watanabe wrote:
AMDG
Robert Ramey wrote:
I feel like I'm missing something really dumb, as I can't figure out how other authors run all the tests in their particular libraries prior to checking in
I personally use bjam directly.
The serialization library testing has been referred to as a "carpet bombing" approach. A full run on my local machine is three compilers (gcc 4.3.2, msvc 7.1, msvc 9.0), two builds (debug and release), two flavors (static and dynamic lib). There are about 60 tests. About 40 of them are run on each kind of archive class (text_, xml_, binary_, text_w and xml_w). So the total number of tests run is approximately 3 * 2 * 2 * (20 + 5 * 40) = 2640 test results. So I just let it run - and the next morning I am rewarded with a really nice giant table of 3*2*2 columns and 20 + 5*40 rows. It's hard to describe the satisfaction that derives from scrolling all over it. I check the table and click on the red failures. It's much easier than examining the bjam logs and then finding the results test directory. When I rerun just some of the tests, the table is rebuilt. This process continues until my next "oeuvre" is ready to check in.
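For the record, the test-matrix count above can be checked mechanically. This tiny script only restates the arithmetic from the paragraph (3 compilers x 2 builds x 2 flavors, with 20 standalone tests plus 40 tests run against each of 5 archive classes); nothing beyond the stated numbers is assumed.

```shell
# Recomputing the test-matrix size described above.
compilers=3; builds=2; flavors=2
tests_per_config=$((20 + 5 * 40))                        # 220 test cases per configuration
total=$((compilers * builds * flavors * tests_per_config))
echo "$total"   # prints: 2640
```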
or how users verify that any particular library works in their environment without some sort of tool such as this.
I would guess that most users don't run the tests.
lol - of course I knew that. They build their application and when it doesn't work they query the boost users' list. It would make helping users on the list easier if I could know that the library does in fact build and test as expected before they even start to ask the question. Robert Ramey
Maybe they just do them by hand one by one? Or maybe they're just adding to their app without running the tests? Or? It's a mystery to me.
In Christ, Steven Watanabe

Robert Ramey <ramey <at> rrsd.com> writes:
I can't figure out how other authors run all the tests in their particular libraries prior to checking in or how users verify that any particular library works in their environment without some sort of tool such as this.
I just run bjam (from within emacs using `M-x compile RET bjam RET'). At the end it tells me if there were failures. If there were, I hit f4 (which I've bound to "show me the next error") and go look at the problem. It brings me right to the source line in question and highlights the error, right there in the program I use to edit code. I find that much easier than trying to deal with it through a web interface. I guess it never occurred to me what a PITA this must be for people who don't have something similar set up! not-an-editor-but-an-operating-system-ly y'rs Dave

Dave Abrahams wrote:
Robert Ramey <ramey <at> rrsd.com> writes:
I can't figure out how other authors run all the tests in their particular libraries prior to checking in or how users verify that any particular library works in their environment without some sort of tool such as this.
I just run bjam (from within emacs using `M-x compile RET bjam RET'). At the end it tells me if there were failures. If there were, I hit f4 (which I've bound to "show me the next error") and go look at the problem. It brings me right to the source line in question and highlights the error, right there in the program I use to edit code. I find that much easier than trying to deal with it through a web interface. I guess it never occurred to me what a PITA this must be for people who don't have something similar set up!
Actually this is an entirely different problem - with an entirely different solution. What I do is the following:
* I use msvc 7.1 as my "default" development platform.
* I have a VCIDE "project" for each serialization library (narrow and wide characters), a project for each test, and a project for each demo. Each test project has an "after build" command which runs the test any time it is rebuilt.
* I have a VCIDE "solution" which contains all of the above projects.
* I also have some "configurations" for switching build types to dll, static lib, debug, release, etc.
So here's my work flow:
a) A user reports a problem with a small example.
b) I paste his source into a special project I have for this purpose (test_zmisc).
c) I build the test_zmisc project. If it fails to build with a compile error, I can jump right to the code and address it - this includes code in other projects such as libraries.
d) I use the MSVC debugger (the gold standard in my opinion) to trace through and discover the problem.
e) I tweak code in headers/libraries until I think I've addressed the problem.
Now, I want to re-run ALL the tests so that I'm not playing whack-a-mole. For THIS I use library_test (.sh or .bat) to update my giant test results table. I can't imagine doing this by running bjam for each combination of compiler and build variant. It seems that it's either that or just check in the changes and watch the trunk tests. The latter doesn't provide the instant gratification that I require. When my table is updated, I can click to see the messages associated with any failure. Then I go back to my MSVC environment as described above. To my mind, there is no feature of this procedure that I can do without. I can't see how anyone can do this kind of work without these components:
* An IDE to build, test and debug particular tests (I would guess emacs fulfills this need)
* The ability to run a large number of tests in one's local environment and permit the browsing of results.
How do people do this latter task without something like library_test? What do other people do instead? Just so you can see what I'm talking about, here is sample output from library_test: http://www.rrsd.com/software_development/boost/library_status.html Robert Ramey
not-an-editor-but-an-operating-system-ly y'rs Dave

AMDG Robert Ramey wrote:
Now, I want to re-run ALL the tests so that I'm not playing whack-a-mole.
For THIS I use library_test (.sh or bat) to update my giant test results table. I can't imagine doing this by running bjam for each combination of compiler, and build variant.
I can imagine it. In fact, it's what I do. Boost.Build can execute all the combinations in a single run. In Christ, Steven Watanabe

Steven Watanabe wrote:
AMDG
Robert Ramey wrote:
Now, I want to re-run ALL the tests so that I'm not playing whack-a-mole. For THIS I use library_test (.sh or bat) to update my giant test results table. I can't imagine doing this by running bjam for each combination of compiler, and build variant.
I can imagine it. In fact, it's what I do. Boost.Build can execute all the combinations in a single run.
Ahhh yes, I forgot about that. But then one has to trawl through all the bjam output, which I suppose is OK. I still love my table.
In Christ, Steven Watanabe

Robert Ramey wrote:
Steven Watanabe wrote:
AMDG
Robert Ramey wrote:
Now, I want to re-run ALL the tests so that I'm not playing whack-a-mole. For THIS I use library_test (.sh or bat) to update my giant test results table. I can't imagine doing this by running bjam for each combination of compiler, and build variant.
I can imagine it. In fact, it's what I do. Boost.Build can execute all the combinations in a single run.
Ahhh yes, I forgot about that. But one then has to troll through all the bjam output which I suppose is OK. I still love my table.
Not really. You need to either (i) have an IDE that shows the first error message automatically or (ii) run bjam with the "-q" option so that it stops on the first error. When developing locally, it's typically rare to have 'expected failures', so you might as well jump on the first failure you get. - Volodya

Vladimir Prus wrote:
Robert Ramey wrote:
Ahhh yes, I forgot about that. But one then has to troll through all the bjam output which I suppose is OK. I still love my table.
Not really. You need to either (i) have an IDE that shows the first error message automatically or (ii) run bjam with the "-q" option so that it stops on the first error. When developing locally, it's typically rare to have 'expected failures', so you might as well jump on the first failure you get.
Actually, I'm running this when I THINK I'm done. It takes so long I have to let it run overnight. I don't want it to stop on the first error. Basically this is the local version of running the trunk tests. I'm intrigued that it was deemed necessary to provide tables of the test results produced by the remote testers, but there never seemed to be interest in something equivalent for running the local tests. I know you've manifested a lack of enthusiasm for my table generator, but I think you should reconsider this. It's a perfect complement to the bjam build system and makes it more valuable and useful. Also, I've always been unhappy about the chart displayed for the remote testing. Other than the compiler, it doesn't show me the build features - debug/release, static/dynamic, etc. - much less test the combinations. I really need this for the serialization library because a lot of features such as export, DLL functionality, etc. depend upon behavior which is undefined by the standard. An example of this is code stripping, which varies depending on the compiler and the build settings. Maybe these characteristics of the serialization library make it require more testing than other libraries. Robert Ramey

AMDG Robert Ramey wrote:
Also I've always been unhappy about the chart displayed for the remote testing. Other than the compiler, it doesn't show me the build features debug/release, static/dynamic, etc. much less test the combinations.
You can force Boost.Build to build different combinations for your library. I'm pretty sure that if you do, the table cell for the test will link to a page that shows the different variants. In Christ, Steven Watanabe

Steven Watanabe wrote:
AMDG
Robert Ramey wrote:
Also I've always been unhappy about the chart displayed for the remote testing. Other than the compiler, it doesn't show me the build features debug/release, static/dynamic, etc. much less test the combinations.
You can force Boost.Build to build different combinations for your library.
I'm doing that. That's why the library_test script shows a separate cell with results for each variation. That is, I get 5 cells if there are 5 variations.
I'm pretty sure that if you do, the table cell for the test will link to a page that shows the different variants.
I'm not sure what you're referring to, but if it's the table shown in the trunk test, I don't see where I can decipher the build settings. In this particular example, I get a failure and click on the link to get http://tinyurl.com/ykqdaba which doesn't help me at all - though I guess that's a separate issue. Robert Ramey
In Christ, Steven Watanabe

AMDG Robert Ramey wrote:
Steven Watanabe wrote:
Robert Ramey wrote:
Also I've always been unhappy about the chart displayed for the remote testing. Other than the compiler, it doesn't show me the build features debug/release, static/dynamic, etc. much less test the combinations.
You can force Boost.Build to build different combinations for your library.
I'm doing that. That's why the library_test script shows a separate cell with results for each variation. That is I get 5 cells if there are 5 variations.
I mean that you can adjust the Jamfiles so that the regression tests will run multiple variants automatically. Something like

  alias all_variants : serialization : <variant>debug <variant>release ;

should do the trick, I think. (Warning, I haven't actually tried this)
I'm pretty sure that if you do, the table cell for the test will link to a page that shows the different variants.
I'm not sure what you're referring to, but if it's the table shown in the trunk test I don't see where I can decypher the build settings.
You generally can't unless there are multiple build variants or something fails. Unless the tester explicitly sets them, they should just use the defaults.
In this particular example, I get a failure and click on the link to get
which doesn't help me at all - though I guess that's a separate issue
It looks like something is failing that isn't getting picked up by the reporting tools. I don't see what it could be. I guess we'd need to see the bjam log to figure it out. Ask on the testing list? In Christ, Steven Watanabe

Robert Ramey wrote: <snip>
For THIS I use library_test (.sh or bat) to update my giant test results table. I can't imagine doing this by running bjam for each combination of compiler, and build variant.
<snip>
To my mind, there is no feature of this procedure that I can do without. I can't see how anyone can do this kind of work without these components:
* An IDE to build, test and debug particular tests (I would guess Emacs fulfills this need)
* The ability to run a large number of tests in one's local environment and browse the results
How do people do this latter task without something like library_test? What do other people do instead?
I think many of us run all tests within Emacs using Boost.Build and look at the results via the method described by Dave.

Michael

-- ---------------------------------- Michael Caisse Object Modeling Designs www.objectmodelingdesigns.com
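[Editor's note: the kind of sweep Robert describes can also be requested in a single bjam invocation, since Boost.Build expands comma-separated property values into the full set of combinations. A sketch only; the library path and toolset names are examples, not a prescription.]

```shell
# Hypothetical sketch: run one library's tests across several
# compiler / variant / linkage combinations at once.
# Comma-separated property values are expanded into all combinations,
# so this requests 2 x 2 x 2 = 8 builds of each test.
cd libs/serialization/test
bjam toolset=gcc,msvc variant=debug,release link=static,shared
```

The results still have to be collected and browsed somehow, which is the gap a script like library_test fills.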

On 3/19/2010 4:14 AM, Vladimir Prus wrote:
Hello,
in a recent post, Dave listed a few things that he thinks are wrong with Boost development, at present, quoting:
I know I'm not the first person to notice that, as Boost has grown, it has become harder and harder to manage, Subversion is getting slow, our issue tracker is full to overflowing, and the release process is a full-time job.
It seems to be important, right now, to discuss whether this problems are real, and what problems are most important. So, I would like to ask that everybody who is somehow involved in *development* -- whether in writing code, triaging bugs, sending patches, or managing thing, list three most important problems with Boost now. Please keep the items to a sentence or two, so that we can easily collect the problems.
Here's my take:
- Unmaintained components. Many authors are no longer active, and we have no procedures for taking over. - Reviews that are getting rare and, IMO, less interesting then before. - Turnaround time of test results
I think it is also important to ask Boost end-users what the perceived problems with Boost are. If I may answer that, as an end-user, I first want to say that from the end-user's perspective the problems with Boost are highly overrated. It is still the premier set of libraries for native C++ development.

The main problem from an end-user's perspective is the first one you mentioned above. There are Boost libraries which have a few bugs but which simply do not get maintained, because the library developer, for very human reasons, does not want to continue to maintain the library. I do not know the solution to this, but I strongly suspect that when the developer of a library no longer wants the responsibility of maintaining it, it needs to be passed to someone else who is willing to maintain it, or else marked in Boost as unmaintained and therefore deprecated.

I know the latter sounds harsh, but the truth is that if no developer wants to maintain software which has problems, or which is in need of additions or changes, then the software eventually becomes unusable. So I would say that anyone submitting a library to Boost needs to understand that once that person no longer wishes to maintain the library, the ability to maintain it reverts to Boost and the original developer can no longer claim that library as his own. To me this seems realistic. I view it as very human for a developer to not want to maintain software in perpetuity, so I see nothing wrong with this suggestion.

On 19 March 2010 22:13, Edward Diener <eldiener@tropicsoft.com> wrote:
If I may answer that, as an end-user, I first want to say that from the end-users perspective the problems with Boost are highly overrated.
Given any perspective, I don't know how to rate the concerns people bring up. We have no metrics. Addressing the needs of those who are vocal at the expense of those who are silent may not turn out to be a sound strategy. Someone brought up the UUID issue. Is it a one-off anomaly, or are there many similar issues? Honestly, I don't know. I do know that we will never prevent the isolated incidents, but we do need to address actual (not theoretical) systemic problems.

Are monolithic releases holding adoption back? I don't know. As one of the people who championed Boost into my company, that was a slight extra burden after getting over the hurdle of allowing any part of Boost in our system. As a developer, I'm glad the whole thing is there, because I never would have been able to successfully fight for every single library we've used over the years.

IMNSHO, Boost works (and it does work) because:
1. The technical burden on volunteer developers is high while the bureaucratic burden on volunteer developers is low.
2. Volunteers do a lot of rarely thanked work behind the scenes.

Unfortunately, most of the "fixes" that are proposed are more along the lines of "here is what I want the volunteers to do" instead of "here is what I am volunteering to do". In my experience, the more you try to dictate to volunteers what they must do, the fewer volunteers you end up with. I'm not arguing for no direction or rules; rather, it is a delicate balance. If one of the problems is that there aren't enough volunteers (say, as in finding review managers), making that job harder will accomplish the opposite of getting more review managers. Managing volunteers is hard, because the rules are the opposite of those for managing paid employees.

The start of this thread was someone saying we have to talk about Dave Abrahams's concerns on Boost development. Somewhere in the thread came a rebuttal of "This won't boost Boost." In my view, we are talking, but Dave is doing. Dave *is* boosting Boost.
Talk is cheap. Not much will happen if all you do is talk, hoping that someone else will carry your proposal. Instead, when you mention the problems, also mention what you are willing to do to help fix them. You don't have to do it alone; just make a commitment that is more than just talking.

Finally, someone mentioned "buying a beer" for the rarely thanked Boost volunteers at BoostCon. While nice, a better gesture would be to volunteer yourself for something. The Boost community is what we make of it. Really. -- Nevin Liber <mailto:nevin@eviloverlord.com> (847) 691-1404

-----Original Message----- From: boost-bounces@lists.boost.org [mailto:boost-bounces@lists.boost.org] On Behalf Of Vladimir Prus Sent: Friday, March 19, 2010 8:15 AM To: boost@lists.boost.org Subject: [boost] The problems with Boost development
Hello,
in a recent post, Dave listed a few things that he thinks are wrong with Boost development, at present, quoting:
I know I'm not the first person to notice that, as Boost has grown, it has become harder and harder to manage, Subversion is getting slow, our issue tracker is full to overflowing, and the release process is a full-time job.
It seems to be important, right now, to discuss whether this problems are real, and what problems are most important. So, I would like to ask that everybody who is somehow involved in *development* -- whether in writing code, triaging bugs, sending patches, or managing thing, list three most important problems with Boost now. Please keep the items to a sentence or two, so that we can easily collect the problems.
Here's my take:
- bjam is unfamiliar and inscrutable to many, and lacks *effective* documentation and Jamfile comments. I see this as a mega barrier to many who might like to help.
- The build/download/install process is not easy enough and not well enough documented - the same user problems keep cropping up again and again on the lists. (And very probably this is the tip of a much, much bigger iceberg.)
- Not enough people willing and able to give time to Boost. Blame the bankers? ;-)
- Unmaintained components. Many authors are no longer active, and we have no procedures for taking over. I'd make it easier for others to implement changes to trunk - this seems to work now with an informal "OK to change this?" protocol.
- Bugs that are identified and patches provided take far, far too long to get into trunk, let alone releases. Bugs whose fix is identified (reasonably well) cannot easily be patched into a user's release.
- Nobody has cracked the problem of automating the release process to achieve Beman's laudable 'release early, release much more often' aim. This would force users to accept new versions frequently, but personally I don't think it would cause anywhere near as much trouble as people fear. I'm astonished that people are still expecting answers to questions about using 1.34 etc. - I would just say: "you must try the most recent version first".

My gut reaction to Dave's proposal was "This won't boost Boost". But I hope to be proved wrong.

Paul

--- Paul A. Bristow Prizet Farmhouse Kendal, UK LA8 8AB +44 1539 561830, mobile +44 7714330204 pbristow@hetp.u-net.com

On Sat, 20 Mar 2010 13:25:35 +0100, Paul A. Bristow <pbristow@hetp.u-net.com> wrote:
[...] - bjam is unfamiliar and inscrutable to many, and lacks *effective* documentation and jamfile comments. I see this as a mega barrier to many who might like to help.
- build/download/install process is not easy enough and not well enough documented - the same user problems keep cropping up again and again on the lists. (And very probably this is the tip of a much, much bigger iceberg).
I agree with you that the documentation is not sufficient. I had used Boost libraries for many years without ever understanding the difference between Boost.Jam and Boost.Build. I also never had any idea which option to use when and where (e.g. is it msvc, --toolset=msvc, toolset=msvc or <toolset>msvc).

After having sat down for a weekend last year and trying to understand the entire build process, I wrote down what I learned. The article can be found at http://www.highscore.de/cpp/boostbuild/ and is not linked anywhere on the Boost website as far as I can tell. From the feedback I get, this article should definitely help you to understand the big picture (some developers find the article via the usual search engines). The entire build process and the tools made very much sense to me when I finally got the big picture. That said, I'm pretty much impressed - but it also took me years to become impressed. :)

Boris

PS: I do look forward to a Python port of the tools instead of using another scripting language in Jamfiles. I haven't made up my mind yet whether switching to something else like CMake makes sense.
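[Editor's note: for what it's worth, here is one understanding of where each of those spellings belongs. This is a hedged sketch for Boost.Build v2 of this era, not authoritative; msvc is just an example toolset, and exact option handling varies between versions.]

```shell
# On the bjam command line, toolsets are ordinary properties:
bjam toolset=msvc   # explicit property=value form
bjam msvc           # shorthand; bare values of "implicit" features
                    # such as toolset are recognized on the command line

# --toolset=msvc is an option-style spelling that appears in some
# wrapper/bootstrap scripts and older instructions; whether a given
# bjam version accepts it directly is version-dependent (assumption).

# Inside a Jamfile, <toolset>msvc is the property form, used in
# requirements or conditional properties, e.g.:
#   exe app : app.cpp : <toolset>msvc:<define>WIN32_LEAN_AND_MEAN ;

# In user-config.jam, "using msvc ;" declares that the toolset exists.
```

In short: property=value on the command line, angle-bracket properties inside Jamfiles, and `using` declarations in the configuration files.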

On 03/20/2010 05:46 PM, Boris Schaeling wrote:
After having sat down for a weekend last year and trying to understand the entire build process I wrote down what I learned. The article can be found at http://www.highscore.de/cpp/boostbuild/ and is not linked anywhere on the Boost website as far as I can tell.
From the feedback I get, this article should definitely help you to understand the big picture (some developers find the article via the usual search engines). The entire build process and the tools made very much sense to me when I finally got the big picture. That said, I'm pretty much impressed - but it also took me years to become impressed. :)
The article is really impressive. I must say, I've been missing something like this in the Boost.Build docs.

Cross-posting to boost-build... On 3/21/2010 1:46 AM, Boris Schaeling wrote:
I agree with you that the documentation is not sufficient. I had used Boost libraries for many years without ever understanding the difference between Boost.Jam and Boost.Build. I also never had any idea which option to use when and where (e.g. is it msvc, --toolset=msvc, toolset=msvc or <toolset>msvc).
After having sat down for a weekend last year and trying to understand the entire build process I wrote down what I learned. The article can be found at http://www.highscore.de/cpp/boostbuild/ and is not linked anywhere on the Boost website as far as I can tell.
From the feedback I get, this article should definitely help you to understand the big picture (some developers find the article via the usual search engines). The entire build process and the tools made very much sense to me when I finally got the big picture. That said, I'm pretty much impressed - but it also took me years to become impressed. :)
Boris
PS: I do look forward to a Python port of the tools instead of using another scripting language in Jamfiles. I haven't made up my mind yet whether switching to something else like CMake makes sense.
Thank you, Boris! This (http://www.highscore.de/cpp/boostbuild/) is a great introduction to boost.build. I wonder if we can integrate it somehow into boost's official boost.build documentation. Anyone? -- Eric Niebler BoostPro Computing http://www.boostpro.com

Eric Niebler wrote:
Thank you, Boris! This (http://www.highscore.de/cpp/boostbuild/) is a great introduction to boost.build. I wonder if we can integrate it somehow into boost's official boost.build documentation. Anyone?
While we're at it, I wrote an article on Boost.Build design which is available here: http://syrcose.ispras.ru/2009/files/04_paper.pdf While it's not a tutorial, it might clarify some of the high-level design points. - Volodya

AMDG Paul A. Bristow wrote:
- Unmaintained components. Many authors are no longer active, and we have no procedures for taking over. I'd make it easier for others to implement changes to trunk - seems to work now with an informal "Ok to change this?" protocol.
From my perspective, I don't see how to improve this protocol. Sure, I could just commit, and most likely no one would complain unless I broke something, but when dealing with code that I'm not perfectly familiar with, I /want/ more pairs of eyes looking at the patch, before it goes in. Maybe no one else will look at it, but the delay this imposes still gives me an extra chance to realize if I did something stupid. In Christ, Steven Watanabe

On 19 March 2010 08:14, Vladimir Prus <ghost@cs.msu.su> wrote:
So, I would like to ask that everybody who is somehow involved in *development* -- whether in writing code, triaging bugs, sending patches, or managing thing, list three most important problems with Boost now.
Here's my list, in no particular order. The first two might be a side effect of the review system.
- A lot of people asking for the moon, very few who'll build the rocket.
- Too many Boost developers work in their silo. There's not enough collective responsibility.
- We are too constrained by our testing system.

Daniel

At Fri, 19 Mar 2010 11:14:32 +0300, Vladimir Prus wrote:
Hello,
in a recent post, Dave listed a few things that he thinks are wrong with Boost development, at present, quoting:
I know I'm not the first person to notice that, as Boost has grown, it has become harder and harder to manage, Subversion is getting slow, our issue tracker is full to overflowing, and the release process is a full-time job.
It seems to be important, right now, to discuss whether this problems are real, and what problems are most important.
This is a great discussion to have; I encourage it for those that are so inclined. IIUC, we're also going to have a talk about that topic from Robert Ramey at BoostCon. If you don't see me participating this time around, don't take it as a lack of support or interest—it's only because I'm too busy working on (partial) solutions to the problems-as-I-see-them and having them ready for BoostCon. Cheers, -- Dave Abrahams Meet me at BoostCon: http://www.boostcon.com BoostPro Computing http://www.boostpro.com

David Abrahams wrote: [snip]
Vladimir Prus wrote:
Hello,
in a recent post, Dave listed a few things that he thinks are wrong with Boost development, at present, quoting:
I know I'm not the first person to notice that, as Boost has grown, it has become harder and harder to manage, Subversion is getting slow, our issue tracker is full to overflowing, and the release process is a full-time job.
It seems to be important, right now, to discuss whether this problems are real, and what problems are most important.
This is a great discussion to have; I encourage it for those that are so inclined. IIUC, we're also going to have a talk about that topic from Robert Ramey at BoostCon. If you don't see me participating this time around, don't take it as a lack of support or interest - it's only because I'm too busy working on (partial) solutions to the problems-as-I-see-them and having them ready for BoostCon.
[snip] [1] As a Boost user, I primarily expect libraries in Boost to be ones that are being proposed, developed, or tested for possible inclusion in a future C++ Standard. To me, this means that the API will be continuously changing and refined as the libraries receive review and feedback on their use. This is what I get from the vision statement, and I am grateful that the authors allow me to use their exceptionally high quality libraries in my own projects. I use these libraries at my own risk, with my own maintenance, my own quality control, etc. (and hopefully, I report my results back to them). This is my understanding from the Boost vision statement and the origins of Boost. I think that Boost.Threads and the locking libraries are excellent examples of this evolutionary process. I thought that providing an early-adopter TR1 implementation was quite appropriate too.

However, because of a few outstanding libraries (serialization, lexical, gil, soci, etc.), I've started to like to use and depend on Boost libraries whenever I can, because Boost libraries are usually very efficient and well thought out, support generic programming, and are already part of my build tree. I think there is another category of libraries that are very useful but, realistically, will never be considered for inclusion in a C++ standard. A question is whether Boost should host these libraries or not.

In a nutshell, I believe that the Boost vision statement needs to be revisited to determine what Boost is. To me, it seems to have wandered a bit away from its originally established goals. If I could have my cake and eat it too, I would like to see Boost divided into three subprojects: 1) research and development of C++ standards and libraries, 2) a repository for complementary (and integrated) but non-standards-bound libraries, and 3) sandbox projects. Within 1 and 2, there should be unstable, testing, and stable libraries. I believe this would set users' expectations appropriately.
Boost.Build is a tremendous tool and I would be very sad to see it totally replaced by a Make-based system. To me, the greatest problem with Boost.Build is lack of user education, and that comes from lacking the right, obviously accessible documentation. When Boost.Build works, it is magic and it works great. When it doesn't, you're committed to multiple hours of poring through Jamfiles and Googling. After you've done that a few times and figured things out, you only spend minutes when things blow up... Someone said, "Using Unix/Linux is easy, learning it is hard." I think the same applies to C++ and to Boost.Build. Once you learn it, you have versatile, problem-solving tools. I'm quite excited by the Ryppl page and development. I'm anxiously following its development and waiting for the testing candidate.

On 22 March 2010 12:43, Schrom, Brian T <brian.schrom@pnl.gov> wrote:
In a nutshell, I believe that the Boost vision statement needs to be revisited to determine what Boost is. To me, it seems to have wandered a bit away from its originally established goals. If I could have my cake and eat it too, I would like to see Boost divided into three subprojects: 1) research and development of C++ standards and libraries, 2) a repository for complementary (and integrated) but non-standards-bound libraries, and 3) sandbox projects. Within 1 and 2, there should be unstable, testing, and stable libraries. I believe this would set users' expectations appropriately.
I really like that split. I always liked the idea of a "core", but trying to define what that would be was quite hard. With "standards-track" libraries, we can require, in essence, that a library come along with a committee paper describing it in addition to meeting the normal Boost requirements. I'd also be glad to have category 2, since it echoes my feelings about some of the recent libraries that I don't expect to ever use, though they are plausibly useful to some or many. I'd suggest shipping even the category 3 libraries in releases, though only in a clearly separate area. "Determine interest" would get a library into category 3, and people could start using - and hopefully even reviewing - the libraries while they're there. A fairly typical Boost review could then examine the usability to move a library into category 2, allowing the possibility of multiple libraries in the same domain. A final review of design and implementation could then move certain libraries into category 1.

On 03/19/2010 02:14 AM, Vladimir Prus wrote:
It seems to be important, right now, to discuss whether this problems are real, and what problems are most important. So, I would like to ask that everybody who is somehow involved in *development* -- whether in writing code, triaging bugs, sending patches, or managing thing, list three most important problems with Boost now. Please keep the items to a sentence or two, so that we can easily collect the problems.
As a potential contributor, here are my main stumbling blocks:
1. Jam - I hate it with a passion. Boost is the only software I interact with that uses it. I have had to learn imake, make, autoconf, cmake, scons, and others. I resent having to learn yet another obtuse build system.
2. Unmaintained code. I run into bugs that were reported in Trac a while ago - and Trac even contains patches for the fix. And feature enhancements to such code are impossible.
3. Keeping up with the mailing list. The traffic on the mailing list is way too high to keep up with on a regular basis.

With that said, I think it is important to know also what Boost gets right:
1. There is some control over what gets in.
2. The software is generally of high quality.
3. The documentation and automated tests are very good.

Regards, Rob
participants (29)
- Andrey Semashev
- Artyom
- Boris Schaeling
- Christopher Jefferson
- Daniel James
- Dave Abrahams
- David Abrahams
- David Abrahams
- Edward Diener
- Emil Dotchevski
- Eric Niebler
- James Mansion
- John Phillips
- Mathias Gaunard
- Michael Caisse
- Nevin Liber
- Paul A. Bristow
- Pierre Morcello
- Rhys Ulerich
- Rob Riggs
- Robert Ramey
- Schrom, Brian T
- Scott McMurray
- Stefan Seefeld
- Steve M. Robbins
- Steven Watanabe
- Stewart, Robert
- Vladimir Prus
- Vladimir Prus