Is the review system in place extremely slow? (was Re: [rfc] rcpp)

Hi, ----- Original Message ----- From: "Mathias Gaunard" <mathias.gaunard@ens-lyon.org> To: <boost@lists.boost.org> Sent: Tuesday, February 23, 2010 12:47 PM Subject: Re: [boost] [rfc] rcpp
Ivan Sorokin wrote:
Why do you think the review system in place is extremely slow? Currently there are a lot of libraries to review, but no review managers. That means that the user community doesn't want to spend even a little of its time to manage a review. In addition, the last review didn't have many reviewers (I'm also concerned about this point). I'm the review manager of Boost.Task, but the library is not ready for review, because Boost.Task now depends on Boost.Move, Boost.Fiber and Boost.Atomic (which is not yet in the review queue). Maybe the review wizards could add this to the schedule page. What do you propose to improve it? Best, _____________________ Vicente Juan Botet Escribá

vicente.botet wrote:
Why do you think the review system in place is extremely slow?
Because some libraries that are in the review queue won't make it into Boost for several years, even if they're polished and ready for use, and that's quite a shame. I would even say that's a problem that puts Boost at risk, as it cannot scale to more projects than it already has, which means it may be losing out on the innovation side. I may however be exaggerating, but it doesn't seem like that many people share those sentiments within Boost and feel concerned about this issue.
Currently there are a lot of libraries to review, but no review managers. That means that the user community doesn't want to spend even a little of its time to manage a review.
In addition, the last review didn't have many reviewers (I'm also concerned about this point)
So you can see, indeed, that this is fairly concerning. I think this is mostly the case for specialized libraries, though. The more precise the domain, the fewer people know about it; and you wouldn't want people who know very little about something to review it. Maybe separating a "core" Boost, containing only small-ish general-purpose utilities, from the other libraries would help.
What do you propose to improve it?
I don't claim to have a solution, but I've got what I think may be an idea worth investigating. What I suggest is creating an unstable branch to which all libraries that satisfy simple quality criteria can be added, with the same layout as the trunk, with automatic tests being run, a bug tracker, etc. That way they get exposure and usage, and thus become easier to review later on. Authors also feel more involved, and work is not done in isolation, as the branch contains the whole of Boost. A library may then get elected to the stable branch, which simply means merging it to the trunk. It could be said this is just a glorified sandbox, but I believe the changes from the sandbox are enough to make it significantly different. What the "simple quality criteria" are remains to be defined, but it should be something that only requires review from a single person within the pool of approved reviewers and that isn't too time-consuming. You would also probably need a community coordinator to make sure the unstable branch doesn't become too much of a mess.

This is one of the subjects I was planning to talk about at http://www.boostcon.com/program Robert Ramey

Mathias Gaunard wrote:
I may however be exaggerating, but it doesn't seem like that many people share those sentiments within Boost and feel concerned about this issue.
Just to avoid any misunderstanding, I meant "because it doesn't seem", not "but it doesn't seem".

<snip>
So you can see, indeed, that this is fairly concerning.
Agree.
Agree - but you do *ALSO* want 'little people' - mere users - to say whether they find the presentation and documentation good enough that they could use it.
This is in essence what I have been muttering about for some time - a collection of what one might describe as 'candidate libraries': libraries that have passed a first hurdle, being in a useful, usable state. (Their documentation might use a 'Proposed for Boost' or 'Candidate for Boost' logo? - see attached.) This has a considerable potential advantage - encouraging a user base. Users (both naive and expert) will smoke out many bugs and weaknesses in design and implementation, and will be able to provide informed full reviews later. I also believe that this will encourage developers to work on documentation, if only to reduce the burden of responding to user queries. But those who manage the sandbox and trunk, and testing, might regard it as too much work? Paul --- Paul A. Bristow Prizet Farmhouse Kendal, UK LA8 8AB +44 1539 561830, mobile +44 7714330204 pbristow@hetp.u-net.com

On Wed, Feb 24, 2010 at 7:56 AM, Paul A. Bristow <pbristow@hetp.u-net.com> wrote:
I think Mathias has come to the same conclusion that you arrived at a while ago, Paul. I remember hearing your ideas a few months back and then again when the logo discussion came up. Both of you are on to something, IMHO. If I may add another suggestion: a candidate library should have its own webpage on the Boost site, where people can rate it and leave a review, just like a product page at an online store (e.g. Amazon). Any library that reaches N reviews is then ready to be reviewed for official status as a Boost library. I think Robert Ramey also proposed something similar.
But those who manage the sandbox and trunk, and testing, might regard it as too much work?
Setting up the web backends for all of this sounds like a lot of work, but the end result is that there is no longer a review manager required until N reviews have been received. At that point, a few review managers might read over the reviews and make the decision as to whether to promote the library or not, or maybe that means it goes to formal review. If there were a graph on the website showing how many cycles you have donated to testing, it might encourage more people to set up old boxes as testing resources. I know I have an older box just sitting in storage doing nothing, and I just recycled two others - how many other people are like that? --Michael Fawcett

On 24 February 2010 11:10, Michael Fawcett <michael.fawcett@gmail.com> wrote:
Setting up the web backends for all of this sounds like a lot of work,
Given that we don't have enough volunteers to be review managers, where exactly are you going to find the volunteers to do all this setting up and maintaining web backends? In a volunteer community, proposals of the form "here is what I am going to do to make this happen" tend to go a lot further than "here is what I want the community to do to make this happen". -- Nevin Liber <mailto:nevin@eviloverlord.com> (847) 691-1404

Nevin Liber wrote:
True... But it's not a terrible amount of work with modern web tech, like Drupal. Along those lines, I recently mentioned to Hartmut how nice it would be to have each Boost library get its own sub-site (e.g. spirit.boost.org), as it would promote the model that Boost is a set of independent libraries under one umbrella. -- -- Grafik - Don't Assume Anything -- Redshift Software, Inc. - http://redshift-software.com -- rrivera/acm.org (msn) - grafik/redshift-software.com -- 102708583/icq - grafikrobot/aim,yahoo,skype,efnet,gmail

Rene Rivera wrote:
I was about to respond to Nevin's post with a similar comment. As a brand-new member of the Boost developer community (thanks to the Boost.Geometry approval), I have the impression that once a library is accepted, it's unclear what the next step is and what its development cycle looks like in terms of infrastructure support. For Boost.Geometry, we have had quite a long brainstorm and discussion with Barend and Bruno about how to organize the project as part of the Boost collection. A number of questions were raised, and I personally admit we have not found an ideal solution. A few of the issues:

1) Where should the Boost.Geometry website go? SourceForge, the OSGeo Foundation (where it is now hosted), should we buy hosting as Spirit did, or perhaps arrange everything at boost.org? Where to put a regular website? Where to put a project specific Wiki or FAQ?

2) Where does the bug tracker go? Should we ask Boost.Geometry users to submit reports to the Boost Trac exclusively, or should we maintain one on our own? We have actually not decided what to do, as neither choice seems the best option. Adding hundreds of reports to the general population at the Boost Trac may make things difficult to maintain, and searching for existing bugs may become a complex task (e.g. confirming whether a problem has already been submitted before reporting a new bug, etc.).

3) Where do the mailing lists go? The boost and boost-users lists seem a natural choice for Boost.Geometry users; however, plenty if not most of the discussions would be boring to the general audience of Boost developers/users. Geometry is one of a wide variety of subjects Boost addresses. We likely need our own mailing list server, but where? lists.boost.org or somewhere else? How do we avoid confusing users, so they know where to post their questions about Boost.Geometry? ATM, we host it at lists.osgeo.org.

In general, there is no problem with finding a virtual home for a project. The problem is that if it is outside the Boost project, of which the library is in fact a part, then it will likely cause confusion and an impression of disintegration. The big question is how to avoid a schizophrenic way of maintaining project infrastructure, and the slight split of personality I observe, for instance, with Boost/Adobe GIL. It is quite important to keep things well integrated, otherwise it may prevent wide adoption of a piece of software by users (this is well explained by Karl Fogel in http://producingoss.com/).

I have experience with the self-organised community of the OSGeo Foundation (http://osgeo.org, http://wiki.osgeo.org), which could be compared to Boost as a domain-specific (GIS/RS/geo*) community. OSGeo accepts projects by conducting an incubation process similar to Boost reviews. In short, there is a bunch of projects living under the umbrella of OSGeo. Each project gets its own instance of:
- overview website at project.osgeo.org, or a subdomain which points to the project's own website
- Trac/Wiki at trac.osgeo.org/project/
- SVN: at svn.osgeo.org/project/
- mailing lists at lists.osgeo.org
Some projects get other services like buildbot (http://buildbot.osgeo.org), FTP at download.osgeo.org, etc. Everything works on a volunteer basis, so it's a self-supported system. It is coordinated by volunteers willing to join the SAC to support the community (http://wiki.osgeo.org/wiki/SAC and http://lists.osgeo.org/pipermail/sac). From a project's point of view, it works nearly perfectly. However, I admit it costs a lot of work to administer and maintain all the services. It is a load of work, indeed.
I've given the long story to share some observations and experiences from our brainstorming; however, I'm not sure what capabilities Boost holds in its hands in terms of server-side infrastructure. Best regards, -- Mateusz Loskot, http://mateusz.loskot.net

Steven Watanabe wrote:
Yes, true, it actually works as long as one knows how to use custom queries in Trac. Best regards, -- Mateusz Loskot, http://mateusz.loskot.net

----- Original Message ----- From: "Mateusz Loskot" <mateusz@loskot.net> To: <boost@lists.boost.org> Sent: Thursday, February 25, 2010 4:00 PM Subject: Re: [boost] review system in place is extremely slow? (was Re: [rfc] rcpp)
AFAIK, Boost doesn't provide a website per library, so you will need to host it wherever you prefer.
Where to put a project specific Wiki or FAQ?
There is a wiki associated with the Trac system (https://svn.boost.org/trac/boost/wiki). You can add your own pages and organize your wiki as you like. I suppose you will need to request the right to modify it.
I would prefer that you ask your users to submit reports to the Boost Trac. This allows checking all the Boost tickets with only one tool. You can add a specific query to show the tickets specific to the Geometry component.
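For illustration only - the component name here is hypothetical until a Geometry component is actually registered in the Boost Trac - such a saved custom query could look roughly like:

    https://svn.boost.org/trac/boost/query?status=!closed&component=geometry

or, embedded in a Trac wiki page, something along the lines of the standard TicketQuery macro:

    [[TicketQuery(component=geometry, status!=closed)]]

Either form gives a single link that lists only the open tickets filed against that one component.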
There are some specific mailing lists, e.g. Threads, Spirit, Doc, ... All you need is a moderator, I think. Have you requested such a ML?
I agree.
Maybe just do what you are doing now: ask on this ML. IMO things are not as static as people might think.
Currently missing in Boost.
- Trac/Wiki at trac.osgeo.org/project/
Available on request.
- SVN: at svn.osgeo.org/project/
Already available.
- mailing lists at lists.osgeo.org
Available on request.
Some projects get other services like buildbot (http://buildbot.osgeo.org), FTP at download.osgeo.org, etc.
Currently missing in Boost.
Thanks for sharing with us all these questions about how to organize your project. Best, Vicente

vicente.botet wrote:
There are some concerns, actually. For example, our Trac is a bit too slow at times. Of late, the amount of spam that hits the boost-build mailing list has been going through the roof, presumably due to no filtering. Further, I don't think it's documented whom I'm supposed to talk to about those issues. And even if it were documented, I believe that would be a system administrator at OSL -- and I would not be comfortable bothering a person who is already doing us a courtesy. It probably would be great if Boost had a dedicated server managed by a couple of folks from the active community. - Volodya

Vladimir Prus wrote:
It is indeed a disease of civilization, in general.
FYI, OSGeo servers also live at OSL http://wiki.osgeo.org/wiki/Infrastructure_Transition_Plan_2010 Best regards, -- Mateusz Loskot, http://mateusz.loskot.net

Hmm... spam to this list is basically zero (at least spam that the moderators have to deal with). Is the Boost.Build list set to automatically reject posts from unsubscribed addresses? That's what this list does (it sends a message telling you how to subscribe and why it's necessary), and since we only moderate first-posts from new subscribers, acting as moderator implies next to no work at all.... which is how it should be really... you guys are just all too polite ;-) HTH, John.

John Maddock wrote:
No, I have explicitly turned off that feature.
Well, this is very nice behaviour for moderators, but very annoying for newcomers. It would be much nicer if there was spam filtering -- as far as I can tell, there's essentially none. - Volodya

Vladimir Prus wrote:
One's first message being rejected with a request to subscribe, assuming one hadn't subscribed first, and having one's first message slowed for moderation is "very annoying?" Compare that with spam filtering in which some spam gets through every filter and some legitimate messages are filtered wrongly. The spam filtering approach is annoying to all list subscribers every day. The moderation approach is "annoying" once for each newcomer. I prefer the latter. _____ Rob Stewart robert.stewart@sig.com Software Engineer, Core Software using std::disclaimer; Susquehanna International Group, LLP http://www.sig.com

Stewart, Robert wrote:
Yes.
Compare that with spam filtering in which some spam gets through every filter
I don't see any spam on the Boost.Build mailing list, somehow ;-) Probably because first-time posters are still moderated, just not required to subscribe.
and some legitimate messages are filtered wrongly.
I don't believe this is the case for a technical mailing list where almost nobody is sending HTML messages with attached exes and pictures of kittens.
The spam filtering approach is annoying to all list subscribers every day.
I fail to see why this is so. In fact, a large number of technical mailing lists I participate in don't have any kind of pre-moderation, as far as I know, and have been virtually spam-free for years. - Volodya

vicente.botet wrote:
Yes, it's clear.
I didn't know that was possible. I assumed the Trac wiki was dedicated to general Boost maintenance, administration, and commonalities.
Vicente, this is a very important recommendation, actually. I was looking at Boost.GIL, which in fact maintains two bug trackers, and I was a bit worried about the usability of that approach. Having all bugs reported to the Boost Trac would indeed be the best option.
You can add a specific query to show the tickets specific to the Geometry component.
Yes, it's a nice feature of Trac
No, AFAIK we have not requested one (yet). The ggl@lists.osgeo.org list was created in April 2009, so before the submission to Boost for approval.
Yes, it seems so. I'll propose to discuss this idea. Best regards, -- Mateusz Loskot, http://mateusz.loskot.net Charter Member of OSGeo, http://osgeo.org

Mateusz Loskot wrote:
I should clarify that the question marks do not mean I am specifically looking for answers. I used question marks to denote questions raised during discussions within the Boost.Geometry team, as well as considerations in my own head. Best regards, -- Mateusz Loskot, http://mateusz.loskot.net

----- Original Message ----- From: "Nevin Liber" <nevin@eviloverlord.com> To: <boost@lists.boost.org> Sent: Thursday, February 25, 2010 10:10 AM Subject: Re: [boost] review system in place is extremely slow? (was Re:[rfc] rcpp)
Yes, this is the kind of improvement I was waiting for. Good ideas are great, but help implementing not-so-good ideas can be better. Have the people who claim the process is slow thought about proposing themselves as review managers? Note that I have already forgotten who said the process is slow :) Best, Vicente

On Thu, Feb 25, 2010 at 4:10 AM, Nevin Liber <nevin@eviloverlord.com> wrote:
I hate to think that ideas are only ever considered if they are brought up by someone with the time and knowledge to implement them. You might find the number of ideas drops to near zero. --Michael Fawcett

vicente.botet wrote:
He must be comparing it to something else that is faster, like getting new features into the linux kernel...oops, that's slow too, hmmm. I think if people used the pace of the C++ standardization system as a baseline for comparison they would find that boost is extremely fast paced and exciting. People who propose a library to boost should frankly expect determining interest, iterative revision, review, acceptance and release of a library to take years (literally). Perhaps we should put a warning to that effect in the description of the process on the web site so that it doesn't come as a surprise every time. I didn't mind because it took nine months (literally) just to get permission from my employer to open source my code. By comparison, the boost review process was quite reasonable. Actually there are some steps to the open source permission process I went through that might reasonably be added to the boost review process, like running protexIp on libraries under review to make sure that they aren't infringing anyone's copyright, which could expose users of boost libraries to liability. There are benefits to a slower paced review and acceptance process. If a library author shows the perseverance to get through the review process and get their library into a boost release, they have usually demonstrated the dedication required to maintain the library for at least several years to come. We already have several libraries abandoned after acceptance but before release sitting in limbo. If anyone could come along, propose a worthy library and get it accepted in a few weeks, then how many unmaintained libraries would we have in boost after only a couple of years? Is that what we want? Steven Watanabe can only maintain a hundred or so orphaned boost libraries before his youthful energy will be exhausted, and then what will we do? We need more than libraries in boost, we need active community members too. The cup is half full. If you want to drink from the top half you have to make sure the bottom half stays full. Regards, Luke

Simonson, Lucanus J wrote:
He must be comparing it to something else that is faster, like getting new features into the linux kernel...
I'm comparing it to what I'd like it to be. I don't think that just because some high-quality, successful projects use certain management techniques, those are necessarily the best ones to follow in all situations.
I think if people used the pace of the C++ standardization system as a baseline for comparison they would find that boost is extremely fast paced and exciting.
When the libraries that provide workarounds for the lack of certain future C++0x facilities probably won't be included until after the C++0x standard is released and well implemented, you could ask yourself if that really is the case.
I think there are different situations and, more importantly, different types of libraries within Boost. Please note that what follows is my (potentially naive) opinion, and does not necessarily reflect that of the Boost community.

Certain libraries exist by themselves and are pretty much big standalone entities, including Polygon. Others are pretty much just programming tools, which are also more prone to coupling. I would say that the majority of the libraries in the current review queue falls into the second category, and should be reviewed differently from the rest, on a fast track. While I agree the slow-paced review is good for large standalone libraries, it is hurting tools because, since they're pretty much very general and small, people -- and by people, I mean both Boost users and developers -- tend to rewrite them when they need them, which causes duplication and maintenance issues. While for a large library you would choose whether to use it at the beginning of a project (or at an important decision-making step), a tool you would just come to use when the need arises.

What makes Boost popular is really that it has versatile tools for everyday C++ programming problems -- which is often achieved through generic programming -- as well as operating system abstractions that are pleasant from a C++ point of view. It also provides finely crafted libraries covering specific problem domains, but those are often as popular by themselves as they are by being part of Boost. A tool for a specific task in Boost also means that the tool is a recommended way -- with its trade-offs, of course -- to perform that task, and thus contributes to the kind of unified vision of modern C++ programming that Boost is envisioning by being the testbed for the standards to come.

Of course, the line between the two categories is much fuzzier than I just made it appear with my nice tirade.

vicente.botet wrote:
Offload more of the work needed for doing reviews to the community. Make that low-barrier, e.g. by making the website interactive (Drupal, or the like) and introducing a couple of online tools, like voting systems. Introduce forums, etc. IMHO, Trac / mailing lists are not accessible enough for this more non-development-oriented stuff. Cheers, Rutger

on 24.02.2010 at 12:13 Rutger ter Borg wrote:
IMHO, Trac / mailing lists are not accessible enough for this more non-development-oriented stuff.
Sorry for the off-topic. Although I have been on this list for only half a year, I want to say I find this mailing list (and perhaps any mailing list) very inconvenient for the purposes it serves. IMHO discussion and other activities are MUCH MORE convenient in a forum-like environment (for example, see the Intel Software Network: http://software.intel.com/en-us/forums/), so I support this suggestion with both my limbs. For example, I like SMF-based forum engines: http://www.simplemachines.org/ - I also believe they are easy to deploy and maintain. -- Pavel

DE wrote:
My personal opinion: Please do not migrate to Web-based forum! The traditional mailing list *is* a very convenient form of discussion forum. It allows me to participate in discussions in a single place - my own e-mail reader. I participate in a number of mailing lists, and it is very important for me to be able to participate easily, from a single place. Using an e-mail reader, I can check the new messages of all those lists, skim and filter them quickly, get focused on those I'm interested in, and archive those I want to keep. All in a single place. Having a limited amount of free time, as everyone here does I suppose, I can't even imagine how much time it would take to do the same after all the mailing lists are replaced by forums. Using an e-mail client, checking new messages is constant time, O(c); with Web forums it is O(n), which would significantly cut down the number of communities I'd be able to participate in. Having N forums, I'd need to load N websites, input my credentials N times, perform N x M clicks to browse a thread, another bunch of clicks to read a post, another bunch of clicks to reply. I have participated in a number of Web forums, and every time it took an order of magnitude longer to reply/post than it takes with a simple Thunderbird e-mail client. A single gateway for all discussions.
A week ago, I posted my first message to an Intel forum, and that interface is very hard to use. The search function is unusable compared to the simplicity and accuracy of: +word +another site:http://lists.boost.org/Archives/boost Simply put, Web forums do not work well at all for high-volume technical discussions. There *is* a reason and a well-settled rationale why most technical communities prefer mailing lists. And the reason is *not* that they are a legacy of old-school hackers! It's about performance and usability, but usability does not mean HTML formatting, which is useless for technical forums. For those who prefer a Web-based UI, there are gmane and Nabble: [1] http://news.gmane.org/gmane.comp.lib.boost.devel [2] http://old.nabble.com/Boost-f14200.html Best regards, -- Mateusz Loskot, http://mateusz.loskot.net

on 24.02.2010 at 21:10 Mateusz Loskot wrote:
My personal opinion:
Please do not migrate to Web-based forum!
+word +another site:http://lists.boost.org/Archives/boost
-- Pavel

----- Original Message ----- From: "Vladimir Prus" <vladimir@codesourcery.com> To: <boost@lists.boost.org> Sent: Wednesday, February 24, 2010 12:15 PM Subject: Re: [boost] is review system in place is extremely slow? (was Re:[rfc] rcpp)
Hi, This is what I proposed to Oliver. Boost.Fiber and Boost.Atomic are internal details, but Boost.Move is seen by the user. So we will just need to wait for Boost.Move. Oliver, let me know what you think. I could manage the review of Boost.Move if the review wizards and the author agree, but I think this library should be managed by someone with a very good knowledge of emulated C++ move semantics. Is there a candidate? Best, Vicente

On 2/24/2010 3:15 AM, Vladimir Prus wrote:
It's perfectly OK to move those 3 libraries to the 'detail' namespace of Boost.Task and have the review as is, as opposed to waiting. What do you think?
IIRC Boost.Thread already uses Boost.Move internally, and adding it as a detail of Boost.Task would duplicate the code. If changes are ever necessary in Boost.Move, they would have to be applied in both places. --Jeffrey Bosboom

Hi, ----- Original Message ----- From: "Jeffrey Bosboom" <jbosboom@uci.edu> To: <boost@lists.boost.org> Sent: Wednesday, February 24, 2010 5:34 PM Subject: Re: [boost] is review system in place is extremely slow? (was Re: [rfc] rcpp)
This is wrong. Boost.Thread has its own specific move semantics emulation. Best, Vicente

On 2/24/2010 9:45 PM, vicente.botet wrote:
Hmmm, I was pretty sure this was mentioned on this list. Was it another Boost library that's already using it, or is it not used at all yet? Sorry for the misinformation, --Jeffrey Bosboom

Vladimir Prus wrote:
I think I caught hell for doing something similar in the serialization library. I had to make a number of components such as BOOST_STRONG_TYPEDEF, state_saver, smart_cast, etc., which I put into boost (not detail), and a year afterwards this was raised as a huge problem. And this was even though the components had been there through two reviews. So I would be careful about doing this. Another issue is: if Boost.Task depends upon Boost.Fiber and Boost.Atomic, what happens if Boost.Fiber or Boost.Atomic are not approved? This also happened to me. I needed a singleton in the serialization library. At the time, there was a singleton in the review queue. I depended upon it and, damn, it didn't get accepted. There is still no such component in Boost (except for the one I had to add to the serialization library - this one is pretty similar to ones recently proposed). Robert Ramey

Robert Ramey wrote:
This really seems to make the "have a layered boost" proposal sensible. We should definitely separate core Boost tools and utility libraries from large-scale, system-wide libraries. -- ___________________________________________ Joel Falcou - Assistant Professor PARALL Team - LRI - Universite Paris Sud XI Tel : (+33)1 69 15 66 35

On Wed, Feb 24, 2010 at 11:08 AM, joel falcou <joel.falcou@lri.fr> wrote:
I have to agree with this. It is both annoying and unnecessary that if all I want is one simple library, I have to have almost every other library on my system. Things like mpl, regex, preprocessor, variant, unordered, etc. should certainly be at a completely different layer than extremely high-level libraries like GIL, wave, etc. How to define these layers is of course the question. Perhaps a first step would be to carefully draw up a chart of the current dependency graph between the entire set of libraries, and see what can be extracted from there.

On 2/24/2010 3:55 PM, Zachary Turner wrote:
We've done this kind of dependency analysis before, although I don't have the chart handy. bcp also tries to do this intelligently, but it doesn't seem to work well enough; I remember a discussion on this list where BGL was pulling in MPI even though the user didn't need it, because there was one source file that implemented MPI support and that counted as a dependency. What we might end up needing is a ports-like installer that can be told 'I want regex, unordered, variant and multi_index' and have it figure out the dependencies and install just that set of libraries. There will also need to be something like Gentoo's USE flags for optional support; e.g., if you need MPI support, you specify USE=MPI and MPI will be pulled in, otherwise it won't. This may need some #defines for configuration of this support (BOOST_USE_MPI or whatever). I'm working on a project (saiph, a NetHack-playing bot) that could benefit greatly from boost libraries, but the other programmers aren't willing to depend on Boost because it's too big. Modularity is important. --Jeffrey Bosboom
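As a rough sketch of what is already possible today with bcp (the paths below are placeholders, and the extraction suffers from exactly the over-eager dependency scan described above):

    bcp --boost=/path/to/boost-source regex unordered variant multi_index /path/to/boost-subset
    bcp --boost=/path/to/boost-source --list regex

The first command copies the named modules plus everything bcp believes they depend on into the target directory; the second only lists the files that would be copied, which is a quick way to inspect a single library's dependency closure.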

Hi, ----- Original Message ----- From: "Robert Ramey" <ramey@rrsd.com> To: <boost@lists.boost.org> Sent: Wednesday, February 24, 2010 6:04 PM Subject: Re: [boost] is review system in place is extremely slow? (wasRe:[rfc] rcpp)
Boost.Fiber and Boost.Atomic are used internally, so these dependencies don't matter. Oliver had his own atomic implementation, and he recently changed to use Boost.Atomic. He could include Boost.Atomic in a detail namespace if necessary. In addition, Oliver is the author of Boost.Fiber. The only problem is with libraries used at the user-level interface, and for Boost.Task this is the case for Boost.Move. So the single imperative dependency to resolve is Boost.Move. Best, Vicente

On 02/24/2010 02:15 PM, Vladimir Prus wrote:
It's perfectly OK to move those 3 libraries to the 'detail' namespace of Boost.Task and have the review as is, as opposed to waiting. What do you think?
Please, don't go that way. At least Boost.Atomic is a widely demanded addition to Boost, and if it goes in as some closed implementation detail of another library, it would be a great shame for users (it would surely be for me). As an alternative, I would suggest settling on a common review for the three components, while leaving them all top-level libraries. That would resolve the issue of "partial approval" that Robert pointed out.

Hi, ----- Original Message ----- From: "Andrey Semashev" <andrey.semashev@gmail.com> To: <boost@lists.boost.org> Sent: Wednesday, February 24, 2010 8:19 PM Subject: Re: [boost] is review system in place is extremely slow? (was Re: [rfc] rcpp)
Oliver had his own specific atomic implementation. He has changed to use the recent Boost.Atomic library, and I think this is good. The issue is that this library is not on the review schedule, so I don't see a problem if Oliver pushes his implementation to a detail namespace.
Andrey, do you think you could take responsibility for Boost.Move or Boost.Fiber? Thanks, Vicente

On 02/25/2010 08:50 AM, vicente.botet wrote:
I think this would at least delay the official acceptance of Boost.Atomic, as there would be less spur for it to happen.
Do you mean responsibility for accepting or rejecting these libraries, were I a review manager? Yes, I would, at least regarding Boost.Move, as I have a relatively good understanding of the domain. It's harder with Boost.Fiber, as I'm not competent in its domain. The main obstacle for me is lack of time, which, I think, is common for many of us.

On 25 February 2010 20:16, Andrey Semashev <andrey.semashev@gmail.com> wrote:
I think this would at least delay the official acceptance of Boost.Atomic, as there would be less spur for it to happen.
Since it looks unlikely that Boost.Atomic is going to be put up for review in the foreseeable future, delaying another library to encourage it would be counterproductive. Daniel

On 02/26/2010 02:04 AM, Daniel James wrote:
Is that so? Is it known why?
delaying another library to encourage it would be counterproductive.
I think helping the author bring the library into Boost would do the community a better service.

On 26 February 2010 04:15, Andrey Semashev <andrey.semashev@gmail.com> wrote:
That's the impression I got from this thread. Correct me if I'm wrong.
That's a good idea if someone is willing to do it and it looks like it will get good results. Otherwise it's worthless. If you consider Boost.Move, its author is a major Boost contributor and it's still taking its time. Meanwhile, several libraries are using their own move emulation, which isn't ideal but works fine. Is it really a good idea to delay a library that's apparently ready until three other libraries are fully reviewed and in trunk? That's just making a slow process slower. Daniel

----- Original Message ----- From: "Daniel James" <daniel_james@fmail.co.uk> To: <boost@lists.boost.org> Sent: Friday, February 26, 2010 9:08 AM Subject: Re: [boost] is review system in place is extremely slow? (was Re:[rfc] rcpp)
Why do you say the author of Boost.Move is taking his time? I thought the library was ready and waiting/looking for review manager volunteers.
I would not wait until the libraries are in trunk for a review. I will just plan the review of Boost.Task after the reviews of Boost.Move and Boost.Fiber. When I raised this discussion I expected that some review managers would volunteer for these libraries, so I could plan my own. For the moment this has not been the case. :( Best, Vicente

On 26 February 2010 08:27, vicente.botet <vicente.botet@wanadoo.fr> wrote:
I meant that the library is taking its time. Ion puts a lot of effort into boost, but the process is still slow.
When I raised this discussion I expected that some review managers would volunteer for these libraries, so I could plan my own. For the moment this has not been the case. :(
That's pretty much my point. Daniel

On 26/02/2010 9:36, Daniel James wrote:
I meant that the library is taking its time. Ion puts a lot of effort into boost, but the process is still slow.
Not really in these last months, because I can't find time for Boost, but I promise I'll be back shortly. The issue with the move library is deciding which emulation alternative we should choose, weighing each one's pros/cons. I plan to update the library soon. Best, Ion
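For readers unfamiliar with the technique being discussed, here is a minimal, generic sketch of what C++03 move emulation tends to look like - hypothetical names, not Boost.Move's or Boost.Thread's actual implementation: a small wrapper type marks an lvalue as safe to pillage, and the target type provides a constructor taking that wrapper.

    // Generic sketch of C++03 move emulation (illustrative names only).
    #include <cstddef>

    template<class T>
    struct move_from                 // plays the role of an rvalue reference
    {
        explicit move_from(T& t) : ref(t) {}
        T& ref;
    };

    template<class T>
    move_from<T> move(T& t) { return move_from<T>(t); }   // analogue of std::move

    class buffer                     // a movable-only, resource-owning type
    {
        std::size_t size_;
        char*       data_;

        buffer(const buffer&);              // copying disabled
        buffer& operator=(const buffer&);
    public:
        explicit buffer(std::size_t n) : size_(n), data_(new char[n]) {}
        ~buffer() { delete[] data_; }

        // "move constructor": steal the source's storage instead of copying it
        buffer(move_from<buffer> other)
            : size_(other.ref.size_), data_(other.ref.data_)
        {
            other.ref.size_ = 0;
            other.ref.data_ = 0;
        }
    };

    int main()
    {
        buffer a(1024);
        buffer b(move(a));    // ownership of the storage transfers to b
        (void)b;
    }

The alternatives being weighed mostly differ in how transparently such a wrapper can mimic real rvalue references (catching temporaries, implicit conversions, const-correctness) without C++0x language support.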

On 26 February 2010 09:05, Helge Bahmann <hcb@chaoticmind.net> wrote:
Actually I was going to propose it for fast-path review once I find the time to restructure unit tests.
Sorry I suggested otherwise. Daniel

Andrey Semashev wrote:
As pointed out by Daniel down the thread, this is only a good point if you expect somebody to help with Boost.Atomic. If there's nobody willing to help, then you either get it inside Boost as an implementation detail, or you both fail to get it inside Boost and block some other potentially useful libraries. - Volodya

On 02/26/2010 12:06 PM, Vladimir Prus wrote:
I thought the authors of the dependent libraries would be the first to be interested in Boost.Atomic's acceptance. However, I see that the work on it is ongoing, so it doesn't matter.

vicente.botet wrote:
Here are my 2 cents. I've expressed this opinion previously, but nothing has really changed since then. IMHO the review system is both too slow and too fast.

Why it's too slow:
* Not enough review managers
* Not enough reviewers - reviews keep being extended
* Not enough reviews per year. Too many limiting factors, like holidays, upcoming or just-completed releases

Why it's too fast:
* IMHO any short period of time is too short to properly evaluate most non-trivial libraries
* Accumulating a proper number of non-trivial reviews usually requires time for people who are not regulars on the mailing list to actually come and see that there is one. Also take into account that we lose people who are for whatever reason unavailable during the scheduled review
* Some libraries come up without proper substantiation, leading to a review that ends in rejection only on the "lack of interest" argument
* Some libraries come up not being ready for review; there is no automatically checked list of requirements before scheduling the review

That said, here's how a better procedure might look, IMO. This will require some initial investment in writing scripts for process automation, but in the long run we should be very well compensated.

1. Any library author interested in submitting a new library should come to the "Candidate" page and register. Once registered, the candidate gets: a) an svn repository for the library; b) a standardized page on the Boost website (something like boost.org/candidate/<candidate name>); c) an announcement post sent automatically (with an abstract and a link to the above page) to the mailing list.

2. The candidate page should contain an abstract and links to the sources and docs. It should also include some kind of "voting" mechanism, where people would express their interest - preferably with authentication linked to the mailing list membership. To qualify for review, the candidate should exceed some predefined minimum threshold of "supporters". These people are expected to post a review later on for the library to have a chance of being accepted.

3. Once the candidate has the proper number of supporters and has passed all other formal requirements (docs, tests, directory structure) - all validated against the repository - the candidate's author can request a review from the review schedulers (whatever the proper name is). Once a review manager is assigned, the candidate page is transformed into a "candidate review" page.

4. Review process. The candidate review can be started at any time by the review manager (no queue) and should take at least 2-4 months. Any number of reviews can be run concurrently. The "candidate review" page should include the abstract, the review package, and some kind of review submission mechanism (maybe a boolean yes/no plus an actual review). The review should be per person, and each reviewer should have the ability to modify their review. The review discussion mechanism can be web-based, rely on the mailing list, or be some mixture of the two.

5. The review manager has the right to stop a review at any time and make a decision if there is overwhelming evidence that the library is going to be accepted/rejected.

6. If there are not enough reviews within the first 2-4 months, the library is rejected due to lack of support.

7. If there is no review manager found within a year, the library is rejected due to lack of support.

The time periods here are tentative and subject to discussion. Also, the specific interplay between the candidate review page and the mailing list needs clarification.

Hi Gennadiy, While I mostly agree with you in pointing out the drawbacks of the current system, I don't quite agree with your proposal. On 02/28/2010 04:24 PM, Gennadiy Rozental wrote:
Good. Having a central place for potential Boost libraries to evolve may simplify development. Although I'm not sure there are resources to maintain this kind of hosting.
Voting is good. I appreciated the feature on SourceForge. Although I don't think that the right to vote should be tied to posting a review later. I consider voting a feedback mechanism, nothing more. Regarding the candidate page, do you mean that the library docs should be hosted somewhere outside the Boost web site? If so, I don't like that idea. IMO, if we pursue the idea of central hosting for the candidate libraries (with SVN, web access, etc.), it should include online documentation hosting, too.
It's not clear how and in what way it's transformed. Regarding the review scheduling, it's pretty much like it happens nowadays.
I disagree, on several points.

* 2-4 months is a very long period. You can't expect the review manager and the library author to stay focused on the review for that long. Also, for simple tools, such as Boost.Move that is in the queue now, there's nothing to review during all that time. On the other hand, I agree that a few weeks may not be enough for some larger-scale libraries. Which leads me to the conclusion that the review duration should be individual, decided by the author, review manager and review wizards, taking into account other reviews.

* Concurrent reviews are wrong. We don't have enough reviewers and wizards even for sequential reviews. Allowing parallel reviews won't make it better. The review quality will also drop.

* The review mechanism should be convenient for both the reviewers and the author/review manager. It should allow an easy conversation between the reviewers and the author. The mailing list is good enough, I think.
Ok.
6. If there are not enough reviews within the first 2-4 months, the library is rejected due to lack of support.
Hmm, arguable, at least. If it made it to the review, there surely is interest in the library.
7. If there is no review manager found within a year, the library is rejected due to lack of support.
I think there are several useful libraries in the queue that fit that criterion. My Boost.Log was surely without a review manager for longer than a year, and I can't say there's no interest. For both 6 and 7, bouncing candidates away won't help the situation. And the most important objection from my side is that your proposal doesn't do anything to solve the root problem - there are not enough people (or not enough of their free time) to manage and write full reviews. It actually makes it a bit worse.

Andrey Semashev wrote:
Hi Gennadiy,
snip...
I disagree with you here and agree with Gennadiy Rozental. In order to get more libraries reviewed and possibly approved more quickly overall, and also to allow reviewers more time to review a library than is currently given for Boost reviews, I feel it is important that concurrent reviews take place, with each one lasting over a longer time period than currently usually occurs. One of the biggest factors keeping possible reviewers from reviewing a Boost library is that the usual two-week time frame is just not enough. One month would not be unjustified, and perhaps two months would not be too long. In order to get more libraries reviewed given a longer time frame for each review, it would be necessary to allow reviews of more than one Boost library at a time.

----- Original Message ----- From: "Edward Diener" <eldiener@tropicsoft.com> To: <boost@lists.boost.org> Sent: Sunday, February 28, 2010 4:45 PM Subject: Re: [boost] is review system in place is extremely slow? (was Re:[rfc] rcpp)
Currently reviewers can send reviews before the review starts. The only problem I see is that we are not used to doing it. IMO, the contents of the library to review must be fixed as soon as a date is announced. The review manager could call for reviews at the same time s/he announces the date of the review. Best, Vicente

Andrey Semashev wrote:
We already host most of the waiting libraries in svn. The candidate page should be comparatively tiny and only contain abstracts and links.
I am not gonna push this point. IMO it's fair to expect people who seconded the submission to provide a review within a rather extended review period. On the other hand, we can't really attach a "requirement" clause to the support vote.
No. I think the docs should be somewhere inside the svn, or be part of the candidate page. In the former case we need a script+link which extracts the docs from svn; in the latter case the overhead is bigger.
Not really. The transition occurs almost automatically and asynchronously. There is no queue at all. If one library has bigger support and interest in the community, it will go through the phases much faster.
Somehow you are fine with C++ standard changes being reviewed for years, but find 2-4 months for a non-trivial C++ library too long?
You can't expect the review manager and the library author to stay focused on the review for that long.
Actually, the point was to decrease the pressure. A longer time period means that both the reviewers and the author can take their time doing their jobs. I expect short periods of high activity with some gaps in between, where both sides consider the matter.
For smaller/simpler components we might have a policy of fast-track review (no more than a week). These we can have a queue for, but there should be rather strict requirements to qualify:
* 1 header only (or no more than N lines in total); header-only libraries qualify
* All tests should work on all required platforms
* Docs should be in BoostBook format
* To be accepted, the library should receive 90%(?) approval within the review period, with at least 10(?) reviewers
* The review period is never extended
Ultimately yes, but only if the library qualifies for fast-track review. Otherwise I do not see a basis for the review wizard to warrant it.
Even in the rare situation where the same person is interested in several concurrent reviews, a long review period should give them a chance to participate in both.
IMO the review mechanism can be much better. Ideally, someone coming to it should have a chance to get a summary of the current discussions, the open issues, and the author's comments on them. I'm not sure whether we can automate the procedure or whether it should be the review manager's job. Maybe we can combine the active discussion going on on the mailing list with summaries appearing on the candidate review page.
There may be a number of reasons. It is possible that the final product did not meet expectations, or that the original supporters disappeared and no one else came through. In any case, there is no reason to expect the situation will change. IMO in this case the library can go back to the candidate phase and wait for a new set of supporters.
I do not believe Boost.Log is waiting for review solely due to the lack of a review manager. I managed the last one. And I believe there were a couple of candidates for the job for a new take.
IMO:
1. It should allow more people to write full reviews, given more time and less pressure to do it within a short period of time.
2. It should ultimately allow more libraries to go through the review process, due to the concurrent nature of the process.
3. It should make more people willing to manage a review, given that the procedure does not require paying significant attention to the library throughout the review period. Gennadiy

On 02/28/2010 07:24 PM, Gennadiy Rozental wrote:
Ok, then the only thing needed is to expose these web pages on the web site.
Yes, you can expect a voter to write a review, but forcing it is not correct. There would be no votes then. I think a silver bullet would be sending an invitation e-mail to all voters when the review begins.
Well, the current queue isn't really a queue, as reviews can happen in any order.
Who said I'm fine with the pace of C++ standard evolution? :) Really, some things have been begging for standardization for years, and the final paper is yet to appear. But we're talking about libraries here, not the standard. They're a much lighter, more fluid and flexible matter than any standard, and they should not take that long to accept.
It's much easier for the author and review manager to schedule their time for a few weeks to pay more attention to a short but active review than to try to do that for several months. I admit that long reviews give more opportunity to participate, but then again, people are not constrained by the review period boundaries in getting acquainted with the library. They can actually start reviewing it before the official period and only express their opinion in written form during the period.
I think the review wizard is already required to have some expertise in the domain of the library under review. I believe he and the author will be able to work out a suitable amount of time for the review.
It also requires more review managers. I don't think sharing a review manager between several parallel reviews is a good idea.
I'm fine with keeping some kind of summary of the ongoing review somewhere public (perhaps on the proposed candidate page). I just think that the discussion should stay on the ML.
Ever since I proposed Boost.Log for review, I considered it ready. There were discussions of it, and I made improvements, but I don't think that was the reason for the delay. There was no review manager until, recently, I explicitly called for one. And practice shows that even that does not always help.
Is that so? The manager still has to check if the library meets formal criteria for the review, read through all the reviews (not for a few weeks, but for a few months) and take the final decision, doesn't he?

----- Original Message ----- From: "Andrey Semashev" <andrey.semashev@gmail.com> To: <boost@lists.boost.org> Sent: Sunday, February 28, 2010 11:14 PM Subject: Re: [boost] is review system in place is extremely slow? (was Re: [rfc] rcpp)
I think that the review managers already do this for the people who expressed an interest in the library through the ML.
Yes, but as the period is short, people try to avoid holidays, other reviews, quarterly Boost releases, ... With a 2-4 month period, these do not come into consideration, as during this period you will have holidays, other reviews in parallel, and Boost releases.
I think that this is also a test of the continuity of the Boost candidate. Maintaining a Boost library is a long-term job. We don't expect to have an answer to a question within a few hours, but we do expect the future Boost author to manage the discussions related to his library within a short interval of time (2 or 3 days).
I think that you are talking about the review manager, not the review wizard.
I think that people are not interested in all the specific libraries. I'm sure there will be people participating in the Lexer review who don't want to participate in the Shifted Pointer review, or vice versa, for example.
This doesn't require more review managers, at least not more than now (one per library).
I don't think sharing a review manager between several parallel reviews is a good idea.
I agree, but I don't think this would be a problem; we can expect that we don't need the same review manager to run two reviews in parallel.
I agree.
Andrey, I think that one year is a good compromise. This gives the author time to find a review manager, and puts some pressure on him/her to look actively, requesting interested people to manage the review, ... I would even add that if the library has not been integrated into a release within a year of its acceptance, the library should be rejected.
Of course, but you can more easily agree to review a library if you don't have to be present every day of a short period, but only from time to time during a longer period. Maybe we can leave the choice of the duration to the review manager. Best, Vicente

Andrey Semashev wrote:
Are they? Who makes the decision about who comes first, aside from the lack of a review manager?
Unexpected delays aside, I find the current rate of C++ standard changes reasonable. We are dealing with non-trivial matters, which in most cases require some time to think about and discuss.
Well, I do not expect years either. Take Boost.Log, for example. I honestly expect that for the library to get accepted it should be reviewed for a period of at least 6 months. Until people try it in actual projects and write some code with it, it will never be clear whether the candidate is viable. No amount of theorizing within 1-2 weeks is going to be acceptable, IMO.
I guess it's a matter of opinion. Sometimes deadlines are a good motivational tool; sometimes they lead to suboptimal solutions. All participants here are volunteers. It's more reasonable to expect them to be able to donate several hours a week over a period of 4 months than to expect them to drop everything else and dedicate a whole week to Boost.
Somehow this does not happen. It's quite possible that a person has time today to review the candidate, but is not able to attend during the review period (even simply to write up the review). I propose to formalize and simplify the procedure for reviewers. boost.org would have a list of ongoing reviews, and anyone could go to the candidate page, read the review progress summary, and add something.
I do not see why we need a strict rule here. If a person is willing and/or if, for example, there is one month left in one review, one can start another one taking 4 months... Ultimately the review wizard has to approve the review manager.
Look for my other post on the subject, but in general I believe the library author should be more active in soliciting review managers. If we can split the review manager's job in two, the first part can be done by someone put forward by the author himself. Even if we keep the status quo, the library author should engage other authoritative figures on the list and ask for help with the review. And again, ultimately the review wizard has to approve the review manager.
But the pressure to do this within a short period of time is smaller. In both cases it's the same amount of work, but spread over a different amount of time. Gennadiy

Gennadiy Rozental wrote:
I agree that the usual review period is too short for non-trivial libraries. Many reviews are extended and many reviewers or would-be reviewers express problems with lack of time. I'm not sure four months (or six as suggested elsewhere) is warranted however. As Andrey noted elsewhere, reviewers can submit, or at least write, reviews before the review period. If a reviewer won't have list access during the review period, then submit the review early. Unfortunately, that's not done. Reviews are typically announced a month or more ahead of time now, so the review manager can call for reviews beginning immediately, making it clear that early submissions are welcome. Doing so effectively extends the review period back a month before the official start time. Gennadiy's longer review period idea would simply make that unofficial start (the announcement and call for early reviews) an official part of the review period.
I agree that a longer period means the review will be more relaxed. The author must look for and respond to queries and concerns about the library over a longer period, but that's no different from what follows acceptance.
I disagree. The heightened attention demanded by the current approach is almost impossible to support. Lengthening the review period means one can take a day or two, rather than hours, to respond to a post.
Concurrent reviews won't be a problem if the review periods are longer and if a subsequent review must follow the current review by, say, a month. IOW, if non-trivial reviews are two months, then they would only overlap by one month. This approach makes the scheduling more flexible. Rather than avoiding an opportune review time because another review is scheduled, a review manager and author can share that period instead of delaying much longer to find another one. _____ Rob Stewart robert.stewart@sig.com Software Engineer, Core Software using std::disclaimer; Susquehanna International Group, LLP http://www.sig.com

Hi, ----- Original Message ----- From: "Stewart, Robert" <Robert.Stewart@sig.com> To: <boost@lists.boost.org> Sent: Monday, March 01, 2010 5:48 PM Subject: Re: [boost] is review system in place is extremely slow? (was Re: [rfc] rcpp)
I suggested up to six months for the support period, during which the author gathers enough interest in the library, not for the review period.
As Andrey noted elsewhere, reviewers can submit, or at least write, reviews before the review period. If a reviewer won't have list access during the review period, then submit the review early. Unfortunately, that's not done.
I think this is not done because the library to review is not fixed.
Reviews are typically announced a month or more ahead of time now, so the review manager can call for reviews beginning immediately, making it clear that early submissions are welcome. Doing so effectively extends the review period back a month before the official start time. Gennadiy's longer review period idea would simply make that unofficial start (the announcement and call for early reviews) an official part of the review period.
Do you think we can demand that the library be fixed as soon as the review date is fixed?
I agree. I think that the quality of exchanges will increase if people can take the time to respond to a post.
Maybe we need to limit the number of parallel reviews, but two would be too restrictive. Best, Vicente

vicente.botet wrote:
I'm sorry, I missed what you meant. Do you agree with one to two months for the review, then?
You're possibly right. If the review period is extended, then there would be a one to two month period during which the review target is fixed. I think it would be reasonable for the author to manage a branch containing the latest code with suggested changes and fixes, too. That would permit those interested in monitoring progress and the effects of their comments to see what's closer to the final version as the review progresses.
Do you think we can demand that the library be fixed as soon as the review date is fixed?
I'm not sure we should tie it to when the date is fixed, but rather to a fixed duration leading to the review deadline. That is, the state of the library should be fixed during the review period, but that period should be longer.
Two is one more than now. Wouldn't it be good to start with an incremental change? _____ Rob Stewart robert.stewart@sig.com Software Engineer, Core Software using std::disclaimer; Susquehanna International Group, LLP http://www.sig.com

On 03/01/2010 07:52 AM, Gennadiy Rozental wrote:
Review wizards, I assume, when they, along with the author and the review manager, select a time slot for the review.
I understand, and the last thing I would want in this area is to be hasty. But that doesn't change the shape of things: the standard is way behind the modern trends and technologies that are needed on a daily basis. IMHO, by the time C++1x is out, it will already be outdated.
If you want to base the accept/reject decision on the practical experience of the library's users, then even 6 months is not enough. It becomes a matter of years. But if that is the case, I would suggest dropping the review practice altogether, at least in its current form. It would be much more practical to follow the idea of separate Boost distributions: the core libraries that have been verified by time, and other, less mature libraries (perhaps in individual packages). The barrier for a library to enter the "hall of fame" of core libraries can be rather high, and may require X years of practical usage and include a review. But that review should not be too long, since the library itself should be well known already. On the other hand, in order for a library to become a newbie under the Boost umbrella, there should not be a requirement of long usage in the field. It should be fairly quick and easy to put a library into Boost, provided that the formal requirements are met.
I remember cases when a review manager was not able to post the review result until long after the review ended, because he didn't have enough time. It looks like shared/overlapping reviews would make this situation more likely. I would prefer review managers to be dedicated to a single subject at a time.
Well yes, an active stance on the author's part may help, but it's not guaranteed to succeed. As you said yourself, people here are volunteers, and you can't be sure of having a review manager when you need one. And according to your proposal, if I don't get lucky, my library is rejected.
Perhaps you're right and it will be easier for the review managers. I have never managed a review, so my judgment may not be correct.

Andrey Semashev wrote:
On 03/01/2010 07:52 AM, Gennadiy Rozental wrote:
Somehow I do not agree. It will take a good several years at best for the new features to be implemented, and even more for them to be properly used in users' code. On the other hand, it depends on what you mean by "modern trends". Maybe it's good that we are behind.
IMO Boost.Log is one of those libraries which are rather difficult to do right, primarily due to the vast problem domain. It is a general-purpose library with a huge variety of different applications with different priorities. IMO a week is well below a reasonable time to review a library like this. I may not hit the use case which is going to be a showstopper for me until well after the review ends.
This may be the case with one review per manager as well. Any person here can easily become busy for a period of a month or so. And that's exactly my point.
Well, yes, if one can't find anyone interested enough in the library within a year to manage a review for it. Gennadiy

On 03/02/2010 12:18 AM, Gennadiy Rozental wrote:
I did not mean the new features that will be standardized in C++1x. They are great, and I'm really anxious to get my hands on many of them. But that's not enough. Dynamic/static modules, properties, networking and asynchronous IO in general, binary serialization (portable between different architectures), XML support - for a start. Let alone the holy grail of GUI, even the most basic one. These things have been around for ages, and they are needed almost all the time. I understand that there are great libraries, including Boost, but every additional library added to a project adds complexity, which is not required with other languages. For some things there are no libraries at all because there's just not enough language support (modules, for instance).
I agree that Boost.Log in particular is a very large library, and a few weeks are not enough to fully evaluate it. On the other hand, it is enough to evaluate the approach, the architecture and the overall implementation quality without diving too much into details. Like I said in my post, if you're trying to base your decision on real-world practice with the library, even 6 months of review are not enough. It becomes not a review but an active (or not - depending on the demand in that particular time frame) usage of the library, which is being held in a hanging state. Other people in this conversation suggested that the library should be frozen during the review period. Do you expect an author to be willing to freeze the library for 6 months? Let alone for years in order to collect real-world experience, as you suggest. I understand and share your reasoning, but the solution you propose doesn't look valid to me. It would be interesting to hear what you think of the idea I expressed in my previous post: divide Boost into a stable core and experimental outskirts, with acceptance criteria matched accordingly.
Yes, but if a person has several reviews on his hands, the probability of such a situation gets higher.

On Mar 2, 2010, at 12:26 PM, Andrey Semashev wrote:
Well... I am not really sure whether I can agree in this case. Languages are in general slow to adapt. C++ is certainly one of the slowest that I know; however, there are other languages where features get into the language after they have been out in the wild for a bit. Support for DSLs is starting to be built into some of the mainstream languages such as C#, which can alleviate the lack of specific features in the language to some extent. Nonetheless, there is always a thin line for a language designer to decide whether or not a particular feature should be added to the language itself. Once languages become bloated due to their richness in features and the scenarios they try to attack, their usability and effectiveness tend to decrease somewhat. Certainly, it is up to the language design to hopefully prevent this, but history has shown that this is not always successful. The current trend seems to go towards having rich library support rather than putting everything into the language itself. Microsoft's .NET is certainly a prime example of this. As I mentioned above, DSLs are another area that seems to grow very fast at the moment, although one could argue that they are essentially similar to libraries but just called a language... granted, this is certainly a little bit trivialized. Personally, I do not necessarily think that the language should support every little area that is possible. Languages should focus on providing the fundamental architecture instead. One can certainly argue about what is considered fundamental, as different users have different opinions and needs. From my point of view, any language with support for handling libraries in an easy and flexible manner - allowing users to pick the library that fits their needs best in a particular situation - is going to be successful in the near future. As you said, this will add complexity; however, maybe one of the goals for a language should actually be to remove this complexity when dealing with libraries... Certainly this is only my $0.02... Ciao, Andreas

On 03/06/2010 08:44 PM, Andreas Masur wrote:
Personally, I do not necessarily think that the language should support every little area that is possible.
Yes, pulling every nifty feature into the language or its support library is wrong, and I'm not asking for that either. Some of the things I've mentioned are no less fundamental than filesystem support or threading nowadays. To be specific, I'm speaking of serialization and ASIO here.
Surely, that is a good direction for language improvement. It's kind of in line with my wish for module support at the language level. It's a pity that in its current shape the standard does not state any guarantees regarding parts of the application that live in different modules, nor does it define any means to specify interfaces and integrate modules. However, integrating libraries into an application also involves things that are clearly outside C++'s scope, such as compatibility, configuration and build issues. I doubt that the C++ committee is able to help in this area. Therefore I still think that a rich support library is required.

You are not alone in having made similar suggestions before, which I support (though I would urge 'Keep It Simple', leaving the review manager and moderators to make up the rules as they go along rather than setting up a complex set of rules. If it ain't broke, don't fix it!). (And I've been suggesting the use of additional logos to make clear what is reviewed and released, and what is not.) But nobody has yet responded to the vital question of whether there are resources to support a parallel tree to trunk, in addition to the sandbox, for what we are calling 'candidate' libraries. It really needs to have an identical structure, and tests which are run regularly, like trunk. This will encourage more users, who have an important, and often informed, voice in reviews. Can those good souls doing the thankless task of providing SVN facilities and testing (thanks!) comment please? Paul --- Paul A. Bristow Prizet Farmhouse Kendal, UK LA8 8AB +44 1539 561830, mobile +44 7714330204 pbristow@hetp.u-net.com

Gennadiy Rozental wrote:
This is pretty straightforward to implement:
1. Create a branch off the last release.
2. For each proposed library living in the sandbox, add a couple of svn:externals to the new branch.
3. Modify status/Jamfile.v2 to only run the tests for the new libraries.
4. Have one or more people run tests on the new branch.
5. Adjust the reporting process to produce one more set of tables.
Of these, (1) and (2) are done in a matter of minutes, (3) requires really minimal hacking, and (4) requires a volunteer. I personally don't know how to do (5), but it should not be hard either. So, where do we go from here? - Volodya
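A minimal sketch of the Subversion commands for steps (1) and (2), assuming illustrative repository URLs, branch and library names (in practice one would append to any existing svn:externals property with 'svn propedit' rather than overwrite it):

  # (1) branch off the last release (URLs and branch name are assumptions)
  svn copy https://svn.boost.org/svn/boost/branches/release \
           https://svn.boost.org/svn/boost/branches/candidate \
           -m "Create candidate branch off the last release"

  # (2) in a working copy of the new branch, pull one sandbox library in
  #     via a couple of externals: its header tree and its libs/ tree
  svn propset svn:externals "proposed_lib https://svn.boost.org/svn/boost/sandbox/proposed_lib/boost/proposed_lib" boost
  svn propset svn:externals "proposed_lib https://svn.boost.org/svn/boost/sandbox/proposed_lib/libs/proposed_lib" libs
  svn commit -m "Add svn:externals for proposed_lib"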

Vladimir Prus wrote:
Yeah, which as you mention is fairly easy. So this is for others that don't read bjam source as easily as Volodya and I do.
This could be automated by the test scripts (it was always my intent to do so, and they already operate partially that way). But since we never agreed on a structure for sandbox libraries, it hasn't really been possible. Still, since my suggestion of a sandbox structure from years ago is apparently the de facto standard now, perhaps it is possible.
3. Modify status/Jamfile.v2 to only run the tests for the new libraries. 4. Have one or more people run tests on the new branch.
3 & 4 are already partially supported by status/Jamfile.v2 by using the "--limit-tests=*" option. For example to only run tests for program_options.. --limit-tests=program_options. And it would really easy to add a "--run-tests=some_lib" such that the list of libs doesn't need to be edited at all.
The main problem is #5, and it's the main problem because the report system is not really designed for that. And it's a big resource hog. So perhaps the best alternative is to have separate results for each tested library. That way it's also easier to find someone to run the reports, as they won't take many resources. -- -- Grafik - Don't Assume Anything -- Redshift Software, Inc. - http://redshift-software.com -- rrivera/acm.org (msn) - grafik/redshift-software.com -- 102708583/icq - grafikrobot/aim,yahoo,skype,efnet,gmail
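To make the --limit-tests idea above concrete, here is a minimal sketch of how such a run might be invoked from the status directory; the working-copy path and toolset are assumptions, and bjam is assumed to be on the PATH, while --limit-tests is the option described above:

  cd boost-candidate/status    # working copy of the candidate branch (path is an assumption)
  bjam --limit-tests=program_options toolset=gcc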

Rene Rivera wrote:
Oh, I did not realize this was implemented!
Alternatively, find a volunteer to rewrite the reporting to not use XSLT. I guess the display format itself is pretty good, and I have not seen any other system that offers anything similar, but the use of XSLT is clearly a failed experiment. - Volodya

Vladimir Prus wrote:
I think this is even simpler:
1) use the release branch
2) put the library to be tested into that branch - only on the local testing machine
3) cd to boost/libs/newlib/test
4) run ../../../tools/src/library_test.sh (or .bat) --toolset=msvc-7.1 (or whatever)
5) this will create a pair of local HTML tables in boost/libs/newlib/test which show all the test results
This only requires a tester. Robert Ramey
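A minimal sketch of that local procedure, assuming a working copy of the release branch under ./boost with the candidate library dropped in as libs/newlib (the library name and toolset are placeholders; the script path is as given in the message above):

  cd boost/libs/newlib/test
  ../../../tools/src/library_test.sh --toolset=msvc-7.1
  # the resulting HTML result tables appear in boost/libs/newlib/test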

Robert Ramey wrote:
I think all these tests require manual steps from the tester. The automated procedure I have suggested requires just adding a single command invocation to the periodic command invocation service of your operating system -- which is *much* less of a burden. - Volodya
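For illustration, a hypothetical crontab entry of the kind described above; the script name, schedule and paths are invented placeholders, and such a script would update the branch working copy, run the tests and publish the results:

  # run the candidate-branch tests nightly at 02:00
  0 2 * * * /home/tester/run_candidate_tests.sh >> /home/tester/candidate_tests.log 2>&1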

On 28 February 2010 16:47, Paul A. Bristow <pbristow@hetp.u-net.com> wrote:
I think I did a while ago. The sandbox used to be organised as a parallel tree to the main tree (in CVS). It ended up a complete mess. If someone's willing to actively maintain it, then that might be averted, but experience suggests that they won't. But using separate repositories is a much better organisation anyway, as it means that your history doesn't get all messed up as it does in trunk. Daniel

OK - but are there resources to provide those repositories? (Disk space may be cheap, but what about the backup and upload/download traffic costs?) Or is each developer supposed to provide his own? And what about testing? Is each developing group responsible for testing its own stuff, against the latest release (and/or trunk?), or is there some 'test farm' as for trunk? Paul --- Paul A. Bristow Prizet Farmhouse Kendal, UK LA8 8AB +44 1539 561830, mobile +44 7714330204 pbristow@hetp.u-net.com

On 28 February 2010 17:54, Paul A. Bristow <pbristow@hetp.u-net.com> wrote:
OK - but are there resources to provide those repositories? (Disk space may be cheap, but what about the backup and upload/download traffic costs?)
Sorry I wasn't clear, I meant space within subversion, i.e. what we've got.
That's how it is now. I think I'm starting to sound like a broken record, but unless someone organises something new, that's how it'll stay. Daniel

Hi Paul, On Feb 28, 2010, at 9:47 AM, Paul A. Bristow wrote:
As one of those souls, I'd point out that a full (not incremental) test of Boost trunk for one nightly toolset, with fast hardware and network and running on 8 cores, takes at least an hour, and much longer for some toolsets. For people with less capable systems, scale accordingly. I mention this to reinforce that testing Boost is already a fair commitment of resources, and I want to ensure that our core libraries continue to be well tested and are not shortchanged on testing resources for the sake of candidate libraries. That said, it would not represent a particular hardship for us to run additional tests for the candidate libraries. But I would like to encourage broader participation in testing Boost as, for example, our tester may not be able to continue with Boost. I think that developers who get core libraries into Boost should consider helping with the testing load, given that they now have a vested interest in seeing Boost succeed (perhaps candidate library developers could also offer up some resources as well). -- Noel

Of course - release(d) libraries must have priority. It sounds as though just chucking all the 'candidate' libraries into the 'pot' would push the cooking time over the top?
That said, it would not represent a particular hardship for us to run additional tests for the candidate libraries.
What mechanism would be sensible for testing 'candidate' libraries separately? Some weekly test run, with some pass/fail report? Or is it best to rely on authors running their own tests whenever they make some changes, so users should be able to rely on WYSIWYG? That would not provide the very valuable pass/fail list available for trunk.
So how would you suggest this would work in practice? Thanks again for your testing! Paul --- Paul A. Bristow Prizet Farmhouse Kendal, UK LA8 8AB +44 1539 561830, mobile +44 7714330204 pbristow@hetp.u-net.com

Paul A. Bristow wrote:
I think it's important to not depend on authors, as they could all have a different setup, making managing the repository according to disparate results more difficult.

I agree that the testing structure should be the same as for trunk testing. But actually running regular (daily) tests may be too big a cost (there will be many more things to test than on trunk). Perhaps a weekly or monthly testing schedule would suffice? Paul --- Paul A. Bristow Prizet Farmhouse Kendal, UK LA8 8AB +44 1539 561830, mobile +44 7714330204 pbristow@hetp.u-net.com

Hi Gennadiy, first of all thanks for trying to set up some improvements to the review process. ----- Original Message ----- From: "Gennadiy Rozental" <rogeeff@gmail.com> To: <boost@lists.boost.org> Sent: Sunday, February 28, 2010 2:24 PM Subject: Re: [boost] is review system in place is extremely slow? (was Re: [rfc] rcpp)
Agree.
The author and the review manager should only start a review once they have checked that the library has enough interest.
* Some libraries come not being ready for review. There is an automatically checked list of requirements before scheduling the review.
This should already be checked by the review manager.
If I've understood correctly, you are proposing a space separate from the sandbox, aren't you? I agree that it is better if people who aim to be candidates for Boost get used to using the Boost environment and tools.
This seems good. If people want a library in Boost, they must support the library, at least by committing themselves to write a review.
The candidate author can schedule a review with the review schedulers (whatever the proper name is).
I don't understand this. Could you clarify?
Once a review manager is assigned, the candidate page is transformed into a "candidate review" page.
Who will assign the review manager?
I agree with the extension of the review period.
People wanting the library to be included should be active supporters and not wait until the end of the review period; an extension of the period should not translate into an inactive period. Thus I agree.
6. If there are not enough reviews within the first 2-4 months, the library is rejected due to lack of support.
I would extend this period to 4-6 months.
7. If there is no review manager found within a year, the library is rejected due to lack of support.
This seems OK to me. A clarification of who can be the review manager will be needed.
Thanks, Vicente

Vicente Botet wrote:
Hi Gennadiy,
A more formal procedure for people to second/support the submission would be helpful
I guess. I was after a script that can perform basic automatic checks.
Well Sandbox v.2.
The review wizards. I forgot the term.
There are two parts to the review manager's job, which in fact can be split. The first part is actually managing the review: checking the formal requirements, making announcements, producing summaries (as per my suggestion), and making decisions about the review start, length and end. The second part is making the final decision. As I see it, the first part does not require any particular experience and can be delegated to anyone (even a relative newcomer or someone nominated by the library author). The second part is where we expect some authority in the matter. If the review results are overwhelmingly positive or negative (say, acceptance by 95% of reviewers, or rejection by more than 50-60%), the decision can be made automatically. In more complicated cases we need another person (or persons) of authority. One option I see here is for the review manager (the one who performs the first part) to write a detailed summary with the final tally of votes and all the major pro and con discussion points, and let the review wizard(s) make the final decision (or some new post, like a review arbitrator). Gennadiy

Gennadiy Rozental wrote:
I don't know of any managers using them, but I think many such checks already can be done with scripts we have and use for other things. The script that checks for license inclusion and other such things in the release libraries could just as easily be run on a submission, after all.
YMMV, but for me the administrative part of the job is a small commitment. The technical part of the job is what takes time. I place the production of useful summaries on the technical side.
I'm not a big fan of a straight vote-based cutoff. (I'm speaking as a Boost member, not as a Review Wizard, when expressing such opinions.) If 20 people vote in favor of a library, but one person gives a show-stopping reason for rejection and is right that it is a big issue, then the library should be rejected. If half the respondents object to a library, but there is an important flaw in their reasons for that objection, then it might be a good idea to accept the library. I think of the votes as a well-intentioned advisory body, not a definitive response. The manager should work to understand the reasons for those votes, and then weigh the reasons instead of the votes as the method to reach a decision.
If this lesser manager is not technically competent to make the decision, why should anyone believe there is competence to write a summary that provides all of the important points in the discussion with appropriate justifications? It is harder to write such a summary than to understand the arguments, after all. Even given such a summary, I strongly doubt that any one person has the breadth and depth of expertise to be able to make such judgments for every library that might be proposed for Boost. I openly state that I claim no such development omniscience. As I always say in these discussions, my preference is to trust the review managers. Put someone in the post who will look carefully at the arguments during the review and reason carefully about them. Monitor the reviews to make sure things stay on track (I read every post in every review in Boost as part of being a Wizard, to be sure I know what is happening in them. I don't like surprises in the review process.), and let the managers apply their skills to come to a good conclusion. It is imaginable that a manager could make an unsupportable decision. If this happens, then the wizards and the monitors will have to step in and determine the best course of action. This has not happened while I have been a Wizard, and I know of no instance before I was a Wizard, either. I sincerely hope it doesn't happen, but will act as needed if it does. Thankfully, we have had some very good people acting as managers through the years, and I appreciate how much easier they have made my job. John Phillips Review Wizard
participants (32)
- Andreas Masur
- Andrey Semashev
- Daniel James
- Daniel James
- DE
- Edward Diener
- Gennadiy Rozental
- Helge Bahmann
- Ion Gaztañaga
- Jeffrey Bosboom
- joel falcou
- John Maddock
- John Phillips
- K. Noel Belcourt
- Kai Schroeder
- Mateusz Loskot
- Mathias Gaunard
- Michael Fawcett
- Nevin Liber
- Paul A. Bristow
- Phil Endecott
- Rene Rivera
- Robert Ramey
- Rutger ter Borg
- Simonson, Lucanus J
- Steven Watanabe
- Stewart, Robert
- Vicente Botet
- vicente.botet
- Vladimir Prus
- Vladimir Prus
- Zachary Turner