
Recently, we have had quite a lot of discussion about process. In true open-source spirit, it was a fairly open discussion, with everybody offering their perspectives and experience. However, while we surely learned many things, it does not seem like we're getting anywhere.

As a quick experiment, I tried to assess whether the discussion actually reflects the needs of Boost developers, so I created a table of Boost developers sorted by the number of commits in 2010. It is here: http://tinyurl.com/5suwfbn

It seems that the top 5 Boost committers did not participate much in recent discussions. And going down the list, it seems like many of the active developers did not say anything, while most of the discussion is fueled by folks who don't commit much. Of course, everybody can offer valuable thoughts, but if the goal is to fix things for Boost developers, it would make sense for developers to say what needs fixing, as opposed to other people doing it for them.

May I suggest that, for some time, we outright ban freeform discussions about process, and instead restrict them to threads started by a Boost developer and saying this: "I am the maintainer of X, and had N commits and M Trac changes in the last year. The things I hate most are P1, P2 and P3. I would propose that we use T1, T2, and T3 to fix that". Then, everybody could join in to suggest better ways of fixing P1, P2 and P3 -- without making up other supposed problems.

Thoughts?

--
Vladimir Prus
Mentor Graphics
+7 (812) 677-68-40

On Mon, Jan 31, 2011 at 4:31 PM, Vladimir Prus <vladimir@codesourcery.com> wrote:
May I suggest that, for some time, we outright ban freeform discussions about process, and instead restrict them to threads started by a Boost developer and saying this: "I am the maintainer of X, and had N commits and M Trac changes in the last year. The things I hate most are P1, P2 and P3. I would propose that we use T1, T2, and T3 to fix that". Then, everybody could join in to suggest better ways of fixing P1, P2 and P3 -- without making up other supposed problems.
Thoughts?
Not to sound like sour grapes, but this is precisely the attitude that makes the Boost project so unwelcoming. Sometimes I wonder why I personally even bother trying to contribute.

That said, I probably wouldn't even be missed if we just count the commits and ignore the bug reports, the patches, the help given to people with issues, the people who submit reviews, the people who provide feedback, those who try to get Boost adopted in more projects and situations, etc. Maybe I just come from a generation which largely doesn't hold back what it thinks, or maybe I am just really vocal about issues.

That said, I'm done with this. Have a good one, everyone.

--
Dean Michael Berris
about.me/deanberris

May I suggest that, for some time, we outright ban freeform discussions about process, and instead restrict them to threads started by a Boost developer and saying this: "I am the maintainer of X, and had N commits and M Trac changes in the last year. The things I hate most are P1, P2 and P3. I would propose that we use T1, T2, and T3 to fix that". Then, everybody could join in to suggest better ways of fixing P1, P2 and P3 -- without making up other supposed problems.
Thoughts?
Not to sound like sour grapes, but this is precisely the attitude that makes the Boost project so unwelcoming. Sometimes I wonder why I personally even bother trying to contribute.
Ugh? I think the thing is this: most folks around here just don't really care about tools - they really don't - they just want to "get stuff done".

Please bear in mind too that if the effort that has been expended on this discussion had been expended on bug fixing instead, we could have got an awful lot done...

John.

On Mon, Jan 31, 2011 at 5:35 PM, John Maddock <boost.regex@virgin.net> wrote:
May I suggest that, for some time, we outright ban freeform discussions about process, and instead restrict them to threads started by a Boost developer and saying this: "I am the maintainer of X, and had N commits and M Trac changes in the last year. The things I hate most are P1, P2 and P3. I would propose that we use T1, T2, and T3 to fix that". Then, everybody could join in to suggest better ways of fixing P1, P2 and P3 -- without making up other supposed problems.
Thoughts?
Not to sound like sour grapes, but this is precisely the attitude that makes the Boost project so unwelcoming. Sometimes I wonder why I personally even bother trying to contribute.
Ugh?
/me should have a better thought-to-writing translator... What I meant here is this: Boost seems unwelcoming at first because of the "rules" and even the notion of "banning discussions". This hasn't stopped me from trying to contribute, and therefore I wonder why it hasn't (maybe it's me).
I think the thing is this: most folks around here just don't really care about tools - they really don't - they just want to "get stuff done".
I thought that because people want to get stuff done, they would care about the tools they use - whether they're using the right tool, and whether the tools are making them effective.
Please bear in mind too that if the effort that has been expended on this discussion had been expended on bug fixing instead, we could have got an awful lot done...
Hmmm... See, I would agree with you, except that there are only so many hands/people to fix things, and those who want to be able to help see this barrier of having to go through Trac, checking out with SVN, and waiting for the network updates to finish. The only reason we're even having this discussion about the process and the tools is that a contributor wannabe like me feels this drag when trying to make progress on some front.

I have access to the Sandbox, sure, but even working on the sandbox now is just painful for me compared to how I deal with the development of cpp-netlib, which is on git. Note that even while I complain, I send in the patches through Trac/ML anyway. I'm just saying it could all be so much easier than the way it's going now, and short of suggesting a different process, I don't see how I can help accomplish that goal of making things much easier to do.

HTH

--
Dean Michael Berris
about.me/deanberris

On 1/31/2011 6:06 PM, Dean Michael Berris wrote:
On Mon, Jan 31, 2011 at 5:35 PM, John Maddock <boost.regex@virgin.net> wrote:
I think the thing is this: most folks around here just don't really care about tools - they really don't - they just want to "get stuff done".
I thought that because people want to get stuff done, they would care about the tools they use - whether they're using the right tool, and whether the tools are making them effective.
Pardon me, Dean, but... I'm with John here. And I can say this: we, the top committers, have no complaints about the tools. We will use whatever tools are available. (At least I speak for Hartmut, John, Volodya and myself; but I have a strong feeling that Steven and Daniel agree too.)

Again, let me emphasize this (now I am quoting the positive version):

A Good Craftsman Never Blames His Tools

And I think what Volodya meant is that more than anyone else, it is us who are in the position to determine if we are using the right tool or whether the tools are making us effective. And... hmmm, I think we are effective ;-)

Regards,

--
Joel de Guzman
http://www.boostpro.com
http://boost-spirit.com

On Mon, Jan 31, 2011 at 6:32 PM, Joel de Guzman <joel@boost-consulting.com> wrote:
On 1/31/2011 6:06 PM, Dean Michael Berris wrote:
On Mon, Jan 31, 2011 at 5:35 PM, John Maddock <boost.regex@virgin.net> wrote:
I think the thing is this: most folks around here just don't really care about tools - they really don't - they just want to "get stuff done".
I thought that because people want to get stuff done, they would care about the tools they use - whether they're using the right tool, and whether the tools are making them effective.
Pardon me, Dean, but...
I'm with John here. And I can say this: we, the top committers, have no complaints about the tools. We will use whatever tools are available. (At least I speak for Hartmut, John, Volodya and myself; but I have a strong feeling that Steven and Daniel agree too.)
Ok. Maybe I should have said "as effective as you can be". ;)
Again, let me emphasize this (now I am quoting the positive version):
A Good Craftsman Never Blames His Tools
And I think what Volodya meant is that more than anyone else, it is us who are in the position to determine if we are using the right tool or whether the tools are making us effective. And... hmmm, I think we are effective ;-)
Yep. The question, though, is: do you think you and a lot more others who want to contribute could be more effective with a different tool?

I can understand that sometimes there's no point in asking the question when you don't see a problem. But speaking personally, as someone who wants to contribute as much to the project as you guys do, I do think there's a problem -- especially with scaling the effort.

If everyone else was in agreement that Boost is fine as it is now, and that there was no desire to grow the contributor base and the community around the project, then I guess it is pointless, because the top contributors are happy with the way it is.

Although of course, we contributor wannabes who want to reach the same level as you guys would really want to do it without causing too much trouble for either you guys or ourselves -- hence the question about the current process.

So if you guys think the tools and process are fine as they are now in allowing the project to scale to accommodate more contributors, then I just have to disagree from a wannabe perspective from now on... until maybe the next discussion on the matter comes up again. :P

Thanks, and I HTH.

--
Dean Michael Berris
about.me/deanberris

On 1/31/2011 7:31 PM, Dean Michael Berris wrote:
On Mon, Jan 31, 2011 at 6:32 PM, Joel de Guzman
The question, though, is: do you think you and a lot more others who want to contribute could be more effective with a different tool?
What's the point of comparison when you say "more" effective? How do you measure effectiveness in the first place? As far as I know, Boost produces top notch libraries that are more effective than even a thousand Git-hosted open-source libraries out there.

As an analogy, an instant camera in the hands of a pro will produce superb results, while a top of the line DSLR camera in the hands of an amateur will never reach the same level of quality.
I can understand that sometimes there's no point in asking the question when you don't see a problem. But speaking personally, as someone who wants to contribute as much to the project as you guys do, I do think there's a problem -- especially with scaling the effort.
If everyone else was in agreement that Boost is fine as it is now, and that there was no desire to grow the contributor base and the community around the project, then I guess it is pointless, because the top contributors are happy with the way it is.
Although of course, we contributor wannabes who want to reach the same level as you guys would really want to do it without causing too much trouble for either you guys or ourselves -- hence the question about the current process.
So if you guys think the tools and process are fine as they are now in allowing the project to scale to accommodate more contributors, then I just have to disagree from a wannabe perspective from now on... until maybe the next discussion on the matter comes up again. :P
There are problems with the process. I'm sure there's no argument there. I think, and this is just my humble opinion of course, that the discussions are focusing too much on the tools, as if they were a silver bullet that will somehow solve all the process problems that we are having.

Regards,

--
Joel de Guzman
http://www.boostpro.com
http://boost-spirit.com

What's the point of comparison when you say "more" effective? How do you measure effectiveness in the first place? As far as I know, Boost produces top notch libraries that are more effective than even a thousand Git-hosted open-source libraries out there.
OTOH, there are a number of large domains where Boost does not compete, either at all or effectively: ITK, VTK, FFTW, and ATLAS are a few open-source projects I have personally used and have found to be substantially easier to deal with than Boost. That being said, I have no doubt that Boost expertise could be very profitably applied in those problem domains.
As an analogy, an instant camera in the hands of a pro will produce superb results, while a top of the line DSLR camera in the hands of an amateur will never reach the same level of quality.
Unfortunately, it is extremely rare for the genius camera engineer to also be a genius photographer, and vice versa... It is a little egotistical to believe that one's facility and expertise in one area extends to another completely different one. Matthias

On 2/1/2011 1:05 AM, Matthias Schabel wrote:
What's the point of comparison when you say "more" effective? How do you measure effectiveness in the first place? As far as I know, Boost produces top notch libraries that are more effective than even a thousand Git-hosted open-source libraries out there.
OTOH, there are a number of large domains where Boost does not compete, either at all or effectively: ITK, VTK, FFTW, and ATLAS are a few open-source projects I have personally used and have found to be substantially easier to deal with than Boost. That being said, I have no doubt that Boost expertise could be very profitably applied in those problem domains.
Sure, Boost may be lacking in some areas. But in general, you can't deny that the ability of Boost to produce top notch libraries is unparalleled in the domain of C++.
As an analogy, an instant camera in the hands of a pro will produce superb results, while a top of the line DSLR camera in the hands of an amateur will never reach the same level of quality.
Unfortunately, it is extremely rare for the genius camera engineer to also be a genius photographer, and vice versa... It is a little egotistical to believe that one's facility and expertise in one area extends to another completely different one.
Agreed. Those are very good points. Regards, -- Joel de Guzman http://www.boostpro.com http://boost-spirit.com

On 31 January 2011 10:32, Joel de Guzman <joel@boost-consulting.com> wrote:
I'm with John here. And I can say this: we, the top committers, have no complaints about the tools. We will use whatever tools are available. (At least I speak for Hartmut, John, Volodya and myself; but I have a strong feeling that Steven and Daniel agree too.)
I'm actually a bit of a tool geek, but I tend to 'mute' those discussions because if I joined in I'd never escape. FWIW, I develop offline using git-svn. Git wasn't my first choice of version control system, but none of the others had good enough Subversion integration. I think others have explained the problems with using git for Boost, and unless they're addressed I don't think it's viable.

Anyway, git is old hat; we should use Fossil.

Daniel

At Mon, 31 Jan 2011 18:32:17 +0800, Joel de Guzman wrote:
A Good Craftsman Never Blames His Tools
This is a nice soundbite, but let's be serious for a moment: good craftsmen regularly switch to better tools that help them get their jobs done more easily, and before they do, they often discuss the merits of such a switch. -- Dave Abrahams BoostPro Computing http://www.boostpro.com

On 2/1/2011 12:45 AM, Dave Abrahams wrote:
At Mon, 31 Jan 2011 18:32:17 +0800, Joel de Guzman wrote:
A Good Craftsman Never Blames His Tools
This is a nice soundbite, but let's be serious for a moment: good craftsmen regularly switch to better tools that help them get their jobs done more easily, and before they do, they often discuss the merits of such a switch.
Agreed. And that's what we are discussing now. I guess I am just wary of statements like "this XYZ tool is broken and should be replaced by this ABC tool that gives you IJK more features" while many share the sentiment that they are just fine with the tool.

I am also hesitant about wholesale changes to many tools. It takes time to reach a certain level of mastery of a tool. If we are to change our tools, let's do it incrementally, with extreme care, so as not to cause big disruptions.

Regards,

--
Joel de Guzman
http://www.boostpro.com
http://boost-spirit.com

I'm with John here. And I can say this: we, the top committers, have no complaints about the tools. We will use whatever tools are available. (At least I speak for Hartmut, John, Volodya and myself; but I have a strong feeling that Steven and Daniel agree too.)
One should recognize here that there is a strong form of self-selection happening; the "top committers" are those individuals who have the desire/motivation/incentive to spend a substantial fraction of their professional and/or personal time refining their knowledge of the details of Boost. For someone who is a Boost consultant (several of whom appear at the top of the aforementioned list) this is clearly a worthwhile investment and, in fact, having a complex toolchain and difficult-to-master systems is advantageous in that it increases the potential demand for consulting work. A simpler and easier-to-understand toolchain would presumably lower barriers to entry and increase participation from individuals
A Good Craftsman Never Blames His Tools
And I think what Volodya meant is that more than anyone else, it is us who are in the position to determine if we are using the right tool or whether the tools are making us effective. And... hmmm, I think we are effective ;-)
Speaking from personal experience, I would almost certainly not have been able to persevere long enough to see Boost.Units through to completion. Without Steven's deep knowledge of the Boost build system and of C++ in general, my ability to define an appropriate architecture for dimensional analysis and establish expectations for its function would have been stymied by my inability to achieve a "professional" implementation. I won't speak for Steven, but I'd guess that it is unlikely that he would have produced that library on his own, either. The proliferation of "toy" dimensional analysis libraries (cf. the MPL library examples) attests to that.

Matthias

On 2/1/2011 1:04 AM, Matthias Schabel wrote:
I'm with John here. And I can say this: we, the top committers, have no complaints about the tools. We will use whatever tools are available. (At least I speak for Hartmut, John, Volodya and myself; but I have a strong feeling that Steven and Daniel agree too.)
One should recognize here that there is a strong form of self-selection happening; the "top committers" are those individuals who have the desire/motivation/incentive to spend a substantial fraction of their professional and/or personal time refining their knowledge of the details of Boost. For someone who is a Boost consultant (several of whom appear at the top of the aforementioned list) this is clearly a worthwhile investment and, in fact, having a complex toolchain and difficult-to-master systems is advantageous in that it increases the potential demand for consulting work. A simpler and easier-to-understand toolchain would presumably lower barriers to entry and increase participation from individuals
And I am talking mostly about transitioning to git here. You can't seriously be saying that git is a simpler and easier-to-understand tool?
A Good Craftsman Never Blames His Tools
And I think what Volodya meant is that more than anyone else, it is us who are in the position to determine if we are using the right tool or whether the tools are making us effective. And... hmmm, I think we are effective ;-)
Speaking from personal experience, I would almost certainly not have been able to persevere long enough to see Boost.Units through to completion. Without Steven's deep knowledge of the Boost build system and of C++ in general, my ability to define an appropriate architecture for dimensional analysis and establish expectations for its function would have been stymied by my inability to achieve a "professional" implementation. I won't speak for Steven, but I'd guess that it is unlikely that he would have produced that library on his own, either. The proliferation of "toy" dimensional analysis libraries (cf. the MPL library examples) attests to that.
Fair enough, but I have a feeling that we are not on the same page here. I am talking mostly about version control and switching to a distributed VCS like git. I won't go so far as to say that the Boost build system is broken, but I can say that I do want something simpler, so we're probably in agreement there.

Regards,

--
Joel de Guzman
http://www.boostpro.com
http://boost-spirit.com

One should recognize here that there is a strong form of self-selection happening; the "top committers" are those individuals who have the desire/motivation/incentive to spend a substantial fraction of their professional and/or personal time refining their knowledge of the details of Boost. For someone who is a Boost consultant (several of whom appear at the top of the aforementioned list) this is clearly a worthwhile investment and, in fact, having a complex toolchain and difficult-to-master systems is advantageous in that it increases the potential demand for consulting work. A simpler and easier-to-understand toolchain would presumably lower barriers to entry and increase participation from individuals
And I am talking mostly about transitioning to git here. You can't seriously be saying that git is a simpler and easier-to-understand tool?
No. What I'm saying is that using the "top committers" to determine if the toolchain is working well is only a reasonable thing to do if you want to exclude from contributing everyone with less time to spend mastering said toolchain. Mastery of the toolchain or, more importantly, lack thereof, doesn't necessarily say much about someone's knowledge of a problem domain that would benefit Boost.

I really have no opinion on git vis-a-vis any of the myriad other version control systems out there. In general, I agree that it is best to err on the side of sticking with the tried and true rather than hopping on the latest bandwagon, but only if the latest bandwagon doesn't represent a real and significant step forward in simplicity/transparency/usability... And I feel that Boost would benefit from such steps.

I guess, as a physicist who uses computational/software tools as a means to an end, I feel the same way about complex and arcane software build systems as a software engineer would feel about having to open up their computer and solder stuff on the motherboard in order to get the compiler to work...

Matthias

May I suggest that, for some time, we outright ban freeform discussions about process, and instead restrict them to threads started by a Boost developer and saying this: "I am the maintainer of X, and had N commits and M Trac changes in the last year. The things I hate most are P1, P2 and P3. I would propose that we use T1, T2, and T3 to fix that". Then, everybody could join in to suggest better ways of fixing P1, P2 and P3 -- without making up other supposed problems.
OK, let me give my pet hates:

* The only tool comment I have is that SVN is awfully slow for big merges (Math lib docs for example); I probably just need to learn to use the tool better though.
* OK, I have one more tool comment :-) When we changed from CVS to SVN I suspect I lost about a month of "Boost time", changing over repositories, figuring out how the heck to use this new tool, etc. It *was* worth it in the end, but it wasn't pleasant at the time. In short - big bang tool changes are disruptive.
* I think we could organize the testing more efficiently for faster turnaround and better integration testing, and much to my surprise I'm coming round to Robert Ramey's suggestion that we reorganize testing on a library-by-library basis, with each library tested against the current release/stable branch.
* I think the release branch is closed for stabilization for too long, and that betas are too short.

~~~~

Here's a concrete suggestion for how the testing workflow might work:

* Test machine pulls changes for lib X from version control (whatever tool that is).
* Iff there are changes (either to lib X or to release), only then run the tests for that library against current release branch.
* The tester's machine builds its own test results pages - ideally these should go into some form of version control as well so we can roll back and see what broke when.
* When a tester first starts testing they would add a short meta-description to a script, and run the script to generate the test results index pages, i.e. there would be no need for a separate machine collecting and processing the results.
* The test script should run much of the above *in parallel* if requested.

The aim would be to speed processing of testing by reducing the cycle time (most libraries most of the time don't need re-testing). (A rough sketch of this loop, in Python, appears in the postscript below.)

The version control system used would be a tiny part of the above changes; the open question is whether we would need to reorganize Trunk more like the sandbox, on a library-by-library basis, in order to facilitate the new testing script, i.e. a directory structure more like:

Trunk/
    Jamfile    // Facilitates integration testing by pointing to other libraries in Release.
    MyLib/
        libs/mylib/
        boost/mylib/

And yes, Trunk/MyLib could be an alias for some DVCS somewhere "out there" - I don't care; it's simply not part of the suggestion. It would work with what we have now or some omnipotent VCS of the future.

I have one concern about this model - from time to time my stuff depends upon some bleeding-edge feature from another library or Boost tool - sometimes, too, development of that new feature goes hand in hand with my usage - which is to say it's developed specifically to handle problem X, and the only way to really shake down the new feature is to put it to work. For example, Boost.Build's "check-target-builds" rule was developed for and tested with Boost.Regex's ICU usage requirements. Development of Boost.Build and Regex went hand in hand. Not sure how we deal with this in the new model?

~~~~~

Release process:

How about if, once the release is frozen, we branch the release branch to "Version-whatever" and then immediately reopen release. Late changes could be added to "Version-whatever" via special pleading as before, but normal Boost development (including merges to release) would continue unabated. That would also allow for a slightly longer beta test time before release.

~~~~~~~

All of the above is more "thinking out loud" than solidly thought through, but I would welcome feedback,

Regards, John.
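P.S. To make the first two bullets of the testing workflow concrete, here's a minimal sketch in Python. It's purely illustrative - the paths, the library list, and the bjam invocation are all made up; only the svn commands are real:

    import subprocess

    LIBS = ["regex", "math", "unordered"]  # made-up list of watched libraries

    def svn_update(path):
        """Update a working copy; return True iff anything actually changed."""
        out = subprocess.check_output(["svn", "update", path]).decode()
        # 'svn update' reports "Updated to revision N." when files changed,
        # and "At revision N." when the working copy was already current.
        return "Updated to revision" in out

    def run_tests(lib):
        """Run one library's tests against the release checkout (made up)."""
        subprocess.call(["bjam", "libs/%s/test" % lib], cwd="release")

    release_changed = svn_update("release")       # shared release checkout
    for lib in LIBS:
        if svn_update("trunk/libs/" + lib) or release_changed:
            run_tests(lib)

Running the changed libraries through e.g. multiprocessing.Pool instead of the plain loop would cover the *in parallel* bullet.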

-----Original Message-----
From: boost-bounces@lists.boost.org [mailto:boost-bounces@lists.boost.org] On Behalf Of John Maddock
Sent: Monday, January 31, 2011 10:10 AM
To: boost@lists.boost.org
Subject: Re: [boost] Process discussions
In short - big bang tool changes are disruptive.
+1 (Like Homer, my brain is full ;-)
* I think we could organize the testing more efficiently for faster turnaround and better integration testing, and much to my surprise I'm coming round to Robert Ramey's suggestion that we reorganize testing on a library-by-library basis, with each library tested against the current release/stable branch.
* I think the release branch is closed for stabilization for too long, and
+1 - and betas are MUCH too short. +1

I thought we were trying to move to a 'release early, release often' policy. Are there really showstoppers for more than a few users using a particular library? They can apply a patch from trunk, or wait for the next release - provided it's soon enough?
The aim would be to speed processing of testing by reducing the cycle time (most libraries most of the time don't need re-testing).
The version control system used would be a tiny part of the above changes,
the open question is whether we would need to reorganize Trunk more like the sandbox, on a library-by-library basis, in order to facilitate the new testing script, i.e. a directory structure more like:
Trunk/
    Jamfile    // Facilitates integration testing by pointing to other libraries in Release.
    MyLib/
        libs/mylib/
        boost/mylib/

My experience is that unless one has the full suite of platforms (and who does?), testing on just one's favourite development platform isn't enough. There are still too many compiler idiosyncrasies. (Unless we removed old compilers from our testing remit? Personally I would do this, if for no other reason than to encourage users to upgrade! I am also puzzled at the number of support requests which start with "I'm using Boost 1.3x". Should we not be saying "If you are more than n (=2?) releases behind - Tough! Upgrade before you ask again!"?)

That sounds sensible, but very disruptive, and a big lot of jamfiling.

(And - aside - why is it not:

Trunk/
    Jamfile
    mylib/
        boost/
        libs/

Why are there two 'extra' /mylib folders? Is this historical, or is/was there some logic to it?)

As far as maintenance goes, the /Guild/MyLib/... structure also sounds like a good model for the less experienced/confident to try to fix things and test their 'improvements' without the risk of messing things up big time. The big problem for would-be fixers is this risk.

It would also be good to be able to move from /Sandbox or /Guild to /Trunk 'seamlessly' as soon as code is accepted by some review process.

My 2p, FWIW

Paul

---
Paul A. Bristow, Prizet Farmhouse, Kendal LA8 8AB UK
+44 1539 561830 07714330204
pbristow@hetp.u-net.com

On 31 January 2011 13:05, Paul A. Bristow <pbristow@hetp.u-net.com> wrote:
Should we not be saying "If you are more than n (=2?) releases behind - Tough! Upgrade before you ask again!")
It's quite difficult for some people to update. Certainly not something they can do several times a year.
(And - aside - why is it not:

Trunk/
    Jamfile
    mylib/
        boost/
        libs/

Why are there two 'extra' /mylib folders? Is this historical, or is/was there some logic to it?)
It's needed in the header directory so that the include paths for different libraries don't clash.

Daniel

-----Original Message-----
From: boost-bounces@lists.boost.org [mailto:boost-bounces@lists.boost.org] On Behalf Of Daniel James
Sent: Monday, January 31, 2011 1:18 PM
To: boost@lists.boost.org
Subject: Re: [boost] Process discussions
On 31 January 2011 13:05, Paul A. Bristow <pbristow@hetp.u-net.com> wrote:
Should we not be saying "If you are more than n (=2?) releases behind - Tough! Upgrade before you ask again!")
It's quite difficult for some people to update. Certainly not something they can do several times a year.
(And - aside - why is it not:

Trunk/
    Jamfile
    mylib/
        boost/
        libs/

Why are there two 'extra' /mylib folders? Is this historical, or is/was there some logic to it?)
It's needed in the header directory so that the include paths for different libraries don't clash.
That seems a lot of extra subfolders for something that a modest amount of name control could avoid? But I would not even think of changing it now.

I haven't found that explanation of why anywhere - but of course there isn't an effective index to much of Boost ;-)

(I note that there are some projects in sandbox that fail to follow this layout, so they are going to get into trouble :-( A template might help?)

Paul

---
Paul A. Bristow, Prizet Farmhouse, Kendal LA8 8AB UK
+44 1539 561830 07714330204
pbristow@hetp.u-net.com

On 1/31/2011 9:38 AM, Paul A. Bristow wrote:
-----Original Message-----
From: boost-bounces@lists.boost.org [mailto:boost-bounces@lists.boost.org] On Behalf Of Daniel James
Sent: Monday, January 31, 2011 1:18 PM
To: boost@lists.boost.org
Subject: Re: [boost] Process discussions
On 31 January 2011 13:05, Paul A. Bristow <pbristow@hetp.u-net.com> wrote:
Should we not be saying "If you are more than n (=2?) releases behind - Tough! Upgrade before you ask again!")
It's quite difficult for some people to update. Certainly not something they can do several times a year.
(And - aside - why is it not:

Trunk/
    Jamfile
    mylib/
        boost/
        libs/

Why are there two 'extra' /mylib folders? Is this historical, or is/was there some logic to it?)
It's needed in the header directory so that the include paths for different libraries don't clash.
That seems a lot of extra subfolders for something that a modest amount of name control could avoid?
My understanding is that it replicates the Boost structure. So one can refer to one's header file paths as if one's top level directory is the main directory of a Boost distribution. Then there is no need to change header file includes when one's library is put inside a Boost distribution tree (or some SVN branch like 'trunk' which duplicates a Boost tree). I think if you look at the structure you will see this pretty easily.
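For a concrete (made-up) example of what that buys you: with a sandbox library laid out as

    mylib/
        boost/
            mylib/
                widget.hpp
        libs/
            mylib/
                test/

putting the sandbox's mylib/ directory on the include path makes #include <boost/mylib/widget.hpp> resolve exactly as it will once the headers land in a real Boost tree, so no include directives need to change on acceptance.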
But I would not even think of changing it now.
I haven't found that explanation of why anywhere - but of course there isn't an effective index to much of Boost ;-)
(I note that there are some projects in sandbox that fail to follow this layout, so they are going to get into trouble :-( A template might help?)
I agree with you. I honestly don't see why all libraries submitted for a Boost review are not in the sandbox using the recommended layout. It would certainly make it much easier for others to use, test, review, and get updates to such libraries when they occur. I think that this should be a Boost mandate: "if you want to submit your library for a Boost review you need to get sandbox access and put your library into it using the recommended directory structure." I find that much easier than getting some library from some URL on the Internet, or from the Boost Vault as a monolithic zip file, and unzipping and hoping that the directory structure corresponds to something I can try without wasting a great deal of time figuring out how to use said library.

-----Original Message-----
From: boost-bounces@lists.boost.org [mailto:boost-bounces@lists.boost.org] On Behalf Of Edward Diener
Sent: Monday, January 31, 2011 6:35 PM
To: boost@lists.boost.org
Subject: Re: [boost] Process discussions
(And - aside - why is it not:

Trunk/
    Jamfile
    mylib/
        boost/
        libs/

Why are there two 'extra' /mylib folders? Is this historical, or is/was there some logic to it?)
It's needed in the header directory so that the include paths for different libraries don't clash.
That seems a lot of extra subfolders for something that a modest amount of name control could avoid?
My understanding is that it replicates the Boost structure. So one can refer to one's header file paths as if one's top level directory is the main directory of a Boost distribution. Then there is no need to change header file includes when one's library is put inside a Boost distribution tree (or some SVN branch like 'trunk' which duplicates a Boost tree).
I think if you look at the structure you will see this pretty easily.
(I note that there are some projects in sandbox that fail to follow this layout, so they are going to get into trouble :-( A template might help?)
I agree with you. I honestly don't see why all libraries submitted for a Boost review are not in the sandbox using the recommended layout. It would certainly make it much easier for others to use, test, review, and get updates to such libraries when they occur. I think that this should be a Boost mandate: "if you want to submit your library for a Boost review you need to get sandbox access and put your library into it using the recommended directory structure." I find that much easier than getting some library from some URL on the Internet, or from the Boost Vault as a monolithic zip file, and unzipping and hoping that the directory structure corresponds to something I can try without wasting a great deal of time figuring out how to use said library.
Your explanations are fine - and I strongly support enforcement of the 'Boost Standard File Layout'.

But IMO this shows:

* Judging by a fair number of projects in the sandbox, Boost has failed to get across to wannabe authors the requirement/desirability of this structure.
* Boost search and/or indexing is ineffective. Clicking on "Search Boost" on the Boost page doesn't produce any helpful links (nor does refining the search to only www.boost.org). The FAQ doesn't help. The index is tiny for such a massive site, and is no help for this question.
* Eventually one might find http://www.boost.org/community/sandbox.html .
* This tells you *what* you need to know, but not *why* it is like that.
* Telling *why* often gives a big push to compliance.

Paul

---
Paul A. Bristow, Prizet Farmhouse, Kendal LA8 8AB UK
+44 1539 561830 07714330204
pbristow@hetp.u-net.com

On 2/1/2011 5:02 AM, Paul A. Bristow wrote:
-----Original Message-----
From: boost-bounces@lists.boost.org [mailto:boost-bounces@lists.boost.org] On Behalf Of Edward Diener
Sent: Monday, January 31, 2011 6:35 PM
To: boost@lists.boost.org
Subject: Re: [boost] Process discussions
(And - aside - why is it not:

Trunk/
    Jamfile
    mylib/
        boost/
        libs/

Why are there two 'extra' /mylib folders? Is this historical, or is/was there some logic to it?)
It's needed in the header directory so that the include paths for different libraries don't clash.
That seems a lot of extra subfolders for something that a modest amount of name control could avoid?
My understanding is that it replicates the Boost structure. So one can refer to one's header file paths as if one's top level directory is the main directory of a Boost distribution. Then there is no need to change header file includes when one's library is put inside a Boost distribution tree (or some SVN branch like 'trunk' which duplicates a Boost tree).
I think if you look at the structure you will see this pretty easily.
(I note that there are some projects in sandbox that fail to follow this layout, so they are going to get into trouble :-( A template might help?)
I agree with you. I honestly don't see why all libraries submitted for a Boost review are not in the sandbox using the recommended layout. It would certainly make it much easier for others to use, test, review, and get updates to such libraries when they occur. I think that this should be a Boost mandate: "if you want to submit your library for a Boost review you need to get sandbox access and put your library into it using the recommended directory structure." I find that much easier than getting some library from some URL on the Internet, or from the Boost Vault as a monolithic zip file, and unzipping and hoping that the directory structure corresponds to something I can try without wasting a great deal of time figuring out how to use said library.
Your explanations are fine - and I strongly support enforcement of the 'Boost Standard File Layout'.
But IMO this shows:
* Judging by a fair number of projects in the sandbox, Boost has failed to get across to wannabe authors the requirement/desirability of this structure.
It is possible that quite a number of sandbox libraries are there from before the time when the desired file layout was documented.
* Boost search and/or indexing is ineffective. Clicking on "Search Boost" on the Boost page doesn't produce any helpful links (nor does refining the search to only www.boost.org). The FAQ doesn't help. The index is tiny for such a massive site, and is no help for this question.
* Eventually one might find http://www.boost.org/community/sandbox.html .
* This tells you *what* you need to know, but not *why* it is like that.
* Telling *why* often gives a big push to compliance.
Agreed. If no one enforces the desired directory structure or, better yet, the idea that libraries up for review need to be in the sandbox using that directory structure, then people will not do it. I especially think the latter would be most effective. At one time there was the suggestion, I believe, that the Boost Vault's projects should be folded into the sandbox, and the Vault then done away with, but nothing was ever done in that direction.

Part of the problem with Boost is that there does not seem to be any document on who is empowered to get things done in various areas of Boost decision-making. We all generally know who the leaders among the developers of Boost are, but if you were to ask Boost developers who is empowered to make changes in various areas, I have the feeling that few would know, and I certainly do not.

On Tue, 1 Feb 2011 10:02:03 -0000 "Paul A. Bristow" <pbristow@hetp.u-net.com> wrote:
[...] such libraries when they occur. I think that this should be a Boost mandate: "if you want to submit your library for a Boost review you need to get sandbox access and put your library into it using the recommended directory structure." I find that much easier than getting some library from some URL on the Internet, or from the Boost Vault as a monolithic zip file, and unzipping and hoping that the directory structure corresponds to something I can try without wasting a great deal of time figuring out how to use said library.
Your explanations are fine - and I strongly support enforcement of the 'Boost Standard File Layout'.
But IMO this shows:
* Judging by a fair number of projects in the sandbox, Boost has failed to get across to wannabe authors the requirement/desirability of this structure. [...]
There's also this: <http://www.boost.org/development/requirements.html#Directory_structure>

When I was preparing XInt, that's what I saw, so that's how I laid out my directory structure:

xint
    build
    doc
    example
    src
    test

I didn't know that was wrong until you (Paul) told me.
* Eventually one might find http://www.boost.org/community/sandbox.html .
FWIW, I never thought to look at that one when setting up XInt for submission.
* This tells you *what* you need to know, but not *why* it is like that.
* Telling *why* often gives a big push to compliance.
The requirements.html that I linked to above includes a partial rationale. Perhaps there should be prominent links on both pages to one another: one on the requirements.html page in place of the sub-directory table, and one on the sandbox.html page telling people to review the requirements page for more details? Those would have helped me a great deal.

--
Chad Nelson
Oak Circle Software, Inc.

On 1/31/2011 9:05 PM, Paul A. Bristow wrote:
-----Original Message-----
From: boost-bounces@lists.boost.org [mailto:boost-bounces@lists.boost.org] On Behalf Of John Maddock
In short - big bang tool changes are disruptive.
+1 (Like Homer, my brain is full ;-)
+1. I think this is the main reason for my hesitation. Let us evolve, not revolutionize. If we want to go from here to there, let us do it a step at a time instead of in a giant leap.

Regards,

--
Joel de Guzman
http://www.boostpro.com
http://boost-spirit.com

On Mon, Jan 31, 2011 at 01:05:18PM -0000, Paul A. Bristow wrote:
I thought we were trying to move to a 'release early, release often' policy.
[ ... ]
I am also puzzled at the number of support requests which start with "I'm using Boost 1.3x".
I'd speculate that these two observations are connected. Plenty of folks will not be willing to update at the release rate of Boost. For example, I might select a Linux distribution (that contains Boost) and use it for 2-3 years before switching. At the current rate of 4 Boost releases/year, I could be 8-12 releases behind.
Should we not be saying "If you are more than n (=2?) releases behind - Tough! Upgrade before you ask again!")
I can sympathize with this reaction. But consider the user's point of view, too. -Steve

On 2011-01-31 08:05, Paul A. Bristow wrote:
I am also puzzled at the number of support requests which start with "I'm using Boost 1.3x". Should we not be saying "If you are more than n (=2?) releases behind - Tough! Upgrade before you ask again!")
That drags in yet another point that has already been discussed many times over recent years: as long as the Boost community makes it so hard for users to upgrade, I think it is very much to be expected that people pick a release, adjust their own platform to it, and then stay with it until they absolutely have to upgrade.

The point I'm driving at here is API and ABI compatibility. Without that, two distinct Boost releases have to be considered as two mostly unrelated products, at least in environments with strict testing and validation rules.

I don't think this is the right context to discuss this controversial topic, but questions such as yours seem to suggest a lack of awareness of these kinds of needs. I understand that for developers it's more convenient to always focus on new features instead of backward compatibility, and I'm not bringing the topic up to judge. I'm just trying to raise awareness of these concerns.

FWIW,
Stefan

--
...ich hab' noch einen Koffer in Berlin...

At Tue, 01 Feb 2011 18:38:03 -0500, Stefan Seefeld wrote:
The point I'm driving at here is API and ABI compatibility. Without that, two distinct Boost releases have to be considered as two mostly unrelated products, at least in environments with strict testing and validation rules.
I don't think this is the right context to discuss this controversial topic...
I understand that for developers it's more convenient to always focus on new features instead of backward compatibility, and I'm not bringing the topic up to judge. I'm just trying to raise awareness of these concerns.
+1. I hope we can address that topic once we've been modularized, which I consider a higher priority.

--
Dave Abrahams
BoostPro Computing
http://www.boostpro.com

On 31 January 2011 10:10, John Maddock <boost.regex@virgin.net> wrote:
* The only tool comment I have is that SVN is awfully slow for big merges (Math lib docs for example); I probably just need to learn to use the tool better though.
Maybe we could work on making boostbook generate more consistent output. I'm not sure how much of a difference that would make. Alternatively, you could just not check in the documentation, and put the development and release version somewhere convenient.
* OK, I have one more tool comment :-) When we changed from CVS to SVN I suspect I lost about a month of "Boost time", changing over repositories, figuring out how the heck to use this new tool, etc. It *was* worth it in the end, but it wasn't pleasant at the time. In short - big bang tool changes are disruptive.
Git is probably more disruptive than most. It's very quirky.
* I think we could organize the testing more efficiently for faster turnaround and better integration testing, and much to my surprise I'm coming round to Robert Ramey's suggestion that we reorganize testing on a library-by-library basis, with each library tested against the current release/stable branch.
I mostly agree. But I'm not sure how workable Robert's suggestion is; sometimes we need to make changes to more than one library at the same time (ah, sorry, you say that later, and it'll take me too long to redo my response).
* I think the release branch is closed for stabilization for too long, and that betas are too short.
You might be right about this. By the way, I'm thinking about how to have better website support for the beta. We really need the beta information to appear on the main site during the beta, but at the moment it's hard to do that without 'announcing' the final release.
Here's a concrete suggestion for how the testing workflow might work:
* Test machine pulls changes for lib X from version control (whatever tool that is).
Could be from a branch so that we could use a single branch for multiple libraries.
* Iff there are changes (either to lib X or to release), only then run the tests for that library against current release branch.
Sometimes we also need to test dependent libraries. As you know, changes I make to unordered can cause failures in tr1. But maybe it's acceptable if they only show up after integration (which is often the case at the moment).
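A library-by-library test driver could handle that with a simple reverse-dependency map - just a sketch in Python, and the map entries are made up:

    # Made-up reverse-dependency map: a change to 'unordered' should also
    # trigger re-testing of 'tr1', which builds on top of it.
    dependents = {"unordered": ["tr1"]}

    def libraries_to_test(changed):
        """Expand the set of changed libraries with their known dependents."""
        to_test = set(changed)
        for lib in changed:
            to_test.update(dependents.get(lib, []))
        return sorted(to_test)

    print(libraries_to_test(["unordered"]))  # prints ['tr1', 'unordered']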
* The tester's machine builds its own test results pages - ideally these should go into some form of version control as well so we can roll back and see what broke when.
* When a tester first starts testing they would add a short meta-description to a script, and run the script to generate the test results index pages, i.e. there would be no need for a separate machine collecting and processing the results.
* The test script should run much of the above *in parallel* if requested.
Is anyone willing to work on something like this? Everyone seems a bit scared of the testing scripts, although I think Rene is working on a new reporting system.
Trunk/
    Jamfile    // Facilitates integration testing by pointing to other libraries in Release.
    MyLib/
        libs/mylib/
        boost/mylib/
I think this is a good idea. Would probably need some way to weave together the headers for easy use (this could be in the release scripts or as part of installation).
And yes, Trunk/MyLib could be an alias for some DVCS somewhere "out there" - I don't care; it's simply not part of the suggestion. It would work with what we have now or some omnipotent VCS of the future.
Exactly right.
How about if, once the release is frozen, we branch the release branch to "Version-whatever" and then immediately reopen release. Late changes could be added to "Version-whatever" via special pleading as before, but normal Boost development (including merges to release) would continue unabated. That would also allow for a slightly longer beta test time before release.
I do like this, but the problem is how we test these late changes. Maybe we just accept that on the less popular platforms they're tested on a slightly different version.

Daniel

AMDG

On 1/31/2011 5:12 AM, Daniel James wrote:
On 31 January 2011 10:10, John Maddock <boost.regex@virgin.net> wrote:
* The tester's machine builds its own test results pages - ideally these should go into some form of version control as well so we can roll back and see what broke when.
* When a tester first starts testing they would add a short meta-description to a script, and run the script to generate the test results index pages, i.e. there would be no need for a separate machine collecting and processing the results.
* The test script should run much of the above *in parallel* if requested.
Is anyone willing to work on something like this? Everyone seems a bit scared of the testing scripts, although I think Rene is working on a new reporting system.
I suspect that this is largely a matter of unfamiliarity. I translated most of the report generation xsl scripts into C++ last year, and I didn't find them that hard to understand. In Christ, Steven Watanabe

Steven Watanabe wrote:
AMDG
On 1/31/2011 5:12 AM, Daniel James wrote:
On 31 January 2011 10:10, John Maddock <boost.regex@virgin.net> wrote:
* The tester's machine builds its own test results pages - ideally these should go into some form of version control as well so we can roll back and see what broke when.
* When a tester first starts testing they would add a short meta-description to a script, and run the script to generate the test results index pages, i.e. there would be no need for a separate machine collecting and processing the results.
* The test script should run much of the above *in parallel* if requested.
Is anyone willing to work on something like this? Everyone seems a bit scared of the testing scripts, although I think Rene is working on a new reporting system.
I suspect that this is largely a matter of unfamiliarity. I translated most of the report generation xsl scripts into C++ last year, and I didn't find them that hard to understand.
Oh -- but are we still using XSLT? Do you think we can easily switch to your C++ translation? -- Vladimir Prus Mentor Graphics +7 (812) 677-68-40

On 1/31/2011 10:40 AM, Vladimir Prus wrote:
Steven Watanabe wrote:
AMDG
On 1/31/2011 5:12 AM, Daniel James wrote:
On 31 January 2011 10:10, John Maddock <boost.regex@virgin.net> wrote:
* The tester's machine builds its own test results pages - ideally these should go into some form of version control as well so we can roll back and see what broke when.
* When a tester first starts testing they would add a short meta-description to a script, and run the script to generate the test results index pages, i.e. there would be no need for a separate machine collecting and processing the results.
* The test script should run much of the above *in parallel* if requested.
Is anyone willing to work on something like this? Everyone seems a bit scared of the testing scripts, although I think Rene is working on a new reporting system.
I suspect that this is largely a matter of unfamiliarity. I translated most of the report generation xsl scripts into C++ last year, and I didn't find them that hard to understand.
Oh -- but are we still using XSLT? Do you think we can easily switch to your C++ translation?
Yes, as I've mentioned offhandedly recently... I'm working on a replacement for the reporting which is scalable, easy to use, etc. Obviously not written in XSLT, but written in Python for Google's App Engine cloud service.

--
-- Grafik - Don't Assume Anything
-- Redshift Software, Inc. - http://redshift-software.com
-- rrivera/acm.org (msn) - grafik/redshift-software.com
-- 102708583/icq - grafikrobot/aim,yahoo,skype,efnet,gmail

Rene Rivera wrote:
On 1/31/2011 10:40 AM, Vladimir Prus wrote:
Steven Watanabe wrote:
AMDG
On 1/31/2011 5:12 AM, Daniel James wrote:
On 31 January 2011 10:10, John Maddock <boost.regex@virgin.net> wrote:
* The tester's machine builds its own test results pages - ideally these should go into some form of version control as well so we can roll back and see what broke when.
* When a tester first starts testing they would add a short meta-description to a script, and run the script to generate the test results index pages, i.e. there would be no need for a separate machine collecting and processing the results.
* The test script should run much of the above *in parallel* if requested.
Is anyone willing to work on something like this? Everyone seems a bit scared of the testing scripts, although I think Rene is working on a new reporting system.
I suspect that this is largely a matter of unfamiliarity. I translated most of the report generation xsl scripts into C++ last year, and I didn't find them that hard to understand.
Oh -- but are we still using XSLT? Do you think we can easily switch to your C++ translation?
Yes, as I've mentioned offhandedly recently... I'm working on a replacement for the reporting which is scalable, easy to use, etc. Obviously not written in XSLT, but written in Python for Google's App Engine cloud service.
Do you need help? - Volodya -- Vladimir Prus Mentor Graphics +7 (812) 677-68-40

On 1/31/2011 10:58 AM, Vladimir Prus wrote:
Rene Rivera wrote:
On 1/31/2011 10:40 AM, Vladimir Prus wrote:
Steven Watanabe wrote:
AMDG
On 1/31/2011 5:12 AM, Daniel James wrote:
On 31 January 2011 10:10, John Maddock <boost.regex@virgin.net> wrote:
* The tester's machine builds its own test results pages - ideally these should go into some form of version control as well so we can roll back and see what broke when.
* When a tester first starts testing they would add a short meta-description to a script, and run the script to generate the test results index pages, i.e. there would be no need for a separate machine collecting and processing the results.
* The test script should run much of the above *in parallel* if requested.
Is anyone willing to work on something like this? Everyone seems a bit scared of the testing scripts, although I think Rene is working on a new reporting system.
I suspect that this is largely a matter of unfamiliarity. I translated most of the report generation xsl scripts into C++ last year, and I didn't find them that hard to understand.
Oh -- but are we still using XSLT? Do you think we can easily switch to your C++ translation?
Yes, as I've mentioned offhandedly recently... I'm working on a replacement for the reporting which is scalable, easy to use, etc. Obviously not written in XSLT, but written in Python for Google's App Engine cloud service.
Do you need help?
I will at some point... But not yet. Note, the reason I'm being cagey about this is that, in order to support the effort, I'm doing it as a commercial endeavor, i.e. so I can hopefully get enough income to pay for the Google server usage. Although parts of it will be open-source, which is what I'll need help with soon.

--
-- Grafik - Don't Assume Anything
-- Redshift Software, Inc. - http://redshift-software.com
-- rrivera/acm.org (msn) - grafik/redshift-software.com
-- 102708583/icq - grafikrobot/aim,yahoo,skype,efnet,gmail

AMDG

On 1/31/2011 8:40 AM, Vladimir Prus wrote:
Steven Watanabe wrote:
On 1/31/2011 5:12 AM, Daniel James wrote:
On 31 January 2011 10:10, John Maddock <boost.regex@virgin.net> wrote:
* The tester's machine builds its own test results pages - ideally these should go into some form of version control as well so we can roll back and see what broke when.
* When a tester first starts testing they would add a short meta-description to a script, and run the script to generate the test results index pages, i.e. there would be no need for a separate machine collecting and processing the results.
* The test script should run much of the above *in parallel* if requested.
Is anyone willing to work on something like this? Everyone seems a bit scared of the testing scripts, although I think Rene is working on a new reporting system.
I suspect that this is largely a matter of unfamiliarity. I translated most of the report generation xsl scripts into C++ last year, and I didn't find them that hard to understand.
Oh -- but are we still using XSLT? Do you think we can easily switch to your C++ translation?
It would take more work to get it into a usable state. The existing tools are a combination of XSLT and Python, and I significantly restructured the way the XSL part works, because the limitations of XSL forced them to do a number of things that didn't make a lot of sense when using a real language. I was working on integrating everything into a single executable, but I never finished. Since Rene is also working on a rewrite, I don't know that finishing this would be worth the effort.

From a performance standpoint, the main bottleneck seemed to be writing out so many small files. Writing directly to a zip archive improved the performance by an order of magnitude. In Christ, Steven Watanabe
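For illustration, the zip-archive trick Steven describes might look like the following Python sketch. It is only a sketch of the technique, not the actual report-generation code, and the function and file names are hypothetical:

    # Minimal sketch: write report pages straight into a zip archive
    # instead of thousands of small files -- the change credited above
    # with an order-of-magnitude speedup.  Hypothetical names throughout.
    import zipfile

    def write_report_archive(pages, archive_path):
        """pages: iterable of (relative_name, html_text) pairs."""
        with zipfile.ZipFile(archive_path, "w", zipfile.ZIP_DEFLATED) as zf:
            for name, html in pages:
                # One in-archive entry per page; avoids a filesystem
                # open/write/close round trip for every small file.
                zf.writestr(name, html)

    # Example:
    # write_report_archive([("regex/summary.html", "<html>...</html>")],
    #                      "results.zip")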

Daniel James wrote:
On 31 January 2011 10:10, John Maddock <boost.regex@virgin.net> wrote:
* The only tool comment I have is that SVN is awfully slow for big merges (Math lib docs for example); I probably need to find a better way of using the tool, though.
Maybe we could work on making boostbook generate more consistent output. I'm not sure how much of a difference that would make.
Alternatively, you could just not check in the documentation, and put the development and release version somewhere convenient.
It would be really great if the ever-changing global unique identifiers in boostbook-generated output could be made to change less frequently, no matter which version control system is used, because changing files that haven't really changed makes it more difficult to find out which changes have actually happened. Regards, Thomas

AMDG On 1/31/2011 11:39 AM, Thomas Klimpel wrote:
Daniel James wrote:
On 31 January 2011 10:10, John Maddock <boost.regex@virgin.net> wrote:
* The only tool comment I have is that SVN is awfully slow for big merges (Math lib docs for example); I probably need to find a better way of using the tool, though.
Maybe we could work on making boostbook generate more consistent output. I'm not sure how much of a difference that would make.
Alternatively, you could just not check in the documentation, and put the development and release version somewhere convenient.
It would be really great if the ever-changing global unique identifiers in boostbook-generated output could be made to change less frequently, no matter which version control system is used, because changing files that haven't really changed makes it more difficult to find out which changes have actually happened.
It shouldn't require huge changes. For the most part, the anchor names are based on the name of whatever they're for. There are just a few cases that aren't handled. In Christ, Steven Watanabe

On 31 January 2011 21:27, Steven Watanabe <watanabesj@gmail.com> wrote:
It shouldn't require huge changes. For the most part, the anchor names are based on the name of whatever they're for. There are just a few cases that aren't handled.
I had a look at the math docs and a lot of the anchors are generated for bridgeheads (headings). So I tried changing quickbook to give these ids, and it made a huge difference. The change is on the increasingly inappropriately named branch at http://svn.boost.org/svn/boost/branches/quickbook-filenames/ I'll probably merge to trunk soon after the release. I'll have a look into doing the same for other elements. Boostbook reference documentation will still be a problem, but most of the documentation that's checked into boost doesn't use that. Daniel
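The general idea behind content-derived ids can be shown in a few lines of Python. This is only a sketch of the principle, not quickbook's actual id-generation algorithm; the function name and id scheme are hypothetical:

    # Sketch: derive an anchor id from the section path and heading text,
    # so rebuilding unchanged docs yields byte-identical ids (unlike
    # serial-numbered anchors such as "id1234").  Not quickbook's real code.
    import re

    def heading_id(section_path, title):
        # Lowercase the title and collapse non-alphanumerics to underscores.
        slug = re.sub(r"[^a-z0-9]+", "_", title.lower()).strip("_")
        return "%s.%s" % (section_path, slug)

    # heading_id("math.special_functions", "Bessel Functions")
    #   -> "math.special_functions.bessel_functions"
    # Same input, same id -- so diffs of generated output show only
    # real changes.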

It shouldn't require huge changes. For the most part, the anchor names are based on the name of whatever they're for. There are just a few cases that aren't handled.
I had a look at the math docs and a lot of the anchors are generated for bridgeheads (headings). So I tried changing quickbook to give these ids and it made a huge difference. The change is on the increasingly inappropriately named branch at http://svn.boost.org/svn/boost/branches/quickbook-filenames/
Thanks Daniel! John.

John Maddock wrote:
It shouldn't require huge changes. For the most part, the anchor names are based on the name of whatever they're for. There are just a few cases that aren't handled.
I had a look at the math docs and a lot of the anchors are generated for bridgeheads (headings). So I tried changing quickbook to give these ids and it made a huge difference. The change is on the increasingly inappropriately named branch at http://svn.boost.org/svn/boost/branches/quickbook-filenames/
Thanks Daniel!
Paul

PS I note another reason to perhaps use sections more and headings less: the effect on indexing. (If I haven't misunderstood - again -) with John's auto-indexing (or any other indexing system, for that matter), when viewing HTML the index term only gets the user to the right *section*. If you have pages and pages of stuff using many headings (rather than splitting into sections), then the user may have to search through many pages past many headings to find what is sought. This is easy enough using the web browser's find, but it's a hassle, and risks finding items with the same word but not the item to which the index term referred. (With PDF native indexing, the index term hyperlink gets to the right *page*, so it's not such an issue. But we should be structuring for both HTML and PDF.)

Of course, sections also appear in the Table of Contents, which will become bigger, perhaps even bloated, if you have too many (sub)sections. Another issue is that I find it too easy to get my section and endsect tags mismatched. (And I find that the diagnosis of mismatched brackets isn't always user-friendly.) More sections will give me even more chance of getting in a muddle!
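Paul's last complaint, mismatched [section]/[endsect] pairs, is the sort of thing a trivial pre-build check could catch. A rough Python sketch; this is a hypothetical standalone checker, not a tool quickbook actually ships, and the pattern is naive about brackets nested inside titles:

    import re, sys

    def check_sections(path):
        stack = []
        for lineno, line in enumerate(open(path), 1):
            for tok in re.finditer(r"\[section[:\s]([^\]]*)\]|\[endsect\]", line):
                if tok.group(0).startswith("[endsect"):
                    if not stack:
                        print("%s:%d: [endsect] with no open section" % (path, lineno))
                    else:
                        stack.pop()
                else:
                    stack.append((tok.group(1).strip(), lineno))
        # Anything left open never got its [endsect].
        for title, lineno in stack:
            print("%s:%d: section '%s' never closed" % (path, lineno, title))

    if __name__ == "__main__":
        for p in sys.argv[1:]:
            check_sections(p)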

John Maddock wrote:
May I suggest that, for some time, we outright ban freeform discussion about process and instead restrict it to threads started by a Boost developer saying this: "I am maintainer of X, and had N commits and M trac changes in the last year. I most hate P1, P2 and P3. I would propose that we use T1, T2, and T3 to fix that". Then everybody could join in to suggest better ways of fixing P1, P2 and P3 -- without making up other supposed problems.
OK, let me give my pet hates:
* The only tool comment I have is that SVN is awfully slow for big merges (Math lib docs for example); I probably need to find a better way of using the tool, though.
+1
* I think we could organize the testing more efficiently for faster turnaround and better integration testing, and much to my surprise I'm coming round to Robert Ramey's suggestion that we reorganize testing on a library-by-library basis, with each library tested against the current release/stable branch.
+1, of course.
* I think the release branch is closed for stabilization for too long, and that betas are too short.
I would hope that implementing the above would make this a non-issue.
Here's a concrete suggestion for how the testing workflow might work:
* Test machine pulls changes for lib X from version control (whatever tool that is).
* Iff there are changes (either to lib X or to release), only then run the tests for that library against the current release branch.
* The tester's machine builds its own test results pages - ideally these should go into some form of version control as well, so we can roll back and see what broke when.
* When a tester first starts testing they would add a short meta-description to a script, and run the script to generate the test results index pages, i.e. there would be no need for a separate machine collecting and processing the results.
* The test script should run much of the above *in parallel* if requested.
The aim would be to speed up the testing process by reducing the cycle time (most libraries, most of the time, don't need re-testing).
+1 to all of the above.
The version control system used would be a tiny part of the above changes; the open question is whether we would need to reorganize Trunk more like the sandbox, on a library-by-library basis, in order to facilitate the new testing script, i.e. a directory structure more like:
Trunk/
  Jamfile          // Facilitates integration testing by pointing to other libraries in Release.
  MyLib/
    libs/mylib/
    boost/mylib/
And yes, Trunk/MyLib could be an alias for some DVCS somewhere "out there" - I don't care; it's simply not part of the suggestion. It would work with what we have now or some omnipotent VCS of the future.
Let's divide this into three questions to be considered separately:
a) adjusting the testing script to the above model
b) moving from one VCS to an alternative one
c) restructuring directories and/or namespaces (modularization)
And if implemented, do them one at a time.
I have one concern about this model - from time to time my stuff depends upon some bleeding-edge feature from another library or Boost tool - sometimes, too, development of that new feature goes hand in hand with my usage - which is to say, it's developed specifically to handle problem X, and the only way to really shake down the new feature is to put it to work. For example, Boost.Build's "check-target-builds" rule was developed for and tested with Boost.Regex's ICU usage requirements. Development of Boost.Build and Regex went hand in hand. I'm not sure how we deal with this in the new model.
The way I do this on my own machine is to use a release tree. I switch the directories of those libraries which are of interest to me to the trunk. Usually this is just the serialization libraries, but occasionally I might do it with some other library.
~~~~~
Release process:
How about this: once the release is frozen, we branch the release branch to "Version-whatever" and then immediately reopen release. Late changes could be added to "Version-whatever" via special pleading as before, but normal Boost development (including merges to release) would continue unabated. That would also allow for a slightly longer beta test time before release.
Fine by me, but I would hope that the need for this would diminish with an update to the testing regimen. Robert Ramey
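To make the proposed cycle concrete, here is a bare-bones sketch of the first two steps of John's list, assuming Subversion and Boost.Build purely for illustration. The paths, URLs, and command lines are hypothetical placeholders, not a real test driver:

    import subprocess

    def last_changed_rev(url):
        # 'svn info' reports a "Last Changed Rev:" line for any URL or path.
        out = subprocess.check_output(["svn", "info", url]).decode()
        for line in out.splitlines():
            if line.startswith("Last Changed Rev:"):
                return int(line.split(":")[1])

    def test_library(lib, release_url, working_copy, last_tested_rev):
        lib_rev = last_changed_rev("%s/libs/%s" % (release_url, lib))
        rel_rev = last_changed_rev(release_url)
        newest = max(lib_rev, rel_rev)
        # Only re-test when the library or the release branch changed.
        if newest == last_tested_rev:
            return last_tested_rev
        subprocess.check_call(["svn", "update"], cwd=working_copy)
        subprocess.check_call(["bjam", "libs/%s/test" % lib], cwd=working_copy)
        # Per the proposal, the results pages would be generated and
        # committed to version control here.
        return newest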

John Maddock wrote:
May I suggest that, for some time, we outright ban freeform discussion about process and instead restrict it to threads started by a Boost developer saying this: "I am maintainer of X, and had N commits and M trac changes in the last year. I most hate P1, P2 and P3. I would propose that we use T1, T2, and T3 to fix that". Then everybody could join in to suggest better ways of fixing P1, P2 and P3 -- without making up other supposed problems.
OK, let me give my pet hates:
* The only tool comment I have is that SVN is awfully slow for big merges (Math lib docs for example); I probably need to find a better way of using the tool, though.
I can't shake the feeling that the SVN performance is specific to our instance; at least, other SVN servers I use feel faster. It would be worthwhile to experiment with different setups, including using svn+ssh instead of https, or using the FSFS repository format on the server (if it currently uses BDB). Alas, I am not sure anybody is in a position to try this.
* I think we could organize the testing more efficiently for faster turnaround and better integration testing, and much to my surprise I'm coming round to Robert Ramey's suggestion that we reorganize testing on a library-by-library basis, with each library tested against the current release/stable branch.
* Test machine pulls changes for lib X from version control (whatever tool that is).
* Iff there are changes (either to lib X or to release), only then run the tests for that library against the current release branch.
* The tester's machine builds its own test results pages - ideally these should go into some form of version control as well, so we can roll back and see what broke when.
* When a tester first starts testing they would add a short meta-description to a script, and run the script to generate the test results index pages, i.e. there would be no need for a separate machine collecting and processing the results.
* The test script should run much of the above *in parallel* if requested.
The aim would be to speed up the testing process by reducing the cycle time (most libraries, most of the time, don't need re-testing).
I suppose an alternative approach would be to just make incremental testing work. Boost.Build, obviously, can rebuild and rerun just the necessary tests, but the regression framework used to have issues, like reporting stale tests. I think it should give the same improvement in testing time, and I'm not really sure which approach is harder to implement.
I have one concern about this model - from time to time my stuff depends upon some bleeding-edge feature from another library or Boost tool - sometimes, too, development of that new feature goes hand in hand with my usage - which is to say, it's developed specifically to handle problem X, and the only way to really shake down the new feature is to put it to work. For example, Boost.Build's "check-target-builds" rule was developed for and tested with Boost.Regex's ICU usage requirements. Development of Boost.Build and Regex went hand in hand. I'm not sure how we deal with this in the new model.
That's why I prefer the 'test whole trunk, incrementally' model to the 'test each library individually, against the last release' model. - Volodya -- Vladimir Prus Mentor Graphics +7 (812) 677-68-40
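As a side note on the FSFS question above: given shell access to the repository directory, the backend and on-disk layout can be read off the repository itself. A small Python sketch; the db/fs-type file is standard SVN layout, but treat the "layout" line detail as an assumption:

    import os

    def describe_repo(repo_path):
        # Every SVN repository records its backend in db/fs-type.
        fs_type = open(os.path.join(repo_path, "db", "fs-type")).read().strip()
        print("backend: " + fs_type)               # "fsfs" or "bdb"
        if fs_type == "fsfs":
            # Newer FSFS formats note their layout in db/format,
            # e.g. "layout sharded 1000"; older ones have no such line.
            for line in open(os.path.join(repo_path, "db", "format")):
                if line.startswith("layout"):
                    print(line.strip())
                    break
            else:
                print("layout: linear (old, un-sharded format)")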

On 2/1/2011 3:15 AM, Vladimir Prus wrote:
John Maddock wrote:
May I suggest that, for some time, we outright ban freeform discussion about process and instead restrict it to threads started by a Boost developer saying this: "I am maintainer of X, and had N commits and M trac changes in the last year. I most hate P1, P2 and P3. I would propose that we use T1, T2, and T3 to fix that". Then everybody could join in to suggest better ways of fixing P1, P2 and P3 -- without making up other supposed problems.
OK, let me give my pet hates:
* The only tool comment I have is that SVN is awfully slow for big merges (Math lib docs for example); I probably need to find a better way of using the tool, though.
I can't shake the feeling that the SVN performance is specific to our instance; at least, other SVN servers I use feel faster. It would be worthwhile to experiment with different setups, including using svn+ssh instead of https, or using the FSFS repository format on the server (if it currently uses BDB).
There are various problems with SVN, like using HTTPS, which is known to be unstable. The contention between Trac and SVN is also problematic: we have a heavily used Trac, and it conflicts with regular SVN use frequently. We do use the FSFS repo format, but it's not the latest sharded structure. Both the HTTPS and un-sharded aspects are something I intend to change, but I'm giving priority at the moment to the test reporting problems, since they seem the most critical.
Alas, I am not sure anybody is in a position to try this.
As for trying the plain SVN server configuration... I'm also not sure if we can try it (and obviously I don't have time at the moment), as I don't know what firewall changes, or server management changes, might need to happen. And that's something I can't do.
* I think we could organize the testing more efficiently for faster turnaround and better integration testing, and much to my surprise I'm coming round to Robert Ramey's suggestion that we reorganize testing on a library-by-library basis, with each library tested against the current release/stable branch.
* Test machine pulls changes for lib X from version control (whatever tool that is).
* Iff there are changes (either to lib X or to release), only then run the tests for that library against the current release branch.
* The tester's machine builds its own test results pages - ideally these should go into some form of version control as well, so we can roll back and see what broke when.
* When a tester first starts testing they would add a short meta-description to a script, and run the script to generate the test results index pages, i.e. there would be no need for a separate machine collecting and processing the results.
* The test script should run much of the above *in parallel* if requested.
The aim would be to speed up the testing process by reducing the cycle time (most libraries, most of the time, don't need re-testing).
I suppose an alternative approach would be to just make incremental testing work. Boost.Build, obviously, can rebuild and rerun just the necessary tests, but the regression framework used to have issues, like reporting stale tests. I think it should give the same improvement in testing time, and I'm not really sure which approach is harder to implement.
Implementing the incremental testing is the easiest, assuming we are reimplementing the test reporting. And it's a major reason why I'm reimplementing the test reporting :-) The fix will be possible because the new reporting will not rely on process_jam_log to get information, but will instead use the BBv2 XML output directly, which has far more accurate information about the testing results.
I have one concern about this model - from time to time my stuff depends upon some bleeding-edge feature from another library or Boost tool - sometimes, too, development of that new feature goes hand in hand with my usage - which is to say, it's developed specifically to handle problem X, and the only way to really shake down the new feature is to put it to work. For example, Boost.Build's "check-target-builds" rule was developed for and tested with Boost.Regex's ICU usage requirements. Development of Boost.Build and Regex went hand in hand. I'm not sure how we deal with this in the new model.
That's why I prefer the 'test whole trunk, incrementally' model to the 'test each library individually, against the last release' model.
I tend to prefer both. That is, I don't think we can live without full trunk testing. But we also want the partial-integration testing that single-library-against-release provides. I'm perfectly fine with the dependencies of those release-tested libraries not being available, and having them fail the pre-integration, as that would clearly show which parts that library depends on. That may shock you ;-) But I'd rather see failures that show likely integration hot-spots than try to be ultra-smart about making a fully working, partially integrated release. So, to summarize, I'd like to see testing:

1. incremental full trunk
2. single-library against full release (incremental if tester disk space allows it)
3. incremental fully integrated release

Note, "trunk" and "release" are just shorthands for the corresponding concepts in our current procedures. So adjust for possible future procedures as needed ;-) -- -- Grafik - Don't Assume Anything -- Redshift Software, Inc. - http://redshift-software.com -- rrivera/acm.org (msn) - grafik/redshift-software.com -- 102708583/icq - grafikrobot/aim,yahoo,skype,efnet,gmail

At Tue, 01 Feb 2011 10:12:45 -0600, Rene Rivera wrote:
That is, I don't think we can live without full trunk testing.
I'm curious why not. I'm fairly sure I don't want any resources wasted on it for my libraries. -- Dave Abrahams BoostPro Computing http://www.boostpro.com

On 2/1/2011 7:21 PM, Dave Abrahams wrote:
At Tue, 01 Feb 2011 10:12:45 -0600, Rene Rivera wrote:
That is, I don't think we can live without full trunk testing.
I'm curious why not. I'm fairly sure I don't want any resources wasted on it for my libraries.
Because, in the three testing scenarios I mentioned, it would be the only one to give you fully integrated testing without being in the release. Of course, full dependency-integrated testing of an individual lib with a release base could be a substitute for full trunk testing. But for some components, full dependency-integrated testing might devolve back to close to full trunk testing. But again, this all depends on how close or far future procedures are to the current ones. -- -- Grafik - Don't Assume Anything -- Redshift Software, Inc. - http://redshift-software.com -- rrivera/acm.org (msn) - grafik/redshift-software.com -- 102708583/icq - grafikrobot/aim,yahoo,skype,efnet,gmail

Rene Rivera wrote:
On 2/1/2011 7:21 PM, Dave Abrahams wrote:
At Tue, 01 Feb 2011 10:12:45 -0600, Rene Rivera wrote:
That is, I don't think we can live without full trunk testing.
I'm curious why not. I'm fairly sure I don't want any resources wasted on it for my libraries.
Because, in the three testing scenarios I mentioned, it would be the only one to give you fully integrated testing without being in the release. Of course, full dependency-integrated testing of an individual lib with a release base could be a substitute for full trunk testing. But for some components, full dependency-integrated testing might devolve back to close to full trunk testing. But again, this all depends on how close or far future procedures are to the current ones.
I would think that the decision to test any dependent libraries could be left to the tester. He could decide to do it if he had the resources - otherwise just test the library recently merged. Robert Ramey

On Tue, Feb 1, 2011 at 8:57 PM, Rene Rivera <grafikrobot@gmail.com> wrote:
On 2/1/2011 7:21 PM, Dave Abrahams wrote:
At Tue, 01 Feb 2011 10:12:45 -0600, Rene Rivera wrote:
That is, I don't think we can live without full trunk testing.
I'm curious why not. I'm fairly sure I don't want any resources wasted on it for my libraries.
Because, in the three testing scenarios I mentioned, it would be the only one to give you fully integrated testing without being in the release.
What does "fully integrated testing" mean, though [I mean that in two senses: 1. what's your definition, and 2. rhetorically, does it have any meaning]? You're not testing against any past or future released state of other libraries. Someone could have checked in minimal changes to the release branch and be off exploring some grand rewrite on trunk for which there's no intention that it be in the next release.
Of course, full dependency-integrated testing of an individual lib with a release base could be a substitute for full trunk testing. But for some components, full dependency-integrated testing might devolve back to close to full trunk testing. But again, this all depends on how close or far future procedures are to the current ones.
Sorry, I guess you lost me. Here's what I want:

Each change I make is tested against the previous released state of the rest of Boost (so I'm not trying to manage a moving target), unless otherwise specified. I might specify otherwise if my new work depends on an upcoming-but-not-yet-released version of another library, for example.

I think Boost release managers would want the latest releasable state of all libraries tested against one another, unless otherwise specified. In today's world that corresponds to testing the release branch. Whenever that is "all green," they can spin the release. They might specify otherwise, for example, if they had to roll back to an earlier released or releasable state of one of the libraries. -- Dave Abrahams BoostPro Computing http://www.boostpro.com
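One way to encode the "unless otherwise specified" part, so that both Dave's default and John's co-development case fit, is a small per-library manifest that the test driver consults when picking each dependency's baseline. This is entirely hypothetical; no such file exists in Boost today:

    # Default: test against the last released state of everything else.
    DEFAULT_BASELINE = "release"

    OVERRIDES = {
        # regex is co-developed with a not-yet-released Boost.Build
        # feature (the check-target-builds case above), so it opts
        # its build-tool dependency into trunk.
        "regex": {"build": "trunk"},
    }

    def baseline_for(lib, component):
        return OVERRIDES.get(lib, {}).get(component, DEFAULT_BASELINE)

    # baseline_for("regex", "build")  -> "trunk"
    # baseline_for("regex", "thread") -> "release"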

Dave Abrahams wrote:
On Tue, Feb 1, 2011 at 8:57 PM, Rene Rivera <grafikrobot@gmail.com>
Each change I make is tested against the previous released state of the rest of Boost (so I'm not trying to manage a moving target), unless otherwise specified. I might specify otherwise if my new work depends on an upcoming-but-not-yet-released version of another library, for example.
I test on my own machine against the release branch. In other words, I'm testing against the NEXT release. This means that I update my release tree once in a while. This might address the concern that some have that we're not really testing the integrated whole. If everyone did this, the release would always be (almost) ready to deploy. Robert Ramey

Dave Abrahams wrote:
At Tue, 01 Feb 2011 10:12:45 -0600, Rene Rivera wrote:
That is, I don't think we can live without full trunk testing.
I'm curious why not. I'm fairly sure I don't want any resources wasted on it for my libraries.
There are times I would like to add more tests, but am reluctant to do so because of the increased burden on the testing infrastructure. Robert Ramey

At Mon, 31 Jan 2011 11:31:35 +0300, Vladimir Prus wrote:
For a quick experiment, I tried to assess whether the discussion actually reflects the needs of Boost developers, so I created a table of Boost developers sorted by the number of commits in 2010. It is here:
It seems that the 5 top Boost committers did not participate much in recent discussions. And going down the list, it seems like many of the active developers did not say anything, while most of the discussion is fueled by folks who don't commit much.
Yes, my commit rate is way down, in large part because I find the current structure such a drag to deal with.
May I suggest that, for some time, we outright ban freeform discussion about process,
Um, no. That's not the sort of thing we ban on this list. -- Dave Abrahams BoostPro Computing http://www.boostpro.com

Vladimir Prus wrote:
Recently, we have had quite a lot of discussion about process. In true open-source spirit, it was a fairly open discussion, with everybody offering their perspectives and experience. However, while we surely learned many things, it does not seem like we're going anywhere.
For a quick experiment, I tried to assess whether the discussion actually reflects the needs of Boost developers, so I created a table of Boost developers sorted by the number of commits in 2010. It is here:
It seems that the 5 top Boost committers did not participate much in recent discussions. And going down the list, it seems like many of the active developers did not say anything, while most of the discussion is fueled by folks who don't commit much.
Of course, everybody can offer valuable thoughts, but if the goal is to fix things for Boost developers, it would make sense if developers said what needs fixing, as opposed to other people doing it for them.
May I suggest that, for some time, we outright ban freeform discussion about process and instead restrict it to threads started by a Boost developer saying this: "I am maintainer of X, and had N commits and M trac changes in the last year. I most hate P1, P2 and P3. I would propose that we use T1, T2, and T3 to fix that". Then everybody could join in to suggest better ways of fixing P1, P2 and P3 -- without making up other supposed problems.
Thoughts?
Trying to "manage" the discussion is a lot of work, and I don't think it's a good use of anyone's time. I think it would create a lot of unhappiness and have no practical benefit. As a practical matter, when I see a long thread, I just pick out posts from those from whom I've come to expect a concise and/or interesting post, and skip over the rest. I'm guessing that most of us do the same, so the thread is already "managed" from this perspective. Robert Ramey

Vladimir Prus wrote:
Recently, we have had quite a lot of discussion about process. In true open-source spirit, it was a fairly open discussion, with everybody offering their perspectives and experience. However, while we surely learned many things, it does not seem like we're going anywhere.
For a quick experiment, I tried to assess whether the discussion actually reflects the needs of Boost developers, so I created a table of Boost developers sorted by the number of commits in 2010. It is here:
It seems that the 5 top Boost committers did not participate much in recent discussions. And going down the list, it seems like many of the active developers did not say anything, while most of the discussion is fueled by folks who don't commit much.
Of course, everybody can offer valuable thoughts, but if the goal is to fix things for Boost developers, it would make sense if developers said what needs fixing, as opposed to other people doing it for them.
May I suggest that, for some time, we outright ban freeform discussion about process and instead restrict it to threads started by a Boost developer saying this: "I am maintainer of X, and had N commits and M trac changes in the last year. I most hate P1, P2 and P3. I would propose that we use T1, T2, and T3 to fix that". Then everybody could join in to suggest better ways of fixing P1, P2 and P3 -- without making up other supposed problems.
Thoughts?
I thought about this some more. I understand your point. Thinking about it, maybe there is something we CAN do. We currently have a review process with a review manager, etc. This has worked quite well for accepting libraries. The review manager's job is to lend some structure to the discussion, try to forge a consensus, weigh all the input (not necessarily equally), and arrive at a decision. I suggest that we engage in the same process structure for making a very large tool change. Robert Ramey

Robert Ramey wrote:
Vladimir Prus wrote:
Recently, we have had quite a lot of discussion about process. In true open-source spirit, it was a fairly open discussion, with everybody offering their perspectives and experience. However, while we surely learned many things, it does not seem like we're going anywhere.
For a quick experiment, I tried to assess whether the discussion actually reflects the needs of Boost developers, so I created a table of Boost developers sorted by the number of commits in 2010. It is here:
It seems that the 5 top Boost committers did not participate much in recent discussions. And going down the list, it seems like many of the active developers did not say anything, while most of the discussion is fueled by folks who don't commit much.
Of course, everybody can offer valuable thoughts, but if the goal is to fix things for Boost developers, it would make sense if developers said what needs fixing, as opposed to other people doing it for them.
May I suggest that, for some time, we outright ban freeform discussion about process and instead restrict it to threads started by a Boost developer saying this: "I am maintainer of X, and had N commits and M trac changes in the last year. I most hate P1, P2 and P3. I would propose that we use T1, T2, and T3 to fix that". Then everybody could join in to suggest better ways of fixing P1, P2 and P3 -- without making up other supposed problems.
Thoughts?
I thought about this some more. I understand your point. Thinking about it, maybe there is something we CAN do. We currently have a review process with a review manager, etc. This has worked quite well for accepting libraries. The review manager's job is to lend some structure to the discussion, try to forge a consensus, weigh all the input (not necessarily equally), and arrive at a decision. I suggest that we engage in the same process structure for making a very large tool change.
Okay, then we only need to pick a:
- review manager, to lead a review that will select,
- review manager, to lead a review that will select,
- ...
- review manager, to lead a review that will select,
- the version control system

That's so simple! ;-) -- Vladimir Prus Mentor Graphics +7 (812) 677-68-40

Vladimir Prus <vladimir@codesourcery.com> writes:
Okay, then we only need to pick a:
- review manager, to lead a review that will select,
- review manager, to lead a review that will select,
- ...
- review manager, to lead a review that will select,
- the version control system
That's so simple! ;-)
http://dilbert.com/strips/comic/1996-07-05/ (I know, "there's an xkcd for every situation" is the new hotness, but I'm an old fogey...) Regards, Tony

I wonder how long it took you to find it :) Really nice, though! Philippe
participants (18)
- Anthony Foiani
- Chad Nelson
- Daniel James
- Dave Abrahams
- Dean Michael Berris
- Edward Diener
- Joel de Guzman
- John Maddock
- Matthias Schabel
- Paul A. Bristow
- Philippe Vaucher
- Rene Rivera
- Robert Ramey
- Stefan Seefeld
- Steve M. Robbins
- Steven Watanabe
- Thomas Klimpel
- Vladimir Prus