
I apologize for the delay in producing the final summary and result of the Boost.XInt review, but here it is.

Votes
=====

YES:
- Christian Henning
- Steven Watanabe
- Jarrad Waterloo
- Edward Diener
- Paul A. Bristow

NO:
- Mathias Gaunard
- Joel Falcou
- Anders Dalvander
- Joachim Faulhaber
- Phil Endecott
- Domagoj Saric
- Gordon Woodhull ("fraction of a vote")

I might have missed somebody inside the 454 messages I have as a result of the review, but the overall picture is clear -- the votes are split.

High-level issues
=================

- The claim that the library is "fast" is not necessarily true. Phil did some checking against hand-written assembler, and the results don't look good for the library.
- It is not clear whether the design allows the performance of fixed-size integers to be improved using specialization (again, see Phil's comments).
- There were concerns about COW. In particular, it was very clearly stated that if COW is used, it should work correctly in multi-threaded programs.
- It was suggested to use expression templates (ET).
- Some folks suggested a better separation between algorithms and data representation.

Specific issues
===============

- It was pointed out that "a + b + c" creates a pile of temporaries, and this can be improved without using ET.
- Joachim's review raised a lot of valid points. Section 3 of his review is something that should be addressed, more or less entirely. Note that while we normally don't force a specific coding style, some of the formatting pointed out is simply unacceptable if this code is to be read by anybody.
- The 'secure' flag is probably not worth much.

Future directions
=================

I think that the reviews by Phil and Joachim are the most important ones in specifying evolutionary directions for Boost.XInt.
Chad made several comments about what he plans to do, in particular:

- http://article.gmane.org/gmane.comp.lib.boost.devel/216935

Conclusion
==========

I think there's no doubt that (i) this domain is important and (ii) Chad has put a lot of work into this library, producing several iterations. It was pointed out that we have many arbitrary-precision library skeletons all around Boost, and providing such a solid offering for review is a significant accomplishment. And everybody learned a lot during the review.

There were a couple of important meta-points raised during the review -- whether a review should focus on the external interface only, and what the scope of the library is. Some reviewers argued that only the external interface matters. However, it seems to me (as review manager) that concerns about internal design are valid and may not be rejected offhand. After all, the external interface of an integer library is more or less defined by mathematics, and the internal design plays a key role in the successful evolution of the library.

There was also a heated discussion about the scope of the library, in particular support for fixed-width integers. I think that it's abstractly OK for this library to treat fixed-width integers as second-class citizens. However, I am having a hard time determining the intended scope of the library. If I'm doing cryptography, would I really want to implement cryptography algorithms from the books with Boost.XInt, only to introduce bugs that are probably fixed in existing implementations (often included in the OS)? And if I'm really intent on doing that, I probably want raw performance, including a way to plug in optimized backends using SSE 2011, CUDA, and whatever fancy technology is out there. If I'm doing general programming, then I'm most likely interested in not-too-big integers. They might be fixed-width 128-bit integers, or they might be something larger than 64 bits, but not necessarily fixed.
However, such integers are not in the scope of the library, and they were found to have suboptimal performance.

It is certainly possible to refine/define the scope, optimize everything for that scope, accommodate the proposals made during the review, and obtain an excellent Boost library. However, there are so many suggestions, and so many planned redesigns and changes, that we need a second look at the library. Further, because of the global nature of the changes, it would be very hard to identify a specific subset that must be improved and then go through a mini-review. For that reason, I've arrived at the following result:

Boost.Xint library is not accepted at this time. We'll be happy to have another full review of the library when the next version is ready.

Thanks to Chad for the submission and to everybody who participated in the review!

- Volodya

--
Vladimir Prus
Mentor Graphics
+7 (812) 677-68-40

Vladimir Prus wrote:
Specific issues ===============
- It was pointed out that "a + b + c" creates a pile of temporaries, and this can be improved without using ET.
As noticed by Lars Viklund on IRC, "without" is a typo and should have been "with".

- Volodya

--
Vladimir Prus
Mentor Graphics
+7 (812) 677-68-40

On 30 April 2011 17:15, Vladimir Prus <vladimir@codesourcery.com> wrote:
- It was pointed out that "a + b + c" creates a pile of temporaries, and this can be improved without using ET.
As noticed by Lars Viklund on IRC, "without" is a typo and should have been "with".
I haven't looked at the library, but it could possibly be improved without ET by using move semantics.

On Sat, 30 Apr 2011 19:00:14 +0300 Vladimir Prus <vladimir@codesourcery.com> wrote:
Votes =====
YES:
- Christian Henning - Steven Watanabe - Jarrad Waterloo - Edward Diener - Paul A. Bristow
Also Christopher Jefferson, Ivan Sorokin, Barend Gehrels, and Artyom Beilis, and a "conditional yes" from Robert Stewart.
NO : - Mathias Gaunard - Joel Falcou - Anders Dalvander - Joachim Faulhaber - Phil Endecott - Domagoj Saric - Gordon Woodhull ("fraction of a vote")
Also Jeffrey Lee Hellrung, Jr.
Boost.Xint library is not accepted at this time. We'll be happy to have another full review of the library when the next version is ready.
So be it. Thank you for acting as Review Manager.

--
Chad Nelson
Oak Circle Software, Inc.
* * *

Chad Nelson wrote:
On Sat, 30 Apr 2011 19:00:14 +0300 Vladimir Prus <vladimir@codesourcery.com> wrote:
Votes =====
YES:
- Christian Henning - Steven Watanabe - Jarrad Waterloo - Edward Diener - Paul A. Bristow
Also Christopher Jefferson, Ivan Sorokin, Barend Gehrels, and Artyom Beilis, and a "conditional yes" from Robert Stewart.
Sorry for missing those. It seems the last four were missed because I was reviewing email in two sessions, and apparently some emails were marked as read in between. I also missed the vote from Christopher, since it was on a line that started with the quote (">") character.
NO : - Mathias Gaunard - Joel Falcou - Anders Dalvander - Joachim Faulhaber - Phil Endecott - Domagoj Saric - Gordon Woodhull ("fraction of a vote")
Also Jeffrey Lee Hellrung, Jr.
The overall picture is still that the votes are split, and I did not use specific percentages to make a decision.

Thanks,

--
Vladimir Prus
Mentor Graphics
+7 (812) 677-68-40

On 30-4-2011 18:39, Vladimir Prus wrote:
Chad Nelson wrote:
On Sat, 30 Apr 2011 19:00:14 +0300 Vladimir Prus<vladimir@codesourcery.com> wrote:
Votes =====
YES:
- Christian Henning - Steven Watanabe - Jarrad Waterloo - Edward Diener - Paul A. Bristow Also Christopher Jefferson, Ivan Sorokin, Barend Gehrels, and Artyom Beilis, and a "conditional yes" from Robert Stewart. Sorry for missing those. It seems like last four were missed because I was reviewing email in two sessions, and apparently some emails were marked as read in between. Also I've missed the vote from Christopher since it was on a line that started with the "quote" (">") character.
Sorry to react to this, but I feel this is not as it should be (even if apologies and reasons are given). It seems that 5 of 10 positive reviews had not been read at all by the review manager, or at least not read while the review report was being written. This is not very motivating for the reviewers, nor for the library writer.

I understand that the traffic was really high, that review managers do this voluntarily, that not everybody has unlimited time, etc. But reviews are usually carefully written. People spend several hours on them, sometimes days. Skipping these reviews is a sad thing. Writing a library costs weeks, sometimes months or more. Forgetting reviews is a very sad thing.

Somebody recently mentioned a scoreboard on this list, and I now think this is a good idea, because the review manager could check that all reviews are taken into account.

Note that it is not that I'm offended by my personal review being skipped; it was not that special and it didn't cost me days. It is more that, in general, I feel this is really not fair to the library writer.
NO : - Mathias Gaunard - Joel Falcou - Anders Dalvander - Joachim Faulhaber - Phil Endecott - Domagoj Saric - Gordon Woodhull ("fraction of a vote") Also Jeffrey Lee Hellrung, Jr. The overall picture is still that the votes are split, and I did not use specific percentages to make a decision.
In this case it goes (with the fraction fully counted) from 5/7 to 10/8, i.e. it flips from negative to positive. Quite a difference. Even if the decision stays the same, it requires extra motivation for rejecting the library.

Regards, Barend

Barend Gehrels wrote:
On 30-4-2011 18:39, Vladimir Prus wrote:
Chad Nelson wrote:
On Sat, 30 Apr 2011 19:00:14 +0300 Vladimir Prus<vladimir@codesourcery.com> wrote:
Votes =====
YES:
- Christian Henning - Steven Watanabe - Jarrad Waterloo - Edward Diener - Paul A. Bristow Also Christopher Jefferson, Ivan Sorokin, Barend Gehrels, and Artyom Beilis, and a "conditional yes" from Robert Stewart. Sorry for missing those. It seems like last four were missed because I was reviewing email in two sessions, and apparently some emails were marked as read in between. Also I've missed the vote from Christopher since it was on a line that started with the "quote" (">") character.
Sorry to react on this, but I feel this is not as it should be (even if apologies and reasons are given).
It seems that 5 of 10 positive reviews had not been read at all by the review manager, or at least not read during making up the review report.
Thanks for giving me the benefit of the doubt, and I think it's the latter that happened -- that is, the emails were read, but not written down in the report.
This is not very motivating for the reviewers, neither for the library writer.
I understand that the traffic was really high, that review managers do this voluntary, everybody don't have all the time, etc.
Reviews are usually carefully written. People spend several hours on it, sometimes days. Skipping these reviews is a sad thing. Writing a library cost weeks, sometimes months or more. Forgetting reviews is a very sad thing.
There was somebody who recently mentioned a scoreboard on this list and I now think this is a good idea, because the review manager can check if all reviews are taken into account.
Note that it is not that I'm offended my personal review being skipped, it was not that special and it didn't cost me days. It is more in general that I feel this is really not honest to the library writer.
Well, I can say again that I'm sorry, and that this is lame, and of course a technical mechanism would make the counting of votes more accurate.
NO : - Mathias Gaunard - Joel Falcou - Anders Dalvander - Joachim Faulhaber - Phil Endecott - Domagoj Saric - Gordon Woodhull ("fraction of a vote") Also Jeffrey Lee Hellrung, Jr. The overall picture is still that the votes are split, and I did not use specific percentages to make a decision.
In this case it goes (fraction fully counted) from 5/7 to 10/8, flips from negative to positive. Quite a difference. Even if the decision stays the same, it requires an extra motivation for rejecting the library.
This is not something I'd agree with. 10/8 is actually 55%. That's not a sufficiently wide margin that the simple counting of votes can reasonably determine the outcome.

- Volodya

--
Vladimir Prus
Mentor Graphics
+7 (812) 677-68-40

2011/5/3 Vladimir Prus <vladimir@codesourcery.com>:
Barend Gehrels wrote:
On 30-4-2011 18:39, Vladimir Prus wrote:
Chad Nelson wrote:
On Sat, 30 Apr 2011 19:00:14 +0300 Vladimir Prus<vladimir@codesourcery.com> wrote:
Votes =====
YES:
- Christian Henning - Steven Watanabe - Jarrad Waterloo - Edward Diener - Paul A. Bristow Also Christopher Jefferson, Ivan Sorokin, Barend Gehrels, and Artyom Beilis, and a "conditional yes" from Robert Stewart. Sorry for missing those. It seems like last four were missed because I was reviewing email in two sessions, and apparently some emails were marked as read in between. Also I've missed the vote from Christopher since it was on a line that started with the "quote" (">") character.
Sorry to react on this, but I feel this is not as it should be (even if apologies and reasons are given).
It seems that 5 of 10 positive reviews had not been read at all by the review manager, or at least not read during making up the review report.
Thanks for giving me the benefit of doubt, and I think it's the latter than happened -- that is, emails were read, but not written down in the report.
This is not very motivating for the reviewers, neither for the library writer.
I understand that the traffic was really high, that review managers do this voluntary, everybody don't have all the time, etc.
Reviews are usually carefully written. People spend several hours on it, sometimes days. Skipping these reviews is a sad thing. Writing a library cost weeks, sometimes months or more. Forgetting reviews is a very sad thing.
There was somebody who recently mentioned a scoreboard on this list and I now think this is a good idea, because the review manager can check if all reviews are taken into account.
I second Barend's concerns and remarks. This should not happen so often. A simple way to avoid this accident is to check the vote count with the contributor. They usually count extremely precisely.

Regards,
Joachim

--
Interval Container Library [Boost.Icl]
http://www.joachim-faulhaber.de

Hi Vladimir, Thanks for your reply.
The overall picture is still that the votes are split, and I did not use specific percentages to make a decision.
In this case it goes (fraction fully counted) from 5/7 to 10/8, flips from negative to positive. Quite a difference. Even if the decision stays the same, it requires an extra motivation for rejecting the library. This is not something I'd agree with. 10/8 is actually 55%. That's not a sufficiently wide margin that the simple counting of votes can reasonably determine outcome.
I meant going from -2 to +2, but I agree that a negative decision is still valid with this percentage; the votes are still split.

What happened has happened, apologies accepted. We have to prevent this from happening again in the future. Setting the suggestions from this list in a row:

- A scoreboard is a simple means and yes, I think it is useful.
- Consulting the library writer, as Joachim suggested, is even simpler and will guarantee that no vote is missed (honesty assumed).
- Marking a review as a "review" in the subject line, as Matt suggested, is also quite simple and would also be very effective.
- What happened with Joachim's suggestion of a Review Manager Assistant? This sounds quite promising.
- And I don't know if this has been suggested before, but I would like to see a review team, consisting (e.g.) of three people: 1) the review manager, 2) the review manager assistant, 3) one of the review wizards.

Number 3) would "guarantee" that the review process stays more or less similar across reviews, and that not every review manager has to reinvent the wheel, or (worse) takes decisions on other grounds. Number 2) would spread the tasks, e.g. carefully reading and classifying review results. The three together always produce a clear outcome (either 3/0 or 2/1 for either acceptance or rejection -- based on review votes, of course). As Joachim said (IIRC), being 2) would be a good point to start with, and could be required before becoming 1) or even before submitting a library for review.

With such a review team, reviews cannot be missed, and review reports will not be postponed forever (as has happened in the past). I realize it requires more people and communication, but on the other hand it spreads the tasks of number 1) and it will lead to better review processes.

Regards, Barend

On Tue, May 3, 2011 at 10:37 AM, Barend Gehrels <barend@xs4all.nl> wrote:
Hi Vladimir,
Thanks for your reply.
The overall picture is still that the votes are split, and I did not use
specific percentages to make a decision.
In this case it goes (fraction fully counted) from 5/7 to 10/8, flips from negative to positive. Quite a difference. Even if the decision stays the same, it requires an extra motivation for rejecting the library.
This is not something I'd agree with. 10/8 is actually 55%. That's not a sufficiently wide margin that the simple counting of votes can reasonably determine outcome.
I meant going from -2 to +2, but I agree that a negative decision is still valid with this percentage, votes are still split.
What happened has happened, apologies accepted. We have to prevent this from happening again in the future.
[snip suggestions]

I don't know; I think Vladimir did his job as review manager more than adequately. I trust he went through the (many, many) list emails to objectively form a big-picture community consensus as to the standing of the library, and reported that.

There's much, much more to the review process than counting positive and negative votes. Many negative votes shouldn't prevent a library from being accepted, if the reasons for rejection are weak (in the opinion of the review manager); likewise, I can see how one negative vote could be grounds for rejection, if it raises sufficiently concerning and major issues (again, in the opinion of the review manager), although only under very particular circumstances.

It's pretty clear not all of us like the current review process methodology (I remember past threads concerning improvements to the review process), and it's far from perfect, but, I think, it's working: libraries *are* getting reviewed (although they may languish in the queue for quite some time), generally with a fair amount of participation from several parties; and accepted libraries *are* getting added to Boost releases (although some review results are delayed for quite some time after the review period has ended).

It seems to me that nothing's going to get done by a lot of talk on the mailing list, same as before. If someone feels strongly about the review process, they would probably have to take a more active role in managing the next review and coordinate with the review manager and library author to implement and test the desired changes.

- Jeff

Dear list,

this posting started as a private mail to Barend, but once it was written it wanted to be posted here :)

Hi Barend,

your points about Vladimir's Declaration of Results are good ones, as are your proposals for improvement. I tried to write a posting but somehow I lacked motivation.

I think the review wizards, after the discussion about Review Manager Assistants, started to accept newcomers as RMs, like Chad Nelson and Ed Diener, which they did not do in the past. I think this is much better than only waiting for seasoned boosters to volunteer, and it is simpler than the RMA model. Chad and Ed, for example, seem to be doing good jobs.

Long-time boosters, on the contrary, tend to be less enthusiastic as RMs: the summarizing fault of Volodya, (excessive) delays, or MIA phenomena are IMO symptoms of such lack of motivation. This doesn't mean to blame anyone in particular. It's just part of the Boost game. First-time contributors have more motivation and excitement.

Basically I still think we should harness this structure by giving newcomers more responsibility, and as I said, I see this happening, even though in a different way than I proposed.

Best regards,
Joachim

Hi Joachim, On 4-5-2011 14:48, Joachim Faulhaber wrote:
Dear list,
this posting started as a private mail to Barend, but once it was written it wanted to be posted here :)
Thanks for sending me a public e-mail ;-)
Hi Barend,
your points about Vladamir's Declaration of Results are good ones, also your proposals for improvement. I tried to write a posting but somehow I lack motivation.
I think the review wizards, after the discussion about Review Manager Assistants startet to accept newcomers as RMs like Chad Nelson and Ed Diener which they did not do in the past.
I think this is much better than only waiting for seasoned boosters to volunteer and it is simpler as the RMA model. Chad and Ed for example seem to be doing good jobs.
Sure!
Long time boosters on the contrary tend to be less enthusiastic as RMs: The summarizing fault of Volodya, (excessive) delays or MIA-phenomenons are IMO symptoms of such lack of motivation.
This doesn't mean to blame anyone in particular. It's just part of the boost game. First time contributors have more motivation and excitement.
Basically I still think we should harness this structure by giving newcomers more responsibility, and as I said, I see this happening, even though in a different way than proposed by me.

OK, I understand; there were/are a number of reviews in a row now, so there is something happening indeed. (And as to long-time boosters being less enthusiastic as RMs: maybe, though another reason is that they are very busy with other good Boost things.)

Regards, Barend

2011/5/4 Barend Gehrels <barend@xs4all.nl>:
On 4-5-2011 14:48, Joachim Faulhaber wrote:
Long time boosters on the contrary tend to be less enthusiastic as RMs: The summarizing fault of Volodya, (excessive) delays or MIA-phenomenons are IMO symptoms of such lack of motivation.
Maybe. Another reason is that they are very busy with other good Boost-things.
Yes!
This doesn't mean to blame anyone in particular. It's just part of the boost game. First time contributors have more motivation and excitement.
I want to make sure that this is understood correctly. It is not my intent (and I assume the same for Barend) to blame Volodya or other RMs. On the contrary: long-term contributors like him do a beautiful service to the community. I personally have benefited from Volodya's contributions more than once. Also, conducting the XInt review, with a large heated discussion, was a great contribution and a lot of work. So, many thanks for all that!

Still, looking at errors that happen and discussing possibilities for improvement is a good thing as well.

Cheers,
Joachim

--
Interval Container Library [Boost.Icl]
http://www.joachim-faulhaber.de

On 05/02/2011 03:04 PM, Barend Gehrels wrote (in another order):
There was somebody who recently mentioned a scoreboard on this list and I now think this is a good idea, because the review manager can check if all reviews are taken into account.
Yes, please. It would also make it more practical for an observer like me to sit down and simply read the reviews in an orderly way.
I understand that the traffic was really high, that review managers do this voluntary, everybody don't have all the time, etc.
Reviews are usually carefully written. People spend several hours on it, sometimes days. Skipping these reviews is a sad thing. Writing a library cost weeks, sometimes months or more. Forgetting reviews is a very sad thing.
Honestly, I would be more likely to help review libraries if I felt like my review wouldn't be buried in the volume of the list.

- Marsh

On 5/3/2011 11:02 AM, Marsh Ray wrote:
On 05/02/2011 03:04 PM, Barend Gehrels wrote (in another order):
There was somebody who recently mentioned a scoreboard on this list and I now think this is a good idea, because the review manager can check if all reviews are taken into account.
Yes, please.
It would also make it more practical for an observer like me to sit down and simply read the reviews in an orderly way.
I understand that the traffic was really high, that review managers do this voluntary, everybody don't have all the time, etc.
Reviews are usually carefully written. People spend several hours on it, sometimes days. Skipping these reviews is a sad thing. Writing a library cost weeks, sometimes months or more. Forgetting reviews is a very sad thing.
Honestly, I would be more likely to help review libraries if I felt like it wouldn't be buried in the volume of the list.
This is where mailing lists fall to pieces. With proper forum software, moderators could easily organize threads and change subject lines so that it would be trivial to separate actual reviews from discussion. And in fact, users would quickly adjust their behavior to fit the paradigm. Instead, we have hundreds of messages with "review" in the subject, and only some of them are actual reviews, most being discussion (like this one!). :(

-Matt

On May 3, 2011, at 12:12 PM, Matthew Chambers wrote:
This where mailing lists fall to pieces. With proper forum software, moderators could easily organize threads and change subject lines so that it would be trivial to separate actual reviews from discussion. And in fact, users would quickly adjust their behavior to fit the paradigm. Instead, we have hundreds of messages with "review" in the subject and only some of them are actual reviews, most being discussion (like this one!). :(
Interesting idea. It would require some very hard-working moderators, though... and wouldn't it be confusing as a participant to have your subject line changed after you'd written a message? Do you have examples of open-source projects that organize things this way?

Personally, I think that what's lacking is tooling to "rethread" and organize mail messages after the fact. Or at least I don't know of any.

Thanks,
Gordon

Chad Nelson <chad.thecomfychair <at> gmail.com> writes:
So be it.
Thank you for acting as Review Manager.
Chad, I really really want something like xint to be in boost. Please consider pushing forward with this library and going for another review down the road. Given the amazing amount of controversy for big integer libraries in boost over the past decade, the number of "yes" reviews you got from big players in boost was a heavy-duty accomplishment. Joel

On Mon, 2 May 2011 20:13:40 +0000 (UTC) Joel Young <jdy@cryregarder.com> wrote:
So be it.
Thank you for acting as Review Manager.
Chad, I really really want something like xint to be in boost. Please consider pushing forward with this library and going for another review down the road.
Thank you for the encouragement. I also feel that Boost needs such a library. I haven't made a final decision on XInt's fate as yet; we'll see once my current project is on its feet and I have a large block of time to devote to XInt again. If all goes well, that will be late this summer or soon thereafter.
Given the amazing amount of controversy for big integer libraries in boost over the past decade, the number of "yes" reviews you got from big players in boost was a heavy-duty accomplishment.
I found it encouraging that some of the people I've come to admire on this list saw the potential in it, regardless of their vote.

--
Chad Nelson
Oak Circle Software, Inc.
* * *
participants (10)
- Barend Gehrels
- Chad Nelson
- Daniel James
- Gordon Woodhull
- Jeffrey Lee Hellrung, Jr.
- Joachim Faulhaber
- Joel Young
- Marsh Ray
- Matthew Chambers
- Vladimir Prus