
Paul A. Bristow said:
Yes - *exposure* is the key to inspiring authors to refine their contributions.
I've argued for some time for an 'experimental' or 'candidate' status.
(And a status not damned with faint praise like 'unstable', which would be better described as 'likely_to_be_improved' rather than actively 'not stable'.)
We'd need a simpler review process for agreeing to promote a library from the 'sandbox' state to this 'candidate' state.
This would mean that by the time libraries are reviewed for full acceptance, they would have received much more use, feedback, and refinement, and would be genuinely ready for those who want a 'release quality' stable product, including decent documentation.
Agreed. Here is the current review queue:

1) Lexer, Ben Hanson
2) Shifted Pointer, Phil Bouchard
3) Logging, John Torjo
4) Join, Yigong Liu
5) Pimpl, Vladimir Batov
6) Task, Oliver Kowalke
7) Endian, Beman Dawes
8) Meta State Machine (MSM), Christophe Henry
9) Conversion, Vicente Botet
10) Sorting, Steven Ross
11) GIL.IO, Christian Henning
12) AutoBuffer, Thorsten Ottosen
13) Log, Andrey Semashev
14) String Convert, Vladimir Batov
15) Move, Ion Gaztañaga
16) Containers, Ion Gaztañaga
17) Interval Containers, Joachim Faulhaber
18) Type Traits Extensions, Frédéric Bron
19) Interthreads, Vicente Botet
20) Bitfield, Vicente Botet
21) Lockfree, Ivo
22) Faster Signals/Slots, Helge Bahmann

The number of review requests is far outstripping the number of volunteer review managers. At roughly six library reviews per year, this amounts to well over a three-year backlog, and it does not even include other libraries that have been discussed but never queued. There are a lot of good ideas here; they should not be ignored. Some of these libraries have been in the queue for over 18 months.

My idea for fixing this is well known, and I would like something similar to Paul's approach. Those of you who have argued against a 'non-stable' branch of Boost: how would you propose fixing this 'review queue' problem?