
On Wed, Jan 19, 2011 at 7:39 PM, Dave Abrahams <dave@boostpro.com> wrote:
Our influence, if we introduce new library components, is very great, because they're on a de-facto fast track to standardization, and an improved string library is exactly the sort of thing that would be adopted upstream. If we simply agree to a programming convention, that will have some impact, but much less.
OK, I see. But is there any chance that the standard itself would be updated so that it would first recommend using UTF-8 with C++ strings? After some period of time all other encodings would be deprecated, and using them would cause undefined behavior. Could Boost be the driving force here? I really do see all the obstacles that prevent us from just switching to UTF-8, but adding a new string class will not help, for the same reasons that adding wstring did not help. As I already said elsewhere, I think this is a problem that has to be solved "organizationally".
*Scenario E:* We add another string class and everyone adopts it
OK, I admit that this is possible. But let me ask: how did the C world make the transition without abandoning char?
The transition from what to what?
I meant that, for example, on POSIX OSes the POSIX C API did not have to be changed or extended with a new set of functions doing the same things but using a new character type when they switched from the old encodings to UTF-8. To compare two strings you can still use strcmp and not utf8strcmp; to collate strings you use strcoll and not utf8strcoll; and so on. I must admit that the previous statement is an oversimplification and that things also depend on the C/C++ locale, etc.
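To make that concrete, here is a minimal sketch of what I mean (the locale name "en_US.UTF-8" is just an assumption; it has to be installed on the system for setlocale() to succeed). The ordinary char-based functions handle UTF-8 data: strcmp simply compares the encoded bytes, while strcoll consults the current LC_COLLATE locale for ordering.

    /* Minimal sketch: the existing char-based C API works on UTF-8 data.
     * The "en_US.UTF-8" locale name is an assumption, not guaranteed to exist. */
    #include <stdio.h>
    #include <string.h>
    #include <locale.h>

    int main(void)
    {
        /* Two UTF-8 encoded strings ("über" and "uber"). */
        const char *a = "\xC3\xBCber";
        const char *b = "uber";

        /* Byte-wise comparison: strcmp needs no UTF-8-aware replacement,
         * it just compares the encoded bytes. */
        printf("strcmp:  %d\n", strcmp(a, b));

        /* Locale-aware collation: strcoll orders according to the
         * LC_COLLATE category, so with a UTF-8 locale the result follows
         * collation rules rather than raw byte values. */
        if (setlocale(LC_COLLATE, "en_US.UTF-8") != NULL)
            printf("strcoll: %d\n", strcoll(a, b));
        else
            printf("UTF-8 locale not available on this system\n");

        return 0;
    }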
-- Dave Abrahams BoostPro Computing http://www.boostpro.com
-- ________________ ::matus_chochlik