
On 05/07/2010 20:35, Artyom wrote:
> I would disagree. It has very interesting basics and is very extensible, allowing you to carry
> all culture information in one class and copy it efficiently.
I do it better: no class, just a big table, some converters, and some segmenters.
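Purely as an illustrative guess at what "a big table plus converters and segmenters" might look like; none of these names, fields, or signatures come from the actual library:

    #include <cstddef>
    #include <cstdint>

    // One row of properties per Unicode code point, generated from the UCD.
    struct char_properties {
        std::uint8_t  category;        // general category
        std::uint32_t uppercase;       // simple case mappings
        std::uint32_t lowercase;
        std::uint8_t  grapheme_break;  // data the segmenters consume
    };

    // The "big table", indexed directly by code point.
    extern const char_properties unicode_table[0x110000];

    // Converters and segmenters are free functions over that table rather
    // than members of a locale-like class.
    std::size_t utf8_decode(const char* in, std::size_t size, char32_t* out);
    std::size_t grapheme_next(const char32_t* text, std::size_t size, std::size_t pos);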
> It is not the job of a codecvt facet
I provide a generic bridge between my converters and codecvt facets. If you instantiate it with a converter that decodes UTF-8, normalizes, then re-encodes to UTF-16, it does just that. I actually think it's a good idea to put normalization in there: a lot of things require the string to be normalized to work properly, so if you can do that automatically for unreliable data coming from files, it's one less thing to worry about.
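To make the idea concrete, here is a minimal sketch of what such a bridge could look like. This is not the library's actual code: the `converter_codecvt` name and the `Converter` policy interface (`decode`, `encode`, `length`) are assumptions made up for illustration.

    #include <cwchar>   // std::mbstate_t
    #include <locale>

    // Hypothetical bridge: the facet knows nothing about Unicode itself and
    // forwards the work to a pluggable Converter policy, which may decode
    // UTF-8, normalize, and re-encode however it sees fit.
    template <typename Converter>
    class converter_codecvt
        : public std::codecvt<wchar_t, char, std::mbstate_t>
    {
    public:
        explicit converter_codecvt(std::size_t refs = 0)
            : std::codecvt<wchar_t, char, std::mbstate_t>(refs) {}

    protected:
        // External (file) -> internal (program), e.g. UTF-8 -> NFC -> UTF-16/32.
        result do_in(std::mbstate_t& state,
                     const char* from, const char* from_end, const char*& from_next,
                     wchar_t* to, wchar_t* to_end, wchar_t*& to_next) const override
        {
            return Converter::decode(state, from, from_end, from_next,
                                     to, to_end, to_next);
        }

        // Internal -> external: the reverse pipeline.
        result do_out(std::mbstate_t& state,
                      const wchar_t* from, const wchar_t* from_end, const wchar_t*& from_next,
                      char* to, char* to_end, char*& to_next) const override
        {
            return Converter::encode(state, from, from_end, from_next,
                                     to, to_end, to_next);
        }

        result do_unshift(std::mbstate_t&, char* to, char*,
                          char*& to_next) const override
        {
            to_next = to;                   // stateless encoding: nothing to emit
            return ok;
        }

        int do_encoding() const noexcept override { return 0; }   // variable-width
        bool do_always_noconv() const noexcept override { return false; }
        int do_max_length() const noexcept override { return 4; } // UTF-8 worst case
        int do_length(std::mbstate_t& state, const char* from,
                      const char* from_end, std::size_t max) const override
        {
            return Converter::length(state, from, from_end, max);
        }
    };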
> Also, you'll find many issues with it; it is not suitable for this.
What kind of issues? Are you expecting many-to-many conversion to not work?
> Or create an additional facet that does normalization and case conversion; don't use codecvt, it is designed for specific purposes. You need facets? Create your own.
I only need facets if an fstream can use them to convert its data. There is no point in creating other kinds of facets that the iostreams subsystem never uses. The whole point of the exercise is to let the iostreams subsystem make use of my converters.
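Again only as a hedged sketch: once a bridge like the one above exists, hooking it into iostreams is just a matter of imbuing the stream. `utf8_nfc_converter` is a made-up policy name, and `converter_codecvt` is the template from the previous sketch.

    #include <fstream>
    #include <locale>
    #include <string>

    int main()
    {
        // Build a locale carrying the converting facet: decode UTF-8,
        // normalize, re-encode to the wchar_t encoding (UTF-16 or UTF-32
        // depending on the platform).
        std::locale converting(std::locale(),
                               new converter_codecvt<utf8_nfc_converter>);

        std::wifstream file;
        file.imbue(converting);        // imbue before opening so the facet is
        file.open("input.txt");        // used from the first byte read

        std::wstring line;
        while (std::getline(file, line)) {
            // `line` is already decoded and normalized by the facet.
        }
    }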
> Also, are you aware of the fact that case conversion is locale-dependent? So you still need to
> connect the locale somehow.
My library is locale-agnostic.