
3. Under non-Windows platforms it should not do anything to strings and should pass them
as-is, since
the native POSIX API is narrow, not wide.
Yet you still need to convert between UTF-8 and the POSIX locales. Even though most recent POSIX systems use UTF-8 as their locale encoding, there is no guarantee of that. Indeed, quite a few still run in Latin-1.
No, you don't need to convert UTF-8 to the "locale" encoding, because char* is the native system API, unlike on Windows. So you don't need to mess around with encodings at all unless you deal with text-related operations such as collation.

The **only** problem is the badly designed Windows API, which makes it impossible to write cross-platform code. The idea is that on Windows we treat "char *" strings as UTF-8 and call the wide API after converting from UTF-8. There is no problem with this approach: as long as all libraries follow the same policy, there would be no issues with using Unicode any more.

The problem is not locales, encodings, or anything else; the problem is that the Windows API does not allow you to fully use "char *" based strings, because it does not support UTF-8, so platform-independent programming becomes a total mess.

Artyom
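
P.S. Below is a minimal sketch of what such a policy could look like; the names widen() and fopen_utf8() are made up for illustration and are not taken from any particular library. On Windows the UTF-8 "char *" string is converted to UTF-16 and the wide API is called; on POSIX the narrow string is passed through untouched.

#include <string>
#include <cstdio>

#ifdef _WIN32
#include <windows.h>

// Convert a UTF-8 string to UTF-16 for the wide Windows API.
std::wstring widen(std::string const &utf8)
{
    if(utf8.empty())
        return std::wstring();
    // First call measures the required buffer (including the terminating NUL).
    int len = MultiByteToWideChar(CP_UTF8, 0, utf8.c_str(), -1, NULL, 0);
    std::wstring wide(len, L'\0');
    MultiByteToWideChar(CP_UTF8, 0, utf8.c_str(), -1, &wide[0], len);
    wide.resize(len - 1); // drop the terminating NUL counted in len
    return wide;
}

// Open a file given a UTF-8 path: convert and call the wide CRT function.
FILE *fopen_utf8(char const *path, char const *mode)
{
    return _wfopen(widen(path).c_str(), widen(mode).c_str());
}

#else

// On POSIX char* is the native API, so the string goes through as-is.
FILE *fopen_utf8(char const *path, char const *mode)
{
    return std::fopen(path, mode);
}

#endif

With wrappers like this the rest of the code can keep using plain "char *" strings in UTF-8 on every platform.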