
Peter Dimov wrote:
Choosing the wrong native character type causes redundant roundtrip conversions, one in Boost.Filesystem, one in the OS.
Let me expand on that a little. It is _fundamentally wrong_ to assume that all present and future OS APIs have a single native character type. Consider a case where a dual-API OS has access to two logical volumes C: and D:, where the file system on C: stores the filenames as 16-bit UTF-16, and the file system on D: uses narrow characters. Now the behavior of the calls is as follows:

    CreateFileA(  "C:/foo.txt" ); // char -> wchar_t OS conversion
    CreateFileW( L"C:/foo.txt" ); // no OS conversion
    CreateFileA(  "D:/foo.txt" ); // no OS conversion
    CreateFileW( L"D:/foo.txt" ); // wchar_t -> char OS conversion

Furthermore, consider a typical scenario where the application has its own "native" character type, app_char_t. In a design that enforces a single "native" character type boost_fs_char_t ("native" is a deceptive term, given the scenario above), there are potentially redundant (and not necessarily lossless) conversions: first from app_char_t to boost_fs_char_t, and then from boost_fs_char_t to the filesystem's character type.

In my opinion, the Boost filesystem library should pass the application's characters _exactly as-is_ to the underlying OS API whenever possible. It should not impose its own "native character" ideas upon the user nor upon the OS.