From: ravioli@softhome.net
There was feedback about this on the Boost list as well, so I've posted a reply about it there, too.
However, this does mean that the semantics change slightly. Where an implicit conversion exists, the result may differ from the stream-based conversion, in the case of char/wchar_t. Without implicit conversion, you get 1 -> '1'. With implicit conversion, you get 1 -> the character with code 1 (ASCII 1, a control character). As far as I can tell, this only affects conversions between char/wchar_t and other types, though. If this is a problem, please let me know, and I can make an exception for char/wchar_t.
Is this behaviour overridable, for example by adding a specialization transforming 1 => '1'?
Absolutely. That's what I meant by making an exception. This version of lexical_cast relies heavily on specialisation, and on partial specialisation where available; where it isn't available, specialisations for the common cases, such as char/wchar_t and std::string/std::wstring, are included. I also intend to make a version where partial specialisation isn't required. Considering this, it does indeed seem like a reasonable conversion for something called lexical_cast. After all, this is how numbers are converted to strings, so it makes sense that the same happens for characters. Therefore, I've changed it to perform the above conversion from/to char/wchar_t, making it consistent with the conversion from/to std::basic_string. Feedback on the Boost list also suggested the same thing you suggest here.
Do you know about any such fast functions or operators (besides the conversion operators mentioned earlier here) for some types?
It seems these good old C library functions are pretty fast for conversions. Sorry if they look old-fashioned: atoi(), atol(), strtol(), strtod(), sprintf(), sscanf()...
That's no problem in general. Hidden in a library implementation, they can be as cryptic as they want. :)
You may laugh at me, but they are, AFAIK, really the fastest ones :) depending on the platform's library, of course.
:) One reason I hesitate, however, is what I mentioned earlier about being able to customise the formatting by configuring the stringstream object. That won't be possible with such C functions, because they don't follow the stream state, including locale settings. Especially now that it's possible to configure the stringstream interpreter, and since I'm also working on a new version where you can supply the stringstream object as an optional argument to lexical_cast, I don't think it's a good idea to use the C functions above. Is this reasonable for you?
Maybe, for time types on Unix, asctime(const struct tm *) and strftime(), if properly wrapped, and all built-in functions involving complex<> types and their conversions to and from doubles, ints, and so on.
These types, like the Roman numerals class you mention below, can be made to work without any extra support from lexical_cast. You just provide the required stream operators, and any constructors or conversion operators. Therefore, I don't think these are the responsibility of lexical_cast. In the grand C++ tradition, lexical_cast is designed to be extensible, so that it can handle any such new types.
PS: Have you considered conversion to and from Roman numerals ;) ;) ?
Well, lexical_cast is about conversions between _types_. So if you want to
convert to and from Roman numerals, make a Roman numerals class. :)
If you design it the right way, i.e. including stream operators for
reading/writing Roman numerals, and any implicit conversions you'd like to
have (for example from/to int), then it should work with lexical_cast. :)
As you know, lexical_cast now supports implicit conversion, where available.
Thanks for the feedback. :)
Another thing I'm wondering about: with implicit conversion, where
available (except for the special cases, like conversion from/to
char/wchar_t, etc., as mentioned), the following works:
int i = lexical_cast<int>(1.23); // double to int
However, using the original version of lexical_cast, the above would throw a
bad_lexical_cast. That's because the function is defined like this:
template