
Dear All,

I have had a quick look at the proposed endian::big|littleN_t types. A couple of comments:

- I was expecting to find that these types would be built on the low-level conversion functions that I've already looked at, so that optimisations made to those functions would benefit these types too. Instead, these classes seem to use their own byte-shuffling code. This seems like an odd design.

- I'm not convinced that these types cause the conversion to happen at the optimum time. For example:

      struct file_header {
          big32_t n_things;
          ....
      };

      int main()
      {
          file_header h;
          h.n_things = 0;
          while (....) {
              ++h.n_things;
              ....
          }
          write(h);
          ....
      }

  Here the conversion happens twice every time n_things is incremented: once from big-endian to native order so the increment can be done, and once from native back to big-endian to store the result. It would be better to do the conversion once, just before the file_header is written. The same applies in reverse when reading from a file. (Surely this is the most common use-case?)

I have been wondering if there is a better design that can avoid this. If we had some sort of struct introspection, we could store the fields in native byte order and then:

    template <typename T>
    T external_representation(const T& t)
    {
        T res(t);
        for each field of T {    // pseudocode: needs introspection
            res.field.reorder();
        }
        return res;
    }

    ++h.n_things;                          // cheap
    write(external_representation(h));     // conversion happens once, here

Of course we don't have introspection of structs, so we can't do that. Maybe someone else has another idea.

Regards,

Phil.
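
P.S. To make the first point concrete, here is a minimal sketch of the layering I had expected. The names (swap32, big32_t_sketch) are mine, not the library's, and for brevity it assumes a little-endian host:

    #include <cstdint>

    // Stand-in for the library's lowest-level conversion function; any
    // optimisation made here (e.g. a bswap intrinsic) would then benefit
    // the class below for free.
    inline std::uint32_t swap32(std::uint32_t x)
    {
        return (x >> 24) | ((x >> 8) & 0x0000ff00u)
             | ((x << 8) & 0x00ff0000u) | (x << 24);
    }

    // Big-endian storage type layered on swap32, rather than containing
    // its own byte-shuffling code.
    class big32_t_sketch {
        std::uint32_t stored_;                    // big-endian bytes
    public:
        big32_t_sketch& operator=(std::uint32_t v)
        { stored_ = swap32(v); return *this; }    // native -> big
        operator std::uint32_t() const
        { return swap32(stored_); }               // big -> native
    };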
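
To spell out the cost in the second point: with such a type, each ++h.n_things expands to roughly swap, increment, swap. Keeping a native counter and assigning once just before the write does the conversion exactly once. Reusing the file_header from the example above (more_things() and write() are hypothetical stand-ins):

    void produce(file_header& h)
    {
        std::uint32_t n = 0;    // working counter in native order
        while (more_things())
            ++n;                // cheap: no conversion inside the loop
        h.n_things = n;         // the one and only native -> big conversion
        write(h);
    }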
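
Lacking introspection, the nearest thing I can see today is a hand-written external_representation per struct, which at least keeps the fields in native order and confines the conversion to the I/O boundary. A sketch, with an invented second field and using swap32 from above:

    // Fields held in native byte order throughout the program.
    struct native_file_header {
        std::uint32_t n_things;
        std::uint32_t checksum;    // invented field, for illustration
    };

    // Hand-written equivalent of the introspection loop: one line per field.
    native_file_header external_representation(native_file_header h)
    {
        h.n_things = swap32(h.n_things);
        h.checksum = swap32(h.checksum);
        return h;
    }

    // Usage:
    //   ++h.n_things;                        // cheap
    //   write(external_representation(h));   // conversion happens once

This is tedious and error-prone whenever a field is added, which is exactly why introspection (or code generation) would help here.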