
Edward Diener wrote:
On 1/19/2011 11:33 AM, Peter Dimov wrote:
Edward Diener wrote:
Inevitably a Unicode standard will be adopted where every character of every language is represented by a single fixed-length number of bits.
This was the prevailing thinking once. First this number of bits was 16 (an incorrect assumption that claimed Microsoft and Java as victims), then it became 21 (or 22?). Eventually, people realized that this will never happen even if we allocate 32 bits per character, so here we are.
"Eventually, people realized..." . This is just rhetoric, where "people" is just whatever your own opinion is.
I do not understand the technical reason for it never happening.
I'm not sure that I do, either. Nevertheless, people at the Unicode consortium have been working on that for... 20 years now? What technical obstacle that currently blocks their progress do you foresee disappearing in the future? Occam says that variable width characters are simply a better match for the problem domain, even when character width in bits is not a problem.