
"Lubomir Bourdev" <lbourdev@adobe.com> wrote in message news:B55F4112A7B48C44AF51E442990015C00173AFC1@namail1.corp.adobe.com...
> We don't believe the type of the result could automatically be inferred simply from the types of arguments.
The type of the result is fixed by the types of the arguments; therefore, simply ... the type of the result is automatically inferred simply from the types of arguments.

> The optimal type depends on your intent and the context.
The type of the result is fixed by the types of the arguments.
> For example, VIGRA's promotion traits choose double as the promotion for int when dealing with floating-point operations. Why not float? On many architectures float is faster, and precision may not be an issue in the context in which you use it.
> Why should int always be the result type of adding two ints? Depending on what you are doing, int may overflow... An int for summing two shorts will not overflow, but what if you are building, say, an integral image and you want to sum the values of all the pixels in your image?
> As you can see, the optimal type depends on a complex interaction between speed, capacity, and desired precision, and hard-coding it makes the algorithms less generic.
So, I'm guessing that GIL doesn't support UDTs?

regards
Andy Little