I just read the docs for numeric/conversion at http://tinyurl.com/5bflvn, and I have some questions. Basically I am getting confused about abstract values versus represented values, versus how they are represented (e.g. as fixed-point or floating-point numbers).

1) Is the density of scaled integer or fixed-point types greater than one? The size of the numeric set is larger than the width, so it seems like the density would be greater than one.

2) Is the precision of long int really greater than the precision of short int? It is strange to me that precision seems to have a different meaning for integer types than for "floating" types.

3) What is the difference between the numeric sets "Whole" and "Int"? They both represent the same values from the set of integers, which is a subset of the real numbers. Does this mean that the _way_ the values are represented is important?

4) Is it true that a fixed-point type would be classified as a 'floating' type? Also, are complex types numeric?

Thanks,
--John