
"Steven Watanabe" <watanabesj@gmail.com> wrote in message news:4D7F8B1B.8050502@providere-consulting.com...
> On 03/15/2011 06:48 AM, Domagoj Saric wrote:
>> "Steven Watanabe" <watanabesj@gmail.com> wrote in message news:4D7E2D5D.5050304@providere-consulting.com...
>>> I don't think fixed size integers are necessary for a Boost bigint proposal to begin with.
>> Why?
> A Boost library does not need to be all things to all people. It does need to have a clearly defined scope, and it does need to do it well. I would not generally vote against a library because it is missing feature X, even if X is something that I actually need.
Does it need to be "80%" of things to "80%" of the people? It is not that "I need X", but that many people need/want X, _and_ it is relatively trivial to provide X, _and_ providing X would implicitly make the library better in other ways (easier extensibility and maintainability, reduced template bloat and compile times...).

But, most importantly, the core issue is the "to begin with". If the current design and interface (and the author's attitude) left room for 'proper fixed-size integer support' later, without breaking the interface (not that I would object to breaking it, but I simply know that otherwise the change would never happen), then my objections would be vastly toned down (maybe even to an acceptance vote). However, this is not the case here.
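To illustrate what I mean by leaving room for fixed-size support, here is a minimal sketch of one possible direction. All names are hypothetical (nothing here is taken from the proposed library) and the arithmetic is elided; the point is only that a fixed-size backend could be slotted in later without disturbing users of the dynamically sized type:

#include <array>
#include <cstddef>
#include <cstdint>
#include <vector>

// Hypothetical storage policies: the arbitrary-precision case keeps a
// heap-backed digit vector, the fixed-size case is a plain stack array
// whose size is known at compile time (no 'new', nothing to throw on
// construction).
struct dynamic_storage
{
    std::vector<std::uint32_t> digits;
};

template <std::size_t Bits>  // assumed to be a multiple of 32 for brevity
struct fixed_storage
{
    std::array<std::uint32_t, Bits / 32> digits;
};

// A single front end, written once against the storage "concept"; adding
// fixed_storage later would not break code using the default variant.
template <class Storage = dynamic_storage>
class basic_integer
{
    Storage storage_;
    // ... operators implemented in terms of Storage::digits ...
};

using integer = basic_integer<>;  // today's dynamically sized interface

template <std::size_t Bits>
using fixed_integer = basic_integer<fixed_storage<Bits>>;  // possible later addition

// usage: fixed_integer<2048> rsa_modulus;  // lives entirely on the stack

Because the digit count of fixed_storage is a compile-time constant, the inner loops can be unrolled and the allocation/exception paths are simply never instantiated for that specialization, which is part of what I mean by "implicitly make the library better in other ways".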
>> First, AFAIK a significant portion of 'bigint' usage falls into the realm of cryptography and encryption keys which usually have fixed power-of-two sizes and both 'fixed' and 'power-of-two' almost always translate to great simplification/efficiency improvements when one gets to the implementation level. This naturally translates to the question "why should I pay for usage of new, try, catch and throw if all I want is to construct a statically known fixed-size public RSA key and use it to verify a message"?
> And of course these things are so much more expensive than the modular exponentiation required by RSA...
That's an overly simplistic argument, used way too often to justify premature pessimization and the production of bloatware, and it really saddens me when I see it used by "prominent boosters"...

Of course these things are not "so much more expensive", unless they cause a page fault (which sadly is not such a rare event even these days, precisely because of the mass production of bloatware... no matter how much RAM you install, 'Bill' always finds a way to take it away). They do, however, slow down the modular exponentiation itself (additional indirection, worse locality of reference, fragmentation, etc.), but most importantly they induce code bloat and as such affect the whole application (and, in fact, the entire system). This is not the fallacious kind of slippery-slope argument, because the compounded bloat effect is plainly demonstrated every day by user machines whose OSs idle at the desktop using close to 1 GB of RAM...

I see no objective reason to even have to defend efficiency concerns that are, IMO, self-evident. For example, if I write an audio application and need, as an auxiliary operation, to verify a message, is it so unusual to expect the crypto library used not to add more code to my binary than all of the many dozens of DSP algorithms that form the core and purpose of the application in question? See http://boost.2283326.n4.nabble.com/Re-Crypto-Proposal-td2671858.html for my analysis of exactly such a problem, with real-world numbers showing where injudicious, by-default usage of new, virtual and throw (or their higher-level monster friends, like std::streams) leads... A small sketch of the allocation- and exception-free verification path I have in mind is appended below.

--
"What Huxley teaches is that in the age of advanced technology, spiritual devastation is more likely to come from an enemy with a smiling face than from one whose countenance exudes suspicion and hate." Neil Postman
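To put a concrete face on the audio-application example above: the verification path I am talking about needs no allocation and no exception machinery at all. The following is a deliberately toy, single-machine-word stand-in (hypothetical names; not real cryptography, as there is no padding check; and __uint128_t is a GCC/Clang extension). A fixed 2048-bit integer over std::array would have exactly the same shape, just with multi-word limb arithmetic:

#include <cstdint>

// RSA signature verification is essentially one modular exponentiation,
// signature^e mod n, compared against the expected (padded) digest.
// Everything below is plain loops over stack data: noexcept, no 'new',
// no 'try'/'catch'/'throw' anywhere.  Precondition: modulus != 0.
std::uint64_t mod_pow(std::uint64_t base, std::uint64_t exponent,
                      std::uint64_t modulus) noexcept
{
    // square-and-multiply; __uint128_t keeps the toy version overflow-free
    std::uint64_t result = 1 % modulus;
    base %= modulus;
    while (exponent != 0)
    {
        if (exponent & 1u)
            result = static_cast<std::uint64_t>(
                static_cast<__uint128_t>(result) * base % modulus);
        base = static_cast<std::uint64_t>(
            static_cast<__uint128_t>(base) * base % modulus);
        exponent >>= 1;
    }
    return result;
}

bool verify(std::uint64_t signature, std::uint64_t expected_digest,
            std::uint64_t e, std::uint64_t n) noexcept
{
    return mod_pow(signature, e, n) == expected_digest;
}

Nothing in this shape needs the heap or the unwinder, and that is the property I would like the fixed-size side of a bigint library to preserve.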