
On 03/04/2011 03:22 PM, Joel Falcou wrote:
> On 04/03/11 22:17, Marsh Ray wrote:
>> I'm concerned that perfect is the enemy of good enough here.
> And I agree. Now, can we consider that the proposal fits in the "good enough" category?
I see two dimensions there: interface and implementation. Obviously the interface is the place to expend a project's limited capacity to endure perfectionism. While implementations can improve beneath a stable interface, you seem to be saying that it's "version 1.0 or never" for support of expression templates, because they bundle the logic required for efficiency into the interface itself. It's been "never" so far for any standard C++ bigint facility. Perhaps two separate libraries are worth considering: one with old-fashioned objects and overloaded operators, enhanced with rvalue references, delivered soon; and another with all the compile-time magic it can stand, to become available at some unspecified point in the future.
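For concreteness, here is a minimal sketch of what that first library might look like: a hypothetical bigint class (not XInt's actual interface, and the limb layout is invented for illustration) with plain value semantics, where overloading operator+ on rvalue references lets a chain like a + b + c allocate one temporary and then accumulate into it.

// A hypothetical bigint, not XInt's interface: plain value semantics,
// with operator+ overloaded on rvalue references so that a chain like
// a + b + c allocates one temporary and then reuses its storage.
#include <cstddef>
#include <cstdint>
#include <utility>
#include <vector>

class bigint {
    std::vector<std::uint32_t> limbs_; // magnitude, least significant first

public:
    bigint() {}
    explicit bigint(std::uint32_t v) : limbs_(1, v) {}

    bigint& operator+=(const bigint& rhs) {
        if (limbs_.size() < rhs.limbs_.size())
            limbs_.resize(rhs.limbs_.size(), 0);
        std::uint64_t carry = 0;
        for (std::size_t i = 0; i < limbs_.size(); ++i) {
            std::uint64_t sum = carry + limbs_[i]
                + (i < rhs.limbs_.size() ? rhs.limbs_[i] : 0);
            limbs_[i] = static_cast<std::uint32_t>(sum);
            carry = sum >> 32;
        }
        if (carry)
            limbs_.push_back(static_cast<std::uint32_t>(carry));
        return *this;
    }
};

// lvalue + lvalue: must allocate a fresh result.
inline bigint operator+(const bigint& a, const bigint& b) {
    bigint tmp(a);
    tmp += b;
    return tmp;
}

// rvalue + lvalue: accumulate into the expiring temporary instead,
// so (a + b) + c performs no second allocation.
inline bigint operator+(bigint&& a, const bigint& b) {
    a += b;
    return std::move(a);
}

This doesn't eliminate temporaries as aggressively as expression templates can, but it keeps the cleverness out of the interface: client code sees only a regular value type, and the implementation underneath is free to improve later.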
>> Is it a requirement for Boost that every new library be state-of-the-art in its use of compile-time polymorphism?
> Maybe, maybe not. Now, I remind you that Phoenix was delayed in being accepted into Boost the first time around; the consensus was that it had to use Proto to be accepted.
I promise to take another look at Proto soon.
>> MSVC 8.0, GCC 4.4.
> Well, when do you start feeling "it compiles for too long"? Template-heavy code is like other code: it has to be well written to be fast, and here, fast to compile.
I admit I'm not terribly patient. I have some mature, stable projects that take several seconds per translation unit to compile. They crept up that way over time. But those projects are in maintenance mode and I don't have to add much creativity to them very often, so I can handle it.

But if I sit down to experiment, starting with a brand new hello-world project, and then I #include a header file that puts a noticeable pause in my modify-compile-run cycle, I am really reluctant to use it. If it adds time to hello world, how can I predict what it will cost if the project grows large? When the pauses creep over a few seconds, I start multitasking, and task switching then imposes a disproportionate penalty. Thus I tend to favor Lex/Yacc over Spirit, and I would possibly stick with GMP over a Proto-based solution. I do not pretend that this is an objective criticism of a library's design, only that it's what works best for me. But I'm sure I'm not the only one who evaluates libraries that way.
>> It's a problem for me, the poor developer, when I have to use the thing. Or not, if I don't use it. But if no developers use it, the vendors don't make it a priority. Lots of numeric code is still written in C and Fortran. Whether that's because of the compiler or the mind, the abstraction penalty is real.
> It is real when said abstraction is used willy-nilly.
At the end of the day, it's all Turing-equivalent. There remain C and Fortran programmers who haven't been converted by the actual performance gains theoretically enabled by template compile-time polymorphism. Future supercomputers are as likely to prefer some derivative of OpenCL or CUDA as they are C++. I think "willy-nilly" makes a wonderful bottom-line standard to avoid, but a metaprogramming-heavy design still seems very much a matter of personal style and preference. That's my opinion, anyway (which I'm willing to change, as long as it doesn't add too much to my compile times :-)).
> Well, if XInt actually provided a range-based interface and an extension mechanism for external bigint representations, we could have a good start.
There we go. What would it look like? Adapters on standard container<integral type> concepts? How long until I could use it at a beta level of interface stability?

- Marsh
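For illustration only, here is one possible shape for that range-based start. The function name and limb conventions below are invented for this sketch, not taken from XInt or GMP: the idea is core arithmetic written against iterator pairs over limbs, so that a std::vector, a std::array, or an externally owned buffer could all serve as the big-integer representation.

// A hypothetical range-based primitive (names invented for this sketch):
// arithmetic on iterator pairs over 32-bit limbs, least significant
// limb first, so any container of limbs can act as the representation.
#include <cstdint>
#include <vector>

// Adds the limb range [first2, last2) into [first1, last1) in place
// and returns the final carry out of the most significant limb.
template <typename Iter1, typename Iter2>
std::uint32_t add_into(Iter1 first1, Iter1 last1, Iter2 first2, Iter2 last2)
{
    std::uint64_t carry = 0;
    for (; first1 != last1; ++first1) {
        std::uint64_t sum = carry + *first1
            + (first2 != last2 ? *first2++ : 0u);
        *first1 = static_cast<std::uint32_t>(sum);
        carry = sum >> 32;
    }
    return static_cast<std::uint32_t>(carry);
}

int main()
{
    // (2^128 - 1) + 1 == 2^128: every limb goes to zero, carry 1 out.
    std::vector<std::uint32_t> a(4, 0xffffffffu);
    std::vector<std::uint32_t> b(1, 1u);
    std::uint32_t carry = add_into(a.begin(), a.end(), b.begin(), b.end());
    return carry == 1 ? 0 : 1;
}

An adaptor template could then wrap any such limb range to restore operator syntax, which seems to be roughly what "adapters on standard container<integral type> concepts" would mean in practice.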