Re: [boost] Report from Berlin C++ Standards Committee meeting

Beman Dawes wrote:
The C++ Standards Committee met last week in Berlin, Germany. Of interest to Boosters:
* Boost Filesystem was voted into TR2.
That is great news. It's good to see directory iterators in there too :).
* Boost Lexical Cast has been tentatively accepted by the LWG for TR2.
This has to be one of the most useful utilities in Boost.
* A Boost Date-Time query was presented at the last meeting, and LWG members again in Berlin indicated interest in seeing a full proposal for TR2.
This is good news.
* It looks more and more likely that Concepts will go into C++0x. If Concepts do go into the core language, the LWG has indicated their intent to percolate Concepts throughout the Standard Library. (Concepts were being held up by two competing proposals, but those two proposals are in process of being coalesced into a single proposal by Bjarne Stroustrup and Doug Gregor.)
This is great news. Concepts will be a very useful addition to C++0x. In terms of C++0x features, what is the current state of lambda support? I have seen several papers from Bjarne et al. - Reece

On 4/10/06, Reece Dunn <msclrhd@hotmail.com> wrote: []
* Boost Lexical Cast has been tentatively accepted by the LWG for TR2.
This has to be one of the most useful utilities in Boost.
All developers I know roll their own versions of lexical_cast. The last time I checked the implementation of boost::lexical_cast, it had efficiency bugs. There should at least be specializations for string -> integer/double conversions that use the strtoul/strtod functions directly rather than iostreams.
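For illustration, a minimal sketch of the kind of strtod-based specialization being suggested (the name and error handling here are illustrative, not Boost's code):

#include <cerrno>
#include <cstdlib>
#include <stdexcept>
#include <string>

// Hypothetical fast path: convert string -> double via strtod, bypassing
// iostreams. Rejects empty input, trailing characters, and out-of-range
// values. Locale handling is ignored here (see the later discussion).
inline double fast_string_to_double(const std::string& s)
{
    const char* begin = s.c_str();
    char* end = 0;
    errno = 0;
    double result = std::strtod(begin, &end);
    if (end == begin || end != begin + s.size() || errno == ERANGE)
        throw std::runtime_error("bad lexical cast: " + s);
    return result;
}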

Maxim Yegorushkin <maxim.yegorushkin@gmail.com> wrote:
All developers I know roll their own versions of lexical_cast. The last time I checked the implementation of boost::lexical_cast, it had efficiency bugs. There should at least be specializations for string -> integer/double conversions that use the strtoul/strtod functions directly rather than iostreams.
The other problem that I have with lexical_cast is that it is currently impossible to tell whether a particular conversion will fail without actually catching an exception (which implies a huge performance penalty). This is needed in all situations where "invalid" input is not an error (e.g. optional data coming from an external source). B.

On 4/10/06, Bronek Kozicki <brok@rubikon.pl> wrote:
Maxim Yegorushkin <maxim.yegorushkin@gmail.com> wrote:
All developers I know roll their own versions of lexical_cast. The last time I checked the implementation of boost::lexical_cast, it had efficiency bugs. There should at least be specializations for string -> integer/double conversions that use the strtoul/strtod functions directly rather than iostreams.
The other problem that I have with lexical_cast is that it is currently impossible to tell whether a particular conversion will fail without actually catching an exception (which implies a huge performance penalty). This is needed in all situations where "invalid" input is not an error (e.g. optional data coming from an external source).
True. Boost has long ignored these issues; no wonder they accepted it tentatively.

Maxim Yegorushkin wrote:
On 4/10/06, Bronek Kozicki <brok@rubikon.pl> wrote:
All developers I know roll their own versions of lexical_cast. The last time I checked the implementation of boost::lexical_cast, it had efficiency bugs. There should at least be specializations for string -> integer/double conversions that use the strtoul/strtod functions directly rather than iostreams.
Maxim Yegorushkin <maxim.yegorushkin@gmail.com> wrote: The other problem that I have with lexical_cast is that it is currently impossible to tell whether a particular conversion will fail without actually catching an exception (which implies a huge performance penalty). This is needed in all situations where "invalid" input is not an error (e.g. optional data coming from an external source).
True. Boost has long ignored these issues; no wonder they accepted it tentatively.
The thing is, lexical_cast, in its current implementation, is simple and elegant, and perfectly usable for most cases. It doesn't preclude optimisations in the *implementation*, and some such optimisations have been suggested. It would be nice to see some actually implemented in the reference implementation. As to the error handling - I haven't personally given it much thought - but I'm not sure that can be easily parameterised without disturbing the interface. Best regards, [)o IhIL..

On 4/10/06, Phil Nash <phil.nash.lists@gmail.com> wrote: []
True. Boost has long ignored these issues; no wonder they accepted it tentatively.
The thing is, lexical_cast, in its current implementation, is simple and elegant, and perfectly usable for most cases.
In the real world practicality beats purity. The fact is people do not use boost::lexical_cast because of the aforementioned problems.

"Maxim Yegorushkin" <maxim.yegorushkin@gmail.com> wrote in message news:3254098f0604100258y68d4d4dbw68e0e24e7128d7e9@mail.gmail.com...
On 4/10/06, Phil Nash <phil.nash.lists@gmail.com> wrote:
[]
True. Boost has long ignored these issues; no wonder they accepted it tentatively.
The thing is, lexical_cast, in its current implementation, is simple and elegant, and perfectly usable for most cases.
In the real world practicality beats purity. The fact is people do not use boost::lexical_cast because of the aforementioned problems.
Last time I looked, four out of five end-users in the "Who's using Boost" section reported using lexical_cast. So it might be more accurate to say "_some_ people do not use lexical_cast". The LWG is aware of those issues. That is the sort of thing they often discuss. --Beman

Maxim Yegorushkin wrote:
In the real world practicality beats purity. The fact is people do not use boost::lexical_cast because of the aforementioned problems.
I second this! boost::lexical_cast in its current form/implementation is usable in hardly any real project situation because of its many downsides. The only thing that is nice about this utility at the moment is its pretty interface, which wouldn't be so pretty if it handled all the requirements it should, IMO. These issues have arisen again and again on the boost list, but nobody ever felt responsible even to comment on them in a sufficient way, with some people always referring to the 'nice interface' which should never be touched in any way.

I don't remember all the issues people complained about in the many threads, but I remember some of them which are really crucial and need to be dealt with:
- performance, performance, performance
- error handling
- controlling the conversions via facets

There may be some others, and maybe we should put up a complete list. I'm afraid that if lexical_cast is standardized as is, people will always have to go back to C library functions to do their conversions. I really can't imagine that we are the only ones who have major resentments against lexical_cast in its current form! Stefan

Stefan Slapeta wrote:
Maxim Yegorushkin wrote:
In the real world practicality beats purity. The fact is people do not use boost::lexical_cast because of the aforementioned problems.
I second this! boost::lexical_cast in its current form/implementation is usable in hardly any real project situation because of its many downsides.
The only thing that is nice about this utility at the moment is its pretty interface, which wouldn't be so pretty if it handled all the requirements it should, IMO. These issues have arisen again and again on the boost list, but nobody ever felt responsible even to comment on them in a sufficient way, with some people always referring to the 'nice interface' which should never be touched in any way.
I don't remember all the issues people complained about in the many threads, but I remember some of them which are really crucial and need to be dealt with:
- performance, performance, performance
- error handling
- controlling the conversions via facets
There may be some others, and maybe we should put up a complete list. I'm afraid that if lexical_cast is standardized as is, people will always have to go back to C library functions to do their conversions. I really can't imagine that we are the only ones who have major resentments against lexical_cast in its current form!
FWIW, I completely agree.

On Monday, 10.04.2006, at 10:49 +0100, Phil Nash wrote:
As to the error handling - I haven't personally given it much thought - but I'm not sure that can be easily parameterised without disturbing the interface.
Another nice feature would be to check whether the output was actually generated from the complete input. Imagine the following scenario: you have lexical_cast<double>("1,2") and, as it happens, it happily returns 1. This is why I used strtod directly in an application of mine. You could add parameters with Boost.Parameter, by the way, so I would call: lexical_cast<double>("1,2", ignore_tail = false) and get an exception. I will probably have to wait for Boost 1.36 though :D.
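For illustration, a self-contained reproduction of that failure mode using plain iostreams (which is what lexical_cast uses underneath; this is not Boost code):

#include <iostream>
#include <sstream>

int main()
{
    std::istringstream is("1,2");
    double d;
    is >> d;                            // reads 1 and stops at the ','
    char rest;
    bool whole_input_used = !(is >> rest);
    std::cout << d << (whole_input_used ? "" : " (trailing input ignored!)") << '\n';
}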
Thank you

"Aristid Breitkreuz" <aribrei@arcor.de> wrote in message news:1144663521.11938.4.camel@localhost.localdomain...
On Monday, 10.04.2006, at 10:49 +0100, Phil Nash wrote:
As to the error handling - I haven't personally given it much thought - but I'm not sure that can be easily parameterised without disturbing the interface.
Another nice feature would be to check whether the output was actually generated from the complete input. Imagine the following scenario: you have lexical_cast<double>("1,2") and, as it happens, it happily returns 1. This is why I used strtod directly in an application of mine. You could add parameters with Boost.Parameter, by the way, so I would call: lexical_cast<double>("1,2", ignore_tail = false) and get an exception. I will probably have to wait for Boost 1.36 though :D.
The LWG is also moving forward with a separate set of conversion functions that are lower level, consume the input string (if any), and in general are to be used where lexical_cast is not the right tool. There will be a paper from Pete Becker available in two or three weeks. --Beman

Beman Dawes wrote:
The LWG is also moving forward with a separate set of conversion functions that are lower level, consume the input string (if any), and in general are to be used where lexical_cast is not the right tool.
There will be a paper from Pete Becker available in two or three weeks.
That's great news! I'm looking forward to reading it! Stefan

Beman Dawes wrote:
The LWG is also moving forward with a separate set of conversion functions that are lower level, consume the input string (if any), and in general are to be used where lexical_cast is not the right tool.
There will be a paper from Pete Becker available in two or three weeks.
Sounds good to me. Could you please forward him a link to this discussion (e.g. http://thread.gmane.org/gmane.comp.lib.boost.devel/140542/ )? B.

Aristid Breitkreuz wrote:
Another nice feature would be to check whether the output was actually generated from the complete input. Imagine the following scenario: you have lexical_cast<double>("1,2") and, as it happens, it happily returns 1.
...
So I would call: lexical_cast<double>("1,2", ignore_tail = false) and get an exception. I will probably have to wait for Boost 1.36 though :D.
I have a feeling you are using an old lexical_cast. AFAIK, the lexical_cast in the newest Boost version assumes successful parsing only if all of the input string was consumed.

On 4/10/06, Phil Nash <phil.nash.lists@gmail.com> wrote:
Maxim Yegorushkin wrote:
On 4/10/06, Bronek Kozicki <brok@rubikon.pl> wrote:
All developers I know roll their own versions of lexical_cast. The last time I checked the implementation of boost::lexical_cast, it had efficiency bugs. There should at least be specializations for string -> integer/double conversions that use the strtoul/strtod functions directly rather than iostreams.
Maxim Yegorushkin <maxim.yegorushkin@gmail.com> wrote: The other problem that I have with lexical_cast is that it is currently impossible to tell whether a particular conversion will fail without actually catching an exception (which implies a huge performance penalty). This is needed in all situations where "invalid" input is not an error (e.g. optional data coming from an external source).
True. Boost has long ignored these issues; no wonder they accepted it tentatively.
The thing is, lexical_cast, in its current implementation, is simple and elegant, and perfectly usable for most cases. It doesn't preclude optimisations in the *implementation*, and some such optimisations have been suggested. It would be nice to see some actually implemented in the reference implementation. As to the error handling - I haven't personally given it much thought - but I'm not sure that can be easily parameterised without disturbing the interface.
So have a try_lexical_cast which takes a reference to the output variable and returns a bool indicating success, and have lexical_cast wrap it to throw an exception on failure.
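A minimal sketch of that suggestion, assuming the usual stringstream-based implementation (try_lexical_cast is a hypothetical name, and the real Boost library throws bad_lexical_cast rather than std::bad_cast):

#include <sstream>
#include <typeinfo>  // std::bad_cast stands in for bad_lexical_cast here

template<class Target, class Source>
bool try_lexical_cast(const Source& src, Target& out)
{
    std::stringstream ss;
    char c;
    // Succeed only if both conversions work and the whole input is consumed.
    return (ss << src) && (ss >> out) && !ss.get(c);
}

template<class Target, class Source>
Target lexical_cast(const Source& src)
{
    Target out;
    if (!try_lexical_cast(src, out))
        throw std::bad_cast();
    return out;
}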
Best regards,
[)o IhIL..
-- Cory Nelson http://www.int64.org

The thing is, lexical_cast, in its current implementation, is simple and elegant, and perfectly usable for most cases. It doesn't preclude optimisations in the *implementation*, and some such optimisations have been suggested.
In theory, yes, but since lexical_cast always works on the global locale, optimizations are difficult. You can't just specialize on e.g. int-to-string conversion, since you also need to confirm that the default locale is the same as the classic one (or implement locale handling in the optimization).
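A sketch of the locale guard such an optimization would need (hypothetical helper, not Boost's code; it also assumes the C library locale matches the C++ global locale):

#include <cstdio>
#include <locale>
#include <sstream>
#include <string>

inline std::string int_to_string(int value)
{
    // Fast path only when the global C++ locale is the classic "C" locale.
    if (std::locale() == std::locale::classic()) {
        char buf[16];
        std::sprintf(buf, "%d", value);
        return std::string(buf);
    }
    // Otherwise fall back to iostreams, which honour the global locale.
    std::ostringstream os;
    os << value;
    return os.str();
}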

"Maxim Yegorushkin" <maxim.yegorushkin@gmail.com> wrote in message news:3254098f0604100130l75d5b010x4e0352722a2c9a7b@mail.gmail.com...
On 4/10/06, Reece Dunn <msclrhd@hotmail.com> wrote:
[]
* Boost Lexical Cast has been tentatively accepted by the LWG for TR2.
This has to be one of the most useful utilities in Boost.
All developers I know roll their own versions of lexical_cast. The last time I checked the implementation of boost::lexical_cast, it had efficiency bugs. There should at least be specializations for string -> integer/double conversions that use the strtoul/strtod functions directly rather than iostreams.
The proposal explicitly allows implementations to provide specializations. It would be very unusual for the committee to require implementations to supply specific specializations. Rather, that would be viewed as a quality of implementation issue. The actual document is N1975, Filesystem Library Proposal for TR2 (Revision 3). It will appear on the committee web site in two or three weeks. --Beman

Beman Dawes wrote:
The proposal explicitly allows implementations to provide specializations. It would be very unusual for the committee to require implementations to supply specific specializations. Rather, that would be viewed as a quality of implementation issue.
Specializing lexical_cast means partial template specialization (PTS) for functions. Will this be added to the core language?

Yuval Ronen wrote:
Beman Dawes wrote:
The proposal explicitly allows implementations to provide specializations. It would be very unusual for the committee to require implementations to supply specific specializations. Rather, that would be viewed as a quality of implementation issue.
Specializing lexical_cast means partial template specialization (PTS) for functions. Will this be added to the core language?
Probably not. This specialization can be made inside the function by means of a class template. -Thorsten
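For illustration, a minimal sketch of that technique (names are hypothetical and error handling is omitted):

#include <sstream>
#include <string>

// General case: convert via a stringstream.
template<class Target, class Source>
struct lexical_cast_impl
{
    static Target apply(const Source& s)
    {
        std::stringstream ss;
        Target t;
        ss << s;
        ss >> t;
        return t;
    }
};

// Partial specialization: Target = std::string needs no read-back.
template<class Source>
struct lexical_cast_impl<std::string, Source>
{
    static std::string apply(const Source& s)
    {
        std::ostringstream os;
        os << s;
        return os.str();
    }
};

// The function template itself cannot be partially specialized, so it
// just forwards to the class template, which can be.
template<class Target, class Source>
Target my_lexical_cast(const Source& s)
{
    return lexical_cast_impl<Target, Source>::apply(s);
}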

Thorsten Ottosen wrote:
Yuval Ronen wrote:
Specializing lexical_cast means partial template specialization (PTS) for functions. Will this be added to the core language?
Probably not.
This specialization can be made inside the function by means of a class template.
OK. If only boost::lexical_cast was doing it, we could actually use it...

Yuval Ronen wrote:
OK. If only boost::lexical_cast was doing it, we could actually use it...
And maybe not even then. Providing a uniform interface for conversion from any type to any type using an intermediate string sounds good at first, but when you think of it, it's quite useless. I've *never* had any need to convert between two types via a string. All I needed was to serialize to a string, and de-serialize from a string. These two usages are inherently different, because de-serialization can fail while serialization always succeeds. This means a different interface: two separate functions, where the de-serialization interface (and only it) accounts for error handling. Insisting on the current cast-like interface is no good, since it prevents much-needed interface improvements such as the one I mentioned, and others Stefan Slapeta mentioned in his post. I hope the interface accepted into the standard will take this into account... Yuval
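A minimal sketch of the split interface argued for here (hypothetical names; a real proposal would also address locales and richer error reporting):

#include <sstream>
#include <string>

// Serialization: always succeeds for streamable types.
template<class T>
std::string to_string(const T& x)
{
    std::ostringstream os;
    os << x;
    return os.str();
}

// De-serialization: may fail; reported by return value, no exception.
template<class T>
bool from_string(const std::string& s, T& out)
{
    std::istringstream is(s);
    char c;
    // Require a successful read and that no characters remain.
    return (is >> out) && !(is >> c);
}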

On 4/11/06, Yuval Ronen <ronen_yuval@yahoo.com> wrote:
And maybe not even then.
Providing a uniform interface for conversion from any type to any type using an intermediate string sounds good at first, but when you think of it, it's quite useless.
Agree with that. Neat but useless is not a virtue.

-----Original Message----- From: boost-bounces@lists.boost.org [mailto:boost-bounces@lists.boost.org] On Behalf Of Maxim Yegorushkin

* Boost Lexical Cast has been tentatively accepted by the LWG for TR2.
This has to be one of the most useful utilities in Boost.
All developers I know roll their own versions of lexical_cast. The last time I checked the implementation of boost::lexical_cast, it had efficiency bugs. There should at least be specializations for string -> integer/double conversions that use the strtoul/strtod functions directly rather than iostreams.

Agree this would be nice, but it is also vital IMO that lexical_cast (and serialization) should 'round-trip' correctly, as in the recent discussion: http://article.gmane.org/gmane.comp.lib.boost.devel/140386

One of the requirements for 'round-tripping' is to get the number of digits 'written' right, and neither of these is correct at present. I have tried to get acceptance for this trivial, but still very important, detail: http://www2.open-std.org/JTC1/SC22/WG21/docs/papers/2005/n1822.pdf which has also just been accepted for inclusion in C++0x:

"After static const int digits10 = 0; insert: static const int max_digits10 = 0;"

which would be used by writing:

std::numeric_limits<FPType>::max_digits10();

However, I do not see how to add these until the new numeric_limits definition is available. (Will someone correct me if I am wrong about ADDING to the existing definition.)

Also to be included in the next C Standard mailing is ftp://www.hetp.u-net.com/public/N1171%20max%20significant%20decimal%20digits%20MACROs.pdf which proposed providing macros for these values, thus, for the built-in types:

#define FLT_MAXDIG10 (2 + (FLT_MANT_DIG * 30103UL) / 100000UL)
#define DBL_MAXDIG10 (2 + (DBL_MANT_DIG * 30103UL) / 100000UL)
#define LDBL_MAXDIG10 (2 + (LDBL_MANT_DIG * 30103UL) / 100000UL)

Meanwhile, I am confident that the best formula to use is

2 + std::numeric_limits<FPType>::digits * 30103UL / 100000UL

and to use this in both Boost lexical_cast and serialization, and also in test (which does already use the correct number of digits, but might fail for some higher-precision types, as noted below).

(Note that the reason for using 30103UL/100000UL is to avoid an inaccurate result for higher-precision types like the 113 significand bits used by VAX H and IEEE quadruple, and future 256-bit floating point. But this ratio, and 3010/10000, would cause overflow on 16-bit integer systems, so if numeric_limits<int>::digits < 32 the ratio should be 301/1000. For this observation I am indebted to the eagle-eyed Fred J. Tydeman.)

I am not clear whether we need to worry about 16-bit integer systems - are any toaster manufacturers which have not moved on to a 32-bit processor planning to use lexical_cast, serialization or test? I am not sure whether they would get any warning of the overflow - I suspect not.

Paul

-- Paul A Bristow, Prizet Farmhouse, Kendal, Cumbria UK LA8 8AB
Phone and SMS text +44 1539 561830, Mobile and SMS text +44 7714 330204
mailto: pbristow@hetp.u-net.com http://www.hetp.u-net.com/index.html http://www.hetp.u-net.com/Paul%20A%20Bristow%20info.html
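For illustration, a sketch of the round-trip recipe using the formula above (assuming int is at least 32 bits, per the overflow caveat):

#include <cassert>
#include <limits>
#include <sstream>
#include <string>

template<class FPType>
std::string to_round_trip_string(FPType x)
{
    std::ostringstream os;
    // max_digits10, computed as 2 + digits * 30103UL / 100000UL
    os.precision(2 + std::numeric_limits<FPType>::digits * 30103UL / 100000UL);
    os << x;
    return os.str();
}

int main()
{
    double x = 0.1;   // not exactly representable in binary
    std::istringstream is(to_round_trip_string(x));
    double y;
    is >> y;
    assert(x == y);   // round-trips exactly with 17 digits for double
}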

There are TWO issues which prevent perfect round-tripping of floats/doubles <-> strings:
a) the correct number of digits and correct rendering/round-tripping of floating point values, and
b) portable rendering of the variety of NaNs.
Both of these issues have been discussed extensively on this list - without any resolution. I would think that resolving them is very important and long overdue.

On the other hand, I don't see the motivation for including concepts in the core language. Haven't we been able to implement concept checking with the current facilities of the library? It seems to me that the main problem with the C++ language these days is that it's too hard to write a correct compiler for it. Making the core language fancier will only make this problem worse.

While the committee is at it - how about re-visiting two-phase lookup? Apparently enough implementers are skeptical about it that they've declined to implement it (for good reason, in my opinion). I believe this is related to the "export" feature - which has also failed to gain traction.

So it seems to me that there are a number of issues more important/urgent than the ones currently being focused on.

Robert Ramey
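A small demonstration of issue b); the exact text and behaviour are platform-dependent, and on typical 2006-era implementations the read back fails:

#include <iostream>
#include <limits>
#include <sstream>

int main()
{
    double nan = std::numeric_limits<double>::quiet_NaN();
    std::ostringstream os;
    os << nan;                   // e.g. "nan" or "1.#QNAN" -- unspecified
    std::istringstream is(os.str());
    double back;
    is >> back;                  // typically sets failbit: NaN doesn't round-trip
    std::cout << os.str() << " -> " << (is ? "read ok" : "read failed") << '\n';
}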

On Apr 10, 2006, at 11:50 AM, Robert Ramey wrote:
On the other hand, I don't see the motivation for including concepts in the core language. Haven't we been able to implement concept checking with the current facilities of the library? It seems to me that the main problem with the C++ language these days is that it's too hard to write a correct compiler for it. Making the core language fancier will only make this problem worse.
Imagine a world where the compiler type-checks your templates when you define them, so that no type errors slip past until instantiation time. A world where misuse of template libraries results in a 2-line error message instead of a 2-megabyte error message, where user errors are diagnosed as user errors and implementor errors are diagnosed as implementor errors. Imagine throwing away all of that imprecise concept documentation we use, replacing it with precise concept definitions checked by the compiler. Imagine composing generic libraries together without writing a single adaptor class, where it doesn't matter if I call it "x.size()", "size(x)", or "length(x)". And do it all without breaking existing code. Concepts will drastically improve our ability to write solid generic libraries in C++. When we've improved our compiler and introductory material, we'll announce the next release of ConceptGCC. I strongly encourage everyone interested in library design to give it a try. Doug

-----Original Message----- From: boost-bounces@lists.boost.org [mailto:boost-bounces@lists.boost.org] On Behalf Of Doug Gregor
Imagine a world where the compiler type-checks your templates when you define them, so that no type errors slip past until instantiation time. A world where misuse of template libraries results in a 2-line error message instead of a 2-megabyte error message, where user errors are diagnosed as user errors and implementor errors are diagnosed as implementor errors. Imagine throwing away all of that imprecise concept documentation we use, replacing it with precise concept definitions checked by the compiler. Imagine composing generic libraries together without writing a single adaptor class, where it doesn't matter if I call it "x.size()", "size(x)", or "length(x)". And do it all without breaking existing code.
Concepts will drastically improve our ability to write solid generic libraries in C++. When we've improved our compiler and introductory material, we'll announce the next release of ConceptGCC. I strongly encourage everyone interested in library design to give it a try.
Doug, is there an up-to-date paper on concepts? If so, maybe some of the serious problems associated with earlier concept mechanism ideas have been dealt with. Concepts, as I've seen them so far, are not simply "everything is better" like you appear to be implying above. Given what I have seen, let me add to the above: Imagine a world where either 1) copious amounts of implementation detail bubble up the hierarchy of concept requirements, 2) the 2-megabyte error messages we get now are replaced with 2-megabyte error messages of concept failures deep in the hierarchy of concept requirements, or 3) ad-hoc polymorphism, the greatest strength of the template mechanism in C++, is simply dropped in favor of, at best, non-extensible lists of alternatives.

IOW, I've yet to see a concept mechanism that could even be used to implement the STL (about the simplest possible example) with the same degree of extensibility that we have now. Maybe a concept mechanism can be designed that minimizes how much extensibility is lost, but either way, it's hardly the all-around-win-scenario that you're implying above.

Perhaps my understanding is out-of-date. If so, corrections are welcome. However, my opinion (watching this from the very beginning) is that concepts are fundamentally at odds with ad-hoc polymorphism and second-phase lookup. In order to do it, you have to make significant concessions in those two areas--and I doubt this has changed.

Regards, Paul Mensonides

On Apr 10, 2006, at 4:35 PM, Paul Mensonides wrote:
Doug, is there an up-to-date paper on concepts?
I'm basing my comments on the most recent "Indiana" proposal for concepts, described in committee paper N1849: http://www.open-std.org/jtc1/sc22/wg21/docs/papers/2005/n1849.pdf However, we are busy working on a new proposal for concepts that should be available in the near future. The new proposal will have different syntax in several places and should have greatly improved presentation.
Imagine a world where either 1) copious amounts of implementation detail bubble up the hierarchy of concept requirements,
This claim has been made before, but has yet to be proven. With concepts, your templates need to require what they use, e.g., if you want to treat some type "Iter" as an Input Iterator in your implementation, you have to say that in the interface. Otherwise, you don't get the benefits of separate type checking for templates. We've looked at a few examples that others have had where implementation details bubble up, but they were all either due to misunderstanding of how concepts worked or very poor abstractions. Did you have a particular example in mind?
2) the 2-megabyte error messages we get now are replaced with 2-megabyte error messages of concept failures deep in the hierarchy of concept requirements,
The 2MB error messages we get today are mostly due to the need to include the instantiation context with the message. We get something that says "no operator< for x < y", which shows up in some implementation detail function __introsort_loop(), called from __introsort(), called from __sort(), called from sort(), called from <where the user actually made the mistake>. Each of these functions needs to also have the list of template arguments it was given. That's a lot of text. Concepts provide separate type-checking, so the error will show up in one of two places:

1) In the definition of __introsort_loop(), before it is instantiated, if there is no suitable operator<. No instantiation stack needed, because there's no instantiation yet.

2) In the user code, where the user actually made the mistake. This error will say that the requirement for, e.g., LessThanComparable is not satisfied by the types involved. No instantiation stack needed, because we haven't tried to instantiate sort().

To try to make this description a little more "real", I've appended the output of both GCC and ConceptGCC when trying to compile this simple program:

list<int> l;
sort(l.begin(), l.end());
or 3) ad-hoc polymorphism, the greatest strength of the template mechanism in C++, is simply dropped in favor of, at best, non-extensible lists of alternatives.
Concepts (and Generic Programming itself) have always been about extensibility and avoiding lock-in. Let's take the trivial example of a min() function and examine its use with and without concepts. Without concepts, it looks like this:

template<typename T> const T& min(const T& x, const T& y) { return x < y ? x : y; }

You can instantiate min() with any type U so long as there is a < operator that can compare two U's. The compiler does this by substituting U for T throughout the body of the template, and then looking up the < operator for two U's. With concepts, one would create a LessThanComparable concept and use that within the declaration of min():

auto concept LessThanComparable<typename T> { bool operator<(T, T); };

template<typename T> where LessThanComparable<T>
const T& min(const T& x, const T& y) { return x < y ? x : y; }

You can instantiate min() with any type U so long as there is a < operator that can compare two U's. There are some subtle differences (i.e., the concept-based min() has been fully type-checked at definition time), but the use of min() will be the same either way. However, concepts give you more flexibility. For instance, let's assume that we want to use min() on our Employee type but we don't want to add a global operator< for Employees. Without concepts, we would need to write some kind of adaptor class to use when calling min(): it would wrap up a reference to an Employee, provide some alternative definition of operator<, and then probably convert to an Employee reference. Each time we use min(), we need to remember to construct this adaptor. The end result looks something like this:

class Employee { ... string id; ... };

class employee_with_less_than {
public:
  employee_with_less_than(const Employee& e) : e(e) { }
  operator const Employee& () const { return e; }
  const Employee& e;
};

bool operator<(employee_with_less_than x, employee_with_less_than y) { return x.e.id < y.e.id; }

void f(Employee e1, Employee e2) { min(employee_with_less_than(e1), employee_with_less_than(e2)); }

Concepts allow one to adapt syntax through the use of "concept maps" (called "models" in N1849). Concept maps mean that you never have to match the exact syntax required by a concept, because you can tell the compiler how your types meet the requirements of a concept. For instance:

concept_map LessThanComparable<Employee> {
  bool operator<(Employee x, Employee y) { return x.id < y.id; }
};

void f(Employee e1, Employee e2) {
  min(e1, e2); // Okay, uses LessThanComparable<Employee>::operator<
}

Concept maps can be templates, too, which allows you to adapt an entire class of types to a concept. For instance, every vector<T> can be made into a Stack; every type that meets the requirements of a Front Insertion Sequence and a Back Insertion Sequence can be made into a Queue; and every type that is a Weighted Graph can be made into a Matrix.
IOW, I've yet to see a concept mechanism that could even be used to implement the STL (about the simplest possible example) with the same degree of extensibility that we have now. Maybe a concept mechanism can be designed that minimizes how much extensibility is lost, but either way, it's hardly the all-around-win-scenario that you're implying above.
We immediately rejected any design that could not express the STL. The appendix of N1849 contains many examples illustrating how the STL can be expressed with our concept proposal. These aren't straw men or guesses or wishes: they were copied directly out of the implementation of the Standard Library in ConceptGCC, and they work as expected.

I am fully aware that I sound like a raving lunatic, claiming to fix all of the current problems with templates. ConceptGCC implements enough of the Indiana concepts proposal to express most of the STL, from the simplest concept like CopyConstructible all the way to proxy-laden InputIterators. We know how to express the rest of the STL, and much of the non-STL Standard Library, using concepts, and have also looked at expressing concepts from other libraries (e.g., Boost.Graph, Boost.Function). What we've found is that we can improve the quality of template libraries (by catching bugs earlier; we've found several in libstdc++), their usability (with shorter/better error messages), and their composability (by allowing syntax adaptation).
Perhaps my understanding is out-of-date. If so, corrections are welcome. However, my opinion (watching this from the very beginning) is that concepts are fundamentally at odds with ad-hoc polymorphism and second-phase lookup. In order to do it, you have to make significant concessions in those two areas--and I doubt this has changed.
There are always tradeoffs when introducing a type system into an untyped language. The question is whether one can express all of the interesting and useful constructs within the type system. If there's some useful construct that you are concerned might not be expressible with concepts, please give us an example and we'll discuss it. You can try out concepts today with ConceptGCC and/or read N1849 to learn about some of the details of concepts. In the near future, we'll have a revised proposal (with new concept syntax) and an improved version of ConceptGCC that implements the revised proposal. If you're itching to learn about concepts now, I suggest starting with the ConceptGCC tutorial and compiler at: http://www.osl.iu.edu/~dgregor/ConceptGCC/ When we have the revised compiler/proposal available, I'll make an announcement here on Boost. Doug

Sorting list iterators with GCC:

/usr/include/c++/4.0.0/bits/stl_algo.h: In function 'void std::sort(_RandomAccessIterator, _RandomAccessIterator) [with _RandomAccessIterator = std::_List_iterator<int>]':
sort_list.cpp:8: instantiated from here
/usr/include/c++/4.0.0/bits/stl_algo.h:2570: error: no match for 'operator-' in '__last - __first'
/usr/include/c++/4.0.0/bits/stl_algo.h: In function 'void std::__final_insertion_sort(_RandomAccessIterator, _RandomAccessIterator) [with _RandomAccessIterator = std::_List_iterator<int>]':
/usr/include/c++/4.0.0/bits/stl_algo.h:2571: instantiated from 'void std::sort(_RandomAccessIterator, _RandomAccessIterator) [with _RandomAccessIterator = std::_List_iterator<int>]'
sort_list.cpp:8: instantiated from here
/usr/include/c++/4.0.0/bits/stl_algo.h:2214: error: no match for 'operator-' in '__last - __first'
/usr/include/c++/4.0.0/bits/stl_algo.h:2216: error: no match for 'operator+' in '__first + _S_threshold'
/usr/include/c++/4.0.0/bits/stl_algo.h:2217: error: no match for 'operator+' in '__first + _S_threshold'
/usr/include/c++/4.0.0/bits/stl_algo.h: In function 'void std::__insertion_sort(_RandomAccessIterator, _RandomAccessIterator) [with _RandomAccessIterator = std::_List_iterator<int>]':
/usr/include/c++/4.0.0/bits/stl_algo.h:2220: instantiated from 'void std::__final_insertion_sort(_RandomAccessIterator, _RandomAccessIterator) [with _RandomAccessIterator = std::_List_iterator<int>]'
/usr/include/c++/4.0.0/bits/stl_algo.h:2571: instantiated from 'void std::sort(_RandomAccessIterator, _RandomAccessIterator) [with _RandomAccessIterator = std::_List_iterator<int>]'
sort_list.cpp:8: instantiated from here
/usr/include/c++/4.0.0/bits/stl_algo.h:2130: error: no match for 'operator+' in '__first + 1'
/usr/include/c++/4.0.0/bits/stl_algo.h:2220: instantiated from 'void std::__final_insertion_sort(_RandomAccessIterator, _RandomAccessIterator) [with _RandomAccessIterator = std::_List_iterator<int>]'
/usr/include/c++/4.0.0/bits/stl_algo.h:2571: instantiated from 'void std::sort(_RandomAccessIterator, _RandomAccessIterator) [with _RandomAccessIterator = std::_List_iterator<int>]'
sort_list.cpp:8: instantiated from here
/usr/include/c++/4.0.0/bits/stl_algo.h:2136: error: no match for 'operator+' in '__i + 1'

Sorting list iterators with ConceptGCC:

sort_list.cpp: In function 'int main()':
sort_list.cpp:8: error: no matching function for call to 'sort(std::_List_iterator<int>, std::_List_iterator<int>)'
/usr/local/conceptgcc/lib/gcc/powerpc-apple-darwin8.2.0/4.0.1/../../../../include/c++/4.0.1/bits/stl_algo.h:2843: note: candidates are: void std::sort(_Iter, _Iter) [with _Iter = std::_List_iterator<int>] <where clause>
sort_list.cpp:8: note: unsatisfied model requirement 'std::MutableRandomAccessIterator<std::_List_iterator<int> >'

Douglas Gregor wrote:
On Apr 10, 2006, at 4:35 PM, Paul Mensonides wrote:
To try to make this description a little more "real", I've appended the output of both GCC and ConceptGCC when trying to compile this simple program:
list<int> l;
sort(l.begin(), l.end());
Sorting list iterators with ConceptGCC:
sort_list.cpp: In function 'int main()':
sort_list.cpp:8: error: no matching function for call to 'sort(std::_List_iterator<int>, std::_List_iterator<int>)'
/usr/local/conceptgcc/lib/gcc/powerpc-apple-darwin8.2.0/4.0.1/../../../../include/c++/4.0.1/bits/stl_algo.h:2843: note: candidates are: void std::sort(_Iter, _Iter) [with _Iter = std::_List_iterator<int>] <where clause>
sort_list.cpp:8: note: unsatisfied model requirement 'std::MutableRandomAccessIterator<std::_List_iterator<int> >'
Here's the message from Comeau 4.3.3:

$ como list.cpp --error_limit=1
Comeau C/C++ 4.3.3 (Jan 13 2004 11:29:09) for MS_WINDOWS_x86
Copyright 1988-2003 Comeau Computing. All rights reserved.
MODE:non-strict warnings microsoft C++
"E:\C++\visual7_1\include\algorithm", line 1795: error: no operator "-" matches these operands
  operand types are: std::list<int, std::allocator<int>>::iterator - std::list<int, std::allocator<int>>::iterator
  _Sort(_First, _Last, _Last - _First);
                       ^
  detected during instantiation of "void std::sort(_RanIt, _RanIt) [with _RanIt=std::list<int, std::allocator<int>>::iterator]"

Now the problem here is that we can't see which line caused the error. I assume concept checks from boost would help. -Thorsten
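For reference, a sketch of what such a check looks like with the Boost Concept Check Library today (the choice of RandomAccessIteratorConcept is illustrative; sort strictly requires mutability as well):

#include <boost/concept_check.hpp>
#include <algorithm>
#include <list>
#include <vector>

template<class Iter>
void checked_sort(Iter first, Iter last)
{
    // If Iter is not a random access iterator, the error fires here and
    // names the concept, instead of pointing deep inside stl_algo.h.
    boost::function_requires< boost::RandomAccessIteratorConcept<Iter> >();
    std::sort(first, last);
}

int main()
{
    std::vector<int> v;
    checked_sort(v.begin(), v.end());    // fine

    std::list<int> l;
    // checked_sort(l.begin(), l.end()); // error would point at the check
}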

-----Original Message----- From: boost-bounces@lists.boost.org [mailto:boost-bounces@lists.boost.org] On Behalf Of Douglas Gregor
I'm basing my comments on the most recent "Indiana" proposal for concepts, described in committee paper N1849:
http://www.open-std.org/jtc1/sc22/wg21/docs/papers/2005/n1849.pdf
Okay.
However, we are busy working on a new proposal for concepts that should be available in the near future. The new proposal will have different syntax in several places and should have greatly improved presentation.
Okay, this is what I suspected--that a lot of the details exist mostly in the collective minds of those close to the proposal.
Imagine a world where either 1) copious amounts of implementation detail bubble up the hierarchy of concept requirements,
This claim has been made before, but has yet to be proven. With concepts, your templates need to require what they use, e.g., if you want to treat some type "Iter" as an Input Iterator in your implementation, you have to say that in the interface. Otherwise, you don't get the benefits of separate type checking for templates. We've looked at a few examples that others have had where implementation details bubble up, but they were all either due to misunderstanding of how concepts worked or very poor abstractions. Did you have a particular example in mind?
Well, for a simple scenario:

template<class T> T f(T x) { return x; }

template<class T> T g(T x) { return f(x); }

If I add concept requirements to f and g, the set of requirements for g has to include the set of requirements of f and f itself. If I add another function:

template<class T> T h(T x) { return g(x); }

Its set of requirements must include the set of requirements from g and g itself. So far, for h, that set includes CopyConstructible, the ability to call f, and the ability to call g. That's what I'm referring to when I say "bubbling of implementation detail." Note, in particular, that CopyConstructible is not sufficient as a sole requirement of g or h. f is needed in the requirements of g, and both f and g are needed in the requirements of h because both are bound during the second phase--which could result in (e.g.) an ambiguity. You need to raise that (e.g.) ambiguity to the top-level instantiation (or concept check immediately prior to instantiation) in order to avoid the two-megabyte error messages (whether they be in regular templates or in concept checks). Something like CopyConstructible is fairly innocuous as an implementation detail; f and g are not--particularly the requirement for f in h. Such bubbling makes it virtually impossible to let those calls be resolved during the second phase (without some sort of concept requirement union mechanism, either explicit or implicit).
2) the 2-megabyte error messages we get now are replaced with 2-megabyte error messages of concept failures deep in the hierarchy of concept requirements,
The 2MB error messages we get today are mostly due to the need to include the instantiation context with the message. We get something that says "no operator< for x < y", which shows up in some implementation detail function __introsort_loop(), called from __introsort(), called from __sort(), called from sort(), called from <where the user actually made the mistake>. Each of these functions needs to also have the list of template arguments it was given. That's a lot of text.
Yes, I understand that.
Concepts provide separate type-checking, so the error will show up in one of two places:
1) In the definition of __introsort_loop(), before it is instantiated, if there is no suitable operator<. No instantiation stack needed, because there's no instantiation yet.
2) In the user code, where the user actually made the mistake. This error will say that the requirement for, e.g., LessThanComparable is not satisfied by the types involved. No instantiation stack needed, because we haven't tried to instantiate sort().
What I fail to understand is why this is just LessThanComparable here instead of all of LessThanComparable, __introsort_loop, __introsort, __sort, and sort. All of those can fail to be bound during the second phase--even if they are namespace qualified--because of overloading.
Concepts (and Generic Programming itself) have always been about extensibility and avoiding lock-in. Let's take the trivial example of a min() function and examine it's use with and without concepts. Without concepts, it looks like this:
template<typename T> const T& min(const T& x, const T& y) { return x < y? x : y; }
You can instantiate min() with any type U so long as there is a < operator that can compare two U's. The compiler does this by substituting U for T throughout the body of the template, and then looking up the < operator for two U's.
With concepts, one would create a LessThanComparable concept and use that within the declaration of min():
auto concept LessThanComparable<typename T> { bool operator<(T, T); };
template<typename T> where LessThanComparable<T> const T& min(const T& x, const T& y) { return x < y? x : y; }
You can instantiate min() with any type U so long as there is a < operator that can compare two U's. There are some subtle differences (i.e., the concept-based min() has been fully type-checked at definition time), but the use of min() will be the same either way.
Yes, I understand all of the above, and everything is simple at this simple level. What happens, on the other hand, when some other template uses min? It then needs both LessThanComparable and min (or some concept name that describes the operation). In this particular case it isn't so bad, but when a function (or any other name bound during the second phase) is a non-public implementation detail (such as func() calling func_impl()), it is a serious problem (func_impl is an operation on the types passed to it just as much as operator< is). Say you have the operator< above, but it is implemented as:

template<class T> bool operator<(const X<T>& x, const X<T>& y) { return x.less(y); }

Now the min above breaks, because the where clause does not include the requirement that T (the one in min) has a less() member function. I don't see how you can avoid the bubbling of implementation detail without severely restricting second-phase lookup--ultimately to the point of listing viable alternatives beforehand (subsetting second-phase lookup) and prioritizing them (to avoid ambiguity).
Concepts allow one to adapt syntax through the use of "concept maps" (called "models" in N1849). Concept maps mean that you never have to match the exact syntax required by a concept, because you can tell the compiler how your types meet the requirements of a concept. For instance:
concept_map LessThanComparable<Employee> { bool operator<(Employee x, Employee y) { return x.id < y.id; } };
void f(Employee e1, Employee e2) { min(e1, e2); // Okay, uses LessThanComparable<Employee>::operator< }
This is a great idea, but that is not my main concern. My main concern is how you can avoid implementation detail bubbling--even in simple cases--without some sort of premature binding.
IOW, I've yet to see a concept mechanism that could even be used to implement the STL (about the simplest possible example) with the same degree of extensibility that we have now. Maybe a concept mechanism can be designed that minimizes how much extensibility is lost, but either way, it's hardly the all-around-win-scenario that you're implying above.
We immediately rejected any design that could not express the STL.
Note that I said "...implement the STL with the same degree of extensibility that we have now."
The appendix of N1849 contains many examples illustrating how the STL can be expressed with our concept proposal. These aren't straw men or guesses or wishes: they were copied directly out of the implementation of the Standard Library in ConceptGCC, and they work as expected.
Okay.
I am fully aware that I sound like a raving lunatic, claiming to fix all of the current problems with templates. ConceptGCC implements enough of the Indiana concepts proposal to express most of the STL, from the simplest concept like CopyConstructible all the way to proxy- laden InputIterators. We know how to express the rest of the STL, and much of the non-STL Standard Library, using concepts, and have also looked at expressing concepts from other libraries (e.g., Boost.Graph, Boost.Function). What we've found is that we can improve the quality of template libraries (by catching bugs earlier; we've found several in libstdc++), their usability (with shorter/better error messages), and their composability (by allowing syntax adaptation).
I really hope that is the case. However, all examples that you've shown seem to allow transitive compatibility--meaning f can call g without listing the requirement for g if f and g both have the same requirements on their operands (etc.). I don't see how this could possibly work without binding g at the point of f's definition--i.e. _not_ during the second phase.
Perhaps my understanding is out-of-date. If so, corrections are welcome. However, my opinion (watching this from the very beginning) is that concepts are fundamentally at odds with ad-hoc polymorphism and second-phase lookup. In order to do it, you have to make significant concessions in those two areas--and I doubt this has changed.
There are always tradeoffs when introducing a type system into an untyped language. The question is whether one can express all of the interesting and useful constructs within the type system. If there's some useful construct that you are concerned might not be expressible with concepts, please give us an example and we'll discuss it.
My main concern is what I've mentioned above--which is a generic concern related to all generic programming under a concept mechanism. I'm not trying to be a nay-sayer; I really want a concept mechanism to work, but I'm not sure that it can work without major concessions in the ad-hoc way that regular templates work. That ad-hoc nature is what yields two-megabyte error messages, but it is also what makes templates so powerful and so incredibly flexible. Regards, Paul Mensonides

Hi! <caveat> I've followed some discussions on adding Concepts to the language from a distance, but I'm not completely familiar with all the details. Take my comments with a grain of salt. </caveat> Paul Mensonides wrote:
Well, for a simple scenario:
template<class T> T f(T x) { return x; }
template<class T> T g(T x) { return f(x); }
If I add concept requirements to f and g, the set of requirements for g has to include the set of requirements of f and f itself. If I add another function:
template<class T> T h(T x) { return g(x); }
Its set of requirements must include the set of requirements from g and g itself. So far, for h, that set includes CopyConstructible, the ability to call f, and the ability to call g.
That's what I'm referring to when I say "bubbling of implementation detail." Note, in particular, that CopyConstructible is not sufficient as a sole requirement of g or h. f is needed in the requirements of g, and both f and g are needed in the requirements of h because both are bound during the second phase--which could result in (e.g.) an ambiguity. You need to raise that (e.g.) ambiguity to the top-level instantiation (or concept check immediately prior to instantiation) in order to avoid the two-megabyte error messages (whether they be in regular templates or in concept checks).
It is my understanding that requirements are to be maintained by the developer. This only follows existing practice with, e.g., enable_if, static asserts and tag dispatching. It is still up to the developer to keep those requirements accurate and up-to-date. Otherwise the compiler will "fall back" to the two-megabyte error message. In my view, the Concepts approach, because it formalizes syntax and semantics, has the added benefit that it eases automated testing of the requirements. I'm not sure how much the Concept checks add to what is already possible in the current language, but having an explicit and unambiguous syntax to do the same things is better than the hacks being used today.
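As a concrete example of one such present-day hack, a minimal boost::enable_if sketch (my example, not from any of the libraries discussed):

#include <boost/utility/enable_if.hpp>
#include <boost/type_traits/is_integral.hpp>

// twice() participates in overload resolution only for integral T; for
// any other T it is removed from the overload set (SFINAE) instead of
// producing an error deep inside its body.
template<class T>
typename boost::enable_if<boost::is_integral<T>, T>::type
twice(T x)
{
    return x + x;
}

int main()
{
    return twice(2) - 4;    // OK: int is integral
    // twice(2.0);          // would not compile: no matching function
}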
Something like CopyConstructible is fairly innocuous as an implementation detail; f and g are not--particularly the requirement for f in h. Such bubbling makes it virtually impossible to let those calls be resolved during the second phase (without some sort of concept requirement union mechanism, either explicit or implicit).
From my experience with Spirit, for example, you'll get tons of errors from fairly small mistakes. I suppose this is what you'd call "bubbling of implementation details", and it is the current state of affairs. The situation could be improved today by adding Concept check hacks, or it can be improved tomorrow with proper Concept checks. Either way, the burden will be on the developers of Spirit to keep those checks accurate and to avoid ambiguity and corner cases in the library code.
Concepts provide separate type-checking, so the error will show up in one of two places:
1) In the definition of __introsort_loop(), before it is instantiated, if there is no suitable operator<. No instantiation stack needed, because there's no instantiation yet.
2) In the user code, where the user actually made the mistake. This error will say that the requirement for, e.g., LessThanComparable is not satisfied by the types involved. No instantiation stack needed, because we haven't tried to instantiate sort().
What I fail to understand is why this is just LessThanComparable here instead of all of LessThanComparable, __introsort_loop, __introsort, __sort, and sort. All of those can fail to be bound during the second phase--even if they are namespace qualified--because of overloading.
That would be because, as the developer, you take great care to ensure that you respect the stated requirements so your implementation details don't pop out of the interface. Even though the user could intentionally (or not) break things by overloading implementation functions or, god forbid, a library bug could creep in, there will always be plenty of ways to shoot oneself in the foot. But, templates or no templates, the developer is responsible for hiding and maintaining implementation details. Taking your example above, if h needs to invoke g(x) generically, then that could be its Concept requirement. In that case, the compiler would ensure such an invocation is possible and that x meets the requirements for g. I further suppose all this occurs while deciding whether h is a viable function, and before actually trying to fully instantiate and compile it. If, on the other hand, you already know (and want to enforce) that the only effective requirement is that T be CopyConstructible, you could specify that in the requirements instead. Best regards, João Abecasis

On Apr 12, 2006, at 6:05 AM, João Abecasis wrote:
Paul Mensonides wrote:
That's what I'm referring to when I say "bubbling of implementation detail." Note, in particular, that CopyConstructible is not sufficient as a sole requirement of g or h. f is needed in the requirements of g, and both f and g are needed in the requirements of h because both are bound during the second phase--which could result in (e.g.) an ambiguity. You need to raise that (e.g.) ambiguity to the top-level instantiation (or concept check immediately prior to instantiation) in order to avoid the two-megabyte error messages (whether they be in regular templates or in concept checks).
It is my understanding that requirements are to be maintained by the developer. This only follows existing practice with, e.g., enable_if, static_asserts and tag dispatching. It is still up to the developer to keep those requirements accurate and up-to-date. Otherwise the compiler will "fallback" to the two megabyte error message.
Concept requirements are a little stronger than that. If your requirements are wrong on a given function template, that function template will fail to type-check when it is defined. The typical example I use is:

template<InputIterator Iter, typename F>
where Callable1<F, InputIterator<Iter>::reference>
F for_each(Iter first, Iter last, F f)
{
  for (; first < last; ++first)
    f(*first);
  return f;
}

The compiler will reject this definition of for_each, because there is no operator< that works on Input Iterators. Change the "<" to a "!=", and all is well. So, Paul is right that your requirements need to reflect what your implementation does. But you are also correct that this is the current state of affairs--except that today we use documentation to describe requirements, whereas concepts allow us to use code to describe requirements. Doug

On Apr 11, 2006, at 4:31 PM, Paul Mensonides wrote:
However, we are busy working on a new proposal for concepts that should be available in the near future. The new proposal will have different syntax in several places and should have greatly improved presentation.
Okay, this is what I suspected--that a lot of the details exist mostly in the collective minds of those close to the proposal.
... and scattered throughout various academic papers, presentations, tutorials, etc :)
I really hope that is the case. However, all examples that you've shown seem to allow transitive compatibility--meaning f can call g without listing the requirement for g if f and g both have the same requirements on their operands (etc.). I don't see how this could possibly work without binding g at the point of f's definition--i.e. _not_ during the second phase.
Ah, this is the issue at hand. Yes, we change when and how names are bound. There are still two phases. I'll use the lower_bound algorithm to illustrate how name resolution works:

template<InputIterator Iter> void advance(Iter& x, difference_type n); // #1
template<BidirectionalIterator Iter> void advance(Iter& x, difference_type n); // #2

template<ForwardIterator Iter> where LessThanComparable<value_type>
Iter lower_bound(Iter first, Iter last, const value_type& value)
{
  difference_type n = distance(first, last);
  Iter mid = first;
  advance(mid, n/2); // we're concerned with this call...
}

template<RandomAccessIterator Iter> void advance(Iter& x, difference_type n); // #3

When we initially type-check lower_bound, we need to determine if there is an advance() that we can call. Both #1 and #2 have been seen, so we check them. Can we call #1? Sure, because every ForwardIterator is also an InputIterator. Can we call #2? Nope, our ForwardIterator is not guaranteed to be a BidirectionalIterator. So, we call #1 our "seed" function and use it for type-checking the template. Type-checking succeeds.

Now, let's consider what happens with this call to lower_bound:

vector<int> v;
lower_bound(v.begin(), v.end(), 17);

When instantiating a template that uses concepts, we don't perform the normal second-phase lookup with ADL. Instead, we do a more limited lookup designed to make sure that instantiation cannot fail [*]. So, when we instantiate the call to advance() in lower_bound(), we start with our "seed" function #1. We then look for any other functions named "advance" in the same namespace as our seed that are specializations of the seed, and add those functions to our candidate set along with the seed. This means that our candidate set will contain #1, #2, and #3. We perform normal partial ordering of function templates on this candidate set, then end up choosing #3. That's the behavior we want and expect: pick the best function when we instantiate.

Doug

[*] Yes, instantiation can still fail if partial ordering of function templates returns an ambiguity. We can synthesize it all we want, but it has yet to actually bite us in practice.

Doug Gregor wrote:
Ah, this is the issue at hand. Yes, we change when and how names are bound. There are still two phases to name lookup. I'll use the lower_bound algorithm to illustrate how name resolution works:
template<InputIterator Iter>
void advance(Iter& x, difference_type n); // #1

template<BidirectionalIterator Iter>
void advance(Iter& x, difference_type n); // #2

template<ForwardIterator Iter>
  where LessThanComparable<value_type>
Iter lower_bound(Iter first, Iter last, const value_type& value)
{
    difference_type n = distance(first, last);
    Iter mid = first;
    advance(mid, n/2); // we're concerned with this call...
}
template<RandomAccessIterator Iter> void advance(Iter& x, difference_type n); // #3
When we initially type-check lower_bound, we need to determine if there is an advance() that we can call. Both #1 and #2 have been seen, so we check them. Can we call #1? Sure, because every ForwardIterator is also an InputIterator. Can we call #2? Nope, our ForwardIterator is not guaranteed to be a BidirectionalIterator. So, we call #1 our "seed" function and use it for type-checking the template. Type-checking succeeds.
Question #1, what happens when there is no seed?
Now, let's consider what happens with this call to lower_bound:
vector<int> v;
lower_bound(v.begin(), v.end(), 17);
When instantiating a template that uses concepts, we don't perform the normal second-phase lookup with ADL. Instead, we do a more limited lookup designed to make sure that instantiation cannot fail [*]. So, when we instantiate the call to advance() in lower_bound(), we start with our "seed" function #1. We then look for any other functions named "advance" in the same namespace as our seed that are specializations of the seed, and add those functions to our candidate set along with the seed.
Question #2, what happens in the swap() case? The seed (if there's one) is std::swap, but the actual call must go to the specialized swap in the type's namespace.

On Apr 12, 2006, at 9:25 AM, Peter Dimov wrote:
Doug Gregor wrote:
When we initially type-check lower_bound, we need to determine if there is an advance() that we can call. Both #1 and #2 have been seen, so we check them. Can we call #1? Sure, because every ForwardIterator is also an InputIterator. Can we call #2? Nope, our ForwardIterator is not guaranteed to be a BidirectionalIterator. So, we call #1 our "seed" function and use it for type-checking the template. Type-checking succeeds.
Question #1, what happens when there is no seed?
Type-checking of the template definition fails.
Now, let's consider what happens with this call to lower_bound:
vector<int> v;
lower_bound(v.begin(), v.end(), 17);
When instantiating a template that uses concepts, we don't perform the normal second-phase lookup with ADL. Instead, we do a more limited lookup designed to make sure that instantiation cannot fail [*]. So, when we instantiate the call to advance() in lower_bound(), we start with our "seed" function #1. We then look for any other functions named "advance" in the same namespace as our seed that are specializations of the seed, and add those functions to our candidate set along with the seed.
Question #2, what happens in the swap() case? The seed (if there's one) is std::swap, but the actual call must go to the specialized swap in the type's namespace.
In that case, you'll need a Swappable concept in your requirements:

auto concept Swappable<typename T> {
    void swap(T&, T&);
};

We'll still define std::swap, of course:

template<typename T>
  where CopyConstructible<T> && Assignable<T>
void swap(T& x, T& y);

The "auto" means that we get implicit matching for the concept, which will either fall back to the swap() definition above or will find a more specialized swap in the type's namespace.

Doug
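For illustration, a sketch of how that plays out (tentative proposal syntax; the lib namespace and matrix type are invented purely for the example):

namespace lib {
    class matrix { /* ... */ };
    void swap(matrix& a, matrix& b);   // cheaper than the generic std::swap
}

template<Swappable T>
void reorder(T& a, T& b)
{
    swap(a, b);   // implicit matching of Swappable binds this to lib::swap
                  // for lib::matrix, and falls back to std::swap for types
                  // without their own swap
}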

-----Original Message-----
From: boost-bounces@lists.boost.org
[mailto:boost-bounces@lists.boost.org] On Behalf Of Doug Gregor
I really hope that is the case. However, all examples that you've shown seem to allow transitive compatibility--meaning f can call g without listing the requirement for g if f and g both have the same requirements on their operands (etc.). I don't see how this could possibly work without binding g at the point of f's definition--i.e. _not_ during the second phase.
Ah, this is the issue at hand. Yes, we change when and how names are bound. There are still two phases to name lookup. I'll use the lower_bound algorithm to illustrate how name resolution works:
template<InputIterator Iter>
void advance(Iter& x, difference_type n); // #1

template<BidirectionalIterator Iter>
void advance(Iter& x, difference_type n); // #2

template<ForwardIterator Iter>
  where LessThanComparable<value_type>
Iter lower_bound(Iter first, Iter last, const value_type& value)
{
    difference_type n = distance(first, last);
    Iter mid = first;
    advance(mid, n/2); // we're concerned with this call...
}
template<RandomAccessIterator Iter> void advance(Iter& x, difference_type n); // #3
When we initially type-check lower_bound, we need to determine if there is an advance() that we can call. Both #1 and #2 have been seen, so we check them. Can we call #1? Sure, because every ForwardIterator is also an InputIterator. Can we call #2? Nope, our ForwardIterator is not guaranteed to be a BidirectionalIterator.
What happens if you have more than one seed? I.e. what if more than one function is callable, with no refinement relationship existing between the two concepts?
So, we call #1 our "seed" function and use it for type-checking the template. Type-checking succeeds.
Now, let's consider what happens with this call to lower_bound:
vector<int> v;
lower_bound(v.begin(), v.end(), 17);
When instantiating a template that uses concepts, we don't perform the normal second-phase lookup with ADL. Instead, we do a more limited lookup designed to make sure that instantiation cannot fail [*]. So, when we instantiate the call to advance() in lower_bound(), we start with our "seed" function #1. We then look for any other functions named "advance" in the same namespace as our seed that are specializations of the seed, and add those functions to our candidate set along with the seed.
Is the name resolution related to the seed implicitly added to the concept requirements that are checked immediately prior to instantiation? Or is the name resolution done during normal instantiation (with a restricted set of possible bindings)?
This means that our candidate set will contain #1, #2, and #3. We perform normal partial ordering of function templates on this candidate set, then end up choosing #3. That's the behavior we want and expect: pick the best function when we instantiate.
Doug
[*] Yes, instantiation can still fail if partial ordering of function templates returns an ambiguity. We can synthesize it all we want, but it has yet to actually bite us in practice.
Well, it seems like a reasonable tradeoff--though I guarantee that the above will bite eventually (which is no worse than what we have now). Regards, Paul Mensonides

On Apr 12, 2006, at 5:36 PM, Paul Mensonides wrote:
-----Original Message-----
From: boost-bounces@lists.boost.org
[mailto:boost-bounces@lists.boost.org] On Behalf Of Doug Gregor
When we initially type-check lower_bound, we need to determine if there is an advance() that we can call. Both #1 and #2 have been seen, so we check them. Can we call #1? Sure, because every ForwardIterator is also an InputIterator. Can we call #2? Nope, our ForwardIterator is not guaranteed to be a BidirectionalIterator.
What happens if you have more than one seed?
The program is ill-formed and the compiler will report an ambiguity when type-checking the template definition.
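For concreteness, a sketch of the two-seed situation (tentative proposal syntax; the concepts and functions here are made up purely for illustration):

concept Red<typename T>  { /* ... */ };
concept Blue<typename T> { /* ... */ };   // neither refines the other

template<Red T>  void paint(T);   // candidate seed #1
template<Blue T> void paint(T);   // candidate seed #2

template<typename T> where Red<T> && Blue<T>
void decorate(T x)
{
    paint(x);   // both candidates are callable and neither is more
                // specialized: ill-formed, reported when the template
                // definition is type-checked
}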
Is the name resolution related to the seed implicitly added to the concept requirements that are checked immediately prior to instantiation? Or is the name resolution done during normal instantiation (with a restricted set of possible bindings)?
The latter.
[*] Yes, instantiation can still fail if partial ordering of function templates returns an ambiguity. We can synthesize it all we want, but it has yet to actually bite us in practice.
Well, it seems like a reasonable tradeoff--though I guarantee that the above will bite eventually (which is no worse than what we have now).
Everything will bite you eventually, unless you give up specialization entirely. For more information on the issue, see: Jaakko Järvi, Douglas Gregor, Jeremiah Willcock, Andrew Lumsdaine, and Jeremy Siek. Algorithm specialization in generic programming: Challenges of constrained generics in C++. Note: Accepted for publication in PLDI, June 2006. Doug

"Robert Ramey" <ramey@rrsd.com> writes:
On the other hand - I don't see the motivation for including concepts in the core language. Haven't we been able to implement concept checking with the current facilities of the library?
Concepts in the language go way beyond checking. There's a great deal of mess that we deal with today because we don't have true concept support: everything from the perils of ADL, to the need for the "typename" and "template" keywords, to tag dispatching (which is really just a hack we invented to simulate concept-based overloading), to many uses of enable_if, will largely disappear from code written with in-language concept support.
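To make the tag-dispatching point concrete, here is a rough sketch (illustrative only) of today's idiom, with the concept-based equivalent in a comment; the concept syntax follows the draft proposals and may change:

#include <iterator>

// Today: tag dispatching simulates concept-based overloading.
template<class Iter>
void advance_impl(Iter& it,
                  typename std::iterator_traits<Iter>::difference_type n,
                  std::input_iterator_tag)
{
    while (n-- > 0) ++it;   // one step at a time
}

template<class Iter>
void advance_impl(Iter& it,
                  typename std::iterator_traits<Iter>::difference_type n,
                  std::random_access_iterator_tag)
{
    it += n;                // constant time
}

template<class Iter>
void my_advance(Iter& it,
                typename std::iterator_traits<Iter>::difference_type n)
{
    advance_impl(it, n,
        typename std::iterator_traits<Iter>::iterator_category());
}

// With in-language concepts the dispatch becomes plain overloading
// (tentative syntax):
//   template<InputIterator Iter>        void my_advance(Iter&, difference_type);
//   template<RandomAccessIterator Iter> void my_advance(Iter&, difference_type);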
It seems to me that the main problem with the C++ language these days is that it's too hard to write a correct compiler for it. Making the core language fancier will only make this problem worse.
Actually, implementing concept support presents no great difficulty, as Doug has demonstrated by implementing ConceptGCC.
While the committee is at it - how about revisiting two-phase lookup? Apparently there are enough implementers skeptical about it that they've declined to implement it.
AFAICT you just mean that Microsoft hasn't implemented it yet. All the other major compiler vendors who actually do work on their C++ front ends at a reasonable pace (I'm not counting Borland in that group) have got two-phase lookup implemented.
(for good reason in my opinion). I believe this is related to the "export" feature - which has also failed to gain traction.
No, it's not related to export; it was implemented so that outright errors in templates could be detected immediately where the template was defined, rather than later when it is instantiated. Two-phase lookup makes sense even for non-exported templates. An expression in a template that doesn't depend on the template parameters shouldn't change its meaning arbitrarily based on code that happens to come *after* the template definition. If you don't like that idea, you're probably not going to like in-language concept support, since it is capable of moving *all* the checking to phase 1. That's a major benefit.
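A minimal illustration of that point (the names here are hypothetical):

void f(int);    // #1: visible when the template is defined

template<class T>
void g(T t)
{
    f(0);       // non-dependent call: bound at definition time, always #1
    f(t);       // dependent call: candidates come from the definition
                // context plus ADL at the point of instantiation
}

void f(double); // #2: declared later; can never affect the f(0) call above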
So it seems to me that there are a number of issues more important/urgent than the ones currently being focused on.
I'm afraid it's the same old song: if you want the committee to focus on other issues, write a proposal that gets those issues on the table.

--
Dave Abrahams
Boost Consulting
www.boost-consulting.com

Where can I find info on this concept support? Sounds interesting.

Neal Becker wrote:
Where can I find info on this concept support? Sounds interesting.
The first Google hit for "ConceptGCC" is:

http://www.osl.iu.edu/~dgregor/ConceptGCC/

Jeff

| -----Original Message-----
| From: boost-bounces@lists.boost.org
| [mailto:boost-bounces@lists.boost.org] On Behalf Of Robert Ramey
| Sent: 10 April 2006 16:51
| To: boost@lists.boost.org
| Subject: Re: [boost] Report from Berlin C++ Standards Committee meeting
|
| There are TWO issues which prevent perfect round tripping of
| floats/doubles <-> strings.
|
| a) correct number of digits and correct rendering/round tripping of
| floating point values
| b) portable rendering of the variety of NaNs.
|
| Both of these issues have been discussed extensively on this list -
| without any resolution. I would think that resolving these issues is
| very important and long overdue.

Agree - it doesn't seem to be 'rocket science' - and yet it could come round and bite - as one user has discovered recently.

Acceptance of the max_digits10 proposal should fix the coding problem. (I am revising the document a little in light of comments from C experts in WG14 and will post a link when done.) I think we now have a reasonable idea of how to test round-tripping.

But some problems highlighted are up to Microsoft (and perhaps others too) to fix.

Previous bug report for round-tripping floats - now works perfectly:
http://lab.msdn.microsoft.com/ProductFeedback/viewfeedback.aspx?feedbackid=7bf2f26d-171f-41fe-be05-4169a54eef9e

My new bug report on doubles:
http://lab.msdn.microsoft.com/ProductFeedback/viewFeedback.aspx?feedbackId=FDBK48810

Holding breath not recommended ;-)

It seems reasonable for the LWG to increase expectations by REQUIRING round-tripping to work exactly - the Standard is not quite explicit on this. Is a defect report the way to improve this?

So a) is in principle fixed IMO.

As to b), I had understood that we had agreed that all NaNs could reasonably be treated as identical and one would get a NaN (unspecified) back. Similarly infinities.

http://article.gmane.org/gmane.comp.lib.boost.devel/134569/match=serialization+nan

But simple

stream << output_value;  // write out
double read_value;
stream >> read_value;    // read back

doesn't work for

double output_value = numeric_limits<double>::quiet_NaN();  // output_value 1.#QNAN != read_value 1
double output_value = numeric_limits<double>::infinity();   // output_value 1.#INF != read_value 1

So some special flag would be needed to hold the NaN or Inf nature.

If only there was a Standard representation of NaNs and Infs. VERY shortsighted, making NaNs and infs hardly useable. Should we make a proposal for this for C++0x? So that the problem can at least be solved eventually?

John Maddock's hex proposal is starting to sound MUCH more attractive! It isn't portable, but no FP ever is, and in practice most people are using 32-bit floats or 64-bit doubles, or could do.

I was unsure whether there is a problem with denormalized values?

double output_value = numeric_limits<double>::denorm_min();  // round-trips OK

so I presume other denormals do too.

Paul

--
Paul A Bristow
Prizet Farmhouse, Kendal, Cumbria UK LA8 8AB
Phone and SMS text +44 1539 561830, Mobile and SMS text +44 7714 330204
mailto: pbristow@hetp.u-net.com http://www.hetp.u-net.com/index.html
http://www.hetp.u-net.com/Paul%20A%20Bristow%20info.html
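For concreteness, a minimal round-trip check along these lines (a sketch only; 17 digits is the max_digits10 value for IEEE 754 double):

#include <iostream>
#include <limits>
#include <sstream>

bool round_trips(double x)
{
    std::stringstream ss;
    ss.precision(17);   // max_digits10 for IEEE 754 double
    ss << x;
    double y = 0.0;
    ss >> y;
    return x == y;      // false for NaN: NaN != NaN, and text like
                        // "1.#QNAN" reads back as 1 anyway
}

int main()
{
    std::cout << round_trips(0.1) << '\n';   // 1 expected
    std::cout << round_trips(
        std::numeric_limits<double>::quiet_NaN()) << '\n';   // 0
}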

On Wed, 19 Apr 2006 18:31:14 +0100, Paul A Bristow wrote
So a) is in principle fixed IMO.
As to b), I had understood that we had agreed that all NaNs could reasonably be treated as identical and one would get a NaN (unspecified) back. Similarly infinities.
http://article.gmane.org/gmane.comp.lib.boost.devel/134569/match=serialization+nan
But simple
stream << output_value;  // write out
double read_value;
stream >> read_value;    // read back
doesn't work for
double output_value = numeric_limits<double>::quiet_NaN();  // output_value 1.#QNAN != read_value 1
double output_value = numeric_limits<double>::infinity();   // output_value 1.#INF != read_value 1
So some special flag would be needed to hold the NaN or Inf nature.

If only there was a Standard representation of NaNs and Infs. VERY shortsighted, making NaNs and infs hardly useable.
Agreed. The types in date_time have the ability to serialize and deserialize NADT (not a date time), -infinity and +infinity. Why couldn't there be a simple extension to the numpunct<charT> facet to define an appropriate output string? Basically something like:

// see Langer and Kreft p 414...
template<class charT>
class numpunct : public locale::facet {

    // new functions for nan and infinity
    string_type not_a_number_name() const;
    string_type infinity_name() const;

And you'd have to fix num_get as well. Jeff
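In the meantime, a rough sketch of the extended-facet route (this is not the numpunct change proposed above; it only special-cases NaN on output, and a real version would also handle infinities, the float/long double overloads, and provide a matching num_get):

#include <limits>
#include <locale>
#include <iostream>

class nan_aware_num_put : public std::num_put<char> {
protected:
    iter_type do_put(iter_type out, std::ios_base& str, char_type fill,
                     double v) const
    {
        if (v != v) {   // NaN is the only value that compares unequal to itself
            static const char token[] = "nan";
            for (const char* p = token; *p; ++p)
                *out++ = *p;
            return out;
        }
        return std::num_put<char>::do_put(out, str, fill, v);
    }
};

// Usage: imbue a stream with the facet.
//   std::cout.imbue(std::locale(std::cout.getloc(), new nan_aware_num_put));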

| -----Original Message-----
| From: boost-bounces@lists.boost.org
| [mailto:boost-bounces@lists.boost.org] On Behalf Of Jeff Garland
| Sent: 20 April 2006 04:01
| To: boost@lists.boost.org
| Subject: Re: [boost] Report from Berlin C++ Standards Committee meeting
|
| On Wed, 19 Apr 2006 18:31:14 +0100, Paul A Bristow wrote
|
| > So a) is in principle fixed IMO.
| >
| > As to b), I had understood that we had agreed that all NaNs could
| > reasonably be treated as identical and one would get a NaN
| > (unspecified) back. Similarly infinities.
| >
| > http://article.gmane.org/gmane.comp.lib.boost.devel/134569/match=serialization+nan
| >
| > But simple
| >
| > stream << output_value;  // write out
| > double read_value;
| > stream >> read_value;    // read back
| >
| > doesn't work for
| >
| > double output_value = numeric_limits<double>::quiet_NaN();  // output_value 1.#QNAN != read_value 1
| > double output_value = numeric_limits<double>::infinity();   // output_value 1.#INF != read_value 1
| >
| > So some special flag would be needed to hold the NaN or Inf nature.
| >
| > If only there was a Standard representation of NaNs and Infs. VERY
| > shortsighted, making NaNs and infs hardly useable.
|
| Agreed. The types in date_time have the ability to serialize and
| deserialize NADT (not a date time), -infinity and +infinity. Why
| couldn't there be a simple extension to the numpunct<charT> facet to
| define an appropriate output string? Basically something like:
|
| // see Langer and Kreft p 414...
| template<class charT>
| class numpunct : public locale::facet {
|
|   // new functions for nan and infinity
|   string_type not_a_number_name() const;
|   string_type infinity_name() const;
|
| And you'd have to fix num_get as well.

Looks a good idea - but these facets look complicated to my inexpert eye.

Should we ALSO press to get a 'proper' C++0x Standard solution to this?

Paul

On Thu, 20 Apr 2006 11:04:45 +0100, Paul A Bristow wrote
| > So some special flag would be needed to hold the NaN or Inf nature.
| >
| > If only there was a Standard representation of NaNs and Infs. VERY
| > shortsighted, making NaNs and infs hardly useable.
|
| Agreed. The types in date_time have the ability to serialize and
| deserialize NADT (not a date time), -infinity and +infinity. Why
| couldn't there be a simple extension to the numpunct<charT> facet to
| define an appropriate output string? Basically something like:
|
| // see Langer and Kreft p 414...
| template<class charT>
| class numpunct : public locale::facet {
|
|   // new functions for nan and infinity
|   string_type not_a_number_name() const;
|   string_type infinity_name() const;
|
| And you'd have to fix num_get as well.
Looks a good idea - but these facets look complicated to my inexpert eye.
Should we ALSO press to get a 'proper' C++0x Standard solution to this?
Maybe it isn't clear: these facets already exist (hence the ref to L&K p414). I'm just suggesting that we write a proposal to add a couple of methods and then modify the 'get'/'put' functions to use them. In fact, someone could probably write an extended facet to do just this and submit it to Boost if so motivated. That, frankly, would be more valuable than the change to the standard. Jeff

| -----Original Message-----
| From: boost-bounces@lists.boost.org
| [mailto:boost-bounces@lists.boost.org] On Behalf Of Jeff Garland
| Sent: 20 April 2006 15:02
| To: boost@lists.boost.org
| Subject: Re: [boost] Report from Berlin C++ Standards Committee meeting
|
| On Thu, 20 Apr 2006 11:04:45 +0100, Paul A Bristow wrote
|
| > | > So some special flag would be needed to hold the NaN or Inf nature.
| > | >
| > | > If only there was a Standard representation of NaNs and Infs.
| > | > VERY shortsighted, making NaNs and infs hardly useable.
| > |
| > | Agreed. The types in date_time have the ability to serialize and
| > | deserialize NADT (not a date time), -infinity and +infinity. Why
| > | couldn't there be a simple extension to the numpunct<charT> facet
| > | to define an appropriate output string? Basically something like:
| > |
| > | // see Langer and Kreft p 414...
| > | template<class charT>
| > | class numpunct : public locale::facet {
| > |
| > |   // new functions for nan and infinity
| > |   string_type not_a_number_name() const;
| > |   string_type infinity_name() const;
| > |
| > | And you'd have to fix num_get as well.
| >
| > Looks a good idea - but these facets look complicated to my inexpert eye.
| >
| > Should we ALSO press to get a 'proper' C++0x Standard solution to this?
|
| Maybe it isn't clear: these facets already exist (hence the ref to L&K
| p414). I'm just suggesting that we write a proposal to add a couple of
| methods and then modify the 'get'/'put' functions to use them. In fact,
| someone could probably write an extended facet to do just this and
| submit it to Boost if so motivated. That, frankly, would be more
| valuable than the change to the standard.

OK - sounds like an excellent project for a student? Small, self-contained, easily testable..

Paul

"Reece Dunn" <msclrhd@hotmail.com> wrote in message news:BAY101-W6D23CFC4F1DECB0ABF1DFA0CC0@phx.gbl...
In terms of C++0x features, what is the current state of lambda support as I have seen several papers from Bjarne, et. al.?
Lambda support is a very active topic under discussion in the committee's Evolution Working Group. My sense is that a lot of committee members would like lambda support, but that work hasn't progressed far enough yet to be able to say for certain whether or not lambda support will make it into C++0x. Remember, too, that it is going to be very hard or impossible to push through a core language change unless compiler implementors are comfortable with it and there is at least one existing compiler implementation. --Beman

On Apr 10, 2006, at 3:58 AM, Reece Dunn wrote:
In terms of C++0x features, what is the current state of lambda support as I have seen several papers from Bjarne, et. al.?
I don't think being the fourth of five authors on one lambda proposal counts as "Bjarne et. al." :) The driving forces behind lambdas for C++0x are Jeremiah Willcock, Jaakko Jarvi, and Valentin Samko. The lambda proposals are still in an early state of development and will take a couple months to mature. There are many unanswered questions about the scope of lambdas (how much can/should they express?), various design features (polymorphic vs. monomorphic, how to handle references to local variables), etc. I have no doubt the authors will spiral in on the right solution, but they could certainly use more feedback. Doug
participants (22)

- Aristid Breitkreuz
- Beman Dawes
- Bronek Kozicki
- Cory Nelson
- David Abrahams
- Doug Gregor
- Douglas Gregor
- Jeff Flinn
- Jeff Garland
- João Abecasis
- Martin Adrian
- Maxim Yegorushkin
- Neal Becker
- Paul A Bristow
- Paul Mensonides
- Peter Dimov
- Phil Nash
- Reece Dunn
- Robert Ramey
- Stefan Slapeta
- Thorsten Ottosen
- Yuval Ronen