
On Thu, Oct 4, 2012 at 5:25 PM, Dave Abrahams <dave@boostpro.com> wrote:
I would not draw too many conclusions from the recent EWG proposals. It remains unclear to many of us that the "valid expressions" approach is technically or ergonomically feasible.
Yeah, this is sort of my conclusion. I really don't understand the move away from pseudo-signatures, the lack of mention of archetypes, the lack of associated type deduction, etc.

What I think I'm going to try to do is finish Boost.Generic following N2914, alongside Lorenzo's N3351 implementation (and preferably start sharing backend implementation of some things). At that point, perhaps we'll have two reasonable implementations of the ideas presented in both, and people will actually be able to determine which proposal they feel is more suitable for standardization. It also makes it easy to experiment with different library specifications.

Right now, I get the sense that some people definitely want concepts in some form but don't necessarily understand all of the nuances of the different approaches (let alone the larger capabilities), which makes it difficult to make a sound judgment on the benefits and drawbacks of each. If people have a chance to actually use both implementations, especially developers of generic libraries, it will hopefully make the debate at least a little less theoretical than it currently is.
You should bring it to
https://groups.google.com/a/isocpp.org/forum/?fromgroups#!forum/std-discussi... or https://groups.google.com/a/isocpp.org/forum/?fromgroups#!forum/std-proposal... .
Once I finish all of the explicit concept map changes and properly document the library, I'll do that. Right now the tests, slides, and N2914 are the only documentation.
? That seems to cover implicit (not explicit) concept maps:
14.10.3.2 Implicit concept maps for refined concepts [concept.refine.maps]
It affects the implicit generation of concept maps for less-refined concepts based on explicit concept maps of a more-refined concept. In a world of all auto concepts, none of this comes into play (which is why Boost.Generic works perfectly fine in all tested cases for concepts that make even heavy use of refinement, as long as the concepts involved are auto).

The tricky part comes from coding the concept map look-up for a less-refined concept in the case where someone provides a concept map template for a more-refined concept. This is nontrivial since it's entirely possible for, e.g., a more-refined concept to have more (or fewer, or reordered) arguments than a concept that it refines, but the less-refined concept still needs to be deducible from the template arguments of the more-refined concept's concept map template; otherwise, it would be impossible to match that template with the less-refined concept (that was a mouthful). This type of situation is what's referred to in the part of the standard that I linked.

The trick I use in the end is to make curious use of template aliases along with function template overloads to get the same kind of deduction and pattern matching that is described in the standard. It's a little trickier than that, because the library also needs to keep track of the more-refined concepts and concept maps in compile-time mutable type lists that are updated every time a new concept refinement or concept map is introduced in a translation unit. My proof of concept for this works, but it's taking a few days to implement in full.
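To give a rough idea of the shape of the trick, here's a minimal, self-contained sketch with made-up names (this is only an illustration of the general technique, not Boost.Generic's actual machinery):

#include <type_traits>
#include <utility>

// Stand-ins for concept "identities".
template<class T> struct less_refined {};           // e.g. concept C<T>
template<class T, class U> struct more_refined {};  // e.g. concept D<T, U> refining C<U>

// A user-provided concept map template for the more-refined concept.
template<class T, class U> struct more_refined_map {};

// Alias template recording the refinement relationship: D<T, U> refines C<U>
// (note that the arguments are reordered/dropped relative to D's).
template<class T, class U> using base_of = less_refined<U>;

// Deduction-only overload: given a map for more_refined<T, U>, deduce T and U
// by pattern matching and name the less-refined concept it implicitly provides.
template<class T, class U>
base_of<T, U> lookup(more_refined_map<T, U> const&);

// Fallback: no concept map applies.
struct no_map {};
no_map lookup(...);

// Asking whether a map for more_refined<int, double> implicitly yields a map
// for less_refined<double> is now just a question about lookup's return type.
static_assert(
    std::is_same<decltype(lookup(std::declval<more_refined_map<int, double>>())),
                 less_refined<double>>::value,
    "map for the refined concept implies a map for the concept it refines");

static_assert(
    std::is_same<decltype(lookup(std::declval<int>())), no_map>::value,
    "unrelated types fall through to the no-map overload");

int main() {}

In the library itself, of course, nothing is written out by hand like this: the overloads and aliases get introduced as concepts, refinements, and concept maps are declared, and the compile-time type lists I mentioned are what keep track of them within a translation unit.

--
-Matt Calabrese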