
I suppose something more useful would be a colour gamut conversion or something.
template <class Gamut, class Repr> bool inGamut(Gamut g, rgb_color<Repr> c)?
and inGamut(Gamut g, yuv_color<Repr> c), etc. Would that work?
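For concreteness, a minimal sketch of what those overloads might look like. The rgb_color, yuv_color, and box-shaped gamut types here are hypothetical stand-ins, not part of any actual library:

    // Toy color types, parameterized on channel representation.
    template <class Repr> struct rgb_color { Repr r, g, b; };
    template <class Repr> struct yuv_color { Repr y, u, v; };

    // Toy "gamut": an axis-aligned box on the three channels.
    template <class Repr> struct box_gamut { Repr lo, hi; };

    template <class Gamut, class Repr>
    bool inGamut(Gamut g, rgb_color<Repr> c)
    {
        return g.lo <= c.r && c.r <= g.hi
            && g.lo <= c.g && c.g <= g.hi
            && g.lo <= c.b && c.b <= g.hi;
    }

    template <class Gamut, class Repr>
    bool inGamut(Gamut g, yuv_color<Repr> c)
    {
        return g.lo <= c.y && c.y <= g.hi
            && g.lo <= c.u && c.u <= g.hi
            && g.lo <= c.v && c.v <= g.hi;
    }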
Yes, that makes sense. Let me play devil's advocate for a moment, though, because I think this example raises some interesting issues.

I assume the idea here is to convert a color so that it's within the given gamut. In practice I think there would need to be more parameters, because there's no single right way to map colors into a given gamut (it all depends on what sorts of image features you want to preserve), but I'll bypass that for now.

This example implies the existence of a Gamut concept, which inGamut presumably uses to determine the mapping. The thing is, I can't think of any way to define a color gamut that isn't in terms of a specific color space (e.g. as a polyhedral region in XYZ). So really, the example would need to be more like

    template <class Gamut_rgb, class Repr> bool inGamut(Gamut_rgb g, rgb_color<Repr> c)
    template <class Gamut_yuv, class Repr> bool inGamut(Gamut_yuv g, yuv_color<Repr> c)

If you have to deal with gamuts or colors in more than one space, this could kind of suck.

What I'm getting at, of course, is that it would be very nice to be able to write code which is generic not only with respect to color representation, but also with respect to the color model. That way one could write a single bool inGamut() template that would work anywhere. More generally, there are many operations one could imagine wanting to apply to colors which don't require the colors to be represented in a particular model.

Off the top of my head, I can think of two ways of implementing this. One would be to choose a single color space as canonical, and define all generic operations in terms of that space. Then each new color space just needs to define conversion operations. This has several drawbacks, most obviously performance (conversion might not be cheap). In addition, there may be subtle correctness issues associated with the convert-compute-convert model. Finally, it would restrict the system to perceptual (as opposed to physical) color models, so that spectral color models, for example, would be disallowed (the canonical model would have to be a perceptual model for performance reasons, and conversion from physical to perceptual color is an information-losing operation).

The alternative would be to try to decompose all "interesting" operations on colors into some fixed set of primitive operations, which each color space template would be expected to implement. More complex operations could then be defined generically in terms of those primitives. This would be elegant and efficient, but I have my doubts as to whether it's possible.

Of course, my bias in this area is toward more abstraction, because I'm more used to thinking of colors as abstract phenomena without really getting my hands dirty with bits and bytes, so this may not be a practical idea. At any rate, some attention to the issue of conversion between color models would be worthwhile.
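To make the first alternative concrete, here is a rough sketch of the canonical-space idea, assuming a hypothetical xyz_color canonical type and a to_xyz() conversion that each color-space template would be expected to supply (none of these names come from an actual library, and the conversion matrix is a placeholder):

    // Hypothetical canonical color type: CIE XYZ with double components.
    struct xyz_color { double x, y, z; };

    // Each color-space template provides a conversion to the canonical space.
    template <class Repr> struct rgb_color { Repr r, g, b; };

    template <class Repr>
    xyz_color to_xyz(rgb_color<Repr> c)
    {
        // Placeholder matrix; a real conversion depends on the RGB primaries
        // and white point.
        return { 0.412 * c.r + 0.358 * c.g + 0.180 * c.b,
                 0.213 * c.r + 0.715 * c.g + 0.072 * c.b,
                 0.019 * c.r + 0.119 * c.g + 0.950 * c.b };
    }

    // A gamut described once, in the canonical space
    // (here just an axis-aligned box, purely for illustration).
    struct box_gamut_xyz
    {
        xyz_color lo, hi;
        bool contains(xyz_color c) const
        {
            return lo.x <= c.x && c.x <= hi.x
                && lo.y <= c.y && c.y <= hi.y
                && lo.z <= c.z && c.z <= hi.z;
        }
    };

    // One generic inGamut works for any color type that provides to_xyz().
    template <class Gamut, class Color>
    bool inGamut(const Gamut& g, const Color& c)
    {
        return g.contains(to_xyz(c));
    }

The drawbacks mentioned above show up directly in this sketch: every call pays for a conversion into the canonical space, and anything the source representation can express but the canonical space cannot (spectral data, say) is lost at the to_xyz() step.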