GIL - Generic Image Library Review - Begins October 5

With the last finishing touches completed, I'm excited to announce that the review of the Generic Image Library (GIL) will begin October 5, in three days. Please download the library at: http://opensource.adobe.com/gil. This site has a great deal of background information about the library, so be sure to check it out.

A quick summary follows: The Generic Image Library (GIL) is a C++ library that abstracts the image representation from operations on images. It allows an image processing algorithm to be written once and work for images in any color space, channel depth and pixel organization, or even synthetic images, without compromising performance. GIL has an extension mechanism that allows for adding extra functionality. Two extensions are currently provided: one for image I/O and one for handling images whose properties are specified at run time.

A 55-minute Breeze presentation describing the library is available at: http://opensource.adobe.com/gil/presentation/index.htm
A tutorial is available at: http://opensource.adobe.com/gil/gil_tutorial.pdf
A design guide is available at: http://opensource.adobe.com/gil/gil_design_guide.pdf

Tom Brinkman
Review Manager

"Tom Brinkman" <reportbase@gmail.com> wrote in message news:30f04db60610022214t6b2b42b8h8c4efe9f14bac849@mail.gmail.com... With the last finishing touches completed, i'm excited to be able to announce that the review of Generic Image Library (GIL) will begin October 5, in three days. My first impression is that the library mixes many Concepts, that can easily be separated. The first is of a display matrix, the second is of an image. Others .. Points are Geometric Concepts. Colour is a Concept which too could stand alone. Cursor (locator) is useful to a matrix The concept of an display matrix could be applied elsewhere, where elements could be for example text characters. An image can also be comprised of vector graphics, but this subject is not touched upon in any serious way. Colour would be useful in Vector graphics, but would deserve its own library and I would expect to see the interface more user friendly with mappings to commonly used colour systems such as those in HTML, SVG and VRML. The domain that the library can be used in is very narrow. To be seriously used for image recognition as was previously suggested an application, my guess is that the library would need the ability to apply arbitrary transforms, including other than 90 degree rotations, and interpolation of points, stereoscopic vision etc. As it stands the only use I can see is for touching up photos, and that is my problem with it , the domain is too limited. I would suggest revisiting the Concepts ,extracting them and then making sure that each would stand on its own. That would be a more interesting and widely useable set of libraries. regards Andy Little

"Andy Little" <andy@servocomm.freeserve.co.uk> wrote in message news:efub9t$sio$1@sea.gmane.org...
"Tom Brinkman" <reportbase@gmail.com> wrote in message news:30f04db60610022214t6b2b42b8h8c4efe9f14bac849@mail.gmail.com... With the last finishing touches completed, i'm excited to be able to announce that the review of Generic Image Library (GIL) will begin October 5, in three days.
My first impression is that the library mixes many Concepts that can easily be separated. The first is a display matrix; the second is an image. Others: points are geometric Concepts; colour is a Concept which could also stand alone; a cursor (locator) is useful to a matrix.
The concept of a display matrix could be applied elsewhere, where the elements could be, for example, text characters. An image can also be comprised of vector graphics, but this subject is not touched upon in any serious way.
Colour would be useful in vector graphics, but would deserve its own library, and I would expect to see the interface made more user-friendly, with mappings to commonly used colour systems such as those in HTML, SVG and VRML.
The domain that the library can be used in is very narrow. To be seriously used for image recognition, as was previously suggested as an application, my guess is that the library would need the ability to apply arbitrary transforms (including rotations other than 90 degrees), interpolation of points, stereoscopic vision, etc.
As it stands, the only use I can see is for touching up photos, and that is my problem with it: the domain is too limited. I would suggest revisiting the Concepts, extracting them, and then making sure that each would stand on its own. That would be a more interesting and more widely usable set of libraries.
regards
Andy Little
I just wanted to second Andy's opinion, because these are my thoughts on the library, exactly. It's too large a beast, and it should be broken down into several smaller stand-alone libraries that could be repurposed for various tasks.

Michael Goldshteyn

"Andy Little" <andy@servocomm.freeserve.co.uk> wrote in message news:efub9t$sio$1@sea.gmane.org...
"Tom Brinkman" <reportbase@gmail.com> wrote in message news:30f04db60610022214t6b2b42b8h8c4efe9f14bac849@mail.gmail.com... With the last finishing touches completed, i'm excited to be able to announce that the review of Generic Image Library (GIL) will begin October 5, in three days.
Are the GIL authors interested in defending their library against my criticisms, or do they expect it to get into Boost by default? If it does get into Boost without any defence whatsoever, then it will confirm some suspicions (which I currently think are unfounded) that I have about Boost.

regards
Andy Little

Andy,

I was not sure you were looking for a response from us. I thought you were just expressing your opinion on the library (which I happen to disagree with).

Any large library has some concepts that could stand on their own or be reused in other contexts. Sometimes it makes sense to do so, but sometimes (as I believe is the case with GIL) those other contexts are quite nebulous and doing so would be somewhat extreme.

Let's take your example of Display Matrix. You suggested that we could take GIL's image view concept and have it be a separate library, so that you could, for example, use it to manipulate characters in a rectangular grid. But the intersection between the two contexts is very small. Why not just use any 2D matrix of characters? And what is special about the "Display" aspect of a "Display Matrix"?

Another problem is that a library cannot consist of just concepts. You need to have concrete models. GIL provides only the models it needs in practice - a 2D image comprised of pixels in a specific color space. So another reason we can't have a "Display Matrix" library is that we don't have existing models that go with it.

Let's take your perhaps most convincing example, which is separating color out. The only practical operation on color that doesn't necessarily involve images would be converting from one color space to another (although ultimately I have a hard time coming up with a case where the result won't end up on an image). To do color conversion, you will need to store the values somewhere. Hence you cannot get by without the concept of a channel. The color value is a collection of channels, hence we need the concept of a pixel. And if you have a pixel in one library, does it make sense to have the pixel iterator and reference in another library? No. So we have a lot of GIL that is needed to do the very basic pixel-related operations.

Now, we could conceivably separate pixel-level operations in one library and put image-level operations in another library because pixel-level operations don't depend on image-level ones. But the two are so closely related and used together that it is like splitting a linear algebra library into a vector-level operations library and a matrix-level library. Sometimes it just doesn't make sense to do so.

We have tried our best to separate GIL functionality as much as we can; this is why we have the extension mechanism. You can think of the GIL core as one library that has no dependencies, and the extensions as separate libraries that depend on the core and perhaps other extensions. But with a few minor exceptions, the code in the core is tightly interdependent.

Lubomir
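For concreteness, here is a minimal sketch of that dependency chain (channel, then pixel, then pixel-level operations), written against the header and typedef names of the Boost.GIL that eventually shipped, which differ slightly from the review-era download:

    #include <boost/gil.hpp>

    int main() {
        using namespace boost::gil;

        // A channel is a single color component; a pixel is a fixed
        // collection of channels in some color space.
        rgb8_pixel_t red(255, 0, 0);            // three 8-bit channels

        // Color conversion is defined pixel to pixel, with the channel
        // arithmetic hidden inside:
        gray8_pixel_t luma;
        color_convert(red, luma);               // RGB -> grayscale

        // Channels are addressed semantically, independent of memory order:
        unsigned g = get_color(red, green_t()); // 0 here
        (void)g;
        (void)luma;
    }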

"Lubomir Bourdev" <lbourdev@adobe.com> wrote in message news:B55F4112A7B48C44AF51E442990015C00167861E@namail1.corp.adobe.com...
Andy,
I was not sure you were looking for a response from us. I thought you were just expressing your opinion on the library (which I happen to disagree with).
Any large library has some concepts that could stand on their own or be reused in other contexts. Sometimes it makes sense to do so, but sometimes (as I believe is the case with GIL) those other contexts are quite nebulous and doing so would be somewhat extreme.
? "Those other contexts are quite nebulous"? You lost me..
Let's take your example of Display Matrix. You suggested that we could take GIL's image view concept and have it be a separate library, so that you could, for example, use it to manipulate characters in a rectangular grid. But the intersection between the two contexts is very small. Why not just use any 2D matrix of characters? And what is special about the "Display" aspect of a "Display Matrix"?
What's so special about the pixel aspect of a pixel matrix? Spreadsheet, LCD display, keyboard, circuit board, network switch, RAM
Another problem is that a library cannot consist of just concepts. You need to have concrete models. GIL provides only the models it needs in practice - a 2D image comprised of pixels in a specific color space. So another reason we can't have a "Display Matrix" library is that we don't have existing models that go with it.
Yes I think this is the heart of the problem! The library claims to be generic but is in fact tightly coupled to one domain.
Let's take your perhaps most convincing example, which is separating color out. The only practical operation on color that doesn't necessarily involve images would be converting from one color space to another (although ultimately I have a hard time coming up with a case where the result won't end up on an image).
Serialising a line, changing the colour of a button, making an explosion in a 3D game, monitoring a chemical reaction...
To do color conversion, you will need to store the values somewhere. Hence you cannot get by without the concept of a channel. The color value is a collection of channels, hence we need the concept of a pixel. And if you have a pixel in one library, does it make sense to have the pixel iterator and reference in another library? No. So we have a lot of GIL that is needed to do the very basic pixel-related operations.
You lost me as to why you *need* a pixel for a colour.
Now, we could conceivably separate pixel-level operations in one library and put image-level operations in another library because pixel-level operations don't depend on image-level ones. But the two are so closely related and used together that it is like splitting a linear algebra library into a vector-level operations library and a matrix-level library. Sometimes it just doesn't make sense to do so.
The library is designed only for working in one specialised domain, and that is my point.
We have tried our best to separate GIL functionality as much as we can; this is why we have the extension mechanism. You can think of the GIL core as one library that has no dependencies, and the extensions as separate libraries that depend on the core and perhaps other extensions. But with a few minor exceptions, the code in the core is tightly interdependent.
Yes, exactly my point... ;-)

regards
Andy Little

Andy Little wrote:
"Lubomir Bourdev" <lbourdev@adobe.com> wrote in message
Another problem is that a library cannot consist of just concepts. You need to have concrete models. GIL provides only the models it needs in practice - a 2D image comprised of pixels in a specific color space. So another reason we can't have a "Display Matrix" library is that we don't have existing models that go with it.
Yes I think this is the heart of the problem! The library claims to be generic but is in fact tightly coupled to one domain.
IMO 'generic' doesn't have to translate to 'domain-agnostic'. Generic here means that the individual models used are orthogonal, so it is easy to combine different representations of these models into working code. There is the 'Image' container, and there are various 'Pixel' types images are composed of. Generic means that both models are presented as concepts (and in fact I'm totally delighted to find the documentation use Concepts to present them!), making it easy to provide alternative Image and Pixel implementations.

I'm working on a library for high-performance signal and image processing (http://www.codesourcery.com/vsiplplusplus) and I'm looking forward to trying out GIL's Pixel types with my own Matrix types. That's what 'generic' is all about!

Regards,
Stefan

--
...I still have a suitcase in Berlin...
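A sketch of the kind of mix-and-match Stefan describes, assuming the header layout of the Boost.GIL that eventually shipped; interleaved_view is the adapter that wraps externally owned pixel storage in a GIL view:

    #include <boost/gil.hpp>
    #include <vector>

    int main() {
        using namespace boost::gil;

        // Pixel storage owned by an ordinary container rather than gil::image:
        std::vector<rgb8_pixel_t> my_matrix(640 * 480);

        // Adapt it into a GIL view; GIL algorithms neither know nor care
        // who owns the underlying memory.
        rgb8_view_t v = interleaved_view(640, 480, &my_matrix[0],
                                         640 * sizeof(rgb8_pixel_t));
        fill_pixels(v, rgb8_pixel_t(0, 0, 255));
    }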

"Stefan Seefeld" <seefeld@sympatico.ca> wrote in message news:4526579E.6040003@sympatico.ca...
Andy Little wrote:
"Lubomir Bourdev" <lbourdev@adobe.com> wrote in message
Another problem is that a library cannot consist of just concepts. You need to have concrete models. GIL provides only the models it needs in practice - a 2D image comprised of pixels in a specific color space. So another reason we can't have a "Display Matrix" library is that we don't have existing models that go with it.
Yes I think this is the heart of the problem! The library claims to be generic but is in fact tightly coupled to one domain.
IMO 'generic' doesn't have to translate to 'domain-agnostic'. Generic here means that the individual models used are orthogonal, so it is easy to combine different representations of these models into working code. There is the 'Image' container, and there are various 'Pixel' types images are composed of. Generic means that both models are presented as concepts (and in fact I'm totally delighted to find the documentation use Concepts to present them!),
I wasn't going to bring it up, but I was specifically told not to use Concept docs. OTOH maybe different rules apply depending on who you are, I guess?
making it easy to provide alternative Image and Pixel implementations. I'm working on a library for high-performance signal and image processing (http://www.codesourcery.com/vsiplplusplus) and I'm looking forward to trying out GIL's Pixel types with my own Matrix types. That's what 'generic' is all about!
It looks very nice...

regards
Andy Little

On Oct 6, 2006, at 10:01 AM, Andy Little wrote:
"Stefan Seefeld" <seefeld@sympatico.ca> wrote in message news:4526579E.6040003@sympatico.ca...
IMO 'generic' doesn't have to translate to 'domain-agnostic'. Generic here means that the individual models used are orthogonal, so it is easy to combine different representations of these models into working code. There is the 'Image' container, and there are various 'Pixel' types images are composed of. Generic means that both models are presented as concepts (and in fact I'm totally delighted to find the documentation use Concepts to present them!),
I wasn't going to bring it up, but I was specifically told not to use Concept docs.
? At this point, it's still a gamble. There are benefits to using concepts in the documentation, such as their more formal specification and the possibility that one could use them with ConceptGCC. And if they get accepted, you'll be ahead of the curve :)

Still, the syntax of concepts might change... GIL's documentation, for instance, uses the old "Indiana" syntax. The syntax for newer concepts proposals is a bit different, so the documentation will have to be changed at least once to reflect the newer syntax. And, of course, not many people are familiar with concepts at this time, so even though concepts are relatively easy to read, they aren't as standard as SGI-style concept documentation.

So it's a tough call. I wouldn't fault a library either way, myself.

Doug
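For readers unfamiliar with the distinction, the two documentation styles look roughly like this (a sketch from memory of the proposal era; the pseudosignature syntax varied between revisions and never reached the standard):

    // SGI/Boost-style concept documentation states requirements as
    // valid expressions:
    //
    //   Expression   Return type             Semantics
    //   a < b        convertible to bool     strict weak ordering
    //
    // The proposed C++0x concepts feature stated the same requirement
    // as a pseudosignature instead:
    //
    //   concept LessThanComparable<typename T> {
    //       bool operator<(T const&, T const&);
    //   }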

"Doug Gregor" <dgregor@cs.indiana.edu> wrote
On Oct 6, 2006, at 10:01 AM, Andy Little wrote:
"Stefan Seefeld" <seefeld@sympatico.ca> wrote in message
<...>
(and in fact I'm totally delighted to find the documentation use Concepts to present them!),
I wasn't going to bring it up, but I was specifically told not to use Concept docs.
?
http://permalink.gmane.org/gmane.comp.lib.boost.devel/144195
para 5

regards
Andy Little

"Andy Little" <andy@servocomm.freeserve.co.uk> writes:
"Doug Gregor" <dgregor@cs.indiana.edu> wrote
On Oct 6, 2006, at 10:01 AM, Andy Little wrote:
"Stefan Seefeld" <seefeld@sympatico.ca> wrote in message
<...>
(and in fact I'm totally delighted to find the documentation use Concepts to present them!),
I wasn't going to bring it up, but I was specifically told not to use Concept docs.
?
http://permalink.gmane.org/gmane.comp.lib.boost.devel/144195
para 5
? I don't see anything there that amounts to "specifically telling you not to use Concept docs."

--
Dave Abrahams
Boost Consulting
www.boost-consulting.com

"David Abrahams" <dave@boost-consulting.com> wrote in message news:87vemwx0f5.fsf@pereiro.luannocracy.com...
"Andy Little" <andy@servocomm.freeserve.co.uk> writes:
"Doug Gregor" <dgregor@cs.indiana.edu> wrote
On Oct 6, 2006, at 10:01 AM, Andy Little wrote:
"Stefan Seefeld" <seefeld@sympatico.ca> wrote in message
<...>
(and in fact I'm totally delighted to find the documentation use Concepts to present them!),
I wasn't going to bring it up, but I was specifically told not to use Concept docs.
?
http://permalink.gmane.org/gmane.comp.lib.boost.devel/144195
para 5
? I don't see anything there that amounts to "specifically telling you not to use Concept docs."
Concepts aren't an established convention. Read Doug Gregor's post. They haven't been finalised. I knew exactly what you meant in that and other discussions.

Also I can say with a large amount of certainty that if I had presented PQS for another review and had used Concept documentation, then it would have been pointed out by you or others that I had been specifically told not to do so. Now I shall sit back and watch the goalposts move as if by magic...

regards
Andy Little

"Andy Little" <andy@servocomm.freeserve.co.uk> writes:
http://permalink.gmane.org/gmane.comp.lib.boost.devel/144195
para 5
? I don't see anything there that amounts to "specifically telling you not to use Concept docs."
Concepts aren't an established convention.
Yes they certainly are. Maybe you mean in-language concept support is not an established convention?
Read Doug Gregor's post. They haven't been finalised.
I'm quite aware of that.
I knew exactly what you meant in that and other discussions.
Apparently not. If you're referring to this:

FWIW, once we have concept support in the language we will be using pseudosignatures rather than valid expressions to express syntactic constraints, so we can expect that to change. In the meantime, though, the things that can be expressed using established conventions should be so expressed.

what I meant was that, although there is a well-established convention for documenting concepts, it's very likely that the conventions for documenting concepts will change in the near future. Then I go on to reiterate my general (not specific) position that established conventions should be used wherever applicable.

This is *not* specifically telling you not to use proposed concept declaration syntax in documentation any more than it's specifically telling you not to insert emoticons into your concept tables.

If you had specifically raised the topic of using the new proposed concept language syntax to do documentation, I would have said the following things about it, specifically (I did consider this issue, so I know what I was thinking):

1. It's an interesting idea
2. You'd have to write some very careful introductory material that explains to people how to interpret it, or at least makes reference to a particular proposal paper.
3. Don't forget to express semantic requirements, which don't have a place in concept description syntax (I think that has changed in more recent proposals)
4. Especially if you're not all that familiar with how to document concepts, I think it would probably be a good idea to go with the existing conventions, and only do something more adventurous once you're very comfortable with the sorts of things that need to be expressed in a concept specification. For one thing, there are lots more examples out there of the existing convention to work from.
5. Anyone doing this instead of following the convention ought to be able to supply a good reason for doing it.

In other words, I wouldn't have rejected the idea out-of-hand, but I'd have asked you to justify your choice and suggested that you might want to hold off until you had more experience writing concepts.
Also I can say with a large amount of certainty that if I had presented PQS for another review and had used Concept documentation, then it would have been pointed out by you or others that I had been specifically told not to do so.
Now I shall sit back and watch the goalposts move as if by magic ...
Thought experiment #1: is there any possible outcome here -- other than me conceding that I "specifically told you" something I never said or meant to say -- that would convince you the goalposts aren't being moved?

Thought experiment #2: what does someone who makes such a remark hope to accomplish by it, and what does it _actually_ accomplish?

--
Dave Abrahams
Boost Consulting
www.boost-consulting.com

"David Abrahams" <dave@boost-consulting.com> wrote in message news:873ba0w7ck.fsf@pereiro.luannocracy.com...
"Andy Little" <andy@servocomm.freeserve.co.uk> writes:
http://permalink.gmane.org/gmane.comp.lib.boost.devel/144195
para 5
? I don't see anything there that amounts to "specifically telling you not to use Concept docs."
Concepts aren't an established convention.
Yes they certainly are. Maybe you mean in-language concept support is not an established convention?
Read Doug Gregor's post. They haven't been finalised.
I'm quite aware of that.
I knew exactly what you meant in that and other discussions.
Apparently not. If you're referring to this:
FWIW, once we have concept support in the language we will be using pseudosignatures rather than valid expressions to express syntactic constraints, so we can expect that to change. In the meantime, though, the things that can be expressed using established conventions should be so expressed.
what I meant was that, although there is a well-established convention for documenting concepts, it's very likely that the conventions for documenting concepts will change in the near future. Then I go on to reiterate my general (not specific) position that established conventions should be used wherever applicable.
This is *not* specifically telling you not to use proposed concept declaration syntax in documentation any more than it's specifically telling you not to insert emoticons into your concept tables.
If you had specifically raised the topic of using the new proposed concept language syntax to do documentation, I would have said the following things about it, specifically (I did consider this issue, so I know what I was thinking):
1. It's an interesting idea
2. You'd have to write some very careful introductory material that explains to people how to interpret it, or at least makes reference to a particular proposal paper.
3. Don't forget to express semantic requirements, which don't have a place in concept description syntax (I think that has changed in more recent proposals)
4. Especially if you're not all that familiar with how to document concepts, I think it would probably be a good idea to go with the existing conventions, and only do something more adventurous once you're very comfortable with the sorts of things that need to be expressed in a concept specification. For one thing, there are lots more examples out there of the existing convention to work from.
5. Anyone doing this instead of following the convention ought to be able to supply a good reason for doing it.
In other words, I wouldn't have rejected the idea out-of-hand, but I'd have asked you to justify your choice and suggested that you might want to hold off until you had more experience writing concepts.
Also I can say with a large amount of certainty that if I had presented PQS for another review and had used Concept documentation, then it would have been pointed out by you or others that I had been specifically told not to do so.
Now I shall sit back and watch the goalposts move as if by magic ...
Thought experiment #1: is there any possible outcome here -- other than me conceding that I "specifically told you" something I never said or meant to say -- that would convince you the goalposts aren't being moved?
Thought experiment #2: what does someone who makes such a remark hope to accomplish by it, and what does it _actually_ accomplish?
I don't think there is anything I want to say in response.

I hope the bullet points will be useful to those considering writing Concept documentation.

regards
Andy Little

Andy Little wrote:
"David Abrahams" <dave@boost-consulting.com> wrote in message news:873ba0w7ck.fsf@pereiro.luannocracy.com...
Now I shall sit back and watch the goalposts move as if by magic ...
Thought experiment #1: is there any possible outcome here -- other than me conceding that I "specifically told you" something I never said or meant to say -- that would convince you the goalposts aren't being moved?
Thought experiment #2: what does someone who makes such a remark hope to accomplish by it, and what does it _actually_ accomplish?
I don't think there is anything I want to say in response.
I'll tell you what it accomplishes for me -- it makes me want to put the person who writes this sort of hyperbole into the /dev/null filter. Unfortunately, since I'm a list moderator, I can't actually do that...
I hope the bullet points will be useful to those considering writing Concept documentation.
In case you missed it, there is a fair amount of Concept documentation used by several Boost libraries. Concept documentation, as far as I know, has never impacted negatively on the acceptance of a library. In fact, in my experience, it's the other way around. For example, ASIO, accepted earlier this year, uses concept documentation. But some concept documentation goes back to the very beginning of Boost (like operators, vintage 1999). Here are some samples (there are others):

http://asio.sourceforge.net/asio-0.2.0/doc/html/
http://www.boost.org/libs/iterator/doc/iterator_concepts.html
http://www.boost.org/libs/iterator/doc/iterator_archetypes.html
http://www.boost.org/libs/utility/operators.htm

My whole point in this is to make it totally clear for potential library authors that Concepts are a valid and valuable documentation approach. If people have questions on how best to use concepts, I'm sure there are several people on the list that would be happy to help with *best practices* w.r.t. concept documentation.

Jeff

"Jeff Garland" <jeff@crystalclearsoftware.com> wrote in message news:4527E320.7050904@crystalclearsoftware.com...
Andy Little wrote:
"David Abrahams" <dave@boost-consulting.com> wrote in message news:873ba0w7ck.fsf@pereiro.luannocracy.com...
Now I shall sit back and watch the goalposts move as if by magic ...
Thought experiment #1: is there any possible outcome here -- other than me conceding that I "specifically told you" something I never said or meant to say -- that would convince you the goalposts aren't being moved?
Thought experiment #2: what does someone who makes such a remark hope to accomplish by it, and what does it _actually_ accomplish?
I don't think there is anything I want to say in response.
I'll tell you what it accomplishes for me -- it makes me want to put the person who writes this sort of hyperbole into the /dev/null filter. Unfortunately, since I'm a list moderator, I can't actually do that...
You may want to read this so that you can hone your skills: http://en.wikipedia.org/wiki/Flame_war

Alternatively, an apology for the above remarks would be welcome.
I hope the bullet points will be useful to those considering writing Concept documentation.
In case you missed it, there is a fair amount of Concept documentation used by several Boost libraries.
You might like to read some of the other posts in this thread. It might clarify for you the subject under discussion, and as a general rule, IMHO, it is wise to do that before jumping in with inflammatory comments such as the above.

http://www.generic-programming.org/languages/conceptcpp/

Of course this is all now apparently part of the C++ language, and there is only one conforming compiler. And I quote:

"once we have concept support in the language we will be using pseudosignatures rather than valid expressions to express syntactic constraints, so we can expect that to change. In the meantime, though, the things that can be expressed using established conventions should be so expressed"

regards
Andy Little

"Andy Little" <andy@servocomm.freeserve.co.uk> writes:
I hope the bullet points will be useful to those considering writing Concept documentation.
In case you missed it, there is a fair amount of Concept documentation used by several Boost libraries.
You might like to read some of the other posts in this thread. It might clarify for you the subject under discussion, and as a general rule IMHO, it is wise to do that before jumping in with inflammatory comments such as the above.
Andy,

While I agree that Jeff *might* have been able to understand what you meant by reading the foregoing thread very very carefully, you ought to take at least some responsibility for his misunderstanding. When you write "use Concept docs" it's natural that anyone would assume you mean just that: documenting the concepts used in a library, just as many other Boost libraries have done for years. It takes a fair amount of reading between the lines (or a good hard look at the GIL documentation in the context of your posts) to realize that you really meant "documenting concepts using a proposed new-style concept syntax." I didn't get it at first, either, which partly explains my shock that you were claiming you were specifically told not to do it.
http://www.generic-programming.org/languages/conceptcpp/
Of course this is all now apparently part of the C++ language, and there is only one conforming compiler. And I quote:
"once we have concept support in the language we will be using pseudosignatures rather than valid expressions to express syntactic constraints, so we can expect that to change. In the meantime, though, the things that can be expressed using established conventions should be so expressed"
I know you have enough sense to know that what is "part of the C++ language" comes from the standard, and no standard document contains a specification for language support of concepts. I've been giving you the benefit of the doubt until now, but it seems clear to me that you're not actually misunderstanding the intent of my words: this is a wilful misrepresentation of what was merely a confident prediction on my part. I didn't feel it necessary, in this forum, to spell all that out.

If you are determined to continue sarcastic distortions of other peoples' statements, the moderators will have no choice but to regulate your postings. It is poisonous to Boost discourse. You're already way, way over the line in my opinion, and I know several other moderators thought so even before I did.

--
Dave Abrahams
Boost Consulting
www.boost-consulting.com

I think it would be really helpful if www.boost.org/more/writingdoc/index.html were updated with a tutorial example of how Boost documentation which uses concepts is expected to look. I realize it's a task which is much bigger than it looks, but I would like to say that I think it would be a lot more helpful than it might seem right now. We could then focus our discussion around what this document should look like. Having a concrete example makes the whole issue much easier to discuss in a constructive way.

Currently, www.boost.org/more/writingdoc/index.html is a little misleading as to what Boost documentation requirements actually are. So in the meantime maybe it can be enhanced somewhat to point one in the right direction. And, though I've looked around for it, I haven't really found a concise readable document which describes "formal documentation". The closest I found was the SGI documentation. I understand that the C++ standard may contain such information, but I still don't have a copy of that document (I know, I know!!!).

Robert Ramey

David Abrahams wrote:
? I don't see anything there that amounts to "specifically telling you not to use Concept docs."
For reference, here is the paragraph Andy most likely refers to:
The problem is that it is remote from the per-item description:
binary_operation<Lhs,Op,Rhs>
IMO It would make more sense to say e.g.
binary_operation<AbstractQuantity Lhs,Op,AbstractQuantity Rhs>
Maybe it would (in fact something like that will be available with the language support for concepts), but as I said this is not the time to invent new notations. Get comfortable with the existing conventions first.
If you wanted to look at ConceptGCC and actually write conforming new-style concepts, I'd find it hard to fault you... but I don't think that would be as useful to your readers, and for you I think that might be overreaching at this stage.
And at the very end:
You don't need to invent a more abstract syntax to describe that True concept. The standard requirements table and other notations will do just fine:
Sebastian

Sebastian Redl <sebastian.redl@getdesigned.at> writes:
For reference, here is the paragraph Andy most likely refers to:
The problem is that it is remote from the per-item description:
binary_operation<Lhs,Op,Rhs>
IMO It would make more sense to say e.g.
binary_operation<AbstractQuantity Lhs,Op,AbstractQuantity Rhs>
Maybe it would (in fact something like that will be available with the language support for concepts), but as I said this is not the time to invent new notations. Get comfortable with the existing conventions first.
If you wanted to look at ConceptGCC and actually write conforming new-style concepts, I'd find it hard to fault you... but I don't think that would be as useful to your readers, and for you I think that might be overreaching at this stage.
Andy said paragraph 5. I don't see any way to count paragraphs such that that particular section is #5. Furthermore, that's hardly "specifically telling Andy not to" use new-style concept syntax for documentation. I am clearly discouraging the idea, however, because most readers don't know the new syntax, and because Andy didn't seem to have a strong grounding in concepts yet.
And at the very end:
You don't need to invent a more abstract syntax to describe that True concept. The standard requirements table and other notations will do just fine:
?? That is not referring to the proposed new-style concept syntax at all, "specifically" or otherwise.

--
Dave Abrahams
Boost Consulting
www.boost-consulting.com

Andy Little wrote:
"Lubomir Bourdev" <lbourdev@adobe.com> wrote:
Any large library has some concepts that could stand on their own or be reused in other contexts. Sometimes it makes sense to do so, but sometimes (as I believe is the case with GIL) those other contexts are quite nebulous and doing so would be somewhat extreme.
? "Those other contexts are quite nebulous"? You lost me..
Again, let's take your example of extracting the 2D-navigation aspect of GIL (what you call Display Matrix) into a stand-alone library that you could use for purposes other than imaging. Your example is manipulating text characters in a rectangular grid. I claim this is a nebulous example, because the moment you start getting down to the business of writing this text-processing-display system, you will realize that the above DisplayMatrix is the wrong abstraction for your job.

For one thing, most fonts have variable width, which means you have a different number of characters per line. It is hard to accommodate vertical random-access navigation. There is no longer a global "width" parameter, but a width per line. You need a different model of a different concept.

Let's say you constrain yourself to fixed-width fonts and a rectangular grid (a severe constraint in my opinion). Still, the set of operations GIL provides makes no sense for you. The operations you need for text manipulation are very much one-dimensional (insert/delete characters). It would be inefficient to keep your characters in a 2D grid. Navigating vertically has very little use. Operations meaningful for images, like copying a rectangular subgrid or displaying it 90-degree rotated, make no sense for you.

Finally, if you want to constrain yourself to a rectangular grid of characters and you feel those image-like operations could be meaningful for you, then you can interpret what you have as an 8-bit grayscale image (gil::gray8_view_t) whose pixel channel is the ASCII code of your character.

Lubomir
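That last reinterpretation is mechanical; a minimal sketch, again using the names of the Boost.GIL that eventually shipped:

    #include <boost/gil.hpp>
    #include <cstring>

    int main() {
        using namespace boost::gil;

        // A fixed-width 80x25 "text screen", stored row-major.
        unsigned char screen[25][80];
        std::memset(screen, ' ', sizeof(screen));

        // Reinterpret the character grid as an 8-bit grayscale view: each
        // pixel's single channel holds the ASCII code of one character.
        gray8_view_t v = interleaved_view(
            80, 25, reinterpret_cast<gray8_pixel_t*>(&screen[0][0]), 80);

        // Image-like operations now apply, e.g. filling a rectangular subgrid:
        fill_pixels(subimage_view(v, 10, 5, 20, 3), gray8_pixel_t('#'));
    }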

"Lubomir Bourdev" <lbourdev@adobe.com> wrote in message news:B55F4112A7B48C44AF51E442990015C04C993A@namail1.corp.adobe.com... Andy Little wrote:
"Lubomir Bourdev" <lbourdev@adobe.com> wrote:
Any large library has some concepts that could stand on their own or be reused in other contexts. Sometimes it makes sense to do so, but sometimes (as I believe is the case with GIL) those other contexts are quite nebulous and doing so would be somewhat extreme.
? "Those other contexts are quite nebulous"? You lost me..
Again, let's take your example of extracting the 2D-navigation aspect of GIL (what you call Display Matrix) into a stand-alone library that you could use for purposes other than imaging. Your example is manipulating text characters in a rectangular grid. I claim this is a nebulous example, because the moment you start getting down to the business of writing this text-processing-display system, you will realize that the above DisplayMatrix is the wrong abstraction for your job. For one thing, most fonts have variable width, which means you have a different number of characters per line. It is hard to accommodate vertical random-access navigation. There is no longer a global "width" parameter, but a width per line. You need a different model of a different concept. Let's say you constrain yourself to fixed-width fonts and a rectangular grid (a severe constraint in my opinion). Still, the set of operations GIL provides makes no sense for you. The operations you need for text manipulation are very much one-dimensional (insert/delete characters). It would be inefficient to keep your characters in a 2D grid. Navigating vertically has very little use. Operations meaningful for images, like copying a rectangular subgrid or displaying it 90-degree rotated, make no sense for you. Finally, if you want to constrain yourself to a rectangular grid of characters and you feel those image-like operations could be meaningful for you, then you can interpret what you have as an 8-bit grayscale image (gil::gray8_view_t) whose pixel channel is the ASCII code of your character.

Fair enough, but it was only one example... Let's look at your image and view concepts. AFAICS the image is some sequence(s) of more or less raw data, and the view turns the raw data into a 2D matrix. Is that correct so far?

regards
Andy Little

Andy Little wrote:
Let's look at your image and view concepts. AFAICS the image is some sequence(s) of more or less raw data, and the view turns the raw data into a 2D matrix.
Is that correct so far?
Not quite. An image is a container of pixels. Its only purpose is to hold the pixels (allocate/deallocate/deep-copy). It models STL's random access container concept (_almost_ models it, actually, because GIL's pixel iterators are random-traversal iterators). An image can also return an image view of its pixels.

An image view is a lightweight representation of the 2D grid of pixels. Unlike the image, it doesn't own or copy its data. Image views are what algorithms operate on. An image view is like a 2D equivalent of STL's range.

As for the dimensionality, inherently all data is, of course, one-dimensional, as RAM is 1D. The image provides a 1D interface to the pixels (because that's what STL's random access container does). The image view provides both 1D and 2D interfaces to its pixels.

Lubomir
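A short sketch of that split, assuming the shipped Boost.GIL names (view(img) returns the image's view):

    #include <boost/gil.hpp>

    int main() {
        using namespace boost::gil;

        // The image is the container: it owns, allocates and deep-copies pixels.
        rgb8_image_t img(640, 480);

        // The view is a lightweight, non-owning 2D range over those pixels;
        // GIL algorithms take views, not images.
        rgb8_view_t v = view(img);

        fill_pixels(v, rgb8_pixel_t(0, 0, 0)); // whole-image operation via the view
        v(10, 20) = rgb8_pixel_t(255, 0, 0);   // 2D interface: (x, y) random access

        // 1D interface: the same view is a flat STL-style range of pixels.
        long lit = 0;
        for (rgb8_view_t::iterator it = v.begin(); it != v.end(); ++it)
            if ((*it)[0] != 0) ++lit;
    }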

"Lubomir Bourdev" <lbourdev@adobe.com> wrote in message news:B55F4112A7B48C44AF51E442990015C04C993A@namail1.corp.adobe.com... Andy Little wrote:
"Lubomir Bourdev" <lbourdev@adobe.com> wrote:
Any large library has some concepts that could stand on their own or be reused in other contexts. Sometimes it makes sense to do so, but sometimes (as I believe is the case with GIL) those other contexts are quite nebulous and doing so would be somewhat extreme.
? "Those other contexts are quite nebulous"? You lost me..
Again, let's take your example of extracting the 2D-navigation aspect of GIL (what you call Display Matrix) into a stand-alone library that you could use for purposes other than imaging. Your example is manipulating text characters in a rectangular grid.

Here's another example of a field representing streamlines round a cylinder: http://www.servocomm.freeserve.co.uk/Cpp/pqs-2-00-02/whats_next.html

Now presumably I could combine the locator with some function, as in the Mandelbrot example, to find out the state of the streamlines at any point in the flow. Presumably the locator could also be extended to 3D.

regards
Andy Little

Andy Little wrote:
Here's another example of a field representing streamlines round a cylinder:
http://www.servocomm.freeserve.co.uk/Cpp/pqs-2-00-02/whats_next.html
Now presumably I could combine the locator with some function, as in the Mandelbrot example, to find out the state of the streamlines at any point in the flow. Presumably the locator could also be extended to 3D.
Andy, if I read that example correctly, the '2D coordinates' there are meant to represent some physical dimensions, while here we are talking about a way to index an n-dimensional raster. That's quite a different world.

FWIW, I believe the same reasoning can be applied to the color-space discussion. While I think a good library to deal with color spaces and transformations between them would be nice, that is a different domain than how colors are represented essentially as bit-fields in pixels.

You were proposing not to lump independent concepts / domains together, so let's keep them separate! ;-)

Regards,
Stefan

--
...I still have a suitcase in Berlin...

"Stefan Seefeld" <seefeld@sympatico.ca> wrote in message news:4526CFD0.8050200@sympatico.ca...
Andy Little wrote:
Here's another example of a field representing streamlines round a cylinder:
http://www.servocomm.freeserve.co.uk/Cpp/pqs-2-00-02/whats_next.html
Now presumably I could combine the locator with some function, as in the Mandelbrot example, to find out the state of the streamlines at any point in the flow. Presumably the locator could also be extended to 3D.
Andy, if I read that example correctly, the '2D coordinates' there are meant to represent some physical dimensions, while here we are talking about a way to index an n-dimensional raster. That's quite a different world.
Nope, 2D grid... same concept.
FWIW, I believe the same reasoning can be applied to the color-space discussion. While I think a good library to deal with color spaces and transformations between them would be nice, that is a different domain than how colors are represented essentially as bit-fields in pixels.
Nope, Colour... same concept.
You were proposing not to lump independent concepts / domains together, so let's keep them separate ! ;-)
Just trying to "Raise the bar a little" , as is often done to me here on Boost ... ;-) regards Andy Little

Andy Little wrote:
"Stefan Seefeld" <seefeld@sympatico.ca> wrote in message news:4526CFD0.8050200@sympatico.ca...
Andy Little wrote:
Here's another example of a field representing streamlines round a cylinder:
http://www.servocomm.freeserve.co.uk/Cpp/pqs-2-00-02/whats_next.html
Now presumably I could combine the locator with some function, as in the Mandelbrot example, to find out the state of the streamlines at any point in the flow. Presumably the locator could also be extended to 3D.
Andy, if I read that example correctly, the '2D coordinates' there are meant to represent some physical dimensions, while here we are talking about a way to index an n-dimensional raster. That's quite a different world.
Nope, 2D grid... same concept.
FWIW, I believe the same reasoning can be applied to the color-space discussion. While I think a good library to deal with color spaces and transformations between them would be nice, that is a different domain than how colors are represented essentially as bit-fields in pixels.
Nope, Colour... same concept.
I strongly disagree. Modeling is always goal-driven. There is not only a single possible model to represent any given 'real-world' entity. But we are getting quite off-topic now.

Regards,
Stefan

--
...I still have a suitcase in Berlin...

Andy Little wrote:
"Stefan Seefeld" <seefeld@sympatico.ca> wrote in message news:4526CFD0.8050200@sympatico.ca...
Andy Little wrote:
Here's another example of a field representing streamlines round a cylinder:
http://www.servocomm.freeserve.co.uk/Cpp/pqs-2-00-02/whats_next.html
Now presumably I could combine the locator with some function, as in the Mandelbrot example, to find out the state of the streamlines at any point in the flow. Presumably the locator could also be extended to 3D.
Andy, if I read that example correctly, the '2D coordinates' there are meant to represent some physical dimensions, while here we are talking about a way to index an n-dimensional raster. That's quite a different world.
Nope, 2D grid... same concept.
Either image-like operations make sense in your context, in which case you can treat your grid as an image (an image does not have to be something that will necessarily end up being displayed on screen), or they don't make sense for you, in which case GIL's 2D grid is not the right tool for you.
FWIW, I believe the same reasoning can be applied to the color-space discussion. While I think a good library to deal with color spaces and transformations between them would be nice, that is a different domain than how colors are represented essentially as bit-fields in pixels.
Nope, Colour... same concept.
I strongly disagree. Modeling is always goal-driven. There is not only a single possible model to represent any given 'real-world' entity. But we are getting quite off-topic now.
Regards, Stefan
I agree with Stefan. The formulas for color conversion depend on how colors are represented in memory. Multiplying two float channels can be done simply as "a*b", but if the channels are 8-bit unsigned you need to divide the result by 255 (and there is an efficient way to do that). So to do just color conversion you need to define the concept of a channel, rudimentary channel operations, channel traits, color conversion formulas, color space representations, pixel representations, pixel traits, etc. It is not that simple to take it out of GIL.

Lubomir
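For the 8-bit case, the efficient way alluded to is the classic shift-and-add substitute for division by 255; the helper names below are illustrative, not GIL's API:

    #include <cstdint>

    // Multiplying two normalized channels. Float channels in [0,1] multiply
    // directly; 8-bit channels in [0,255] must be rescaled by 255 afterwards.
    // The shift-and-add below is the classic division-free substitute for
    // "/ 255", rounding to nearest for all products in [0, 255*255].
    inline std::uint8_t mul8(std::uint8_t a, std::uint8_t b) {
        unsigned t = unsigned(a) * b + 128;        // bias for round-to-nearest
        return std::uint8_t((t + (t >> 8)) >> 8);  // ~ t / 255, no division
    }

    inline float mulf(float a, float b) { return a * b; } // the trivial case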

"Stefan Seefeld" <seefeld@sympatico.ca> wrote in message news:4527E091.7050209@sympatico.ca...
Andy Little wrote:
"Stefan Seefeld" <seefeld@sympatico.ca> wrote in message news:4526CFD0.8050200@sympatico.ca...
<...>
FWIW, I believe the same reasoning can be applied to the color-space discussion. While I think a good library to deal with color spaces and transformations between them would be nice, that is a different domain than how colors are represented essentially as bit-fields in pixels.
Nope, Colour... same concept.
I strongly disagree. Modeling is always goal-driven. There is not only a single possible model to represent any given 'real-world' entity.
One Concept can have many models. That is the reasoning behind a Concept.
But we are getting quite off-topic now.
AFAICS discussion of a colour Concept is right on-topic for an image processing library.

regards
Andy Little

Andy Little wrote:
Nope, Colour... same concept.
I strongly disagree. Modeling is always goal-driven. There is not only a single possible model to represent any given 'real-world' entity.
One Concept can have many models. That is the reasoning behind a Concept.
I'm not sure what the distinction between a Concept and a Model is... The point I was trying to make was that a physiologist, a physicist, a designer, etc., will all use different ways to think of 'Color', because what they try to represent in their respective models differs. Abstracting away the goal from the discussion will take out all the life from the models and render them meaningless and useless.

Regards,
Stefan

--
...I still have a suitcase in Berlin...

"Stefan Seefeld" <seefeld@sympatico.ca> wrote in message news:452CE07E.4070709@sympatico.ca...
Andy Little wrote:
Nope, Colour... same concept.
I strongly disagree. Modeling is always goal-driven. There is not only a single possible model to represent any given 'real-world' entity.
One Concept can have many models. That is the reasoning behind a Concept.
I'm not sure what the distinction between a Concept and a Model is...
Look up Generic programming for the answer to that.
The point I was trying to make was that a physiologist, a physicist, a designer, etc., will all use different ways to think of 'Color', because what they try to represent in their respective models differs.
I think what each is doing with colour has many similarities: quantifying, filtering, mixing, etc. IOW there is a common set of operations that can be applied to a colour.
Abstracting away the goal from the discussion will take out all the life from the models and render them meaningless and useless.
OTOH spending time on the concept of Colour rather than a particular low-level representation may help the user work with colour more intuitively.

One could start from the most comprehensive model, which is probably the one closest to the physical phenomenon of electromagnetic radiation, and then show how that differs from a particular representation, and why there are various representations. For instance the primary colours are red, yellow and blue. Why is the computer representation comprised of red, green and blue? Floating point RGB colours often have a range of intensities per colour between 0 and 1, yet if I look at the sun I can burn my eyes. IOW what do those numbers actually represent in terms of the physical phenomenon of light?

IOW surely the goal is to try to model the physical phenomenon in the best possible way given a set of constraints imposed by hardware and software.

regards
Andy Little

Andy Little wrote:

One could start from the most comprehensive model, which is probably the one closest to the physical phenomenon of electromagnetic radiation

The physical/neurological phenomenon of perceived colour is very complicated. It involves a range of electromagnetic radiation and the way the retina cells are stimulated by it, and it is not really useful in computer graphics. For example, the colour orange may be perceived either by stimulation by the orange frequency (wavelengths from 590 to 620 nanometers), or it may be stimulation by two or more distinct frequencies ranging from the green to the red area. In real life, it will usually be a continuous spectrum, distributed in such a way that the red receivers are stimulated more than the green receivers. In fact, the exact way by which the human eye perceives colour is not yet definitely known.

and why there are various representations

That could certainly be interesting, but it might also be too much theory for the library. I think the presence or absence of such background documentation should not affect the decision whether to accept or reject GIL.

For instance the primary colours are red, yellow and blue.

No, they're not. It's what people learn when they grow up, but it's really wrong. The primary colours of the additive model are red, green and blue. With three light sources in these colours, you can mix nearly every other colour a human can perceive. Because the human eye responds to these three colours the strongest, they most closely resemble the real model. Mixing these three colours results in white, while the absence of them is black. The primary colours of the subtractive model are magenta, cyan and yellow. Absence of these is white, presence of all three is black. They are used in printing, because the base of printing is a white ground, and because printed colour dots don't emit light, unlike CRT/LCD pixels.
Floating point RGB colours often have a range of intensities per colour between 0 and 1, yet if I look at the sun I can burn my eyes. IOW what do those numbers actually represent in terms of the physical phenomenon of light?
Ah, that's an interesting question. In terms of the physical phenomenon, the answer is, "not much". The answer is found in engineering: 0 means nothing, while 1 means the brightest light the output device is capable of emitting (under its current brightness configuration).
IOW surely the goal is to try to model the physical phenomenon in the best possible way given a set of constraints imposed by hardware and software.
What do you mean now, the goal of the library or the goal of computer engineering? In terms of computer engineering, well, they actually achieved the physical model ;)

In terms of the library, I disagree. The library should model not the physical concept of colour, because that is tricky to define, hard to understand, and complicated to work with. A library doing that might be nice for exact scientific simulation, but useless in every other discipline concerned with colour. Rather, the library should most closely model colour as it is used in computing, i.e. be capable of representing the various colour models in use (RGB, CMYK, HSV, ...) and providing access to all these, while allowing transparent use in cases where I don't care about them, allowing conversion between the various models, etc.

It should be noted that GIL, at the moment, provides no proper concept of colour at all! It only provides the concept of a colour space, which is very close but not the same. The colour space defines the way in which pixels express their colour. But GIL does not have the concept of a colour independent of a pixel. In other words, it provides no way to express, for example, that the user has now chosen red as the colour of his drawing tool. Sure, you can make this very easily: either just misuse a single pixel for it (the physical representation of the two is the same) or create a small class effectively copying it. Still, it might be called a mistake in the concept foundation of the library that the independent concept of Colour is missing.

Sebastian Redl

Sebastian Redl wrote:
Still, it might be called a mistake in the concept foundation of the library that the independent concept of Colour is missing.
Sebastian,

There are trade-offs in introducing the concept of a Color separate from that of a Pixel Value. On the one hand, it seems natural to have the concept of a color value in an imaging library, as it plays an important role. On the other hand, a Color is exactly equivalent to a Pixel Value. Introducing two concepts that are identical might confuse people: they may wonder how one is different from the other, how you convert from one to the other, etc. That means more concepts, more explanations and longer documentation.

It might be a good idea, however, to add a sentence clarifying that a pixel is used to represent a color value even outside the context of an image. It would certainly be a mistake to introduce new code in GIL to deal with colors. This may add further complexity and opportunity for performance degradation.

Lubomir
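In code, that clarification amounts to usage like this (a sketch with the shipped Boost.GIL names):

    #include <boost/gil.hpp>

    int main() {
        using namespace boost::gil;

        // A pixel value acting as a standalone color, with no image in sight:
        rgb8_pixel_t tool_color(255, 0, 0);   // "the user chose red"

        // Color conversion is likewise defined on bare pixel values:
        cmyk8_pixel_t on_paper;
        color_convert(tool_color, on_paper);
    }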

"Sebastian Redl" <sebastian.redl@getdesigned.at> wrote in message news:452D33F5.7090507@getdesigned.at...
Andy Little wrote:

One could start from the most comprehensive model, which is probably the one closest to the physical phenomenon of electromagnetic radiation

The physical/neurological phenomenon of perceived colour is very complicated. It involves a range of electromagnetic radiation and the way the retina cells are stimulated by it, and it is not really useful in computer graphics. For example, the colour orange may be perceived either by stimulation by the orange frequency (wavelengths from 590 to 620 nanometers), or it may be stimulation by two or more distinct frequencies ranging from the green to the red area. In real life, it will usually be a continuous spectrum, distributed in such a way that the red receivers are stimulated more than the green receivers. In fact, the exact way by which the human eye perceives colour is not yet definitely known.
Nevertheless whether you are a physicist or a theatre lighting designer or an artist, there seems to me to be a common and limited set of operations on colours. For theatre lighting (and for lighting a 3d CGI scene) one uses lights with colour filters and mixes colours. Lights can be dimmed or brightened or switched on and off. These are quite simple operations and seem to me to apply to most models of colour. The other aspect of colour ( but not being an expert) is that it seems to be like a 3d vector in many respects, where instead of x,y,z you have the colours red, green and blue.( as I saw on a Wiki somewhere). This gives you a basis for some mathematical operations, like addition, subtraction and multiplication by a scalar. That is assuming the RGB model, but the RGB model appears to be the closest hardware conterpart to the physical phenomena. This stuff is presumably pretty basic to an expert, but my point is that a colour C++ Concept is not outside the realms of possibility AFAICS. To me anyway its more interesting and useful on its own, and should be useable on its own.
and why there are various representations. That could certainly be interesting, but it might also be too much theory for the library. I think the presence or absence of such background documentation should not affect the decision whether to accept or reject GIL. For instance the primary colours are red, yellow and blue. No, they're not. It's what people learn when they grow up, but it's really wrong. The primary colours of the additive model are red, green and blue. With three light sources in these colours, you can mix nearly every other
colour a human can perceive. Because the human eye responds to these three colours the strongest, they most closely resemble the real model. Mixing these three colours results in white, while the absence of them is black. The primary colours of the subtractive model are magenta, cyan and yellow. Absence of these is white, presence of all three is white. They are used in printing, because the base of printing is a white ground, and because printed colour dots don't emit light, unlike CRT/LCD pixels. > Floating point RGB colours often have a range of intensities per color > between 0 and 1, yet if > I look at the sun I can burn my eyes. IOW what do those numbers actually > represent in terms of the physical phenomenon of light. > Ah, that's an interesting question. In terms of the physical phenomenon, the answer is, "not much". The answer is found in engineering: 0 means nothing, while 1 means the brightest light the output device is capable of emitting (under its current brightness configuration).
That doesn't give you much of a basis for comparing two images from different sources, say for comparing a CCTV image of a criminal to a mugshot. Presumably there are standards which try to address this problem.
IOW surely the goal is to try to model the physical phenomenon in the best possible way given a set of constraints imposed by hardware and software.
What do you mean now, the goal of the library or the goal of computer engineering? In terms of computer engineering, well, they actually achieved the physical model ;)
They may do, if there is a standard by which the hardware phenomenon is calibrated to the physical one, as in the above problem.
In terms of the library, I disagree. The library should not model the physical concept of colour, because that is tricky to define, hard to understand, and complicated to work with.
That is up to the designer of the Concept. The phenomenon may be complex, but the means of using it, its operations, seem fairly simple to me.
A library doing that might be nice for exact scientific simulation, but useless in every other discipline concerned with colour. Rather, the library should most closely model colour as it is used in computing, i.e. be capable of representing the various colour models in use (RGB, CMYK, HSV, ...) and providing access to all these, while allowing transparent use in cases where I don't care about them, allowing conversion between the various models, etc.
And don't forget printing... And thanks for the technical info about colour which was very interesting. regards Andy Little
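To make the "3D vector" analogy above concrete, here is a minimal sketch of a linear RGB type with the addition, subtraction and scalar-multiplication operations described; the type and operator set are illustrative, not part of GIL:

    // Linear RGB colour behaving like a 3D vector.
    struct rgb {
        float r, g, b;
    };

    rgb operator+(rgb a, rgb b) { return rgb{a.r + b.r, a.g + b.g, a.b + b.b}; }  // mix two lights
    rgb operator-(rgb a, rgb b) { return rgb{a.r - b.r, a.g - b.g, a.b - b.b}; }
    rgb operator*(float s, rgb c) { return rgb{s * c.r, s * c.g, s * c.b}; }      // dim or brighten

    // Dimming a stage light to half intensity is then just:
    //   rgb dimmed = 0.5f * light;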

Andy Little wrote:
For theatre lighting (and for lighting a 3d CGI scene) one uses lights with colour filters and mixes colours. Lights can be dimmed or brightened or switched on and off.
These are quite simple operations and seem to me to apply to most models of colour.
They apply to linear additive light models, certainly, but they don't always apply to non-linear subtractive models (well, at some level of detail they become linear light in the physical world, but users/artists don't view it like that).
The other aspect of colour (though I am not an expert) is that it behaves like a 3D vector in many respects, where instead of x, y, z you have the colours red, green and blue (as I saw on a wiki somewhere). This gives you a basis for some mathematical operations, such as addition, subtraction and multiplication by a scalar. That assumes the RGB model, but the RGB model appears to be the closest hardware counterpart to the physical phenomenon.
Colour is a human concept; it only exists in the brain. In the physical world you have an infinite space of wavelengths, and spectral colour, when used for modelling the spectral reproduction of objects, uses more than 3 basis vectors: for example you might take 10nm intervals across the visible spectrum (you may also need UV and IR bands if you're dealing with things like whiteners added to paper). If you needed to work with unusual eye conditions, e.g. http://en.wikipedia.org/wiki/Tetrachromat, you might need 5 basis functions for your colour representation. To truly model the cinema-style experience you need 4 (rods and cones). But these are all based upon additive mixtures of linear light. They don't quite work in subtractive systems (like printing or print film) in the same way, nor in non-linearly encoded colour spaces.
That doesnt give you much of a basis for comparing two images from different sources. Say for comparing a CCTV image of a criminal to a mugshot. Presumably there are standards around which try to address this problem.
Well, yes and no. There is often a set of viewing conditions associated with reproducing an image in the correct way; what is not yet complete is how to map an image out of this set of conditions. For instance, the size of an image affects how you perceive it - this is not something non-specialists would guess, I imagine, although they will have observed that turning a light on or off changes the way printed media reacts in a coarse way; try looking at a photo under a sodium vapour street lamp, for instance.
In terms of the library, I disagree. The library should not model the physical concept of colour, because that is tricky to define, hard to understand, and complicated to work with.
That is up to the designer of the Concept. The phenomenon may be complex but the means of using it, its operations, seem fairly simple to me.
Generally in the CGI end of the business we have metadata that describes how to interpret the data, separate from its data representation. I'd want a library to understand that separation. Floating point numbers do go >1.0 (and <0.0 once you enter abstract mathematical representations). Personally, if I ever see an HSV/HSL/etc. colour space I avoid it like the plague, as generally they are ill defined by some 'graphics' textbook rather than being a true 'colour' space. This is because the approximation they make is *too* approximate for the level of adjustments we need to make. Kevin -- | Kevin Wheatley, Cinesite (Europe) Ltd | Nobody thinks this | | Senior Technology | My employer for certain | | And Network Systems Architect | Not even myself |
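A rough sketch of the spectral representation and data/metadata separation described above, sampling the visible spectrum at 10nm intervals instead of using three RGB basis vectors; the names, the 380-730nm range and the struct layout are illustrative assumptions, not from GIL or any production system:

    #include <array>

    // 36 samples: 380, 390, ..., 730 nm across the visible spectrum.
    const int kSamples = 36;

    struct spectral_color {
        std::array<float, kSamples> power;   // relative power per 10 nm band
    };

    // Metadata describing how to interpret raw channel data, kept
    // separate from the data itself.
    struct channel_metadata {
        int  first_nm;   // wavelength of the first sample (e.g. 380)
        int  step_nm;    // sample spacing (e.g. 10)
        bool linear;     // linear light vs. non-linearly encoded
    };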

"Kevin Wheatley" <hxpro@cinesite.co.uk> wrote in message news:eglp03$neu$1@sea.gmane.org...
Generally in the CGI end of the business we have metadata that describes how to interpret the data, separate from its data representation. I'd want a library to understand that separation. Floating point numbers do go >1.0 (and <0.0 once you enter abstract mathematical representations)
Personally, if I ever see an HSV/HSL/etc. colour space I avoid it like the plague, as generally they are ill defined by some 'graphics' textbook rather than being a true 'colour' space. This is because the approximation they make is *too* approximate for the level of adjustments we need to make.
It occurs to me that there is no real reason why a Concept, once more formally defined in the language, and knowing the available models, couldn't choose a representation by which to represent itself. regards Andy Little

Andy, Yes, I agree with you that we can have a single concept for Colour, and that its representations in memory (number of bits per channel, ordering of the channels, etc.) could be models of that concept. At first glance, it does seem appealing to separate color out. Here is where we disagree. I believe that:
- Dealing with color cannot be easily separated out of GIL without splitting the library in half
- It doesn't make sense to separate color out, because it would require moving out of GIL lots of other concepts which are very much related to images
Let us consider carefully what this would involve in practice. First, let's settle on terminology. A color value consists of an ordered sequence of channel values, along with a color space that provides interpretation for the channel values. For example, the color Red can be represented in RGB color space with 8-bit unsigned integral channels as [255 0 0] (that is, 100% Red, 0% Green, 0% Blue). The same color Red can be represented in floating point BGR as [0.0f 0.0f 1.0f]. Color space domains do not overlap completely. For example, the same color Red cannot be accurately represented in CMYK color space, i.e. using a mixture of Cyan, Magenta, Yellow and Black inks. One common operation we would like to do is convert between colors of different color spaces. In GIL the construct that holds the value of a color in a given color space is called a Pixel. To support all this, we need:
1. A concept of a color channel
- support for conversion between channels of different representations
- channel traits to define the range of a channel
- channel iterators and reference types, to allow for channels whose size is less than a byte
- low-level operations such as multiplying two channels, inverting a channel, etc. These may require performance specializations for each channel model
- metafunctions to determine if two channels are compatible (i.e. if there is a lossless conversion between them)
2. A concept of a color space
- properties of a color space, such as the number of channels and the presence of alpha
- compatibility between color spaces
- ordering of the channels
3. A concept of a Pixel (i.e. the holder of the channels)
- metafunctions determining things like compatibility between pixels
- various pixel-level operations, such as fill_channels, transform_channels, for_each_channel. Implementations of these that are fast (i.e. no explicit loop), support pixels of any arity, support heterogeneous pixels, and pair channels semantically (Red to Red) rather than by their ordering in memory
- metafunctions to get, say, the type of the K-th channel of a pixel
- types for references and iterators over pixels, for the same reasons we have them for channels
4. Support for color conversion
- convertibility properties of an ordered pair of pixels
- color conversion implementations for ordered pairs of pixel types, possibly with performance specializations
- the ability to replace the default color converter, or provide overrides for some combinations, for example to perform a high quality color conversion using color profiles
To do fast color conversion, some systems may want to perform color conversion on multiple pixels simultaneously. This necessitates defining pixel iterators, the associated iterator traits...
GIL core files that deal exclusively with the above:
- cmyk.hpp
- device_n.hpp
- gray.hpp
- hsb.hpp
- lab.hpp
- rgb.hpp
- rgba.hpp
- channel.hpp
- color_convert.hpp
- pixel.hpp
- pixel_algorithm.hpp
- pixel_iterator.hpp
- pixel_iterator_traits.hpp
- planar_ptr.hpp
- planar_ref.hpp
GIL core files that deal in part with the above:
- gil_concept.hpp
- gil_config.hpp
- metafunctions.hpp
- typedefs.hpp
- gil_all.hpp
Of the 27 files in GIL core, 20 are necessary to support just basic color conversion. You characterized Color as something that is easy to separate out of GIL. I hope that this convinces you otherwise. Now, as for the second statement, whether it makes sense to separate color out of GIL. In my opinion, certainly not! Yes, there are cases where people would need just color and color conversion and won't have to deal with images directly. But there are also cases where people will need vectors and vector operations and won't need matrices. Does that mean we should split a linear algebra library into vector-only and matrix-only libraries? Color support is tightly integrated with the rest of GIL, and this is with good reason: the list above is long not because dealing with color is complex on a conceptual level, but because of the vast variability in how color can be represented, and this variability needs to be handled by the image processing library. You may argue that you need color to define the color of your dialog box or your 3D mesh, objects seemingly unrelated to images. Ultimately, however, everything gets rasterized into an image to be displayed on the screen. That's why it seems natural for color to be part of an image library. If you are still not convinced, look at all the other image libraries. Does Cairo separate color out? Does Vigra? Does Anti-Grain? How about OpenCV? VXL has a set of eight core libraries for imaging: http://paine.wiau.man.ac.uk/pub/doc_vxl/index.html Why do you think they deal with color inside their "Core Image" library? http://paine.wiau.man.ac.uk/pub/doc_vxl/core/vil/html/annotated.html Or do you think that every imaging library developer must have gotten it wrong? Lubomir
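As a concrete instance of item 1 in the list above, here is a simplified sketch of channel conversion between an 8-bit unsigned channel and a float channel in [0, 1]. GIL generalizes this over many channel types; the helper names below are hypothetical:

    #include <cstdint>

    // Map the full 8-bit range [0, 255] onto the float range [0, 1].
    inline float channel8_to_float(std::uint8_t c) {
        return c / 255.0f;
    }

    // Map [0, 1] back to [0, 255], clamping and rounding.
    inline std::uint8_t float_to_channel8(float c) {
        if (c < 0.0f) c = 0.0f;
        if (c > 1.0f) c = 1.0f;
        return static_cast<std::uint8_t>(c * 255.0f + 0.5f);
    }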

"Lubomir Bourdev" <lbourdev@adobe.com> wrote in message <...> <...>
To do fast color conversion some systems may want to perform color conversion on multiple pixels simultaneously. This necessitates defining pixel iterators, the associated iterator traits...
Here, IMO, is the problem. Elsewhere you say that a pixel is exactly equivalent to a Colour, but what would a colour iterator be? Quote from the GIL design guide: "A pixel is a set of channels defining the color at a given point in an image." There is more than just colour to a pixel AFAICS. regards Andy Little

Andy Little wrote:
There is more than just colour to a pixel AFAICS.
Well, we certainly store more than colour channels in an image, e.g. z-depth, surface normal direction, opacity; it is also quite usual to have several colour channels for differing aspects of the image. Kevin -- | Kevin Wheatley, Cinesite (Europe) Ltd | Nobody thinks this | | Senior Technology | My employer for certain | | And Network Systems Architect | Not even myself |
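A sketch of the kind of "deep" pixel described above, where an image element carries more than colour; the struct is illustrative only:

    // One element of a CGI-style image: colour plus auxiliary channels.
    struct deep_pixel {
        float r, g, b;      // colour
        float a;            // opacity (alpha)
        float z;            // z-depth
        float nx, ny, nz;   // surface normal direction
    };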

Andy Little wrote:
Lubomir Bourdev wrote:
But with a few minor exceptions, the code in the core is tightly interdependent.
Yes exactly my point.... ;-)
Andy Little wrote:
My first impression is that the library mixes many Concepts, that can easily be separated.
Which of these is your point? Lubomir

Andy Little wrote:
Are the GIL Authors interested in defending their library against my criticisms, or do they expect it to get into Boost by default?
If it does get into Boost without any defence whatsoever, then it will confirm some suspicions (which I currently think are unfounded) I have about Boost.
Huh ? Are you serious ? Today is the first day of GIL's evaluation. What do you expect ? Give the author(s) some time to collect opinions and criticism so they can provide a well structured response. I believe it is much easier to criticize and mentally destruct something than it is to actually create and defend it. Please, give the authors a break ! Thanks, Stefan -- ...ich hab' noch einen Koffer in Berlin...

"Stefan Seefeld" <seefeld@sympatico.ca> wrote in message news:4525DD06.2090207@sympatico.ca...
Andy Little wrote:
Are the GIL Authors interested in defending their library against my criticisms, or do they expect it to get into Boost by default?
If it does get into Boost without any defence whatsoever, then it will confirm some suspicions (which I currently think are unfounded) I have about Boost.
Huh ? Are you serious ? Today is the first day of GIL's evaluation. What do you expect ? Give the author(s) some time to collect opinions and criticism so they can provide a well structured response.
3 days is plenty, unless they have no answers...
I believe it is much easier to criticize and mentally destruct something than it is to actually create and defend it.
Criticism is what a review is about. If I'm not allowed to criticise, there seems no point in having a review. And I'm not sure what you mean by mentally destroying... could you elaborate? Anything worthwhile is robust enough to withstand criticism. Please let's not all have to sit about saying how nice everything is.
Please, give the authors a break !
What, are you asking for a suspension of the review? It ain't hardly even started yet. regards Andy Little

Andy Little wrote:
"Stefan Seefeld" <seefeld@sympatico.ca> wrote in message news:4525DD06.2090207@sympatico.ca...
Andy Little wrote:
Are the GIL Authors interested in defending their library against my criticisms, or do they expect it to get into Boost by default?
If it does get into Boost without any defence whatsoever, then it will confirm some suspicions (which I currently think are unfounded) I have about Boost.
Huh ? Are you serious ? Today is the first day of GIL's evaluation. What do you expect ? Give the author(s) some time to collect opinions and criticism so they can provide a well structured response.
3 days is plenty, unless they have no answers...
The review started yesterday, and hopefully there are many more reviews that will arrive. If there is a lot of overlap in their arguments it would be more efficient for Lubomir to address them collectively.
I believe it is much easier to criticize and mentally destruct something than it is to actually create and defend it.
Criticism is what a review is about. If I'm not allowed to criticise , seems no point in having a review.
I agree.
And I'm not sure what you mean by mentally destroying... could you elaborate? Anything worthwhile is robust enough to withstand criticism. Please let's not all have to sit about saying how nice everything is.
What are you talking about ? Boost has never been a place where compliments were made gratuitously. I'm not suggesting not to criticize. My point is that it is far easier to make a simple statement reflecting a 'first impression' than it is to give detailed reasoning (and defense). Regards, Stefan -- ...ich hab' noch einen Koffer in Berlin...

On Oct 6, 2006, at 7:23 AM, Andy Little wrote:
I believe it is much easier to criticize and mentally destruct something than it is to actually create and defend it.
Criticism is what a review is about. If I'm not allowed to criticise, there seems no point in having a review. And I'm not sure what you mean by mentally destroying... could you elaborate? Anything worthwhile is robust enough to withstand criticism. Please let's not all have to sit about saying how nice everything is.
Andy, you are absolutely correct that library reviews involve criticism, but how we criticize is very important. The goal of a Boost library review is to improve the library through constructive criticism, and at the end we make a decision: is the library good enough at this point to accept it into Boost? If not, we hope to have provided enough constructive criticism for it to be improved and accepted at a later time. I believe that the Serialization library is our best example of how constructive criticism in a review resulted in an excellent library that was accepted in its second review, and I hope we can have more such success stories. You brought up some valid points in your initial message, and these points need to be discussed. But you crossed a line when asking whether the authors are interested in defending their library against your criticisms. They are interested, or they would not have brought their library up for review. If you don't get a response to your question quickly, be patient; if it takes too long or you don't get an answer you feel is sufficient, ask again or try to rephrase the question. E-mail is a poor communication medium, and even if messages rarely get lost in transmission, they often get drowned in the deluge of other messages. Don't assume that an unanswered message means you're being ignored. Given constructively, criticism will be taken better and have more positive effects, and you'll get the answers you want. Doug, Boost Moderator

Andy Little wrote:
"Andy Little" <andy@servocomm.freeserve.co.uk> wrote in message news:efub9t$sio$1@sea.gmane.org...
"Tom Brinkman" <reportbase@gmail.com> wrote in message news:30f04db60610022214t6b2b42b8h8c4efe9f14bac849@mail.gmail.com... With the last finishing touches completed, i'm excited to be able to announce that the review of Generic Image Library (GIL) will begin October 5, in three days.
Are the GIL Authors interested in defending their library against my criticisms, or do they expect it to get into Boost by default?
If it does get into Boost without any defence whatsoever, then it will confirm some suspicions (which I currently think are unfounded) I have about Boost.
Well, are you going to provide a review? So far you have only given your "first impression". Or do you think a first impression should be enough to kill it? Ian McCulloch

"Ian McCulloch" <ianmcc@physik.rwth-aachen.de> wrote in message news:eg5cj6$es7$1@sea.gmane.org...
Andy Little wrote:
"Andy Little" <andy@servocomm.freeserve.co.uk> wrote in message news:efub9t$sio$1@sea.gmane.org...
"Tom Brinkman" <reportbase@gmail.com> wrote in message news:30f04db60610022214t6b2b42b8h8c4efe9f14bac849@mail.gmail.com... With the last finishing touches completed, i'm excited to be able to announce that the review of Generic Image Library (GIL) will begin October 5, in three days.
Are the GIL Authors interested in defending their library against my criticisms, or do they expect it to get into Boost by default?
If it does get into Boost without any defence whatsoever, then it will confirm some suspicions (which I currently think are unfounded) I have about Boost.
Well, are you going to provide a review?
If the authors can't be bothered to answer my original post... probably not. Why make the effort?
So far you have only given your "first impression".
Yep... what is the significance of the quotes?
Or do you think a first impression should be enough to kill it?
I think it had better start justifying its existence as a proposed addition to Boost. regards Andy Little

Andy Little wrote:
"Ian McCulloch" <ianmcc@physik.rwth-aachen.de> wrote in message news:eg5cj6$es7$1@sea.gmane.org...
Andy Little wrote:
"Andy Little" <andy@servocomm.freeserve.co.uk> wrote in message news:efub9t$sio$1@sea.gmane.org...
"Tom Brinkman" <reportbase@gmail.com> wrote in message news:30f04db60610022214t6b2b42b8h8c4efe9f14bac849@mail.gmail.com... With the last finishing touches completed, i'm excited to be able to announce that the review of Generic Image Library (GIL) will begin October 5, in three days.
Are the GIL Authors interested in defending their library against my criticisms, or do they expect it to get into Boost by default?
If it does get into Boost without any defence whatsoever, then it will confirm some suspicions (which I currently think are unfounded) I have about Boost.
Well, are you going to provide a review?
If the authors can't be bothered to answer my original post... probably not. Why make the effort?
There was a reply from Lubomir 12 hours ago.
So far you have only given your "first impression".
Yep... what is the significance of the quotes?
Because it was a quote from your original post: "My first impression is..." Ian McCulloch

Andy Little wrote:
The domain that the library can be used in is very narrow. To be seriously used for image recognition as was previously suggested an application, my guess is that the library would need the ability to apply arbitrary transforms, including other than 90 degree rotations, and interpolation of points, stereoscopic vision etc.
That is a fair point - having image processing algorithms would make GIL more useful, and we would like to have them at some point in the future. But writing these in a generic and efficient way is a huge undertaking. We are hoping the open source community will join us in writing a future numeric extension to GIL (and, if GIL makes it into Boost, a future Boost GIL-algorithm extension proposal). That said, we don't want the lack of image processing algorithms to be an impediment for those who want to use them. This is why we recently provided a small numeric GIL extension that gives you a starting point for writing image processing algorithms. You can get it off GIL's main download page (step 5): http://opensource.adobe.com/gil/download.html The algorithms there are not well documented and not optimized for performance. But you can do a convolution and generic resampling (nearest-neighbor and bilinear interpolation). That lets you do things like blurring, sharpening, rescaling images, arbitrary-degree rotation, etc. We have a sample file that shows you how to do this. Even if we had a fully-optimized and comprehensive image processing extension to GIL, it is probably not a good idea to include it in this Boost proposal. If you think GIL is big now, imagine how much bigger it would be with the algorithms. Lubomir
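For readers curious what the resampling mentioned above amounts to, here is a standalone sketch of bilinear interpolation on a single-channel float image. The extension's actual code is generic over GIL views and organized differently, so this simplified function is only an illustration:

    #include <cmath>

    // Sample a w x h row-major float image at a fractional position
    // (x, y), with 0 <= x < w and 0 <= y < h assumed, by blending the
    // four surrounding pixels.
    float sample_bilinear(const float* img, int w, int h, float x, float y) {
        int x0 = static_cast<int>(std::floor(x));
        int y0 = static_cast<int>(std::floor(y));
        int x1 = (x0 + 1 < w) ? x0 + 1 : x0;   // clamp at the right edge
        int y1 = (y0 + 1 < h) ? y0 + 1 : y0;   // clamp at the bottom edge
        float fx = x - x0, fy = y - y0;
        float top = img[y0 * w + x0] * (1 - fx) + img[y0 * w + x1] * fx;
        float bot = img[y1 * w + x0] * (1 - fx) + img[y1 * w + x1] * fx;
        return top * (1 - fy) + bot * fy;
    }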

Lubomir, I'm in the process of reading the GIL Design Guide. Right now I'm specifically interested in aspects related to the pixel type(s), specifically, how to represent pixels whose individual channels can't be represented as multiples of bytes. For example, IPP provides a heterogeneous 16-bit RGB type with 5 bits red, 6 bits green, and 5 bits blue. I wonder whether that could be mapped to GIL, and how. I think it is important to be able to write C++ code using GIL that is binary-compatible with these types. Is this possible ? Any hints would be highly appreciated ! Many thanks, Stefan -- ...ich hab' noch einen Koffer in Berlin...

Stefan Seefeld wrote:
...IPP provides a heterogeneous 16-bit RGB type with 5 bits red, 6 bits green, and 5 bits blue. I wonder whether that could be mapped to GIL, and how.
Although we don't have an existing model, I believe it is possible. You will need to make a heterogeneous pixel (i.e. a pixel whose channels are of different types). Its channels will have proxy references. Here is how I would go about doing it. First, you need a class that can modify a given range of bits in a given type. This will be the channel proxy reference:

    #include <cstdint>

    // Manipulates bits [StartBit .. StartBit+NumBits-1] of Data
    template <typename Data, int StartBit, int NumBits>
    struct subbyte_channel_reference {
        Data& _data;
        typedef Data value_type;
        subbyte_channel_reference(Data& data);
        subbyte_channel_reference& operator=(int value);
    };

    typedef subbyte_channel_reference<int16_t, 0, 5>  red_565_chan_ref_t;   // bits 0-4
    typedef subbyte_channel_reference<int16_t, 5, 6>  green_565_chan_ref_t; // bits 5-10
    typedef subbyte_channel_reference<int16_t, 11, 5> blue_565_chan_ref_t;  // bits 11-15

You also need to make traits for it indicating that its reference type is a proxy:

    template <>
    struct channel_traits<red_565_chan_ref_t> {
        typedef red_565_chan_ref_t::value_type value_type;
        typedef red_565_chan_ref_t reference;
        ...
    };

Then create your own model of HeterogeneousPixelConcept that returns the corresponding references:

    struct rgb565_pixel {
        typedef rgb_t color_space_t;

        template <int K> struct kth_channel_t;

        template <int K>
        typename kth_channel_t<K>::reference channel() {
            return typename kth_channel_t<K>::reference(_data);
        }

        int16_t _data;   // this stores the 5+6+5 bits
    };

    template <> struct rgb565_pixel::kth_channel_t<0> { typedef red_565_chan_ref_t   reference; };
    template <> struct rgb565_pixel::kth_channel_t<1> { typedef green_565_chan_ref_t reference; };
    template <> struct rgb565_pixel::kth_channel_t<2> { typedef blue_565_chan_ref_t  reference; };

You don't have to worry about planar representation here, as 565 pixels are always interleaved as far as I know. In theory GIL shouldn't have to change. "In theory, theory and practice are the same, but in practice..." :-) I suspect there will be a few glitches, since this will be the first such model. Lubomir
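Assuming the operator= of subbyte_channel_reference is implemented to mask the assigned value into the referenced bits, usage of the sketch above might look like this (hypothetical, untested):

    rgb565_pixel p;
    p._data = 0;
    p.channel<0>() = 31;   // red:   5 bits, maximum value (bits 0-4)
    p.channel<1>() = 63;   // green: 6 bits, maximum value (bits 5-10)
    p.channel<2>() = 31;   // blue:  5 bits, maximum value (bits 11-15)
    // p._data is now 0xFFFF, i.e. white in RGB565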

Lubomir Bourdev wrote:
Stefan Seefeld wrote:
...IPP provides a heterogenous 16-bit RGB type with 5 bits red, 6 bits green, and 5 bits blue. I wonder whether that could be mapped to GIL, and how.
Although we don't have an existing model, I believe it is possible.
You will need to make a heterogeneous pixel (i.e. a pixel whose channels are of different types). Its channels will have proxy references.
Here is how I would go about doing it.
[...] Excellent ! I'll play a bit with it and see whether I can bind IPP to it. As I indicated in my other mail, I'm very interested in a C++ API to manipulate images and pixels, but it is important to be able to provide bindings to highly optimized 'backends' such as IPP for special functions, because a) that is what users at present use and b) there is a lot of effort put into these, so it would be foolish to throw them away just because they aren't written in C++.
You don't have to worry about planar representation here, as 565 pixels are always interleaved as far as I know.
Indeed, that's what I would expect, too.
In theory GIL shouldn't have to change. "In theory, theory and practice are the same, but in practice..." :-) I suspect there will be a few glitches since this will be the first such model.
That's why I ask. :-) By the way, I think such a hands-on guide to writing a user-defined pixel type may make a good addition to the documentation. Many thanks, Stefan -- ...ich hab' noch einen Koffer in Berlin...

- What is your evaluation of the design?
I believe that the various Concepts, namely treating a data sequence as a view on a matrix or grid, iterating over a matrix or grid, and Colour, have not been adequately thought about in their own right. I do like the locator concept, though AFAICS it has a lot of similarities to an iterator over a 2D or 3D grid; again, these Concepts (e.g. the step iterator) could be abstracted out and made more generally useful.
- What is your evaluation of the implementation?
I didn't look in detail. It appeared that some typedefs were only there to meet GIL Concept requirements and appeared to be unused. This is an indication that the implementation probably has unnecessary dependencies.
- What is your evaluation of the documentation?
Links to the Concepts don't work (they point to local files on my system). This makes it tedious to wade through. It would be preferable to have local HTML documentation in full. The use of ConceptC++ style concepts is problematic. My suggestion is that if one wishes to use this style in the docs then you should also follow through and actually put the Concepts into code and compile them on ConceptGCC. You could then verify that the documentation is correct. I suspect that the current GIL docs would require a lot of work to pass that test. In fact I would make that a requirement for any docs that wish to use the ConceptGCC format. This would be an interesting discipline and I suspect would result in design changes to the library itself. Overall the docs seem patchy. I would suggest looking at other Boost docs and seeing the difference in format. For one thing, separating code, tutorial and docs into separate downloads is confusing and time wasting. Look at libraries in the Boost vault to see something of a common format.
- What is your evaluation of the potential usefulness of the library?
I see the usefulness in terms of separating the above Concepts. This would be more useful as various separate libraries. Each of the above Concepts is complex enough to deserve its own library.
- Did you try to use the library? With what compiler? Did you have any problems?
There is a format (official or unofficial) for library reviews, which has not been followed. The code is designed to be copied into the reviewer's Boost distribution for evaluation. The installation section of the tutorial doesn't cover quite what you are meant to do to install the library. The lack of detailed information is a common theme. I get the impression I am meant to be an expert in the domain to use the library. This put me off sufficiently not to bother trying the code. Suggestions for a first example: load an image, provide some means or suggestions so that you can view the image, show code for a transform of the image, view the result, and proceed like that. This would make for a much more powerful and comprehensible tutorial and would certainly get me more interested.
- How much effort did you put into your evaluation? A glance? A quick reading? In-depth study?
I found the documentation hard to follow for the reasons given above, hence I didn't look in detail.
- Are you knowledgeable about the problem domain?
No. Overall this is the library's problem. It is very domain specific and the domain is specialised. Some of the Concepts (actual rather than documented) are interesting, but the library seems intended for experts in this particular domain, not for average users like myself. I vote to reject GIL in its current state.
I would suggest that the most useful part of the library, potentially, is that dealing with Colour, but not from an expert's point of view as currently implemented. I would suggest thinking about how to make an easy-to-use Colour interface, maybe looking at current "standards" such as VRML, SVG, HTML and MFC. These are written from a user's rather than a hardware viewpoint, all have similarities, and AFAICS there is no insurmountable problem in providing an interface for the general user in an image processing library. regards Andy Little
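A sketch of the name-based interface suggested above, seeded with a few colour names that HTML and SVG share; the map and helper are illustrative only:

    #include <cstdint>
    #include <map>
    #include <string>

    struct rgb8 { std::uint8_t r, g, b; };

    // A handful of the named colours defined by HTML/SVG.
    std::map<std::string, rgb8> make_named_colors() {
        std::map<std::string, rgb8> m;
        m["red"]   = rgb8{255,   0,   0};
        m["lime"]  = rgb8{  0, 255,   0};   // note: HTML "green" is {0, 128, 0}
        m["blue"]  = rgb8{  0,   0, 255};
        m["white"] = rgb8{255, 255, 255};
        m["black"] = rgb8{  0,   0,   0};
        return m;
    }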

Andy Little wrote:
- What is your evaluation of the design?
I believe that the various Concepts, namely treating a data sequence as a view on a matrix or grid, iterating over a matrix or grid, and Colour, have not been adequately thought about in their own right.
We have discussed our disagreements on separating out Color and Grid in separate lengthy emails. To summarize my points: For Color: Though the idea is appealing, you cannot deal with color in an abstract way; in a color library you must account for how color is represented in memory. There are a large number of representations out there (order of channels, channel depth, subbyte representation, etc.). Where do all these representations come from? Almost exclusively they come from the way colors in _images_ are represented. So all these memory representations are specific to images, and therefore logically they must be supported inside an image library. This explains why color is inside every image library I have looked at. For Grid: Again, what concrete example can you give me that requires GIL's generalization of a Grid and cannot be represented as an image? Your example of a DisplayMatrix for text characters quickly falls apart. If you need generic grid navigation, why not use a library whose goal is more aligned with grids and navigation, such as boost::MultiArray?
- What is your evaluation of the implementation?
I didn't look in detail. It appeared that some typedefs were only there to meet GIL Concept requirements and appeared to be unused. This is an indication that the implementation probably has unnecessary dependencies.
There are typedefs that are required by N-dimensional concepts, for example get an iterator over the N-th dimension of a locator/view/image. These are important so that current 2D models can be used in future generic N-dimensional algorithms.
- What is your evaluation of the documentation?
Links to the Concepts don't work (they point to local files on my system).
Yes, after you brought it up, I discovered that the PDF files have dead links. In the future we will remove the links when generating the PDFs. The HTML links should be working properly.
The use of ConceptC++ style concepts is problematic. My suggestion is that if one wishes to use this style in the docs then you should also follow through and actually put the Concepts into code and compile them on ConceptGCC. You could then verify that the documentation is correct. I suspect that the current GIL docs would require a lot of work to pass that test. In fact I would make that a requirement for any docs that wish to use the ConceptGCC format. This would be an interesting discipline and I suspect would result in design changes to the library itself.
Compiling GIL on ConceptGCC is a great idea and we will do so. Where can we get ConceptGCC from? Had you looked in the code though, you would have discovered that GIL concepts have associated Concept classes, and yes, GIL compiles successfully with boost::concept_check enabled. While this is probably not as strict as ConceptGCC, it suggests that large changes to the library are unlikely.
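For readers unfamiliar with the idiom referred to here: a concept class in the Boost.ConceptCheck style is a class whose constraints() member exercises every expression the concept requires, so the compiler verifies them. The PointConcept below is an illustration of the pattern, not GIL's actual concept class:

    // Compile-time check: P must provide value_type and x/y members
    // (and, as written, be default constructible).
    template <typename P>
    struct PointConcept {
        void constraints() {
            typedef typename P::value_type coord_t;   // required typedef
            P p;
            coord_t x = p.x;                          // required data members
            coord_t y = p.y;
            (void)x; (void)y;
        }
    };

    // boost::function_requires< PointConcept<MyPoint> >() then fails to
    // compile if MyPoint violates any requirement.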
There is a format (official or unofficial) for library reviews, which has not been followed. The code is designed to be copied into the reviewers boost distribution for evaluation.
Given that some other libraries have used external links as well, my understanding is that the vault is for convenience to developers that don't have an easy way to distribute their library. I believe our web page has an easy way to get to the code and documentation and may be more convenient to navigate than a directory of files.
The installation section of the tutorial doesn't cover quite what you are meant to do to install the library.
The tutorial refers you to GIL's web page which contains detailed installation instructions.
Suggestions for a first example: load an image, provide some means or suggestions so that you can view the image, show code for a transform of the image, view the result, and proceed like that.
Have you looked at slide 88 of the video presentation? It is doing exactly this. It shows GIL code to the left and the result to the right. Having a GUI to display an image in a platform independent way is beyond the scope of the library, as we have discussed in an earlier thread. You can just save the image and use your favorite image viewer. Lubomir

Lubomir Bourdev wrote:
Compiling GIL on ConceptGCC is a great idea and we will do so. Where can we get ConceptGCC from?
http://www.generic-programming.org/software/ConceptGCC/ Jeff

"Lubomir Bourdev" <lbourdev@adobe.com> wrote in message news:B55F4112A7B48C44AF51E442990015C0016B74D0@namail1.corp.adobe.com...
Andy Little wrote:
- What is your evaluation of the design?
I believe that the various Concepts, namely treating a data sequence as a view on a matrix or grid, iterating over a matrix or grid, and Colour, have not been adequately thought about in their own right.
We have discussed our disagreements on separating out Color and Grid in separate lengthy emails. To summarize my points:
For Color: Though the idea is appealing, you cannot deal with color in an abstract way; in a color library you must account for how color is represented in memory. There are a large number of representations out there (order of channels, channel depth, subbyte representation, etc.). Where do all these representations come from? Almost exclusively they come from the way colors in _images_ are represented. So all these memory representations are specific to images, and therefore logically they must be supported inside an image library. This explains why color is inside every image library I have looked at.
I think I have covered this in other posts.
For Grid: Again, what concrete example can you give me that requires GIL's generalization of a Grid and cannot be represented as an image? Your example of a DisplayMatrix for text characters quickly falls apart. If you need generic grid navigation, why not use a library whose goal is more aligned with grids and navigation, such as boost::MultiArray?
The Concept of grid navigation is interesting, and I like the locator in GIL. Boost.MultiArray is a model of a grid, sure. The example that interests me is the Mandelbrot example. The Mandelbrot function itself is a poor demo, because generating points is nearly random, but here no actual grid is required to be in memory, yet there is still the concept of a grid. All that is required is the clipping region. (You could have a grid in memory to cache the results of previous views if the user is moving about, but it is not required.) The locator (I would call it a cursor) provides the ability to iterate over a non-existent grid, which appeals to me. Dereferencing the cursor where some elements are functions and others are data elements could get interesting. An obvious application is a bitmap where many pixels are the same colour. One could also provide a function which allows the cursor to navigate the grid, maybe following an edge (a significant difference between two neighbouring elements) and recording its path, for example.
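A minimal sketch of the "non-existent grid" idea: a view whose elements are computed by a function on demand, with no storage behind them. The names are hypothetical, and a Mandelbrot escape count could replace the gradient used here:

    // A grid defined by a function rather than by memory.
    struct synthetic_view {
        int width, height;

        // Element access computes the value on the fly.
        float operator()(int x, int y) const {
            return static_cast<float>(x + y) / (width + height);  // e.g. a gradient
        }
    };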
- What is your evaluation of the implementation?
I didn't look in detail. It appeared that some typedefs were only there to meet GIL Concept requirements and appeared to be unused. This is an indication that the implementation probably has unnecessary dependencies.
There are typedefs that are required by N-dimensional concepts, for example get an iterator over the N-th dimension of a locator/view/image. These are important so that current 2D models can be used in future generic N-dimensional algorithms.
- What is your evaluation of the documentation?
Links to the Concepts don't work (they point to local files on my system).
Yes, after you brought it up, I discovered that the PDF files have dead links. In the future we will remove the links when generating the PDFs. The HTML links should be working properly.
The use of ConceptC++ style concepts is problematic. My suggestion is that if one wishes to use this style in the Docs then you should also follow through and actually put the Concepts into code and compile them on ConceptGCC. You could then verify that the documentation is correct. I suspect that the current GIL docs would require a lot of work to pass that test. In fact I would make that a requirement for any Docs that wish to use the ConceptGCC format. This would be an interesting discipline and I suspect would result in design changes to the library itself.
Compiling GIL on ConceptGCC is a great idea and we will do so. Where can we get ConceptGCC from? Had you looked in the code though, you would have discovered that GIL concepts have associated Concept classes, and yes, GIL compiles successfully with boost::concept_check enabled. While this is probably not as strict as ConceptGCC, it suggests that large changes to the library are unlikely.
ConceptC++ is a different animal. My limited experience with it so far has been to throw away most of my ideas regarding templates. Porting a library to ConceptC++ will be, AFAICS, a major project.
There is a format (official or unofficial) for library reviews, which has not been followed. The code is designed to be copied into the reviewers boost distribution for evaluation.
Given that some other libraries have used external links as well, my understanding is that the vault is for convenience to developers that don't have an easy way to distribute their library. I believe our web page has an easy way to get to the code and documentation and may be more convenient to navigate than a directory of files.
The installation section of the tutorial doesn't cover quite what you are meant to do to install the library.
The tutorial refers you to GIL's web page which contains detailed installation instructions.
Suggestions for a first example: load an image, provide some means or suggestions so that you can view the image, show code for a transform of the image, view the result, and proceed like that.
Have you looked at slide 88 of the video presentation? It is doing exactly this. It shows GIL code to the left and the result to the right. Having a GUI to display an image in a platform independent way is beyond the scope of the library, as we have discussed in an earlier thread. You can just save the image and use your favorite image viewer.
As regards the way the review material is packaged for potential reviewers, I think you could revisit it ;-) regards Andy Little
participants (13)
- Andy Little
- David Abrahams
- Doug Gregor
- Douglas Gregor
- Ian McCulloch
- Jeff Garland
- Kevin Wheatley
- Lubomir Bourdev
- Michael Goldshteyn
- Robert Ramey
- Sebastian Redl
- Stefan Seefeld
- Tom Brinkman