Review Request: Variadic Macro Data library

I am requesting that my library, the Variadic Macro Data library, which is in the sandbox in the variadic_macro_data directory, be reviewed for inclusion into Boost. The variadic_macro_data library adds support and functionality for variadic macros to Boost as well as integrating variadic macros with the Boost PP library without changing the latter library in any way. I believe others who have used my library can attest to its quality, that it does what it is supposed to do, and that it is useful when using variadic macros with Boost PP. I myself have used its functionality in my own TTI library in the sandbox. Support for variadic macros is implemented in nearly all modern C++ compilers, and the syntax is natural for an end-user. The library is finalized as far as functionality is concerned; I would like to see it in Boost and am willing to maintain it as a Boost library.

Edward Diener

On Thu, Feb 17, 2011 at 5:13 PM, Edward Diener <eldiener@tropicsoft.com> wrote:
I am requesting that my library, the Variadic Macro Data library, which is in the sandbox in the variadic_macro_data directory, be reviewed for inclusion into Boost.
The variadic_macro_data library adds support and functionality for variadic macros to Boost as well as integrating variadic macros with the Boost PP library without changing the latter library in any way.
I believe others who have used my library can attest to its quality, that it does what it is supposed to do, and that it is useful when using variadic macros with Boost PP. I myself have used its functionality in my own TTI library in the sandbox. Support for variadic macros is implemented in nearly all modern C++ compilers, and the syntax is natural for an end-user. The library is finalized as far as functionality is concerned; I would like to see it in Boost and am willing to maintain it as a Boost library.
I have been using the variadic_macro_data library to rework the Boost.Local macros to variadic macros. It turned out I only needed a couple of macros from this library (VMD_SIZE and VMD_TO_SEQ), so I cannot comment much about the library at this point. However, I read the entire library documentation, and I am personally pleased with the functionality it offers, and especially with its design, which does not change any of the Boost.Preprocessor macros. I would personally find it useful if this library were reviewed for addition to Boost.
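For concreteness, the two macros I mention behave roughly like this (an illustration from my reading of the docs; check the sandbox sources for the exact spellings and headers):

    VMD_SIZE(a, b, c)    /* expands to 3, the number of variadic arguments */
    VMD_TO_SEQ(a, b, c)  /* expands to the Boost.PP seq (a)(b)(c) */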
-- Lorenzo

On Feb 17, 2011, at 9:57 PM, Lorenzo Caminiti wrote:
On Thu, Feb 17, 2011 at 5:13 PM, Edward Diener <eldiener@tropicsoft.com> wrote:
I am requesting that my library, the Variadic Macro Data library, which is in the sandbox in the variadic_macro_data directory, be reviewed for inclusion into Boost.
The variadic_macro_data library adds support and functionality for variadic macros to Boost as well as integrating variadic macros with the Boost PP library without changing the latter library in any way.
I believe others who have used my library can attest to its quality, that it does what it is supposed to do, and that it is useful when using variadic macros with Boost PP. I myself have used its functionality in my own TTI library in the sandbox. Support for variadic macros is implemented in nearly all modern C++ compilers, and the syntax is natural for an end-user. The library is finalized as far as functionality is concerned; I would like to see it in Boost and am willing to maintain it as a Boost library.
I have been using the variadic_macro_data library to rework the Boost.Local macros to variadic macros. It turned out I only needed a couple of macros from this library (VMD_SIZE and VMD_TO_SEQ), so I cannot comment much about the library at this point. However, I read the entire library documentation, and I am personally pleased with the functionality it offers, and especially with its design, which does not change any of the Boost.Preprocessor macros.
I would personally find it useful if this library were reviewed for addition to Boost.
If I remember correctly, VS 2008's support for variadic macros is limited to pasting the entire __VA_ARGS__ as one token versus pasting it as a series of tokens that may be further manipulated. Has this been fixed in VS 2010, or how does your library get around this significant bug? Is there an online reference to your library somewhere?
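For illustration, a minimal sketch of the failure mode being described (the macro names here are hypothetical, not taken from the library under review):

    #define COUNT_I(a, b, c, n, ...) n
    #define COUNT(...) COUNT_I(__VA_ARGS__, 3, 2, 1)
    /* COUNT(x, y) expands to 2 on a conforming preprocessor, but classic
       VC++ forwards "x, y" to COUNT_I as a single argument, yielding 1. */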
-- Lorenzo

On 2/18/2011 8:53 AM, Daniel Larimer wrote:
On Feb 17, 2011, at 9:57 PM, Lorenzo Caminiti wrote:
On Thu, Feb 17, 2011 at 5:13 PM, Edward Diener<eldiener@tropicsoft.com> wrote:
I am requesting that my library, the Variadic Macro Data library, which is in the sandbox in the variadic_macro_data directory, be reviewed for inclusion into Boost.
The variadic_macro_data library adds support and functionality for variadic macros to Boost as well as integrating variadic macros with the Boost PP library without changing the latter library in any way.
I believe others who have used my library can attest to its quality, that it does what it is supposed to do, and that it is useful when using variadic macros with Boost PP. I myself have used its functionality in my own TTI library in the sandbox. Support for variadic macros is implemented in nearly all modern C++ compilers, and the syntax is natural for an end-user. The library is finalized as far as functionality is concerned; I would like to see it in Boost and am willing to maintain it as a Boost library.
I have been using the variadic_macro_data library to rework the Boost.Local macros to variadic macros. It turned out I only needed a couple of macros from this library (VMD_SIZE and VMD_TO_SEQ), so I cannot comment much about the library at this point. However, I read the entire library documentation, and I am personally pleased with the functionality it offers, and especially with its design, which does not change any of the Boost.Preprocessor macros.
I would personally find it useful if this library were reviewed for addition to Boost.
If I remember correctly, VS 2008's support for variadic macros is limited to pasting the entire __VA_ARGS__ as one token versus pasting it as a series of tokens that may be further manipulated. Has this been fixed in VS 2010, or how does your library get around this significant bug?
I found a way around all VC++ bugs regarding variadic macros and __VA_ARGS__. You can retrieve any single token from the variadic data using my library, whether in VC++ or any other compiler supporting variadic macros.
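The usual workaround, sketched here with hypothetical macro names (this is the widely known extra-expansion trick, not necessarily the library's actual internals), is to force an additional macro expansion so that VC++ re-scans __VA_ARGS__ as separate arguments:

    #define EXPAND(x) x  /* the extra scan defeats VC++'s single-token behavior */
    #define COUNT_I(a, b, c, n, ...) n
    #define COUNT(...) EXPAND(COUNT_I(__VA_ARGS__, 3, 2, 1))
    /* COUNT(x, y) now expands to 2 on VC++ as well as on GCC. */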
Is there an online reference to your library somewhere?
The library is in the Boost sandbox in the variadic_macro_data directory, and it already comes with full documentation there, both HTML and PDF. All you have to do is check it out using SVN from the sandbox directory.

Hi Edward, I have received your request, and am adding your library to the review queue. At this time you should try to find a review manager for the library. On Feb 17, 2011, at 5:13 PM, Edward Diener wrote:
I am requesting that my library, the Variadic Macro Data library, which is in the sandbox in the variadic_macro_data directory, be reviewed for inclusion into Boost.
The variadic_macro_data library adds support and functionality for variadic macros to Boost as well as integrating variadic macros with the Boost PP library without changing the latter library in any way.
I believe others who have used my library can attest to its quality, that it does what it is supposed to do, and that it is useful when using variadic macros with Boost PP. I myself have used its functionality in my own TTI library in the sandbox. Support for variadic macros is implemented in nearly all modern C++ compilers, and the syntax is natural for an end-user. The library is finalized as far as functionality is concerned; I would like to see it in Boost and am willing to maintain it as a Boost library.
Edward Diener

On 2/18/2011 9:36 AM, Ronald Garcia wrote:
Hi Edward,
I have received your request, and am adding your library to the review queue.
Thank you !
At this time you should try to find a review manager for the library.
I am not sure how one goes about doing that, but I will assume I am supposed to ask for one on this mailing list.
On Feb 17, 2011, at 5:13 PM, Edward Diener wrote:
I am requesting that my library, the Variadic Macro Data library, which is in the sandbox in the variadic_macro_data directory, be reviewed for inclusion into Boost.
The variadic_macro_data library adds support and functionality for variadic macros to Boost as well as integrating variadic macros with the Boost PP library without changing the latter library in any way.
I believe others who have used my library can attest to its quality, that it does what it is supposed to do, and that it is useful when using variadic macros with Boost PP. I myself have used its functionality in my own TTI library in the sandbox. Support for variadic macros is implemented in nearly all modern C++ compilers, and the syntax is natural for an end-user. The library is finalized as far as functionality is concerned; I would like to see it in Boost and am willing to maintain it as a Boost library.
Edward Diener

Hi Edward, On Feb 18, 2011, at 10:06 AM, Edward Diener wrote:
On 2/18/2011 9:36 AM, Ronald Garcia wrote:
At this time you should try to find a review manager for the library.
I am not sure how one goes about doing that, but I will assume I am supposed to ask for one on this mailing list.
That's right. The review wizards will also be putting out a report which requests managers for libraries that don't have one, but individual appeals are often more effective. Best, Ron

Hi Ron, Ed, all, On Feb 18, 2011, at 10:58 AM, Ronald Garcia wrote:
On Feb 18, 2011, at 10:06 AM, Edward Diener wrote:
On 2/18/2011 9:36 AM, Ronald Garcia wrote:
At this time you should try to find a review manager for the library.
I am not sure how one goes about doing that, but I will assume I am supposed to ask for one on this mailing list.
That's right. The review wizards will also be putting out a report which requests managers for libraries that don't have one, but individual appeals are often more effective.
I have been mulling over Joachim's Review Manager Assistant proposal, and I have a couple of thoughts about review manager assignments.

First, as Joachim and many others have pointed out, the review "queue" that's described in http://www.boost.org/community/reviews.html#Wizard doesn't exist and isn't the right idea. I don't think it makes sense for the Review Wizards to try to assign review managers to libraries - it's much better for people to volunteer based on their own interests and expertise. That is what is happening in practice.

Ed, you volunteered to be a review manager about a month ago. Have you approached any of the authors of prospective libraries on http://www.boost.org/community/review_schedule.html who are listed as needing review managers?

I think rather than a queue of review managers, there is really a bag of libraries wanting review managers, and prospective review managers should just volunteer and shouldn't be chosen by the Review Wizards. It would be incredibly difficult for the Wizards to judge whether someone is going to do a good job of managing a review, and even harder to assign a review manager to a library review. I've only read the list for the past four years, so I don't know if the queue was once functional.

Wizards serve a valuable role by planning the review schedule and mediating any problems that come up in reviews. But I think that keeping track of prospective review managers, without realistically being able to assign them to reviews, is needless bureaucracy. The only use for this I can see is if the list were visible, so that authors seeking review managers could know whom they might write to personally. I'd like to see the cruft removed from the Formal Review Process. It's confusing and discouraging.

Second, Joachim proposed the role of Review Manager Assistant as a way for new authors like Ed (and myself and many others) to manage reviews in conjunction with more seasoned boosters. The Assistant would do most of the work of summarizing the debate, and then the senior Review Manager would make decisions and produce the final report. I would like to suggest a generalization which is more fluid (and which would allow RMAs to put the nicer title "Review Manager" on their CVs): simply allow multiple Review Managers for a review. The managers can decide how to split up the work: in sequence like Joachim's RMA/RM idea, or in parallel by divvying up topics for the report. Or some mix of the two.

Having sorted through 260 messages with many dozens of topics when I served as replacement review manager for Constrained Value, I can attest that it's a lot of work to manage a review. Besides making the task more "manageable", having multiple sets of eyes should help ensure that the report is fair and nothing is missed. Of course multiple managers would have to agree on the result, but I'll leave off of the more difficult subjects for now. (The other being what to do about the possibility of badly managed or disputed reviews, which AFAIK have not been a problem so far.)

On a personal note, I just want to say what a valuable experience it is to manage a Boost review. MPL.Graph's code and documentation are almost ready for review, and I feel that I am also psychologically ready because of what I learned from managing the Constrained Value review last summer. I recommend the experience to all prospective authors. Gordon

P.S. A lot of discussion on this topic has assumed that a Review Manager has to be an accepted author, but the Formal Review Process just says that they have to be "an active boost member not connected with the library submission", which I interpret as an active member of the mailing list. Am I correct?

P.P.S. I'd volunteer to manage this review, as I'm eager to be an RM again, but I don't know much about macros.

On 2/19/2011 3:57 PM, Gordon Woodhull wrote:
Hi Ron, Ed, all,
On Feb 18, 2011, at 10:58 AM, Ronald Garcia wrote:
On Feb 18, 2011, at 10:06 AM, Edward Diener wrote:
On 2/18/2011 9:36 AM, Ronald Garcia wrote:
At this time you should try to find a review manager for the library.
I am not sure how one goes about doing that, but I will assume I am supposed to ask for one on this mailing list.
That's right. The review wizards will also be putting out a report which requests managers for libraries that don't have one, but individual appeals are often more effective.
I have been mulling over Joachim's Review Manager Assistant proposal, and I have a couple of thoughts about review manager assignments.
First, as Joachim and many others have pointed out, the review "queue" that's described in http://www.boost.org/community/reviews.html#Wizard doesn't exist and isn't the right idea.
It does not exist as far as anybody can see it, but maybe it exists otherwise. But I think it is the right idea, in that there should be some people who are accepted as possible review managers and who can be contacted if no one else volunteers for a particular library. I see nothing wrong with that situation.
I don't think it makes sense for the Review Wizards to try to assign review managers to libraries - it's much better for people to volunteer based on their own interests and expertise. That is what is happening in practice.
I agree in principle, but if no one volunteers then certainly a library in the review queue should not be rejected simply for lack of a review manager.
Ed, you volunteered to be a review manager about a month ago. Have you approached any of the authors of prospective libraries on http://www.boost.org/community/review_schedule.html who are listed as needing review managers?
I was told to contact the review wizards and offer up my services to manage reviews of libraries. I did so, and mentioned the libraries I felt I could manage. As I understand it, it was then up to the review wizards to determine whether I was qualified and to contact me if they thought that I was. I was not subsequently contacted. What do you mean by "approached any of the authors of prospective libraries"? I initially offered to manage reviews of a number of libraries but was told to do what I described in the preceding paragraph.

On Feb 19, 2011, at 4:46 PM, Edward Diener wrote:
the review "queue" ... doesn't exist and isn't the right idea. It does not exist as far as that anybody can see it but maybe it does exist otherwise. But I think it is the right idea in that there should be some people who are accepted as possible reviewers who can be contacted if no one else volunteers to review a particular library. I see nothing wrong with that situation.
AFAIK the review wizards are not doing this. I don't think they should be expected to because it's hard to know who's qualified to manage a particular review. It takes domain knowledge. Sure it would be nice if it worked.
I don't think it makes sense for the Review Wizards to try to assign review managers to libraries - it's much better for people to volunteer based on their own interests and expertise. That is what is happening in practice.
I agree in principle, but if no one volunteers then certainly a library in the review queue should not be rejected simply for lack of a review manager.
Certainly not. I am fine with libraries remaining in the "queue" as long as the author still wants the library considered for Boost. The only sort of deadline that makes sense in a volunteer organization is if someone loses patience and volunteers to take over from someone else (maintenance, review management...)
Ed, you volunteered to be a review manager about a month ago. Have you approached any of the authors of prospective libraries on http://www.boost.org/community/review_schedule.html who are listed as needing review managers?
I was told to contact the review wizards and offer up my services to manage reviews of libraries. I did so, and mentioned the libraries I felt I could manage. As I understand it, it was then up to the review wizards to determine whether I was qualified and to contact me if they thought that I was. I was not subsequently contacted.
Again, this is what it says on the Process page, but I don't think it actually happens. I'd like the Review Wizards to correct me if I'm wrong. As for determining whether a prospective manager is qualified, doesn't that require infinite wisdom? Isn't it undecidable, like the halting problem? I think someone could prove themselves incompetent, but the only way to prove competence is to do it.
What do you mean by "approached any of the authors of prospective libraries"? I initially offered to manage reviews of a number of libraries but was told to do what I described in the preceding paragraph.
Oh, I didn't notice that you had specifically chosen some libraries - that's good. I think you should volunteer directly to the authors. This seems to be what works in practice. Again, if the Wizards could be expected to do this, it would be easier, but I don't see how this could be expected to work in general. I'm trying to provoke some debate here, and hopefully to get the process amended. Maybe I will start a new thread if this fails to get notice. (Ed, sorry for hijacking your thread - I just happened to notice that you were having difficulty both volunteering as RM and in finding an RM.) Cheers, Gordon

2011/2/19 Edward Diener <eldiener@tropicsoft.com>:
On 2/19/2011 3:57 PM, Gordon Woodhull wrote:
Hi Ron, Ed, all,
On Feb 18, 2011, at 10:58 AM, Ronald Garcia wrote:
That's right. The review wizards will also be putting out a report which requests managers for libraries that don't have one, but individual appeals are often more effective.
On Feb 18, 2011, at 10:06 AM, Edward Diener wrote:
I am not sure how one goes about doing that, but I will assume I am supposed to ask for one on this mailing list.
On 2/18/2011 9:36 AM, Ronald Garcia wrote:
At this time you should try to find a review manager for the library.
Ed, you volunteered to be a review manager about a month ago. Have you approached any of the authors of prospective libraries on http://www.boost.org/community/review_schedule.html who are listed as needing review managers?
I was told to contact the review wizards and offer up my services to manage reviews of libraries. I did so, and mentioned the libraries I felt I could manage. As I understand it, it was then up to the review wizards to determine whether I was qualified and to contact me if they thought that I was. I was not subsequently contacted.
!! This is at least delicate and definitely "not amusing".
(1) Ed volunteered to be Review Manager for a couple of libs.
(2) Ed has been around in the Boost community for quite some time, including BoostCon, and is going to be a first-time library contributor. As an active Boost member he should qualify to be an RM, shouldn't he? At least he deserves a response!
(3) Many libs still don't have an RM AND
(4) he was not subsequently contacted!

There are different interpretations one can have about facts (1) to (4). I prefer this one: the construction of a looooong-term duty, like the Review Wizard role, is just not working so well. RWs have a lot of duties and a lot of expectations put on them (e.g. by phony statements and standards on the web-site). At the heart of the review process they control things by accepting or rejecting RMs for library submissions. That gives them power and responsibility. What do they get for that? How are they motivated for all the work over years? What if they just have too little time to always care for all the requests and expectations directed towards them, follow all those discussions, and keep track of the qualifications of potential review managers? What if they aren't real Wizards but only humans?

In my view,
(1) Initiative and action should be "passed" to the people that are highly motivated: the group of contributors that have ideas and projects.
(2) Quality control and the technical management of library reviews can be organized by contributors via the Review Manager Assistant role.
(3) The "sovereign" is the community of developers: discussions and formal reviews are crucial for acceptance or rejection of new libraries.
(4) Seasoned boosters can veto if things go astray. In addition they contribute as mentors and help in finding decisions in controversial cases.

Best regards, Joachim -- Interval Container Library [Boost.Icl] http://www.joachim-faulhaber.de

On 2/20/2011 12:12 PM, Joachim Faulhaber wrote:
2011/2/19 Edward Diener<eldiener@tropicsoft.com>:
On 2/19/2011 3:57 PM, Gordon Woodhull wrote:
Hi Ron, Ed, all,
On Feb 18, 2011, at 10:58 AM, Ronald Garcia wrote:
That's right. The review wizards will also be putting out a report which requests managers for libraries that don't have one, but individual appeals are often more effective.
On Feb 18, 2011, at 10:06 AM, Edward Diener wrote:
I am not sure how one goes about doing that, but I will assume I am supposed to ask for one on this mailing list.
On 2/18/2011 9:36 AM, Ronald Garcia wrote:
At this time you should try to find a review manager for the library.
Ed, you volunteered to be a review manager about a month ago. Have you approached any of the authors of prospective libraries on http://www.boost.org/community/review_schedule.html who are listed as needing review managers?
I was told to contact the review wizards and offer up my services to manage reviews of libraries. I did so, and mentioned the libraries I felt I could manage. As I understand it, it was then up to the review wizards to determine whether I was qualified and to contact me if they thought that I was. I was not subsequently contacted.
!! This is at least delicate and definitely "not amusing".
(1) Ed volunteered to be Review Manager for a couple of libs.
(2) Ed has been around in the Boost community for quite some time, including BoostCon,
I have to correct that. I have never been to BoostCon.
and is going to be a first-time library contributor. As an active Boost member he should qualify to be an RM, shouldn't he? At least he deserves a response!
(3) Many libs still don't have an RM AND
(4) he was not subsequently contacted!
Thanks for the boost on Boost, but I am fine with the decision.

I do believe that having so many libraries waiting for reviews, which potentially puts the addition of a library to Boost pretty far in the future, is not a great thing. I made the suggestion in the past that more than one review going on at a time, and a longer review process for each library, be allowed in order to get libraries reviewed more quickly.

I honestly do not see that the slowness of the review process has much to do with libraries having review managers or not. Usually a library which is scheduled for review fairly soon will get a review manager somehow. But maybe this is a problem, in that the review schedule is at least partially determined by those libraries which have review managers.

At the same time, the process for a library submitter finding a review manager for his library seems very odd to me. One posts a message on this mailing list and hopes someone responds saying that they are willing to be the review manager. If no one responds, what does one do then? If someone responds, and the library implementer does not know that person from the mailing list, how does one decide whether that person is acceptable?

In a real way I would rather a review wizard go through a list of people whom he knows are knowledgeable and experienced enough to be a review manager and contact each of those people until he finds one to be a review manager for a library. It would be much easier than placing the burden of finding a review manager for a library on the library submitter. It would also almost assuredly mean that the review manager would have little personal bias in approving or not approving a library for inclusion into Boost at the end of the review process.
There are different interpretations one can have about facts (1) to (4). I prefer this one:
The construction of a looooong-term duty, like the Review Wizard role, is just not working so well. RWs have a lot of duties and a lot of expectations put on them (e.g. by phony statements and standards on the web-site). At the heart of the review process they control things by accepting or rejecting RMs for library submissions. That gives them power and responsibility. What do they get for that? How are they motivated for all the work over years? What if they just have too little time to always care for all the requests and expectations directed towards them, follow all those discussions, and keep track of the qualifications of potential review managers? What if they aren't real Wizards but only humans?
In my view,
(1) Initiative and action should be "passed" to the people that are highly motivated: the group of contributors that have ideas and projects.
(2) Quality control and the technical management of library reviews can be organized by contributors via the Review Manager Assistant role.
(3) The "sovereign" is the community of developers: discussions and formal reviews are crucial for acceptance or rejection of new libraries.
(4) Seasoned boosters can veto if things go astray. In addition they contribute as mentors and help in finding decisions in controversial cases.

2011/2/20 Edward Diener <eldiener@tropicsoft.com>:
On 2/20/2011 12:12 PM, Joachim Faulhaber wrote:
(2) Ed is around in the Boost community for quite some time including BoostCon
I have to correct that. I have never been to BoostCon.
Sorry, I mistook you for another Ed whom I met at BoostCon ;-) Joachim

2011/2/20 Edward Diener <eldiener@tropicsoft.com>:
At the same time, the process for a library submitter finding a review manager for his library seems very odd to me. One posts a message on this mailing list and hopes someone responds saying that they are willing to be the review manager. If no one responds, what does one do then?
As Luke Simonson pointed out in another thread, authors of successful libraries ask. Approach people you know from discussions, directly and personally. Go to BoostCon and give a talk about your project. Meet people and ask them if they would be willing to be review manager for your library. That's most effective.
In a real way I would rather a review wizard go through a list of people whom he knows are knowledgeable and experienced enough to be a review manager and contact each of those people until he finds one to be a review manager for a library. It would be much easier than placing the burden of finding a review manager for a library on the library submitter.
You're not living in reality here. Why should the Review Wizards take on such tedious work to pamper your personal project? Do you think they don't have enough pressing work to be done?

Although the web-site gives a different impression, it is the most important step for an author to find a review manager in order to get his library into a formal review.
* First you find a review manager
* then you ask the RWs if your RM is acceptable
* then you usually get your library scheduled pretty quickly
This is how things are working currently in my experience. Best regards, Joachim -- Interval Container Library [Boost.Icl] http://www.joachim-faulhaber.de

On 2/20/2011 4:52 PM, Joachim Faulhaber wrote:
2011/2/20 Edward Diener<eldiener@tropicsoft.com>:
At the same time, the process for a library submitter finding a review manager for his library seems very odd to me. One posts a message on this mailing list and hopes someone responds saying that they are willing to be the review manager. If no one responds, what does one do then?
As Luke Simonson pointed out in another thread, authors of successful libraries ask. Approach people you know from discussions, directly and personally. Go to BoostCon and give a talk about your project. Meet people and ask them if they would be willing to be review manager for your library. That's most effective.
That I should have to travel about and "meet people" just to have someone act as review manager for my library is ludicrous. I really have other things to do in my life. Nor do I think it is the job of creative people to play the fool in order to impress others.
In a real way I would rather a review wizard go through a list of people whom he knows are knowledgeable and experienced enough to be a review manager and contact each of those people until he finds one to be a review manager for a library. It would be much easier than placing the burden of finding a review manager for a library on the library submitter.
You're not living in reality here. Why should the Review Wizards take on such tedious work to pamper your personal project?
You are not living in reality. You somehow think that writing software is some sort of a political job.
Do you think they don't have enough pressing work to be done?
I think the work they do is exactly as it is described on the Boost web site.
Although the web-site gives a different impression
That's the impression I go by.
it is the most important step for an author to find a review manager in order to get his library into a formal review.
* First you find a review manager
* then you ask the RWs if your RM is acceptable
* then you usually get your library scheduled pretty quickly
My library is already scheduled using the protocol which Boost describes.
This is how things are working currently in my experience.
Good. I claim that the way things work could be much better if the task of finding a review manager for a library were taken from the submitter of that library and placed in the hands of the review wizards.

2011/2/21 Edward Diener <eldiener@tropicsoft.com>:
On 2/20/2011 4:52 PM, Joachim Faulhaber wrote:
2011/2/20 Edward Diener<eldiener@tropicsoft.com>:
At the same time, the process for a library submitter finding a review manager for his library seems very odd to me. One posts a message on this mailing list and hopes someone responds saying that they are willing to be the review manager. If no one responds, what does one do then?
As Luke Simonson pointed out in another thread, authors of successful libraries ask. Approach people you know from discussions, directly and personally. Go to BoostCon and give a talk about your project. Meet people and ask them if they would be willing to be review manager for your library. That's most effective.
That I should have to travel about and "meet people"
I don't know what you should do. I just answered your question, telling you what has been most effective in my experience. In your case you are lucky because you found Gordon as a volunteer. But I have seen many general requests on the mailing list for review managers that were not answered. Best regards, Joachim

On Feb 21, 2011, at 1:23 AM, Joachim Faulhaber wrote:
In your case you are lucky because you found Gordon as a volunteer.
I'm not volunteering to manage the review, because I'm not competent in this subject. I said:
I'd volunteer to manage this review, as I'm eager to be an RM again, but I don't know much about macros.
Sorry if my English is a little Baroque at times - I can see how that could be misread. :-D Although I would like to learn more about macro metaprogramming, I don't think being a Review Manager is an appropriate place to start learning something that arcane.
But I have seen many general requests on the mailing list for review managers that were not answered.
I think we're all in agreement about that. Gordon

2011/2/21 Edward Diener <eldiener@tropicsoft.com>:
On 2/20/2011 4:52 PM, Joachim Faulhaber wrote:
Although the web-site gives a different impression
That's the impression I go by.
it is the most important step for an author to find a review manager in order to get his library into a formal review.
* First you find a review manager
* then you ask the RWs if your RM is acceptable
* then you usually get your library scheduled pretty quickly
My library is already scheduled using the protocol which Boost describes.
It's not difficult at all to get listed on the review schedule. The difficult part is to find a review manager. There are libraries that stay in the queue for many months, even years, and nothing happens, while libraries that enter the queue with a review manager are scheduled and reviewed within weeks. I'm not saying this to bother you. It's just my experience from the last 3 years. The Review Schedule on the Boost web-site says: "Reviews are usually scheduled on a first-come-first-served basis". Yet in fact it has been a last-come-first-served process most of the time. And the crucial point is finding a review manager. Regards, Joachim

On Feb 20, 2011, at 2:24 PM, Edward Diener wrote:
On 2/20/2011 12:12 PM, Joachim Faulhaber wrote:
2011/2/19 Edward Diener<eldiener@tropicsoft.com>:
On 2/19/2011 3:57 PM, Gordon Woodhull wrote:
Ed, you volunteered to be a review manager about a month ago. Have you approached any of the authors of prospective libraries on http://www.boost.org/community/review_schedule.html who are listed as needing review managers?
I was told to contact the review wizards and offer up my services to manage reviews of libraries. I did so, and mentioned the libraries I felt I could manage. As I understand it, it was then up to the review wizards to determine whether I was qualified and to contact me if they thought that I was. I was not subsequently contacted.
!! This is at least delicate and definitely "not amusing".
(1) Ed volunteered to be Review Manager for a couple of libs.
(2) Ed has been around in the Boost community for quite some time, including BoostCon,
I have to correct that. I have never been to BoostCon.
I object to the idea that going to BoostCon makes one a better member of the community. While BoostCon is truly amazing and I'm glad to have seen and heard people, proving oneself helpful and knowledgeable enough to manage a review happens right here on the list, and in the code.
and is going to be a first-time library contributor. As an active Boost member he should qualify to be an RM, shouldn't he? At least he deserves a response!
(3) Many libs still don't have an RM AND
(4) he was not subsequently contacted!
Thanks for the boost on Boost, but I am fine with the decision.
You thought that they decided they shouldn't assign you a review to manage because they didn't get back to you? I sincerely doubt that; every reply I saw was positive. The "review queue algorithm" is broken.
I do believe that having so many libraries waiting for reviews, which potentially puts the addition of a library to Boost pretty far in the future, is not a great thing.
Right.
I made the suggestion in the past that more than one review going on at a time, and a longer review process for each library, be allowed in order to get libraries reviewed more quickly.
Yes!
I honestly do not see that the slowness of the review process has much to do with libraries having review managers or not. Usually a library which is scheduled for review fairly soon will get a review manager somehow.
More like, a library that has a review manager gets scheduled quickly.
At the same time, the process for a library submitter finding a review manager for his library seems very odd to me. One posts a message on this mailing list and hopes someone responds saying that they are willing to be the review manager. If no one responds, what does one do then?
You're right, it would be nice if the Wizards could help out newcomers. But really, the best way for this to happen is for one of the thousands of people who read this list to step forward and express interest.
If someone responds, and the library implementer does not know that person from the mailing list, how does one decide whether that person is acceptable?
I think you just have to trust that the person is competent and not hostile, otherwise why would they have volunteered? A couple of times I have seen someone volunteer to manage a review on the list, and then someone else who was obviously more competent jumped in and said "no, actually I should." And the first person defers. I think RMAs (although I object to that name - more on that later) can really help out here, because sometimes the really really experienced people don't have much time to do a totally thorough write-up. I have never doubted the decisions and every report is a fascinating read, but I like when the review report takes the time to summarize every single issue that made any sense, and how it was resolved.
In a real way I would rather a review wizard go through a list of people whom he knows are knowledgeable and experienced enough to be a review manager and contact each of those people until he finds one to be a review manager for a library. It would be much easier than placing the burden of finding a review manager for a library on the library submitter.
I agree it would be nice, especially if the Wizards knew all of the potential review managers so well that they could just decide the right person. Infinitely wise and perfectly tuned in to the competencies of the entire community... I'm not being sarcastic, that would be really nice!
It would also almost assuredly mean that the review manager would have little personal bias in approving or not approving a library for inclusion into Boost at the end of the review process.
You seem to be hoping that the system can ensure objectivity. But you are talking about a group of very passionate, brilliant programmers! Of course the review manager is going to have opinions.

[Case in point: Christophe Henry, who is using my library MPL.Graph, has volunteered to manage the review. Of course he is going to be biased toward acceptance, but I don't doubt that he'll be objective enough to take any No votes or Conditions on Acceptance seriously.]

But I've never seen someone maliciously volunteer to manage a review because they wanted to reject the library, although I've seen review managers reluctantly vote against the library. Generally if a library is rejected, the author is encouraged to rewrite and resubmit, and they could certainly request a different review manager the next time.

Cheer up, this is a friendly place, if strongly opinionated. Everyone wants you to do your best work. Gordon

On 2/20/2011 7:58 PM, Gordon Woodhull wrote:
On Feb 20, 2011, at 2:24 PM, Edward Diener wrote:
On 2/20/2011 12:12 PM, Joachim Faulhaber wrote:
2011/2/19 Edward Diener<eldiener@tropicsoft.com>:
On 2/19/2011 3:57 PM, Gordon Woodhull wrote:
Ed, you volunteered to be a review manager about a month ago. Have you approached any of the authors of prospective libraries on http://www.boost.org/community/review_schedule.html who are listed as needing review managers?
I was told to contact the review wizards and offer up my services to manage reviews of libraries. I did so, and mentioned the libraries I felt I could manage. As I understand it, it was then up to the review wizards to determine whether I was qualified and to contact me if they thought that I was. I was not subsequently contacted.
!! This is at least delicate and definitely "not amusing".
(1) Ed volunteered to be Review Manager for a couple of libs.
(2) Ed has been around in the Boost community for quite some time, including BoostCon,
I have to correct that. I have never been to BoostCon.
I object to the idea that going to BoostCon makes one a better member of the community. While BoostCon is truly amazing and I'm glad to have seen and heard people, proving oneself helpful and knowledgeable enough to manage a review happens right here on the list, and in the code. snipped...
It would also almost assuredly mean that the review manager would have little personal bias in approving or not approving a library for inclusion into Boost at the end of the review process.
You seem to be hoping that the system can ensure objectivity. But you are talking about a group of very passionate, brilliant programmers! Of course the review manager is going to have opinions.
[Case in point: Christophe Henry, who is using my library MPL.Graph, has volunteered to manage the review. Of course he is going to be biased toward acceptance, but I don't doubt that he'll be objective enough to take any No votes or Conditions on Acceptance seriously.]
But I've never seen someone maliciously volunteer to manage a review because they wanted to reject the library, although I've seen review managers reluctantly vote against the library. Generally if a library is rejected the author is encouraged to rewrite and resubmit, and they could certainly request a different review manager the next time.
I am more concerned that a review manager will tend to approve of a library in which he has a vested interest than that a library will be rejected because a review manager is being malicious in any way.
Cheer up, this is a friendly place, if strongly opinionated. Everyone wants you to do your best work.
Good programmers are strongly opinionated? Heaven forfend! <g>

On Feb 20, 2011, at 8:37 PM, Edward Diener wrote:
I am more concerned that a review manager will tend to approve of a library in which he has a vested interest than that a library will be rejected because a review manager is being malicious in any way.
Fair enough. This doesn't worry me very much, but:
- I think there should be a way to challenge a review afterward, through the Wizards and/or list as appropriate.
- If someone doubts the objectivity, independence, or competence of a review manager before the review starts, they should contact the Review Wizards.
Again, I don't think the RWs should be expected to decide the fitness of an RM before scheduling the review, but they should be there to mediate disputes and make judgements. I don't think we should have some strict rule that would e.g. disqualify Hartmut from managing the Phoenix review because Spirit uses Phoenix, or Christophe from managing my review because he's using my library. Boost is all about encouraging connections between libraries and people (and "neat stuff" :). Certainly an author of the library under review should be disqualified, because of ego attachment. It's hard for me to think of other reasons why a manager should be rejected out of hand, although I encourage you to think about any guidelines that would help the Wizards guard against "vested interests". Cheers, Gordon

Gordon Woodhull wrote:
On Feb 20, 2011, at 8:37 PM, Edward Diener wrote:
Fair enough. This doesn't worry me very much, but:
- I think there should be a way to challenge a review afterward, through the Wizards and/or list as appropriate.
Actually there is already a mechanism for addressing this. I know this for a fact because I personally availed myself of it. The procedure can be summarized as follows:
a) understand and accept the objections raised in the review which resulted in the library being rejected,
b) redo the code/documentation,
c) re-submit.
Worked for me. Robert Ramey

On Feb 21, 2011, at 12:05 AM, Robert Ramey wrote:
Gordon Woodhull wrote:
On Feb 20, 2011, at 8:37 PM, Edward Diener wrote:
Fair enough. This doesn't worry me very much, but:
- I think there should be a way to challenge a review afterward, through the Wizards and/or list as appropriate.
Actually there is already a mechanism for addressing this. I know this for a fact because I personally availed myself of it. The procedure can be summarized as follows:
a) Understand and accept the objections raised in the review which resulted in the library being rejected,
b) redo the code/documentation
c) re-submit.
Yes, this works great, and I hope that I'm emphasizing that side enough. If a library is rejected, the author can rewrite and seek a second review with (IMO?) a different review manager if necessary.

Ed's concern is with the far thornier issue of what happens if you think a library was improperly *accepted*, because the review manager was too biased in favor of the library.

Aside from authors of competing libraries (who are free to submit theirs or try to get their features merged), and Luddites, have you ever seen someone severely angered by a result of acceptance? I'm not saying it can't happen, and I want to be sure there is a mechanism to resolve conflicts in case of incompetent or over-favoring review managers.

If we admit that Review Wizards can't tell beforehand whether a Review Manager is going to do a good job, as Joachim and I are arguing, then people should just volunteer to manage reviews and not bother registering with the Wizards for a nonexistent queue. But it is possible that these volunteers could be nincompoops or shills, and the Review Wizards and the whole community need the power to remove a bad manager or nullify the results.

The classic case might be: what if some bad company submitted a library that would lock everyone using C++ into their products (not sure how this would happen, but bear with me) and then "volunteered" one of their employees to manage the review? At that point I think someone should complain to the Wizards, who have the authority to reject managers. (I doubt this is exactly what Ed is worried about, just musing.) Gordon

AMDG On 2/20/2011 9:55 PM, Gordon Woodhull wrote:
Yes, this works great and I hope that I'm emphasizing that side enough.
If a library is rejected, the author can rewrite and seek a second review with (IMO?) a different review manager if necessary.
Ed's concern is with the far thornier issue of, what if you think a library was improperly *accepted*, if the review manager was too biased in favor of the library?
Aside from authors of competing libraries (who are free to submit theirs or try to get their features merged), and Luddites, have you ever seen someone severely angered by a result of acceptance?
I'm not saying it can't happen, and I want to be sure there is a mechanism to resolve conflicts in case of incompetent or over-favoring review managers.
If anyone can ever make a reasonable claim that there's been a serious problem, we can deal with it then. We don't need any formalized process to deal with hypothetical problems.
If we admit that Review Wizards can't tell beforehand whether a Review Manager is going to do a good job, as Joachim and I are arguing, then people should just volunteer to manage reviews and not bother registering with the Wizards for a nonexistent queue.
But it is possible that these volunteers could be nincompoops or shills, and the Review Wizards and the whole community need the power to remove a bad manager or nullify the results.
The classic case might be, what if some bad company submitted a library that would lock everyone using C++ into their products (not sure how this would happen, but bear with me) and then "volunteered" one of their employees to manage the review? At that point I think someone should complain to the Wizards, who have the authority to reject managers. (I doubt this is exactly what Ed is worried about, just musing.)
This would clearly violate the license requirements. In Christ, Steven Watanabe

On Feb 21, 2011, at 1:38 AM, Steven Watanabe wrote:
If anyone can ever make a reasonable claim that there's been a serious problem, we can deal with it then. We don't need any formalized process to deal with hypothetical problems.
I'm not saying we need a process. I'm just saying that if this ever were a problem, the Community and the Wizards would be the solution.
(silly scenario)
This would clearly violate the license requirements.
Good.

2011/2/19 Gordon Woodhull <gordon@woodhull.com>:
Hi Ron, Ed, all,
On Feb 18, 2011, at 10:58 AM, Ronald Garcia wrote:
On Feb 18, 2011, at 10:06 AM, Edward Diener wrote:
On 2/18/2011 9:36 AM, Ronald Garcia wrote:
At this time you should try to find a review manager for the library.
I am not sure how one goes about doing that, but I will assume I am supposed to ask for one on this mailing list.
That's right. The review wizards will also be putting out a report which requests managers for libraries that don't have one, but individual appeals are often more effective.
I have been mulling over Joachim's Review Manager Assistant proposal, and I have a couple of thoughts about review manager assignments.
First, as Joachim and many others have pointed out, the review "queue" that's described in http://www.boost.org/community/reviews.html#Wizard doesn't exist and isn't the right idea.
+1. What the web-site states
(1) does not happen in reality,
(2) leads to false expectations, specifically from contributors,
(3) puts expectations on the review wizards that they cannot fulfill,
(4) leads to ineffective behavior on the side of the contributors (waiting, complaining),
(5) and to frustration.
I'd like to see the cruft removed from the Formal Review Process. It's confusing and discouraging.
+1
Second, Joachim proposed the role of Review Manager Assistant as a way for new authors like Ed (and myself and many others) to manage reviews in conjunction with more seasoned boosters. The Assistant would do most of the work of summarizing the debate, and then the senior Review Manager would make decisions and produce the final report.
I would like to suggest a generalization which is more fluid (and which would allow RMAs to put the nicer title "Review Manager" on their CVs): simply allow multiple Review Managers for a review. The managers can decide how to split up the work: in sequence like Joachim's RMA/RM idea, or in parallel by divvying up topics for the report. Or some mix of the two.
Personally I am more interested in clear commitments for a particular role and taking responsibility for that role within the Boost community. A "Review Manager Crowd" I dislike ...

More fundamental to my thoughts http://lists.boost.org/Archives/boost/2010/05/166423.php is the idea to make the RMA job
(1) a precondition to one's own first library submission, because obviously the most energy, excitement and motivation is in the endeavour of library contribution. First time contributors should
(2) learn thoroughly all aspects and standards around boost libraries by being RMA,
(3) help to enhance the quality of submissions of libraries of others and the quality of the review queue as a whole,
(4) have an opportunity to establish themselves in a role of contributing for others,
(5) empower the group of contributors and make them more independent of the boost functionaries,
(6) unburden the boost functionaries.
(7) Finally, functionaries cannot discourage contributions by mere inaction anymore. They would have to veto. Which I think is appropriate, because contributors deserve a response. Not necessarily a yes, but a response.
On a personal note, I just want to say what a valuable experience it is to manage a Boost review. MPL.Graph's code and documentation are almost ready for review, and I feel that I am also psychologically ready because of what I learned from managing the Constrained Value review last summer.
Congratulations!
P.P.S. I'd volunteer to manage this review, as I'm eager to be an RM again, but I don't know much about macros.
+1! Thanks for your numerous contributions to boost! Cheers, Joachim

On Feb 20, 2011, at 8:54 AM, Joachim Faulhaber wrote:
Gordon Woodhull wrote:
Second, Joachim proposed the role of Review Manager Assistant as a way for new authors like Ed (and myself and many others) to manage reviews in conjunction with more seasoned boosters. The Assistant would do most of the work of summarizing the debate, and then the senior Review Manager would make decisions and produce the final report.
I would like to suggest a generalization which is more fluid (and which would allow RMAs to put the nicer title "Review Manager" on their CVs): simply allow multiple Review Managers for a review. The managers can decide how to split up the work: in sequence like Joachim's RMA/RM idea, or in parallel by divvying up topics for the report. Or some mix of the two.
Personally I am more interested in clear commitments for a particular role and taking responsibility for that role within the Boost community. A "Review Manager Crowd" I dislike ...
Okay, I will admit it: it's only the name I don't like about Review Manager Assistant! Unfortunately in the USA, with the bizarre way that politically correct language evolves and devolves, "Assistant" is synonymous with "Secretary" and has the same negative connotations. Let's be realistic: we want something that looks nice on a CV - but I do hope that if anyone shoddily tries to manage a review just to pad their resume, they'll be forced to actually do the work.
Really, the RMA is doing all of the work and making all of the decisions, and the Review Manager should approve, or disagree and redo part of the work. That is why I like the idea of just calling the RMA a Review Manager, and it may happen that their work is being checked by another Review Manager. But I'd also be open to other names, such as:
Review Compiler (a pun)
Review Summarizer (awkward)
Review Reader (too mild)
Review Investigator ...
More fundamental to my thoughts http://lists.boost.org/Archives/boost/2010/05/166423.php is the idea to make the RMA job
(1) a precondition to one's own first library submission, because obviously the most energy, excitement and motivation is in the endeavour of library contribution.
I don't think someone who has some insanely brilliant library should be deterred from submitting it because they haven't "compiled" a review. But I think this should be one of the ways to show that they have the cooperative instincts and stick-to-it-iveness to be a good maintainer and member of the community.
First time contributors should
(2) learn thoroughly all aspects and standards around boost libraries by being RMA,
(3) help to enhance the quality of submissions of libraries of others and the quality of the review queue as a whole,
(4) have an opportunity to establish themselves in a role of contributing for others.
Above all, to understand the review process and learn how debates are resolved and consensus is reached before they head into their own review.
(5) empower the group of contributors and make them more independent of the boost functionaries,
(6) unburden the boost functionaries.
I think it might be possible for the Wizards to encourage these connections to happen, but I have to think about it.
(7) Finally, functionaries cannot discourage contributions by mere inaction anymore. They would have to veto. Which I think is appropriate, because contributors deserve a response. Not necessarily a yes, but a response.
I think you are suggesting that the Wizards get to veto a review report if they don't think it was done properly or fairly. I want to democratize that suggestion, and say that it should be possible for a review to be challenged by anyone on the list. If the Wizards think a challenge has merit -- using the same anti-troll (and anti-"I just won't use this ridiculous stuff") criteria that Review Managers use to judge No votes -- then it can go for a second review with a different review manager.
Thanks for your numerous contributions to boost!
You ain't seen nothin' yet. Cheers, Gordon

2011/2/21 Gordon Woodhull <gordon@woodhull.com>:
On Feb 20, 2011, at 8:54 AM, Joachim Faulhaber wrote:
Gordon Woodhull wrote:
Second, Joachim proposed the role of Review Manager Assistant as a way for new authors like Ed (and myself and many others) to manage reviews in conjunction with more seasoned boosters. The Assistant would do most of the work of summarizing the debate, and then the senior Review Manager would make decisions and produce the final report.
I would like to suggest a generalization which is more fluid (and which would allow RMAs to put the nicer title "Review Manager" on their CVs): simply allow multiple Review Managers for a review. The managers can decide how to split up the work: in sequence like Joachim's RMA/RM idea, or in parallel by divvying up topics for the report. Or some mix of the two.
Personally I am more interested in clear commitments for a particular role and taking responsibility for that role within the Boost community. A "Review Manager Crowd" I dislike ...
Okay, I will admit it: it's only the name I don't like about Review Manager Assistant! Unfortunately in the USA, with the bizarre way that Politically Correct language evolves and devolves, Assistant is synonymous with Secretary, and has the same negative connotations. Let's be realistic, we want something that looks nice on a CV - but I do hope that if anyone shoddily tries to manage a review just to pad their resume, they'll be forced to actually do the work.
Really, the RMA is doing all of the work and making all of the decisions, and the Review Manager should approve, or disagree and redo part of the work.
That is why I like the idea of just calling the RMA a Review Manager, and it may happen that their work is being checked by another Review Manager. But I'd also be open to other names, such as
Review Compiler (a pun), Review Summarizer (awkward), Review Reader (too mild), Review Investigator ...
From the Boost Bureau of Investigation (BBI ;-). What about Review Comanager?
More fundamental to my thoughts http://lists.boost.org/Archives/boost/2010/05/166423.php is the idea to make the RMA job (1) a precondition to one's own first library submission, because obviously the most energy, excitement and motivation is in the endeavour of library contribution.
I don't think someone who has some insanely brilliant library should be deterred from submitting it because they haven't "compiled" a review.
Depends on the rules. Generally my proposals are trying to strengthen the effects of action and weaken the effects of inaction. If there is great interest in a library, such that seasoned boosters or moderators have an interest in getting it into Boost, such a library could be *invited* for a review. Like at a conference: most of the submitted talks are reviewed and then accepted or rejected in a standard way, but the keynote speaker is invited.
But I think this should be one of the ways to show that they have the cooperative instincts and stick-to-it-iveness to be a good maintainer and member of the community.
First-time contributors should (2) learn thoroughly, by being RMA, all aspects and standards around Boost libraries, (3) help to enhance the quality of others' library submissions and the quality of the review queue as a whole, and (4) have an opportunity to establish themselves in a role of contributing for others.
Above all, to understand the review process and learn how debates are resolved and consensus is reached before they head into their own review.
+1
(5) empower the group of contributors and make them more independent of the boost functionaries (6) unburden the boost functionaries
I think it might be possible for the Wizards to encourage these connections to happen,
Currently the Wizards seem not to be very much in favor of my proposal. I asked them off list. John Phillips dislikes the idea and thinks the RMA role would only make everything more complicated. But they didn't take much time to discuss things in detail. As I said, they are only humans, not real Wizards. I can understand that they won't be able to comment on each and every one of the numerous proposals on the review process that come from the mailing list.
but I have to think about it.
(7) Finally, functionaries cannot discourage contributions by mere inaction anymore. They would have to veto, which I think is appropriate because contributors deserve a response. Not necessarily a yes, but a response.
I think you are suggesting that the Wizards get to veto a review report if they don't think it was done properly or fairly.
No, the crucial point in my proposal is the strengthening of the active contributors. (7.1) RMAs are not necessary for a review, so it can be done as before with an RM only. (7.2) Yet in addition, the review process can be scheduled after the RMA has thoroughly checked the library for all preconditions. (7.3) The submitter and RMA are supposed to find an RM, but if no one steps up, the formal review can be conducted by the RMA. I think this is acceptable, because the "sovereign" is the community itself, not the RM nor the RWs. If a library gets a lot of yes votes, it is hard for an RM to reject it anyway. (7.4) But a library whose review is conducted without an RM should have a weaker position, because it lacks the person designated for the usual way of quality control. Therefore there should be a mechanism of control that depends on action: a veto. If, e.g., the review was pretty controversial and the RMA seems to vote in favor of the submitter without convincing subject-specific arguments, the RWs, the Boost moderators, or a booster who would otherwise qualify as RM could issue a veto - giving a substantial justification, of course. (7.5) This is a mechanism of control or blockade, if you will, that demands action. Mere inaction should not discourage contributions.
I want to democratize that suggestion, and say that it should be possible for a review to be challenged by anyone on the list. If the Wizards think a challenge has merit -- using the same anti-troll (and anti-"I just won't use this ridiculous stuff") criteria that Review Managers use to judge No votes -- then it can go for a second review with a different review manager.
I'm not sure if I understand you completely here. But challenging a review result should be a serious action and should not happen too often. So I would like to couple it with competence. My concern is to maximize contribution, throughput, quality and fun and minimize blockade, waiting, false expectations and frustration. Cheers, Joachim -- Interval Container Library [Boost.Icl] http://www.joachim-faulhaber.de

On Thu, Feb 17, 2011 at 5:13 PM, Edward Diener <eldiener@tropicsoft.com> wrote:
I am requesting that my library, the Variadic Macro Data library, which is in the sandbox in the variadic_macro_data directory, be reviewed for inclusion into Boost.
The variadic_macro_data library adds support and functionality for variadic macros to Boost as well as integrating variadic macros with the Boost PP library without changing the latter library in any way.
I believe others have used my library, can attest to its quality and that it does what it is supposed to do, and have found it useful when using variadic macros with Boost PP. I myself have used its functionality in my own TTI library in the sandbox. Support for variadic macros is implemented in nearly all modern C++ compilers and the syntax is natural for an end-user. The library is finalized as far as functionality is concerned and I would like to see it in Boost and am willing to maintain it as a Boost library.
Is it possible to use variadic macros to detect empty parameters? For example:

#include <boost/variadic_macro_data/VariadicMacroData.hpp> // Proposed lib.

VMD_DATA_SIZE(1, 2) // 2
VMD_DATA_SIZE(1)    // 1
VMD_DATA_SIZE()     // 1 not 0 :((

But I would like the last size to expand to 0 (or have a different macro that would expand to 0 in that case). With a real C99 preprocessor (e.g., GCC) I can do the following because empty macro parameters are supported:

#include <boost/variadic_macro_data/VariadicMacroData.hpp> // Proposed lib.
#include <boost/preprocessor.hpp>
#include <boost/preprocessor/facilities/is_empty.hpp>

#define PP_VA_EAT(...) /* must expand to nothing */

#define PP_VA_SIZE_1OR0_(x) BOOST_PP_IIF(BOOST_PP_IS_EMPTY(x), 0, 1)

#define PP_VA_SIZE_(size, ...) \
    BOOST_PP_IIF(BOOST_PP_EQUAL(size, 1), \
        PP_VA_SIZE_1OR0_ \
    , \
        size PP_VA_EAT \
    )(__VA_ARGS__)

#define PP_VA_SIZE(...) PP_VA_SIZE_(VMD_DATA_SIZE(__VA_ARGS__), __VA_ARGS__)

PP_VA_SIZE(1, 2) // 2
PP_VA_SIZE(1)    // 1
PP_VA_SIZE()     // 0 :))

But this does not compile on MSVC because while it supports variadics it does not support empty macro parameters :(

BTW, a few minor comments on your library:

1) I think a name like BOOST_PP_VA_... is better than BOOST_VMD. "VA" is the same abbreviation for variadics that C99 uses in __VA_ARGS__ and Boost.Preprocessor already uses abbreviations like in BOOST_PP_SEQ. Alternatively, I would consider BOOST_PP_VARIADICS_... but still not "VMD" because programmers will not know what VMD stands for.

2) I would add PP_VA_EAT, PP_VA_IDENTITY, PP_VA_CAT (similar to what they do already in Boost.Preprocessor and they should be trivial to implement). Also, I would add `#define PP_VA_SAME(...) __VA_ARGS__`.

Thanks. -- Lorenzo
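A side note on why VMD_DATA_SIZE() reports 1: with the usual argument-counting idiom behind such size macros, the empty token sequence between the parentheses still counts as one argument. A minimal sketch of the idiom (hypothetical names, capped at three arguments; not the library's actual implementation):

#define VA_SIZE(...) VA_SIZE_I(__VA_ARGS__, 3, 2, 1,)
#define VA_SIZE_I(a, b, c, n, ...) n

VA_SIZE(x, y, z) // 3
VA_SIZE(x, y)    // 2
VA_SIZE(x)       // 1
VA_SIZE()        // also 1: the empty argument still shifts the count by one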

On 2/18/2011 7:27 PM, Lorenzo Caminiti wrote:
On Thu, Feb 17, 2011 at 5:13 PM, Edward Diener<eldiener@tropicsoft.com> wrote:
I am requesting that my library, the Variadic Macro Data library, which is in the sandbox in the variadic_macro_data directory, be reviewed for inclusion into Boost.
The variadic_macro_data library adds support and functionality for variadic macros to Boost as well as integrating variadic macros with the Boost PP library without changing the latter library in any way.
I believe others have used my library, can attest to its quality and that it does what it is supposed to do, and have found it useful when using variadic macros with Boost PP. I myself have used its functionality in my own TTI library in the sandbox. Support for variadic macros is implemented in nearly all modern C++ compilers and the syntax is natural for an end-user. The library is finalized as far as functionality is concerned and I would like to see it in Boost and am willing to maintain it as a Boost library.
Is it possible to use variadic macros to detect empty parameters?
My understanding of variadic macro data is that at least one parameter must be specified.
For example:
#include <boost/variadic_macro_data/VariadicMacroData.hpp> // Proposed lib.
VMD_DATA_SIZE(1, 2) // 2
VMD_DATA_SIZE(1)    // 1
VMD_DATA_SIZE()     // 1 not 0 :((
But I would like the last size to expand to 0 (or have a different macro that would expand to 0 in that case).
With a real C99 preprocessor (e.g., GCC) I can do the following because empty macro parameters are supported:
Are they for variadic macro data in C++?
#include <boost/variadic_macro_data/VariadicMacroData.hpp> // Proposed lib.
#include <boost/preprocessor.hpp>
#include <boost/preprocessor/facilities/is_empty.hpp>
#define PP_VA_EAT(...) /* must expand to nothing */
#define PP_VA_SIZE_1OR0_(x) BOOST_PP_IIF(BOOST_PP_IS_EMPTY(x), 0, 1)
#define PP_VA_SIZE_(size, ...) \
    BOOST_PP_IIF(BOOST_PP_EQUAL(size, 1), \
        PP_VA_SIZE_1OR0_ \
    , \
        size PP_VA_EAT \
    )(__VA_ARGS__)
#define PP_VA_SIZE(...) PP_VA_SIZE_(VMD_DATA_SIZE(__VA_ARGS__), __VA_ARGS__)
PP_VA_SIZE(1, 2) // 2
PP_VA_SIZE(1)    // 1
PP_VA_SIZE()     // 0 :))
But this does not compile on MSVC because while it supports variadics it does not support empty macro parameters :(
I will look at that and see what I can come up with. If variadic macros support an empty parameter list, I should provide a correct size of 0. If it does not I should indicate an error. So either way I will look to make a correction. Thanks for pointing this out.
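One shape the error alternative could take is a preprocessing assertion. A sketch only, with a hypothetical name, handling just the single-argument case at issue here: a full version would first dispatch on the argument count, as Lorenzo's PP_VA_SIZE_ does, and BOOST_PP_IS_EMPTY has the input restrictions Paul Mensonides describes below.

#include <boost/preprocessor/debug/assert.hpp>
#include <boost/preprocessor/facilities/is_empty.hpp>
#include <boost/preprocessor/logical/compl.hpp>

// Expands to nothing for a non-empty argument; forces a
// preprocessing error when the argument is empty.
#define VMD_ASSERT_NOT_EMPTY(x) \
    BOOST_PP_ASSERT(BOOST_PP_COMPL(BOOST_PP_IS_EMPTY(x)))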
BTW, a few minor comments on your library: 1) I think a name like BOOST_PP_VA_... is better than BOOST_VMD. "VA" is the same abbreviation for variadics that C99 uses in __VA_ARGS__ and Boost.Preprocessor already uses abbreviations like in BOOST_PP_SEQ.
What does the BOOST_PP_SEQ abbreviation have to do with BOOST_VMD_ ?
Alternatively, I would consider BOOST_PP_VARIADICS_... but still not "VMD" because programmers will not know what VMD stands for.
I used VMD to represent (V)ariadic(M)acro(D)ata, which is really what my library is about and also integrating that data with Boost PP. I rejected VA because its connotation in C++ is "variable argument(s)" and my library is about manipulating variadic macro data. I feel that something like VARIADICS is too long but I would certainly agree to it if others found it more expressive. I also do not want it to be BOOST_PP_ anything unless others decide it should be part of Boost PP and not its own library, and in that case I feel Paul Mensonides would need to find that acceptable.
2) I would add PP_VA_EAT, PP_VA_IDENTITY, PP_VA_CAT (similar to what they do already in Boost.Preprocessor and they should be trivial to implement). Also, I would add `#define PP_VA_SAME(...) __VA_ARGS__`.
I am willing to add functionality to the library, but I wonder where that would stop. Essentially variadic macro data is similar to any of the other data types in Boost PP in that it is really just a separate data type. Since the Boost PP data types already have a rich set of functionality, I feel that duplicating any of that functionality for variadic data itself would be redundant. This is especially true as my library has support for converting variadic data back and forth to any of the Boost PP data types. I feel pretty strongly that the use of variadic data with Boost PP should really be to provide a more pleasant syntax for the end-user; but once the programmer has the variadic data, he should convert it to a Boost PP data type, simply because that data type already has a rich set of functionality for dealing with the data.

I would be willing to add BOOST_VMD_CAT and BOOST_VMD_IDENTITY since they are both trivial (just a call to the appropriate BOOST_PP_ macros passing __VA_ARGS__). Your last is just __VA_ARGS__, as you show. But I really wonder if more functionality on variadic macro data is worth it, considering that the goal of the library, other than letting the end-user access individual tokens in the variadic macro data itself, is to convert back and forth to the Boost PP data types. I can understand that an end-user of the library such as yourself might want a number of additional operations on the variadic macro data itself, but I think if you look at the Boost PP data types you will see that their rich functionality offers most anything one would want to do with the data once you get it.
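To illustrate the conversion-first workflow Edward advocates: pack the variadic arguments into a Boost PP data type once, then lean on the pp-lib's algorithms. A sketch; the seq converter's name is assumed here by analogy with the BOOST_VMD_DATA_TO_PP_TUPLE mentioned later in the thread and may not match the library's actual spelling:

#include <boost/preprocessor/seq/for_each.hpp>
#include <boost/variadic_macro_data/VariadicMacroData.hpp> // proposed lib

// Invoked once per seq element by BOOST_PP_SEQ_FOR_EACH.
#define DECLARE_ONE(r, data, elem) int elem;

// Variadic front end for a pleasant syntax; the real work is done
// on the converted seq by the pp-lib algorithm.
#define DECLARE_INTS(...) \
    BOOST_PP_SEQ_FOR_EACH(DECLARE_ONE, ~, \
        BOOST_VMD_DATA_TO_PP_SEQ(__VA_ARGS__))

DECLARE_INTS(a, b, c) // expands to: int a; int b; int c;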

On Fri, Feb 18, 2011 at 9:58 PM, Edward Diener <eldiener@tropicsoft.com> wrote:
On 2/18/2011 7:27 PM, Lorenzo Caminiti wrote:
On Thu, Feb 17, 2011 at 5:13 PM, Edward Diener<eldiener@tropicsoft.com> wrote:
I am requesting that my library, the Variadic Macro Data library, which is in the sandbox in the variadic_macro_data directory, be reviewed for inclusion into Boost.
The variadic_macro_data library adds support and functionality for variadic macros to Boost as well as integrating variadic macros with the Boost PP library without changing the latter library in any way.
I believe others have used my library, can attest to its quality and that it does what it is supposed to do, and have found it useful when using variadic macros with Boost PP. I myself have used its functionality in my own TTI library in the sandbox. Support for variadic macros is implemented in nearly all modern C++ compilers and the syntax is natural for an end-user. The library is finalized as far as functionality is concerned and I would like to see it in Boost and am willing to maintain it as a Boost library.
Is it possible to use variadic macros to detect empty parameters?
My understanding of variadic macro data is that at least one parameter must be specified.
For example:
#include <boost/variadic_macro_data/VariadicMacroData.hpp> // Proposed lib.
VMD_DATA_SIZE(1, 2) // 2
VMD_DATA_SIZE(1)    // 1
VMD_DATA_SIZE()     // 1 not 0 :((
But I would like the last size to expand to 0 (or have a different macro that would expand to 0 in that case).
With a real C99 preprocessor (e.g., GCC) I can do the following because empty macro parameters are supported:
Are they for variadic macro data in C++ ?
I think variadics and empty macro parameters are different things. A C99 preprocessor (e.g., GCC) supports both, while MSVC only supports variadics. That is why I was wondering if variadics can be used to detect empty macro parameters, so I can do so also on MSVC. On Mon, Sep 6, 2010 at 3:29 PM, Paul Mensonides <pmenso57@comcast.net> wrote:
... However, IS_EMPTY is _not_ a macro for general-purpose emptiness detection. Its implementation requires the concatenation of an identifier to the front of the argument which rules out all arguments for which that isn't valid. For example, IS_EMPTY(+) is undefined behavior according to all revisions of both the C and C++ standards (including the forthcoming C++0x). Thus, at minimum, the argument must be an identifier (or keyword--same thing at this point) or a numeric literal that doesn't contain a decimal point.
It is valid (and has been since C90) to pass something that expands to nothing as an argument to a macro. However, it is not valid to pass nothing. E.g.
See http://lists.boost.org/Archives/boost/2010/09/170639.php
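For illustration, the distinction drawn in that quote is between these two cases (a sketch with placeholder names, not the elided original example):

#define EMPTY()
#define M(x) x

M(EMPTY()) // OK since C90: the argument is the token sequence EMPTY(),
           // which merely expands to nothing
M()        // passes nothing at all, i.e. one empty argument: undefined
           // in C90/C++98, permitted only by C99's empty-argument rules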
I will look at that and see what I can come up with. If variadic macros support an empty parameter list, I should provide a correct size of 0. If it does not I should indicate an error. So either way I will look to make a correction. Thanks for pointing this out.
From http://www.open-std.org/JTC1/SC22/WG14/www/docs/C99RationaleV5.10.pdf

There must be at least one argument to match the ellipsis. This requirement avoids the problems that occur when the trailing arguments are included in a list of arguments to another macro or function. For example, if dprintf had been defined as

#define dprintf(format, ...) \
    dfprintf(stderr, format, __VA_ARGS__)

and it were allowed for there to be only one argument, then there would be a trailing comma in the expanded form. While some implementations have used various notations or conventions to work around this problem, the Committee felt it better to avoid the problem altogether. Similarly, the __VA_ARGS__ notation was preferred to other proposals for this syntax.

A new feature of C99: Function-like macro invocations may also now have empty arguments, that is, an argument may consist of no preprocessing tokens. In C89, any argument that consisted of no preprocessing tokens had undefined behavior, but was noted as a common extension.

A function-like macro invocation f() has the form of either a call with no arguments or a call with one empty argument. Which form it actually takes is determined by the definition of f, which indicates the expected number of arguments.

This works on both MSVC and GCC :) Does it work on other preprocessors? Can anyone please check?

#include <boost/variadic_macro_data/VariadicMacroData.hpp> // Proposed lib in Boost's sandbox.
#include <boost/preprocessor.hpp>
#include <boost/preprocessor/facilities/is_empty.hpp>

VMD_DATA_SIZE(1, 2) // 2
VMD_DATA_SIZE(1)    // 1
VMD_DATA_SIZE()     // 1 not 0 :((

#define PP_VA_EAT(...) /* must expand to nothing */

#define PP_VA_SIZE_1OR0_(maybe_empty) \
    BOOST_PP_IIF(BOOST_PP_IS_EMPTY(maybe_empty (/* expand empty */) ), 0, 1)

#define PP_VA_SIZE_(size, ...) \
    BOOST_PP_IIF(BOOST_PP_EQUAL(size, 1), \
        PP_VA_SIZE_1OR0_ \
    , \
        size PP_VA_EAT \
    )(__VA_ARGS__ BOOST_PP_EMPTY)

#define PP_VA_SIZE(...) PP_VA_SIZE_(VMD_DATA_SIZE(__VA_ARGS__), __VA_ARGS__)

PP_VA_SIZE(1, 2) // 2
PP_VA_SIZE(1)    // 1
PP_VA_SIZE()     // 0 :))

The strange thing about this code is that `PP_VA_SIZE()` as well as `VMD_DATA_SIZE()` don't give an error in the first place. They should error because they are passed an empty macro parameter (which is not legal). It would have been legal instead to pass a parameter expanding to empty, as in `PP_VA_SIZE(BOOST_PP_EMPTY())` or `VMD_DATA_SIZE(BOOST_PP_EMPTY())`. Why do `PP_VA_SIZE()` and `VMD_DATA_SIZE()` accept an empty macro parameter? Is that a variadic macros' feature or an MSVC bug?
BTW, a few minor comments on you library: 1) I think a name like BOOST_PP_VA_... is better than BOOST_VMD. "VA" is the same abbreviation for variadics that C99 uses in __VA_ARGS__ and Boost.Preprocessor already uses abbreviations like in BOOST_PP_SEQ.
What does the BOOST_PP_SEQ abbreviation have to do with BOOST_VMD_ ?
I was pointing out that Boost.Preprocessor already uses some abbreviations, so "VARIADICS" doesn't necessarily need to be spelled out completely.
Alternatively, I would consider BOOST_PP_VARIADICS_... but still not "VMD" because programmers will not know what VMD stands for.
I used VMD to represent (V)ariadic(M)acro(D)ata, which is really what my library is about and also integrating that data with Boost PP. I rejected VA because its connotation in C++ is "variable argument(s)" and my library is
I think __VA_ARGS__ stands for VAriadics ARGumentS so VA only abbreviates VAriadics. However, I couldn't find an actual standard reference that explicitly defines what the word __VA_ARGS__ stands for.
about manipulating variadic macro data. I feel that something like VARIADICS is too long but I would certainly agree to it if others found it more expressive. I also do not want it to be BOOST_PP_ anything unless others decide it should be part of Boost PP and not its own library, and in that case I feel Paul Mensonides would need to find that acceptable.
IMO, library users would expect your library to be part of Boost.Preprocessor. It's like adding another data set to Boost.Preprocessor for variadics.
2) I would add PP_VA_EAT, PP_VA_IDENTITY, PP_VA_CAT (similar to what they do already in Boost.Preprocessor and they should be trivial to implement). Also, I would add `#define PP_VA_SAME(...) __VA_ARGS__`.
I am willing to add functionality to the library but I wonder where that would stop. Essentially variadic macro data is similar to any of the other
I agree, there is no point in duplicating the Boost.Preprocessor API. However, in using your library even just a bit I needed things like PP_VA_EAT, PP_VA_IDENTITY, PP_VA_CAT to program control statements like BOOST_PP_IIF, etc. While all of these are trivial to implement, it would be annoying to re-implement all these facility macros every time... It would be interesting to know what other programmers' experience is in using your library, to decide which Boost.Preprocessor control and facility macros are the most commonly used with variadic arguments.
data types in Boost PP in that it is really just a separate data type. Since the Boost PP data types already have a rich set of functionality I sort of feel that duplicating any of that functionality for variadic data itself will be redundant. This is especially true as my library has support for converting back and forth variadic data to any of the Boost PP data types. I feel pretty strongly that the use of variadic data with Boost PP should really be to provide a more pleasant syntax for the end-user, but once the programmer handles the variadic data he should convert it to a Boost PP data simply because that data type already has a rich set of functionality for dealing with the data.
I would be willing to add BOOST_VMD_CAT and BOOST_VMD_IDENTITY since they are both trivial (just a call to the appropriate BOOST_PP_ macros passing __VA_ARGS__). Your last is just __VA_ARGS__, as you show. But I really
I often do:

#define DO_(p) // do something with p...
#define DO(cond, p) BOOST_PP_IIF(cond, DO_, p BOOST_PP_TUPLE_EAT(1))(p)

But I can't do this with variadic p because the IIF will have too many arguments if p is __VA_ARGS__. So SAME() would be handy:

#define PP_VA_SAME(...) __VA_ARGS__
#define DO_(...) // do something with __VA_ARGS__...
#define DO(cond, ...) BOOST_PP_IIF(cond, DO_, PP_VA_SAME)(__VA_ARGS__)
wonder if more functionality on variadic macro data is really worth it considering that the goal of the library, other than letting the end-user access individual tokens in the variadic macro data itself, is to convert back and forth to the Boost PP data types. I can understand that an end-user of the library such as yourself might want a number of additional operations on the variadic macro data itself, but I think if you look at the Boost PP data types you will see that their rich functionality offers most anything one would want to do with the data once you get it.
-- Lorenzo

On 2/19/2011 10:48 AM, Lorenzo Caminiti wrote:
On Fri, Feb 18, 2011 at 9:58 PM, Edward Diener<eldiener@tropicsoft.com> wrote:
On 2/18/2011 7:27 PM, Lorenzo Caminiti wrote:
On Thu, Feb 17, 2011 at 5:13 PM, Edward Diener<eldiener@tropicsoft.com> wrote:
I am requesting that my library, the Variadic Macro Data library, which is in the sandbox in the variadic_macro_data directory, be reviewed for inclusion into Boost.
The variadic_macro_data library adds support and functionality for variadic macros to Boost as well as integrating variadic macros with the Boost PP library without changing the latter library in any way.
I believe others have used my library, can attest to its quality and that it does what it is supposed to do, and have found it useful when using variadic macros with Boost PP. I myself have used its functionality in my own TTI library in the sandbox. Support for variadic macros is implemented in nearly all modern C++ compilers and the syntax is natural for an end-user. The library is finalized as far as functionality is concerned and I would like to see it in Boost and am willing to maintain it as a Boost library.
Is it possible to use variadic macros to detect empty parameters?
My understanding of variadic macro data is that at least one parameter must be specified.
For example:
#include <boost/variadic_macro_data/VariadicMacroData.hpp> // Proposed lib.
VMD_DATA_SIZE(1, 2) // 2
VMD_DATA_SIZE(1)    // 1
VMD_DATA_SIZE()     // 1 not 0 :((
But I would like the last size to expand to 0 (or have a different macro that would expand to 0 in that case).
With a real C99 preprocessor (e.g., GCC) I can do the following because empty macro parameters are supported:
Are they for variadic macro data in C++?
I think variadics and empty macro parameters are different things. A C99 preprocessor (e.g., GCC) supports both, while MSVC only supports variadics. That is why I was wondering if variadics can be used to detect empty macro parameters, so I can do so also on MSVC.
On Mon, Sep 6, 2010 at 3:29 PM, Paul Mensonides<pmenso57@comcast.net> wrote:
... However, IS_EMPTY is _not_ a macro for general-purpose emptiness detection. Its implementation requires the concatenation of an identifier to the front of the argument which rules out all arguments for which that isn't valid. For example, IS_EMPTY(+) is undefined behavior according to all revisions of both the C and C++ standards (including the forthcoming C++0x). Thus, at minimum, the argument must be an identifier (or keyword--same thing at this point) or a numeric literal that doesn't contain a decimal point.
It is valid (and has been since C90) to pass something that expands to nothing as an argument to a macro. However, it is not valid to pass nothing. E.g.
See http://lists.boost.org/Archives/boost/2010/09/170639.php
Thanks for the clarification.
I will look at that and see what I can come up with. If variadic macros support an empty parameter list, I should provide a correct size of 0. If it does not I should indicate an error. So either way I will look to make a correction. Thanks for pointing this out.
This works on both MSVC and GCC :) Does it work on other preprocessors? Can anyone please check?
#include <boost/variadic_macro_data/VariadicMacroData.hpp> // Proposed lib in Boost's sandbox.
#include <boost/preprocessor.hpp>
#include <boost/preprocessor/facilities/is_empty.hpp>
VMD_DATA_SIZE(1, 2) // 2
VMD_DATA_SIZE(1)    // 1
VMD_DATA_SIZE()     // 1 not 0 :((
#define PP_VA_EAT(...) /* must expand to nothing */
#define PP_VA_SIZE_1OR0_(maybe_empty) \
    BOOST_PP_IIF(BOOST_PP_IS_EMPTY(maybe_empty (/* expand empty */) ), 0, 1)
#define PP_VA_SIZE_(size, ...) \
    BOOST_PP_IIF(BOOST_PP_EQUAL(size, 1), \
        PP_VA_SIZE_1OR0_ \
    , \
        size PP_VA_EAT \
    )(__VA_ARGS__ BOOST_PP_EMPTY)
#define PP_VA_SIZE(...) PP_VA_SIZE_(VMD_DATA_SIZE(__VA_ARGS__), __VA_ARGS__)
PP_VA_SIZE(1, 2) // 2
PP_VA_SIZE(1)    // 1
PP_VA_SIZE()     // 0 :))
Thanks for this code. I will try to see if it works, and also try to test it on compilers other than gcc and VC++. If it does, I will incorporate it in order to return a size of 0 when the parameter list is empty if that is the correct thing to do. OTOH if it is illegal for variadic macros to take an empty parameter list, which seems to be the case, it would be better to see if I can generate an error, even if the compiler allows it.
The strange thing about this code is that `PP_VA_SIZE()` as well as `VMD_DATA_SIZE()` don't give an error in the first place. They should error because they are passed an empty macro parameter (which is not legal). It would have been legal instead to pass a parameter expanding to empty, as in `PP_VA_SIZE(BOOST_PP_EMPTY())` or `VMD_DATA_SIZE(BOOST_PP_EMPTY())`.
Why do `PP_VA_SIZE()` and `VMD_DATA_SIZE()` accept an empty macro parameter? Is that a variadic macros' feature or an MSVC bug?
I am not sure if it is a compiler bug or not. I will look at the problem and see what I can do. BTW, there is already an updated version of the variadic_macro_data library in the sandbox (version 1.3) which changes no functionality but which follows Boost naming conventions (and also conveniently shortens the name of the header file to vmd.hpp).
From http://www.open-std.org/JTC1/SC22/WG14/www/docs/C99RationaleV5.10.pdf

There must be at least one argument to match the ellipsis. This requirement avoids the problems that occur when the trailing arguments are included in a list of arguments to another macro or function. For example, if dprintf had been defined as

#define dprintf(format, ...) \
    dfprintf(stderr, format, __VA_ARGS__)

and it were allowed for there to be only one argument, then there would be a trailing comma in the expanded form. While some implementations have used various notations or conventions to work around this problem, the Committee felt it better to avoid the problem altogether. Similarly, the __VA_ARGS__ notation was preferred to other proposals for this syntax.

A new feature of C99: Function-like macro invocations may also now have empty arguments, that is, an argument may consist of no preprocessing tokens. In C89, any argument that consisted of no preprocessing tokens had undefined behavior, but was noted as a common extension.

A function-like macro invocation f() has the form of either a call with no arguments or a call with one empty argument. Which form it actually takes is determined by the definition of f, which indicates the expected number of arguments.
BTW, a few minor comments on your library: 1) I think a name like BOOST_PP_VA_... is better than BOOST_VMD. "VA" is the same abbreviation for variadics that C99 uses in __VA_ARGS__ and Boost.Preprocessor already uses abbreviations like in BOOST_PP_SEQ.
What does the BOOST_PP_SEQ abbreviation have to do with BOOST_VMD_ ?
I was pointing out that Boost.Preprocessor already uses some abbreviations, so "VARIADICS" doesn't necessarily need to be spelled out completely.
OK, understood. I try to keep the names short in my libraries if I can.
Alternatively, I would consider BOOST_PP_VARIADICS_... but still not "VMD" because programmers will not know what VMD stands for.
I used VMD to represent (V)ariadic(M)acro(D)ata, which is really what my library is about and also integrating that data with Boost PP. I rejected VA because its connotation in C++ is "variable argument(s)" and my library is
I think __VA_ARGS__ stands for VAriadics ARGumentS so VA only abbreviates VAriadics. However, I couldn't find an actual standard reference that explicitly defines what the word __VA_ARGS__ stands for.
It is not that important. VMD expresses what I want, VA does not.
about manipulating variadic macro data. I feel that something like VARIADICS is too long but I would certainly agree to it if others found it more expressive. I also do not want it to be BOOST_PP_ anything unless others decide it should be part of Boost PP and not its own library, and in that case I feel Paul Mensonides would need to find that acceptable.
IMO, library users would expect your library to be part of Boost.Preprocessor. It's like adding another data set to Boost.Preprocessor for variadics.
I do not agree with this, although I understand the reasoning. I do not think one can just add to another library, even if one does something which in itself is an addition/extension to another library's implementation, without the other library programmer's approval. If it were decided that the VMD library were to become a part of Boost PP, and Paul Mensonides approved that, I would have no problem making it so.
2) I would add PP_VA_EAT, PP_VA_IDENTITY, PP_VA_CAT (similar to what they do already in Boost.Preprocessor and they should be trivial to implement). Also, I would add `#define PP_VA_SAME(...) __VA_ARGS__`.
I am willing to add functionality to the library but I wonder where that would stop. Essentially variadic macro data is similar to any of the other
I agree, there is no point in duplicating the Boost.Preprocessor API. However, in using your library even just a bit I needed things like PP_VA_EAT, PP_VA_IDENTITY, PP_VA_CAT to program control statements like BOOST_PP_IIF, etc. While all of these are trivial to implement, it would be annoying to re-implement all these facility macros every time... It would be interesting to know what other programmers' experience is in using your library, to decide which Boost.Preprocessor control and facility macros are the most commonly used with variadic arguments.
I agree with that. I am of course willing to add some basic facilities. But I still think that you should think about converting variadic macro data to another functionally richer Boost PP data type first before you do anything else with the data.
data types in Boost PP in that it is really just a separate data type. Since the Boost PP data types already have a rich set of functionality I sort of feel that duplicating any of that functionality for variadic data itself will be redundant. This is especially true as my library has support for converting back and forth variadic data to any of the Boost PP data types. I feel pretty strongly that the use of variadic data with Boost PP should really be to provide a more pleasant syntax for the end-user, but once the programmer handles the variadic data he should convert it to a Boost PP data simply because that data type already has a rich set of functionality for dealing with the data.
I would be willing to add BOOST_VMD_CAT and BOOST_VMD_IDENTITY since they are both trivial (just a call to the appropriate BOOST_PP_ macros passing __VA_ARGS__). Your last is just __VA_ARGS__, as you show. But I really
I often do:
#define DO_(p) // do something with p...
#define DO(cond, p) BOOST_PP_IIF(cond, DO_, p BOOST_PP_TUPLE_EAT(1))(p)
Convert variadic macro data to a Boost PP tuple with BOOST_VMD_DATA_TO_PP_TUPLE(...) and then you can manipulate the result as a tuple as above. Sure, it is an extra step, but, as I said previously, I think it is a mistake to try to duplicate any of the functionality of the much more functional Boost PP data types. Of course I can try to do it if people want to work with variadic data directly, but considering how much richer in functionality the Boost PP data types already are, it just does not seem worthwhile to do, and would end up being a great deal of work for little actual purpose.
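Concretely, Lorenzo's DO example could be reworked along the lines Edward suggests. A sketch, assuming BOOST_VMD_DATA_TO_PP_TUPLE simply parenthesizes its arguments into a Boost PP tuple:

#include <boost/preprocessor/control/iif.hpp>
#include <boost/preprocessor/tuple/eat.hpp>
#include <boost/variadic_macro_data/VariadicMacroData.hpp> // proposed lib

// DO_ receives a single argument - the packed tuple - so BOOST_PP_IIF
// never sees the commas inside the original variadic arguments.
#define DO_(tuple) /* work on tuple, e.g. via BOOST_PP_TUPLE_ELEM */

#define DO(cond, ...) \
    BOOST_PP_IIF(cond, DO_, BOOST_PP_TUPLE_EAT(1)) \
        (BOOST_VMD_DATA_TO_PP_TUPLE(__VA_ARGS__))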
But I can't do this with variadic p because the IIF will have too many arguments if p is __VA_ARGS__. So SAME() would be handy:
#define PP_VA_SAME(...) __VA_ARGS__
#define DO_(...) // do something with __VA_ARGS__...
#define DO(cond, ...) BOOST_PP_IIF(cond, DO_, PP_VA_SAME)(__VA_ARGS__)
wonder if more functionality on variadic macro data is really worth it considering that the goal of the library, other than letting the end-user access individual tokens in the variadic macro data itself, is to convert back and forth to the Boost PP data types. I can understand that an end-user of the library such as yourself might want a number of additional operations on the variadic macro data itself, but I think if you look at the Boost PP data types you will see that their rich functionality offers most anything one would want to do with the data once you get it.

On Feb 19, 2011, at 12:35 PM, Edward Diener wrote:
IMO, library users would expect your library to be part of Boost.Preprocessor. It's like adding another data set to Boost.Preprocessor for variadics.
I do not agree with this, although I understand the reasoning. I do not think one can just add to another library, even if one does something which in itself is an addition/extension to another library's implementation, without the other library programmer's approval. If it were decided that the VMD library were to become a part of Boost PP, and Paul Mensonides approved that, I would have no problem making it so.
IMO you should ask Paul (directly off-list) before the review to see if this is an option. True, it couldn't be added without his permission.

On 2/19/2011 3:42 PM, Gordon Woodhull wrote:
On Feb 19, 2011, at 12:35 PM, Edward Diener wrote:
IMO, library users would expect your library to be part of Boost.Preprocessor. It's like adding another data set to Boost.Preprocessor for variadics.
I do not agree with this, although I understand the reasoning. I do not think one can just add to another library, even if one does something which in itself is an addition/extension to another library's implementation, without the other library programmer's approval. If it were decided that the VMD library were to become a part of Boost PP, and Paul Mensonides approved that, I would have no problem making it so.
IMO you should ask Paul (directly off-list) before the review to see if this is an option. True, it couldn't be added without his permission.
If during the review the majority of others feel that the library should be part of Boost PP, I am certainly willing to do that. But I really do not see the purpose of doing that beforehand.

On Sat, 19 Feb 2011 16:50:03 -0500, Edward Diener wrote:
On 2/19/2011 3:42 PM, Gordon Woodhull wrote:
IMO you should ask Paul (directly off-list) before the review to see if this is an option. True it couldn't be added without his permission.
If during the review the majority of others feel that the library should be part of Boost PP, I am certainly willing to do that. But I really do not see the purpose of doing that beforehand.
Besides the difficulties with compilers, there are two root issues with variadics (and placemarkers) in regards to the pp-lib.

First, adding proper support requires breaking changes (argument orderings through the library, for example). If this is to happen, I'd prefer it to happen all at once in one major upgrade.

Second, the way that the pp-lib would use variadics is not the same as what Edward's library does. AFAIK, Edward's library treats variadic content as a standalone data structure. By "variadic content," I'm referring to a comma-separated list of preprocessing token sequences such as

a, b, c

as opposed to variadic tuples or sequences such as

(a, b, c)
(a)(b, c)(d, e, f)

In a general sense, I consider treating variadic content as a data structure as going in the wrong direction. There are far better ways to utilize variadics than as input data structures. In particular, data structures get passed into algorithms (otherwise, they're pointless). However, a given interface can only have one variadic "argument". It is far more useful to spend that variadic argument on the auxiliary arguments to the algorithm.

By "auxiliary arguments," I'm referring to additional arguments that get forwarded by higher-order algorithms to the user-supplied macros that they invoke. Take a FOR_EACH algorithm as an example. A FOR_EACH algorithm requires a sequence, a macro to invoke for each element of that sequence, and auxiliary data to be passed through the algorithm to the macro invoked for each element. What frequently happens now, with all such algorithms in the pp-lib, is that more than one piece of auxiliary data needs to get passed through the algorithm, so it gets encoded in another data structure. Each of those pieces of auxiliary data then needs to be extracted later--which leads to massive clutter and inefficiency.

In reality, it comes down to two choices for interface:

1) FOR_EACH(macro, auxiliary_data, ...) where __VA_ARGS__ is the data structure.

This scenario leads to the following (slightly simplified):

#define M(E, D) \
    /* excessive unpacking with TUPLE_ELEM, */ \
    /* or equivalent, goes here */ \
    /**/

FOR_EACH(M, (D1, D2, D3), E1, E2, E3)

2) FOR_EACH(macro, data_structure, ...) where __VA_ARGS__ is the auxiliary data.

#define M(E, D1, D2, D3) \
    /* no unpacking required */ \
    /**/

FOR_EACH(M, (E1, E2, E3), D1, D2, D3)

The latter case is also extensible to scenarios where the elements of the data structure are non-unary. For example,

#define M(E, F, D1, D2, D3) // ...

FOR_EACH(M, (E1, F1)(E2, F2)(E3, F3), D1, D2, D3)

The only time you really need to unpack is when the data structure is truly variadic (i.e. elements have different arity) such as:

#define M(E, D1, D2, D3) // possibly unpack EF

VARIADIC_FOR_EACH(M, (a)(b, c)(d, e, f), D1, D2, D3)

(This scenario happens, but is comparatively rare. It happens in fancier scenarios, and it happens with sequences of types, e.g. std::pair<int, double>.)

IMO, the second interface option is far superior to the first.

As a concrete example, I recently had to generate some stuff for the Greek alphabet. However, I didn't want to mess around with multi-byte encodings directly.
This isn't exactly what I needed, but it contains the basic idea:

template<class T> struct entry {
    T id, lc, uc;
};

int main(int argc, char* argv[]) {
    std::vector<entry<const char*>> entries;

    #define _(s, id, lc, uc, type, enc) \
        CHAOS_PP_WALL( \
            entries.push_back(entry<type> { \
                enc(id), enc(lc), enc(uc) \
            }); \
        ) \
        /**/
    CHAOS_PP_EXPR(CHAOS_PP_SEQ_FOR_EACH(
        _,
        (alpha, α, Α) (beta, β, Β) (gamma, γ, Γ) (delta, δ, Δ)
        (epsilon, ε, Ε) (zeta, ζ, Ζ) (eta, η, Η) (theta, θ, Θ)
        (iota, ι, Ι) (kappa, κ, Κ) (lambda, λ, Λ) (mu, μ, Μ)
        (nu, ν, Ν) (xi, ξ, Ξ) (omicron, ο, Ο) (pi, π, Π)
        (rho, ρ, Ρ) (sigma, σ, Σ) (tau, τ, Τ) (upsilon, υ, Υ)
        (phi, φ, Φ) (chi, χ, Χ) (psi, ψ, Ψ) (omega, ω, Ω),
        const char*, CHAOS_PP_USTRINGIZE(8)
    ))
    #undef _

    for (auto i = entries.begin(); i != entries.end(); ++i) {
        std::cout << i->id << ": " << i->lc << ", " << i->uc << '\n';
    }
    return 0;
}

Regards, Paul Mensonides

On 2/20/2011 7:09 AM, Paul Mensonides wrote:
On Sat, 19 Feb 2011 16:50:03 -0500, Edward Diener wrote:
On 2/19/2011 3:42 PM, Gordon Woodhull wrote:
IMO you should ask Paul (directly off-list) before the review to see if this is an option. True it couldn't be added without his permission.
If during the review the majority of others feel that the library should be part of Boost PP, I am certainly willing to do that. But I really do not see the purpose of doing that beforehand.
Besides the difficulties with compilers, there are two root issues with variadics (and placemarkers) in regards to the pp-lib.
First, adding proper support requires breaking changes (argument orderings through the library, for example). If this is to happen, I'd prefer it to happen all at once in one major upgrade.
Second, the way that the pp-lib would use variadics is not the same as what Edward's library does. AFAIK, Edward's library treats variadic content as a standalone data structure. By "variadic content," I'm referring to a comma-separated list of preprocessing token sequences such as
a, b, c
as opposed to variadic tuples or sequences such as
(a, b, c) (a)(b, c)(d, e, f)
In a general sense, I consider treating variadic content as a data structure as going in the wrong direction.
I want to clarify my purpose here because, while you think treating variadic content as a data structure is the wrong direction, and I agree with you, it is not what I do in my library. It was not my goal to change the pp-lib in any way, and I was only too aware of how much pp knowledge would be needed in order to do that. I do not deny that using variadic macro content within the pp-lib would be worthwhile, but that is something you would know about and might decide to do, as the remainder of your response also attests.

My goal is only to allow programmers to specify using variadic macros and then provide the means to convert that sequence to/from pp-lib data types. I have also provided the means to access any individual variadic token from the sequence as well as its length, but that hardly means "treating variadic content as a data structure" to me. As an additional convenience, and because of variadic macros, I also mimicked pp-lib tuple functionality without the need to pass the length of a tuple directly. My goal is simple usability of variadic data in connection with pp-lib.

I have always said, both in the documentation to my library and in responding to others about my library, that the pp-lib data structures should be used. They are much richer in functionality than what little I provided for variadic macro data. I never wanted to provide more functionality for working with variadic data per se, since it would be redundant to do so given what the pp-lib data structures offer. While others may want to provide a whole set of functionality for working with variadic macro data directly, completely outside of its use with pp-lib, given what pp-lib offers in the area of its data types it is not something I want to pursue. The only advantage to that which I see is slightly faster compile times and slightly simpler pp programming, but that is not enough for me to pursue that path, as I am very comfortable in my own use of pp programming with what is currently provided by pp-lib.
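The "without the need to pass the length of a tuple directly" point reflects a standard variadic technique: the size can be recovered from the tuple itself. A sketch with hypothetical names (it reuses an argument-counting macro like the VA_SIZE sketch earlier in the thread; the classic BOOST_PP_TUPLE_ELEM takes the size as its first argument):

#include <boost/preprocessor/tuple/elem.hpp>

// Juxtaposing the counting macro with the tuple invokes it on the
// tuple's contents, yielding the element count.
#define VMD_TUPLE_SIZE(tuple) VA_SIZE tuple

#define VMD_TUPLE_ELEM(i, tuple) \
    BOOST_PP_TUPLE_ELEM(VMD_TUPLE_SIZE(tuple), i, tuple)

VMD_TUPLE_ELEM(1, (a, b, c)) // b, with no explicit size argument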
There are far better ways to utilize variadics than as input data structures. In particular, data structures get passed into algorithms (otherwise, they're pointless). However, a given interface can only have one variadic "argument". It is far more useful to spend that variadic argument on the auxiliary arguments to the algorithm.
By "auxiliary arguments," I'm referring to additional arguments that get forwarded by higher-order algorithms to the user-supplied macros that they invoke. Take a FOR_EACH algorithm as an example. A FOR_EACH algorithm requires a sequence, a macro to invoke for each element of that sequence, and auxiliary data to be passed through the algorithm to the macro invoked for each element. What frequently happens now, with all such algorithms in the pp-lib, is that more than one piece of auxiliary data needs to get passed through the algorithm, so it gets encoded in another data structure. Each of those pieces of auxiliary data then need to be extracted latter--which leads to massive clutter and inefficiency.
In reality, it comes down to two choices for interface:
1) FOR_EACH(macro, auxiliary_data, ...) where __VA_ARGS__ is the data structure.
This scenario leads to the following (slightly simplified):
#define M(E, D) \
    /* excessive unpacking with TUPLE_ELEM, */ \
    /* or equivalent, goes here */ \
    /**/
FOR_EACH(M, (D1, D2, D3), E1, E2, E3)
2) FOR_EACH(macro, data_structure, ...) where __VA_ARGS__ is the auxiliary data.
#define M(E, D1, D2, D3) \
    /* no unpacking required */ \
    /**/
FOR_EACH(M, (E1, E2, E3), D1, D2, D3)
The latter case is also extensible to scenarios where the elements of the data structure are non-unary. For example,
#define M(E, F, D1, D2, D3) // ...
FOR_EACH(M, (E1, F1)(E2, F2)(E3, F3), D1, D2, D3)
The only time you really need to unpack is when the data structure is truly variadic (i.e. elements have different arity) such as:
#define M(E, D1, D2, D3) // possibly unpack EF
VARIADIC_FOR_EACH(M, (a)(b, c)(d, e, f), D1, D2, D3)
(This scenario happens, but is comparatively rare. It happens in fancier scenarios, and it happens with sequences of types, e.g. std::pair<int, double>.)
IMO, the second interface option is far superior to the first.
As a concrete example, I recently had to generate some stuff for the Greek alphabet. However, I didn't want to mess around with multi-byte encodings directly. This isn't exactly what I needed, but it contains the basic idea:
template<class T> struct entry { T id, lc, uc; };
int main(int argc, char* argv[]) {
    std::vector<entry<const char*>> entries;

    #define _(s, id, lc, uc, type, enc) \
        CHAOS_PP_WALL( \
            entries.push_back(entry<type> { \
                enc(id), enc(lc), enc(uc) \
            }); \
        ) \
        /**/
    CHAOS_PP_EXPR(CHAOS_PP_SEQ_FOR_EACH(
        _,
        (alpha, α, Α) (beta, β, Β) (gamma, γ, Γ) (delta, δ, Δ)
        (epsilon, ε, Ε) (zeta, ζ, Ζ) (eta, η, Η) (theta, θ, Θ)
        (iota, ι, Ι) (kappa, κ, Κ) (lambda, λ, Λ) (mu, μ, Μ)
        (nu, ν, Ν) (xi, ξ, Ξ) (omicron, ο, Ο) (pi, π, Π)
        (rho, ρ, Ρ) (sigma, σ, Σ) (tau, τ, Τ) (upsilon, υ, Υ)
        (phi, φ, Φ) (chi, χ, Χ) (psi, ψ, Ψ) (omega, ω, Ω),
        const char*, CHAOS_PP_USTRINGIZE(8)
    ))
    #undef _

    for (auto i = entries.begin(); i != entries.end(); ++i) {
        std::cout << i->id << ": " << i->lc << ", " << i->uc << '\n';
    }
    return 0;
}

On Sun, 20 Feb 2011 09:47:02 -0500, Edward Diener wrote:
On 2/20/2011 7:09 AM, Paul Mensonides wrote:
In a general sense, I consider treating variadic content as a data structure as going in the wrong direction.
My goal is only to allow programmers to specify using variadic macros and then provide the means to convert that sequence to/from pp-lib data types. I have also provided the means to access any individual variadic token from the sequence as well as its length, but that hardly means "treating variadic content as a data structure" to me. As an additional convenience, and because of variadic macros, I also mimicked pp-lib tuple functionality without the need to pass the length of a tuple directly. My goal is simple usability of variadic data in connection with pp-lib.
I was merely expressing my thoughts related to the use of variadics in an overall sense, not whether something that converts __VA_ARGS__ to some other form is useful. The interface to your library is small enough that you could just add it to the pp-lib. Regards, Paul Mensonides

On Sun, Feb 20, 2011 at 10:13 AM, Paul Mensonides <pmenso57@comcast.net> wrote:
On Sun, 20 Feb 2011 09:47:02 -0500, Edward Diener wrote:
On 2/20/2011 7:09 AM, Paul Mensonides wrote:
In a general sense, I consider treating variadic content as a data structure as going in the wrong direction.
My goal is only to allow programmers to specify using variadic macros and then provide the means to convert that sequence to/from pp-lib data types. I have also provided the means to access any individual variadic token from the sequence as well as its length, but that hardly means "treating variadic content as a data structure" to me. As an additional convenience, and because of variadic macros, I also mimicked pp-lib tuple functionality without the need to pass the length of a tuple directly. My goal is simple usability of variadic data in connection with pp-lib.
I was merely expressing my thoughts related to the use of variadics in an overall sense, not whether something that converts __VA_ARGS__ to some other form is useful. The interface to your library is small enough that you could just add it to the pp-lib.
I would personally find this addition to the pp-lib useful. -- Lorenzo

On 2/20/2011 10:13 AM, Paul Mensonides wrote:
On Sun, 20 Feb 2011 09:47:02 -0500, Edward Diener wrote:
On 2/20/2011 7:09 AM, Paul Mensonides wrote:
In a general sense, I consider treating variadic content as a data structure as going in the wrong direction.
My goal is only to allow programmers to specify using variadic macros and then provide the means to convert that sequence to/from pp-lib data types. I have also provided the means to access any individual variadic token from the sequence as well as its length, but that hardly means "treating variadic content as a data structure" to me. As an additional convenience, and because of variadic macros, I also mimicked pp-lib tuple functionality without the need to pass the length of a tuple directly. My goal is simple usability of variadic data in connection with pp-lib.
I was merely expressing my thoughts related to the use of variadics in an overall sense, not whether something that converts __VA_ARGS__ to some other form is useful. The interface to your library is small enough that you could just add it to the pp-lib.
If during the review of my library the majority of other reviewers feel that they would like my library to be added to the pp-lib, I would be glad to do it if my library is accepted into Boost. If that happens I will check with you on the best way to do it, but I do not want to do anything right now in that direction, considering that my library has not even been reviewed yet, does not have a review manager, and is still somewhere in a review queue where many other libraries are waiting for a review. Since you are the most knowledgeable person regarding pp programming, if you would be willing to be the review manager for the library, that would be fine with me. It is certainly small enough and easy enough for anyone to understand, much less yourself, who understands much more about the pp. Otherwise I am still looking for a review manager for the library when it comes up for review.

On Feb 20, 2011, at 7:09 AM, Paul Mensonides wrote:
On Sat, 19 Feb 2011 16:50:03 -0500, Edward Diener wrote:
On 2/19/2011 3:42 PM, Gordon Woodhull wrote:
IMO you should ask Paul (directly off-list) before the review to see if this is an option. True it couldn't be added without his permission.
If during the review the majority of others feel that the library should be part of Boost PP, I am certainly willing to do that. But I really do not see the purpose of doing that beforehand.
Besides the difficulties with compilers, there are two root issues with variadics (and placemarkers) in regards to the pp-lib.
First, adding proper support requires breaking changes (argument orderings through the library, for example). If this is to happen, I'd prefer it to happen all at once in one major upgrade.
Second, the way that the pp-lib would use variadics is not the same as what Edward's library does. AFAIK, Edward's library treats variadic content as a standalone data structure. By "variadic content," I'm referring to a comma-separated list of preprocessing token sequences such as
a, b, c
as opposed to variadic tuples or sequences such as
(a, b, c) (a)(b, c)(d, e, f)
In a general sense, I consider treating variadic content as a data structure as going in the wrong direction. There are far better ways to utilize variadics than as input data structures. In particular, data structures get passed into algorithms (otherwise, they're pointless). However, a given interface can only have one variadic "argument". It is far more useful to spend that variadic argument on the auxiliary arguments to the algorithm.
I think that you have identified a very good use for the variadic argument, but I am not sure that it completely conflicts with the ideas present in the VMD library. So unless providing utilities to convert from VA_ARGS -> SEQ, LIST, Array, etc somehow prevents you from expanding the FOR_EACH macro like you described, I would tend to favor an incremental approach. Don't hold up useful functionality because there is more useful functionality you could add. Reworking the whole of the PP library with VA support sounds like it would significantly delay the adoption of some very useful tools.
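For intuition about how small such converters can be, the tuple direction is essentially just parenthesization (a sketch with hypothetical names, not the VMD library's code):

// Pack variadic content into a Boost PP tuple:
// VA_TO_TUPLE(a, b, c) expands to (a, b, c)
#define VA_TO_TUPLE(...) (__VA_ARGS__)

// Unpack a tuple back into variadic content:
// TUPLE_TO_VA((a, b, c)) expands to a, b, c
#define TUPLE_TO_VA(tuple) TUPLE_TO_VA_ tuple
#define TUPLE_TO_VA_(...) __VA_ARGS__

The seq and list directions need to count the arguments first and dispatch on the count, so they are less trivial, but still mechanical.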
By "auxiliary arguments," I'm referring to additional arguments that get forwarded by higher-order algorithms to the user-supplied macros that they invoke. Take a FOR_EACH algorithm as an example. A FOR_EACH algorithm requires a sequence, a macro to invoke for each element of that sequence, and auxiliary data to be passed through the algorithm to the macro invoked for each element. What frequently happens now, with all such algorithms in the pp-lib, is that more than one piece of auxiliary data needs to get passed through the algorithm, so it gets encoded in another data structure. Each of those pieces of auxiliary data then need to be extracted latter--which leads to massive clutter and inefficiency.
In reality, it comes down to two choices for interface:
1) FOR_EACH(macro, auxiliary_data, ...) where __VA_ARGS__ is the data structure.
This scenario leads to the following (slightly simplified):
#define M(E, D) \
    /* excessive unpacking with TUPLE_ELEM, */ \
    /* or equivalent, goes here */ \
    /**/
FOR_EACH(M, (D1, D2, D3), E1, E2, E3)
2) FOR_EACH(macro, data_structure, ...) where __VA_ARGS__ is the auxiliary data.
#define M(E, D1, D2, D3) \
    /* no unpacking required */ \
    /**/
FOR_EACH(M, (E1, E2, E3), D1, D2, D3)
The latter case is also extensible to scenarios where the elements of the data structure are non-unary. For example,
#define M(E, F, D1, D2, D3) // ...
FOR_EACH(M, (E1, F1)(E2, F2)(E3, F3), D1, D2, D3)
The only time you really need to unpack is when the data structure is truly variadic (i.e. elements have different arity) such as:
#define M(E, D1, D2, D3) // possibly unpack E
VARIADIC_FOR_EACH(M, (a)(b, c)(d, e, f), D1, D2, D3)
(This scenario happens, but is comparatively rare. It happens in fancier scenarios, and it happens with sequences of types, e.g. std::pair<int, double>.)
IMO, the second interface option is far superior to the first.
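To make the shape of interface (2) concrete, here is a minimal sketch with a fixed-size three-element tuple (FOR_EACH_3, ELEM_n, and M are illustrative names, not pp-lib macros). The variadic slot carries the auxiliary data, so the user macro receives it as separate, ready-to-use arguments:

#define FOR_EACH_3(macro, t, ...) \
    macro(ELEM_0 t, __VA_ARGS__)  \
    macro(ELEM_1 t, __VA_ARGS__)  \
    macro(ELEM_2 t, __VA_ARGS__)
#define ELEM_0(a, b, c) a
#define ELEM_1(a, b, c) b
#define ELEM_2(a, b, c) c

#define M(E, D1, D2) D1 E = D2;
FOR_EACH_3(M, (x, y, z), int, 0) // expands to: int x = 0; int y = 0; int z = 0;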
As a concrete example, I recently had to generate some stuff for the Greek alphabet. However, I didn't want to mess around with multi-byte encodings directly. This isn't exactly what I needed, but it contains the basic idea:
template<class T> struct entry { T id, lc, uc; };
#include <iostream>
#include <vector>

int main(int argc, char* argv[]) {
    std::vector<entry<const char*>> entries;

    #define _(s, id, lc, uc, type, enc) \
        CHAOS_PP_WALL( \
            entries.push_back(entry<type> { \
                enc(id), enc(lc), enc(uc) \
            }); \
        ) \
        /**/
    CHAOS_PP_EXPR(CHAOS_PP_SEQ_FOR_EACH(
        _,
        (alpha, α, Α) (beta, β, Β) (gamma, γ, Γ) (delta, δ, Δ)
        (epsilon, ε, Ε) (zeta, ζ, Ζ) (eta, η, Η) (theta, θ, Θ)
        (iota, ι, Ι) (kappa, κ, Κ) (lambda, λ, Λ) (mu, μ, Μ)
        (nu, ν, Ν) (xi, ξ, Ξ) (omicron, ο, Ο) (pi, π, Π)
        (rho, ρ, Ρ) (sigma, σ, Σ) (tau, τ, Τ) (upsilon, υ, Υ)
        (phi, φ, Φ) (chi, χ, Χ) (psi, ψ, Ψ) (omega, ω, Ω),
        const char*, CHAOS_PP_USTRINGIZE(8)
    ))
    #undef _

    for (auto i = entries.begin(); i != entries.end(); ++i) {
        std::cout << i->id << ": " << i->lc << ", " << i->uc << '\n';
    }
    return 0;
}
Regards, Paul Mensonides

On Sun, 20 Feb 2011 09:55:14 -0500, Daniel Larimer wrote:
I think that you have identified a very good use for the variadic argument, but I am not sure that it completely conflicts with the ideas present in the VMD library. So unless providing utilities to convert from VA_ARGS -> SEQ, LIST, Array, etc somehow prevents you from expanding the FOR_EACH macro like you described, I would tend to favor an incremental approach. Don't hold up useful functionality because there is more useful functionality you could add.
Reworking the whole of the PP library with VA support sounds like it would significantly delay the adoption of some very useful tools.
I don't have a problem with the particular macros. I do have a concern about establishing practices which I don't consider to be terribly good practices. I'd really like to see a use case that is different from what I'm envisioning. What I'm envisioning is something like:

#define ALGORITHM_B(...) \
    ALGORITHM_A(DATA_TO_SEQ(__VA_ARGS__)) \
    /**/

ALGORITHM_B(a, b, c)

I don't find that use case compelling, and that point of view is based on heavy experience utilizing variadics. When I was initially writing chaos-pp, I went down this path, but ultimately rejected it. It simply doesn't scale and results in other infrastructure that doesn't scale. To attempt to clarify, something like

ALGORITHM_A(DATA_TO_SEQ(a, b, c))

is not really a problem, but the existence of the ALGORITHM_B definition (as a non-local construct) *is* a problem.

Regards, Paul Mensonides

On 2/20/2011 10:43 AM, Paul Mensonides wrote:
On Sun, 20 Feb 2011 09:55:14 -0500, Daniel Larimer wrote:
I think that you have identified a very good use for the variadic argument, but I am not sure that it completely conflicts with the ideas present in the VMD library. So unless providing utilities to convert from VA_ARGS -> SEQ, LIST, Array, etc somehow prevents you from expanding the FOR_EACH macro like you described, I would tend to favor an incremental approach. Don't hold up useful functionality because there is more useful functionality you could add.
Reworking the whole of the PP library with VA support sounds like it would significantly delay the adoption of some very useful tools.
I don't have a problem with the particular macros. I do have a concern about establishing practices which I don't consider to be terribly good practices.
I'd really like to see a use case that is different from what I'm envisioning. What I'm envisioning is something like:
#define ALGORITHM_B(...) \
    ALGORITHM_A(DATA_TO_SEQ(__VA_ARGS__)) \
    /**/
ALGORITHM_B(a, b, c)
I don't find that use case compelling, and that point of view is based on heavy experience utilizing variadics. When I was initially writing chaos-pp, I went down this path, but ultimately rejected it. It simply doesn't scale and results in other infrastructure that doesn't scale.
I could not agree more that it does not scale well for general use, and I understand that it is not the way you would want to extend the pp-lib to use variadic macros internally.

But consider just the case where an end-user is doing some pp programming for their own use, or for other end-user programmers using their library, which may have nothing to do with pp programming itself. They want to present a macro interface which uses variadic macros rather than, let's say, Boost PP sequences. They are doing this just to make their interface look easier to use, i.e. SOME_MACRO(a,b,c,d,etc.) rather than SOME_MACRO((a),(b),(c),(d),etc.). In that case creating some internal macro like ALGORITHM_B above ( which is of course a highly simplified version of using variadic macro data internally ) is purely a matter of their own convenience. It's not meant to be part of some highly reusable code but just for their own specific purposes. In that situation I see nothing wrong with it on a technical basis. I can see that it does not scale well in the sense of being integrated into a whole system of uses of variadic macros. But its only purpose is to get away from using variadic macros and to use a pp-lib data structure instead, while still presenting an external interface to the end user which uses variadic macros.

I myself have done this in my TTI lib ( see the sketch below ). I allow an alternative macro for some functionality which uses a variadic macro, and I take the data and pass it on internally to inner processing which uses a pp-lib data type. I am not trying to do internal pp programming with variadic macro data, as the pp-lib data type is much richer in functionality. Even in cases where I might internally use variadic macro syntax for my own library, I would not attempt to do anything complicated with the data, but would just use it as a syntactic convenience for myself. But even in that case I find it has little to offer, and pp-lib data types, once one is comfortable with them and their syntax, are much better.

Wanting to present variadic macro syntax as a service for end-user programmers was the main motivation for my VMD library. I believe others have also found variadic macros useful as an end-user macro interface.
To attempt to clarify, something like
ALGORITHM_A(DATA_TO_SEQ(a, b, c))
is not really a problem, but the existence of the ALGORITHM_B definition (as a non-local construct) *is* a problem.
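A minimal sketch of the TTI-style pattern Edward describes above (MY_TRAIT and MY_TRAIT_IMPL are hypothetical names, not the actual TTI macros): a variadic front-end provided purely for call-site convenience, converting once to a pp-lib seq for the internal machinery.

#define MY_TRAIT(...) MY_TRAIT_IMPL(BOOST_VMD_DATA_TO_PP_SEQ(__VA_ARGS__))
#define MY_TRAIT_IMPL(seq) /* all real processing works on the pp-lib seq */

MY_TRAIT(int, long, double) // user sees function-like syntax;
                            // internals see (int)(long)(double)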

On Sun, 20 Feb 2011 11:52:20 -0500, Edward Diener wrote:
I could not agree more that it does not scale well for general use, and I understand that it is not the way you would want to extend the pp-lib to use variadic macros internally.
But consider just the case where an end-user is doing some pp programming for their own use, or for other end-user programmers using their library, which may have nothing to do with pp programming itself.
All uses of macros are metaprogramming. The more programmers understand that, the less "macro problems" there will be.
They want to present a macro interface which uses variadic macros rather than, let's say, Boost PP sequences. They are doing this just to make their interface look easier to use, ie. SOME_MACRO(a,b,c,d,etc.) rather than SOME_MACRO((a),(b),(c),(d),etc.).
So, what's the problem with SOME_MACRO((a, b, c)) instead? I.e. a tuple rather than just variadic content? The real scalability problem is with higher-order macros, not something like MACRO(a, b, c) vs. MACRO((a)(b)(c)) vs. MACRO((a, b, c)). When higher-order macros are involved, there is a much higher price in providing this syntactic convenience. That price ultimately leads to lack of re-use and replication in library code.

Believe it or not, I actually use preprocessor metaprogramming (as well as template metaprogramming) quite heavily in my own code. The only time that I would provide a macro that uses the variadic data as an element sequence (that's not a general-purpose pp-library macro) as an interface is if it is absolutely certain that it will never become higher-order and that the number of arguments before the variadic data will never change.

Regards, Paul Mensonides

On 2/20/2011 12:19 PM, Paul Mensonides wrote:
On Sun, 20 Feb 2011 11:52:20 -0500, Edward Diener wrote:
I could not agree more that it does not scale well for general use, and I understand that it is not the way you would want to extend the pp-lib to use variadic macros internally.
But consider just the case where an end-user is doing some pp programming for their own use, or for other end-user programmers using their library, which may have nothing to do with pp programming itself.
All uses of macros are metaprogramming. The more programmers understand that, the less "macro problems" there will be.
A library presents an interface where one uses a macro to do "something". The programmer uses that macro to accomplish some task in that library. The use of that macro is not "metaprogramming" as I understand it, although I think this is more an argument about a word's meaning than any real disagreement. Imagine a relative newbie in C++ using a macro and then being told he is "metaprogramming". No doubt he would feel a little proud of that fact <g>, but I doubt that would prove much about his real "metaprogramming" knowledge or skills, not that I think "metaprogramming" is that abstruse. Undoubtedly a C++ programmer should understand macros. But whether he understands them at his own level and whether he understands them as you do are different things.
They want to present a macro interface which uses variadic macros rather than, let's say, Boost PP sequences. They are doing this just to make their interface look easier to use, ie. SOME_MACRO(a,b,c,d,etc.) rather than SOME_MACRO((a),(b),(c),(d),etc.).
So, what's the problem with SOME_MACRO((a, b, c)) instead? I.e. a tuple rather than just variadic content?
By the same token what is wrong with SOME_MACRO(a,b,c) ? The point is not that there is something wrong with SOME_MACRO((a,b,c)) or even with SOME_MACRO((a)(b)(c)) but that programmers using macros which a library provides seem more comfortable with SOME_MACRO(a,b,c). The reason for this is most probably because the latter mimics function call syntax and a library implementer is looking syntactically for as much sameness as possible in order to promote ease of use ( or memorability ). One may just as well ask why C++ adopted variadic macros as part of the next proposed standard ? I believe it was solely for syntactic ease of use with a variable number of macro tokens. It sure wasn't because C++ provided much functionality to deal with the variadic data per se.
The real scalability problem is with higher-order macros, not something like MACRO(a, b, c) vs. MACRO((a)(b)(c)) vs. MACRO((a, b, c)). When higher-order macros are involved, there is a much higher price in providing this syntactic convenience. That price ultimately leads to lack of re-use and replication in library code.
Believe it or not, I actually use preprocessor metaprogramming (as well as template metaprogramming) quite heavily in my own code.
Nah ! I don't believe it <g>.
The only time that I would provide a macro that uses the variadic data as an element sequence (that's not a general-purpose pp-library macro) as an interface is if it is absolutely certain that it will never become higher-order and that the number of arguments before the variadic data will never change.
Fair enough. But I believe some other proposed Boost libraries, besides my own TTI library, are using variadic macro syntax in their public interface. Why should they not do that and take advantage of pp-lib at the same time ?

On Sun, Feb 20, 2011 at 1:55 PM, Edward Diener <eldiener@tropicsoft.com> wrote:
Fair enough. But I believe some other proposed Boost libraries, besides my own TTI library, are using variadic macro syntax in their public interface. Why should they not do that and take advantage of pp-lib at the same time ?
For example, the new Boost.Local syntax takes advantage of variadics. If variadics are supported (using a couple of Edward's VMD library macros behind the scenes):

#include <boost/local/function.hpp>
#include <iostream>
#include <sstream>
#include <algorithm>
#include <vector>

int main () {
    std::ostringstream output;

    int BOOST_LOCAL_FUNCTION_PARAMS(int n, bool recursion, default false,
            bind& output) {
        int result = 0;
        if (n < 2) result = 1;
        else result = n * factorial(n - 1, true); // Recursive call.
        if (!recursion) output << result << " ";
        return result;
    } BOOST_LOCAL_FUNCTION_NAME(factorial)

    std::vector<int> v(3);
    v[0] = 1; v[1] = 4; v[2] = 7;
    std::for_each(v.begin(), v.end(), factorial);

    std::cout << output.str() << std::endl;
    return 0;
}

In any case (variadics or not):

#include <boost/local/function.hpp>
#include <iostream>
#include <sstream>
#include <algorithm>
#include <vector>

int main () {
    std::ostringstream output;

    int BOOST_LOCAL_FUNCTION_PARAMS( (int n) (bool recursion)(default false)
            (bind& output) ) {
        int result = 0;
        if (n < 2) result = 1;
        else result = n * factorial(n - 1, true); // Recursive call.
        if (!recursion) output << result << " ";
        return result;
    } BOOST_LOCAL_FUNCTION_NAME(factorial)

    std::vector<int> v(3);
    v[0] = 1; v[1] = 4; v[2] = 7;
    std::for_each(v.begin(), v.end(), factorial);

    std::cout << output.str() << std::endl;
    return 0;
}

P.S. I just got this to compile :))

-- Lorenzo

On Sun, 20 Feb 2011 13:55:47 -0500, Edward Diener wrote:
On 2/20/2011 12:19 PM, Paul Mensonides wrote:
All uses of macros are metaprogramming. The more programmers understand that, the less "macro problems" there will be.
A library presents an interface where one uses a macro to do "something". The programmer uses that macro to accomplish some task in that library. The use of that macro is not "metaprogramming" as I understand it, although I think this is more an argument about a word's meaning than any real disagreement. Imagine a relative newbie in C++ using a macro and then being told he is "metaprogramming". No doubt he would feel a little proud of that fact <g>, but I doubt that would prove much about his real "metaprogramming" knowledge or skills, not that I think "metaprogramming" is that abstruse.
Undoubtedly a C++ programmer should understand macros. But whether he understands it at his own level and whether he understands it as you do are different things.
What I'm getting at is actually what you mentioned before: mimicking function calls. That this is even a goal (or considered a good thing by accident), rather than being benign (at best), is where things go wrong. Macros are code generators (or, I suppose, code "removers"). They are not even remotely tied to the syntax of the underlying language. When a macro (that isn't a general purpose pp-lib macro) is defined, all too often the author designs the macro such that its invocation appears to fit into the syntax of the underlying language. For example, hacks such as using do/while loops in order to require a semicolon outside the macro.

#define REGISTER(...) \
    do { \
        /* whatever */ \
    } while (false) \
    /**/

int main() {
    REGISTER(a, b, c); // <--
}

That point of view is evil. Demanding that competent C++ programmers have a proper view of macros is not demanding that they know all of the tricks that I or others know to manipulate the preprocessor. However, it does require them to know that macros are a fundamentally different thing with fundamentally different mechanics (even if they don't understand those mechanics to the level of detail that I or others do).
So, what's the problem with SOME_MACRO((a, b, c)) instead? I.e. a tuple rather than just variadic content?
By the same token what is wrong with SOME_MACRO(a,b,c) ?
The interface is (potentially) silently broken when some other argument needs to be passed. I.e. MACRO(a, b, c) => MACRO(id, a, b, c) is potentially a silent break. MACRO((a, b, c)) => MACRO(id, (a, b, c)) is not.
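An illustrative sketch of that silent break (all macro names hypothetical):

#define MACRO_V1(...)     /* treats every argument as an element */
MACRO_V1(a, b, c)         // a, b, and c are all elements

#define MACRO_V2(id, ...) /* a later revision adds a leading id */
MACRO_V2(a, b, c)         // still compiles: 'a' silently becomes the id

#define MACRO_T2(id, tuple) /* ... */
MACRO_T2((a, b, c))         // tuple form: too few arguments, fails loudly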
The point is not that there is something wrong with SOME_MACRO((a,b,c)) or even with SOME_MACRO((a)(b)(c)) but that programmers using macros which a library provides seem more comfortable with SOME_MACRO(a,b,c). The reason for this is most probably because the latter mimics function call syntax and a library implementer is looking syntactically for as much sameness as possible in order to promote ease of use ( or memorability ).
Another way to provide comfort is via education. Hardcore pp-metaprogramming knowledge is not required for this.
One may just as well ask why C++ adopted variadic macros as part of the next proposed standard ? I believe it was solely for syntactic ease of use with a variable number of macro tokens. It sure wasn't because C++ provided much functionality to deal with the variadic data per se.
Actually, it is more likely just for the sake of compatibility with C. I originally proposed adding them at a committee meeting in Redmond. That proposal got rolled into a general "compatibility with C preprocessor" proposal.
Believe it or not, I actually use preprocessor metaprogramming (as well as template metaprogramming) quite heavily in my own code.
Nah ! I don't believe it <g>.
Shocking, but true!
The only time that I would provide a macro that uses the variadic data as an element sequence (that's not a general-purpose pp-library macro) as an interface is if it is absolutely certain that it will never become higher-order and that the number of arguments before the variadic data will never change.
Fair enough. But I believe some other proposed Boost libraries, besides my own TTI library, are using variadic macro syntax in their public interface. Why should they not do that and take advantage of pp-lib at the same time ?
As I said before, I don't have a problem with these particular macros, i.e. DATA_TO_SEQ(__VA_ARGS__). I only have a problem with ALGORITHM_B(...) => ALGORITHM_A(DATA_TO_SEQ(__VA_ARGS__)). There are lots of other uses of DATA_TO_SEQ (et al) and element extraction (ala TUPLE_ELEM) besides this. As long as the pp-lib doesn't provide definitions like ALGORITHM_B, I don't have a problem with it. If users provide that themselves, in spite of a lack of endorsement for the library, the ramifications are on their own heads.

The following are the chaos-pp analogs of the macros that you have in the sandbox right now. If they were added to the pp-lib (where they belong if they are worthwhile) these are the names that I would use (not because I'm wedded to the names, but for symmetry with existing practice).

----
BOOST_VMD_DATA_SIZE(...)

Chaos has no direct analog of this ATM. Normally the functionality is produced via CHAOS_PP_TUPLE_SIZE((...)). However, if it existed it would be called CHAOS_PP_VARIADIC_SIZE(...). If I (or you) added it to the pp-lib, I would prefer it be called BOOST_PP_VARIADIC_SIZE(...) for symmetry with existing practice.

----
BOOST_VMD_DATA_ELEM(n, ...)

The direct analog of this in Chaos is CHAOS_PP_VARIADIC_ELEM(n, ...). If I (or you) added it to the pp-lib, I would prefer it be called BOOST_PP_VARIADIC_SIZE(...).

----
BOOST_VMD_DATA_TO_PP_TUPLE(...)

Chaos has no direct analog of this because (as you know), it's pointless (unless you're doing something internally for compiler workarounds).

-----
BOOST_VMD_DATA_TO_PP_{ARRAY,LIST,SEQ}(...)

Chaos has no direct analog for any of these because it doesn't consider variadic data to be a sequential data structure. Chaos does have conversion macros from one data structure to another. However, the basic functionality to do that is implemented by one generic algorithm:

CHAOS_PP_CAST(CHAOS_PP_SEQ, (CHAOS_PP_TUPLE) (a, b, c)) => (a)(b)(c)
CHAOS_PP_CAST(CHAOS_PP_LIST, (CHAOS_PP_TUPLE) (a, b, c)) => (a, (b, (c, ...)))

The library normally only supplies direct non-generic versions when there is a reasonable efficiency gain over the generic version or when there is an interesting implementation mechanic that I want to preserve. There are a lot of these direct conversions for CHAOS_PP_SEQ and CHAOS_PP_TUPLE. The current set is:

CHAOS_PP_ARRAY_TO_LIST CHAOS_PP_ARRAY_TO_SEQ CHAOS_PP_ARRAY_TO_STRING CHAOS_PP_ARRAY_TO_TUPLE
CHAOS_PP_SEQ_TO_ARRAY CHAOS_PP_SEQ_TO_LIST CHAOS_PP_SEQ_TO_STRING CHAOS_PP_SEQ_TO_TUPLE
CHAOS_PP_TUPLE_TO_LIST CHAOS_PP_TUPLE_TO_SEQ CHAOS_PP_TUPLE_TO_STRING

The primary reason that there is no direct conversion (other than (__VA_ARGS__)) to the other data types is that with variadic data there is no such thing as an empty sequence and there is no safe way to define a rogue value for it without artificially limiting what the data can contain.

----
BOOST_VMD_PP_TUPLE_SIZE(tuple)

The direct analog of this in Chaos is CHAOS_PP_TUPLE_SIZE(tuple). If I (or you) added this to the pp-lib, it should be just BOOST_PP_TUPLE_SIZE(tuple).

----
BOOST_VMD_PP_TUPLE_ELEM(n, tuple)

The direct analog of this in Chaos is CHAOS_PP_TUPLE_ELEM(n, tuple) where it ignores the 'n' if variadics are enabled.

CHAOS_PP_TUPLE_ELEM(?, 0, (a, b, c)) => a

If this was added to the pp-lib, I would prefer that it be done the same way. However, another way would be to "overload" the macro (which has no direct analog in the pp-lib or your library). Basically, that ends up being something like:

#if (variadics enabled)

#define BOOST_PP_TUPLE_ELEM(...) \
    BOOST_PP_CAT( \
        BOOST_PP_TUPLE_ELEM_, \
        BOOST_PP_VARIADIC_SIZE(__VA_ARGS__) \
    )(__VA_ARGS__) \
    /**/
#define BOOST_PP_TUPLE_ELEM_2(n, tuple) // ...
#define BOOST_PP_TUPLE_ELEM_3(size, n, tuple) \
    BOOST_PP_TUPLE_ELEM_2(n, tuple) \
    /**/

#else

#define BOOST_PP_TUPLE_ELEM(size, n, tuple) // old way

#endif

-----
BOOST_VMD_PP_TUPLE_REM_CTOR(tuple)

The direct analog of this in Chaos is CHAOS_PP_TUPLE_REM_CTOR (which, BTW, stands for "constructed parentheses removal"). The "constructed" refers to when the tuple is the direct result of a macro expansion: CHAOS_PP_TUPLE_REM_CTOR(?, MACRO()). Otherwise, if you already have a tuple 't', just use CHAOS_PP_REM t. ATM, Chaos has CHAOS_PP_TUPLE_REM, CHAOS_PP_TUPLE_REM_CTOR, and CHAOS_PP_REM. It does not have a CHAOS_PP_REM_CTOR.

----
BOOST_VMD_PP_TUPLE_{REVERSE,TO_{LIST,SEQ}}(tuple)

If added to the pp-lib, these should work the same as BOOST_PP_TUPLE_ELEM above: either an ignored size argument or an overload.

----
BOOST_VMD_PP_TUPLE_TO_DATA(tuple)

Isn't this just REM_CTOR?

----
BOOST_VMD_PP_{ARRAY,LIST,SEQ}_TO_DATA(ds)

I don't see the point of these--particularly with the pp-lib because of compiler issues. These already exist as BOOST_PP_LIST_ENUM and SEQ_ENUM. There isn't an ARRAY_ENUM currently, but it's easy to implement. The distinction between these names and the *_TO_DATA variety is that these are primary output macros. If an attempt is made by a user to use the result as macro arguments, all of the issues with compilers (e.g. VC++) will be pulled into the user's domain.

----
To summarize what I would prefer:

BOOST_VMD_DATA_SIZE(...) -> BOOST_PP_VARIADIC_SIZE(...)
BOOST_VMD_DATA_ELEM(n, ...) -> BOOST_PP_VARIADIC_ELEM(n, ...)
BOOST_VMD_DATA_TO_PP_TUPLE(...) -> (nothing, unless workarounds are necessary)
BOOST_VMD_DATA_TO_PP_ARRAY(...) -> BOOST_PP_TUPLE_TO_ARRAY((...)) or BOOST_PP_TUPLE_TO_ARRAY(size, (...))
BOOST_VMD_DATA_TO_PP_LIST(...) -> BOOST_PP_TUPLE_TO_LIST((...)) or BOOST_PP_TUPLE_TO_LIST(size, (...))
BOOST_VMD_DATA_TO_PP_SEQ(...) -> BOOST_PP_TUPLE_TO_SEQ((...)) or BOOST_PP_TUPLE_TO_SEQ(size, (...))
BOOST_VMD_PP_TUPLE_SIZE(tuple) -> BOOST_PP_TUPLE_SIZE(tuple) or BOOST_PP_TUPLE_SIZE(size, tuple)
BOOST_VMD_PP_TUPLE_ELEM(n, tuple) -> BOOST_PP_TUPLE_ELEM(n, tuple) or BOOST_PP_TUPLE_ELEM(size, n, tuple)
BOOST_VMD_PP_TUPLE_REM_CTOR(tuple) -> BOOST_PP_TUPLE_REM_CTOR(tuple) or BOOST_PP_TUPLE_REM_CTOR(size, tuple)
BOOST_VMD_PP_TUPLE_REVERSE(tuple) -> BOOST_PP_TUPLE_REVERSE(tuple) or BOOST_PP_TUPLE_REVERSE(size, tuple)
BOOST_VMD_PP_TUPLE_TO_LIST(tuple) -> BOOST_PP_TUPLE_TO_LIST(tuple) or BOOST_PP_TUPLE_TO_LIST(size, tuple)
BOOST_VMD_PP_TUPLE_TO_SEQ(tuple) -> BOOST_PP_TUPLE_TO_SEQ(tuple) or BOOST_PP_TUPLE_TO_SEQ(size, tuple)
BOOST_VMD_PP_TUPLE_TO_DATA(tuple) -> BOOST_PP_TUPLE_REM_CTOR(tuple) or BOOST_PP_TUPLE_REM_CTOR(size, tuple)
BOOST_VMD_PP_ARRAY_TO_DATA(array) -> BOOST_PP_ARRAY_ENUM(array)
BOOST_VMD_PP_LIST_TO_DATA(list) -> BOOST_PP_LIST_ENUM(list)
BOOST_VMD_PP_SEQ_TO_DATA(seq) -> BOOST_PP_SEQ_ENUM(seq)

also add: BOOST_PP_REM, BOOST_PP_EAT

The basic gist is to add the low-level variadic stuff and adapt the existing tuple stuff to not require the size.

Regards, Paul Mensonides

On Mon, Feb 21, 2011 at 3:57 AM, Paul Mensonides <pmenso57@comcast.net> wrote:
To summarize what I would prefer:
BOOST_VMD_DATA_SIZE(...) -> BOOST_PP_VARIADIC_SIZE(...)
BOOST_VMD_DATA_ELEM(n, ...) -> BOOST_PP_VARIADIC_ELEM(n, ...)
BOOST_VMD_DATA_TO_PP_TUPLE(...) -> (nothing, unless workarounds are necessary)
BOOST_VMD_DATA_TO_PP_ARRAY(...) -> BOOST_PP_TUPLE_TO_ARRAY((...)) or BOOST_PP_TUPLE_TO_ARRAY(size, (...))
BOOST_VMD_DATA_TO_PP_LIST(...) -> BOOST_PP_TUPLE_TO_LIST((...)) or BOOST_PP_TUPLE_TO_LIST(size, (...))
BOOST_VMD_DATA_TO_PP_SEQ(...) -> BOOST_PP_TUPLE_TO_SEQ((...)) or BOOST_PP_TUPLE_TO_SEQ(size, (...))
Sorry if I am a bit slow and ask for clarifications. Would this work as follows?

a) If NO_VARIADIC, PP_TUPLE_TO_SEQ always requires 2 arguments for both the size and the tuple -- as in PP_TUPLE_TO_SEQ(size, (...)).

b) If VARIADIC instead, the same PP_TUPLE_TO_SEQ macro can either accept 1 argument for the tuple -- as in PP_TUPLE_TO_SEQ((...)) -- or accept 2 arguments for both the size and the tuple as in a).

Is this correct? Thanks a lot!
BOOST_VMD_PP_TUPLE_SIZE(tuple) -> BOOST_PP_TUPLE_SIZE(tuple) or BOOST_PP_TUPLE_SIZE(size, tuple)
BOOST_VMD_PP_TUPLE_ELEM(n, tuple) -> BOOST_PP_TUPLE_ELEM(n, tuple) or BOOST_PP_TUPLE_ELEM(size, n, tuple)
BOOST_VMD_PP_TUPLE_REM_CTOR(tuple) -> BOOST_PP_TUPLE_REM_CTOR(tuple) or BOOST_PP_TUPLE_REM_CTOR(size, tuple)
BOOST_VMD_PP_TUPLE_REVERSE(tuple) -> BOOST_PP_TUPLE_REVERSE(tuple) or BOOST_PP_TUPLE_REVERSE(size, tuple)
BOOST_VMD_PP_TUPLE_TO_LIST(tuple) -> BOOST_PP_TUPLE_TO_LIST(tuple) or BOOST_PP_TUPLE_TO_LIST(size, tuple)
BOOST_VMD_PP_TUPLE_TO_SEQ(tuple) -> BOOST_PP_TUPLE_TO_SEQ(tuple) or BOOST_PP_TUPLE_TO_SEQ(size, tuple)
BOOST_VMD_PP_TUPLE_TO_DATA(tuple) -> BOOST_PP_TUPLE_REM_CTOR(tuple) or BOOST_PP_TUPLE_REM_CTOR(size, tuple)
BOOST_VMD_PP_ARRAY_TO_DATA(array) -> BOOST_PP_ARRAY_ENUM(array)
BOOST_VMD_PP_LIST_TO_DATA(list) -> BOOST_PP_LIST_ENUM(list)
BOOST_VMD_PP_SEQ_TO_DATA(seq) -> BOOST_PP_SEQ_ENUM(seq)
also add: BOOST_PP_REM, BOOST_PP_EAT
The basic gist is to add the low-level variadic stuff and adapt the existing tuple stuff to not require the size.
-- Lorenzo

On Mon, 21 Feb 2011 11:57:54 -0500, Lorenzo Caminiti wrote:
Sorry if I am a bit slow and ask for clarifications. Would this work as follows?
a) If NO_VARIADIC, PP_TUPLE_TO_SEQ always requires 2 arguments for both the size and the tuple -- as in PP_TUPLE_TO_SEQ(size, (...)).
b) If VARIADIC instead, the same PP_TUPLE_TO_SEQ macro can either accept 1 argument for the tuple -- as in PP_TUPLE_TO_SEQ((...)) -- or accept 2 arguments for both the size and the tuple as in a).
Is this correct? Thanks a lot!
Yes. With variadics you can engineer it so that the size argument can be elided. -Paul
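Under that proposed "overloading", both spellings would be accepted and mean the same thing (a sketch of the intended usage, not current pp-lib behavior):

BOOST_PP_TUPLE_TO_SEQ((a, b, c))    // size elided           => (a)(b)(c)
BOOST_PP_TUPLE_TO_SEQ(3, (a, b, c)) // old two-argument form => (a)(b)(c)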

On 2/21/2011 3:57 AM, Paul Mensonides wrote:
On Sun, 20 Feb 2011 13:55:47 -0500, Edward Diener wrote:
On 2/20/2011 12:19 PM, Paul Mensonides wrote:
All uses of macros are metaprogramming. The more programmers understand that, the less "macro problems" there will be.
A library presents an interface where one uses a macro to do "something". The programmer uses that macro to accomplish some task in that library. The use of that macro is not "metaprogramming" as I understand it, although I think this is more an argument about a word's meaning than any real disagreement. Imagine a relative newbie in C++ using a macro and then being told he is "metaprogramming". No doubt he would feel a little proud of that fact <g>, but I doubt that would prove much about his real "metaprogramming" knowledge or skills, not that I think "metaprogramming" is that abstruse.
Undoubtedly a C++ programmer should understand macros. But whether he understands it at his own level and whether he understands it as you do are different things.
What I'm getting at is actually what you mentioned before: mimicking function calls. That this is even a goal (or considered a good thing by accident), rather than being benign (at best), is where things go wrong. Macros are code generators (or, I suppose, code "removers"). They are not even remotely tied to the syntax of the underlying language. When a macro (that isn't a general purpose pp-lib macro) is defined, all too often the author designs the macro such that its invocation appears to fit into the syntax of the underlying language. For example, hacks such as using do/while loops in order to require a semicolon outside the macro.
#define REGISTER(...) \
    do { \
        /* whatever */ \
    } while (false) \
    /**/
int main() {
    REGISTER(a, b, c); // <--
}
That point of view is evil.
Demanding that competent C++ programmers have a proper view of macros is not demanding that they know all of the tricks that I or others know to manipulate the preprocessor. However, it does require them to know that macros are a fundamentally different thing with fundamentally different mechanics (even if they don't understand those mechanics to the level of detail that I or others do).
So, what's the problem with SOME_MACRO((a, b, c)) instead? I.e. a tuple rather than just variadic content?
By the same token what is wrong with SOME_MACRO(a,b,c) ?
The interface is (potentially) silently broken when some other argument needs to be passed. I.e. MACRO(a, b, c) => MACRO(id, a, b, c) is potentially a silent break. MACRO((a, b, c)) => MACRO(id, (a, b, c)) is not.
The point is not that there is something wrong with SOME_MACRO((a,b,c)) or even with SOME_MACRO((a)(b)(c)) but that programmers using macros which a library provides seem more comfortable with SOME_MACRO(a,b,c). The reason for this is most probably because the latter mimics function call syntax and a library implementer is looking syntactically for as much sameness as possible in order to promote ease of use ( or memorability ).
Another way to provide comfort is via education. Hardcore pp-metaprogramming knowledge is not required for this.
Providing function-like syntax for invoking a macro with a variable number of parameters, as an alternative to pp-lib data syntax, is important to end-users and library developers if just for the sake of familiarity and regularity. A programmer using a "call" syntax which may be a macro or a function is not going to stop and say: this is a function so I can call it as 'somefunction(a,b,c)', this is a macro and therefore I must call it as 'somemacro((a,b,c))'. Instead he will ask that the same syntax be applied to both. You seem to feel this is wrong and that someone invoking a macro should realize that it is a macro ( and normally does because it is capital letters ) and therefore be prepared to use a different syntax, but I think that regularity in this respect is to be valued.
One may just as well ask why C++ adopted variadic macros as part of the next proposed standard ? I believe it was solely for syntactic ease of use with a variable number of macro tokens. It sure wasn't because C++ provided much functionality to deal with the variadic data per se.
Actually, it is more likely just for the sake of compatibility with C. I originally proposed adding them at a committee meeting in Redmond. That proposal got rolled into a general "compatibility with C preprocessor" proposal.
Believe it or not, I actually use preprocessor metaprogramming (as well as template metaprogramming) quite heavily in my own code.
Nah ! I don't believe it<g>.
Shocking, but true!
The only time that I would provide a macro that uses the variadic data as an element sequence (that's not a general-purpose pp-library macro) as an interface is if it is absolutely certain that it will never become higher-order and that the number of arguments before the variadic data will never change.
Fair enough. But I believe some other proposed Boost libraries, besides my own TTI library, are using variadic macro syntax in their public interface. Why should they not do that and take advantage of pp-lib at the same time ?
As I said before, I don't have a problem with these particular macros, i.e. DATA_TO_SEQ(__VA_ARGS__). I only have a problem with ALGORITHM_B(...) => ALGORITHM_A(DATA_TO_SEQ(__VA_ARGS__)). There are lots of other uses of DATA_TO_SEQ (et al) and element extraction (ala TUPLE_ELEM) besides this. As long as the pp-lib doesn't provide definitions like ALGORITHM_B, I don't have a problem with it. If users provide that themselves, in spite of a lack of endorsement for the library, the ramifications are on their own heads.
The following are the chaos-pp analogs of the macros that you have in the sandbox right now. If they were added to the pp-lib (where they belong if they are worthwhile) these are the names that I would use (not because I'm wedded to the names, but for symmetry with existing practice).
----
BOOST_VMD_DATA_SIZE(...)
Chaos has no direct analog of this ATM. Normally the functionality is produced via CHAOS_PP_TUPLE_SIZE((...)). However, if it existed it would be called CHAOS_PP_VARIADIC_SIZE(...). If I (or you) added it to the pp-lib, I would prefer it be called BOOST_PP_VARIADIC_SIZE(...) for symmetry with existing practice.
----
BOOST_VMD_DATA_ELEM(n, ...)
The direct analog of this in Chaos is CHAOS_PP_VARIADIC_ELEM(n, ...). If I (or you) added it to the pp-lib, I would prefer it be called BOOST_PP_VARIADIC_SIZE(...).
Did you mean BOOST_PP_VARIADIC_ELEM(n,...) ?
----
BOOST_VMD_DATA_TO_PP_TUPLE(...)
Chaos has no direct analog of this because (as you know), it's pointless (unless you're doing something internally for compiler workarounds).
I do not think it is pointless. I am going variadics -> tuple. The end-user wants regularity, even if it's a no-brainer to write '( __VA_ARGS__ )'. I think this is where we may differ, not technically, but in our view of what should be presented to an end-user. I really value regularity ( and orthogonality ) even when something is trivial. My view is generally that if it costs the library developer little and makes the end-user see a design as "regular", it is worth implementing as long as it is part of the design, even if it is utterly trivial. You may feel I am cosseting an end-user, and perhaps you are right. But as an end-user myself I generally want things as regular as possible even if it causes some very small extra compile time.
-----
BOOST_VMD_DATA_TO_PP_{ARRAY,LIST,SEQ}(...)
Chaos has no direct analog for any of these because it doesn't consider variadic data to be a sequential data structure. Chaos does have conversion macros from one data structure to another. However, the basic functionality to do that is implemented by one generic algorithm:
CHAOS_PP_CAST(CHAOS_PP_SEQ, (CHAOS_PP_TUPLE) (a, b, c)) => (a)(b)(c)
CHAOS_PP_CAST(CHAOS_PP_LIST, (CHAOS_PP_TUPLE) (a, b, c)) => (a, (b, (c, ...)))
The library normally only supplies direct non-generic versions when there is a reasonable efficiency gain over the generic version or when there is an interesting implementation mechanic that I want to preserve. There are a lot of these direct conversions for CHAOS_PP_SEQ and CHAOS_PP_TUPLE. The current set is:
CHAOS_PP_ARRAY_TO_LIST CHAOS_PP_ARRAY_TO_SEQ CHAOS_PP_ARRAY_TO_STRING CHAOS_PP_ARRAY_TO_TUPLE
CHAOS_PP_SEQ_TO_ARRAY CHAOS_PP_SEQ_TO_LIST CHAOS_PP_SEQ_TO_STRING CHAOS_PP_SEQ_TO_TUPLE
CHAOS_PP_TUPLE_TO_LIST CHAOS_PP_TUPLE_TO_SEQ CHAOS_PP_TUPLE_TO_STRING
The primary reason that there is no direct conversion (other than (__VA_ARGS__)) to the other data types is that with variadic data there is no such thing as an empty sequence and there is no safe way to define a rogue value for it without artificially limiting what the data can contain.
I understand this. My point of view is that although it's theoretically a user error to use an empty sequence with a variadic macro even if a corresponding pp-lib data type can be empty, in reality it should be allowed, since there is no way to detect it. I understand your point of view that you want to do everything possible to eliminate user error, and I agree with it, but sometimes nothing one can do in a language is going to work ( as you have pointed out with variadics and an empty parameter ). In that case I do not see why functionality should not be provided anyway; if the user then uses it accidentally it becomes his problem. To not provide functionality because a user error, which cannot be detected, might occur is not the way I usually view software design ( sorry to sound dogmatic, because I am normally very practical as a programmer ). In the case of converting from variadics to pp data types, if the end-user passes an empty sequence, and the pp-lib data type supports an empty sequence, then it is fine with me.

Going the other way from a pp data type to variadics, I admit I did not consider the case in my current library where the pp data type is empty. Since this can be detected on the pp data type side, I think I can put out a BOOST_PP_ASSERT_MSG(cond, msg) in that case, or I can choose to ignore it if that is what I decide to do. But in either case it is a detectable problem.
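A small sketch of why the empty case is undetectable on the variadic side: any argument-counting macro sees an invocation with no tokens as one (empty) argument, never as zero (VSIZE is a hypothetical counter capped at three arguments):

#define VSIZE(...) VSIZE_I(__VA_ARGS__, 3, 2, 1,)
#define VSIZE_I(_1, _2, _3, n, ...) n

VSIZE(a, b) // => 2
VSIZE(a)    // => 1
VSIZE()     // => 1 as well: the single argument is merely empty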
----
BOOST_VMD_PP_TUPLE_SIZE(tuple)
The direct analog of this in Chaos is CHAOS_PP_TUPLE_SIZE(tuple). If I (or you) added this to the pp-lib, it should be just BOOST_PP_TUPLE_SIZE(tuple).
----
BOOST_VMD_PP_TUPLE_ELEM(n, tuple)
The direct analog of this in Chaos is CHAOS_PP_TUPLE_ELEM(n, tuple) where it ignores the 'n' if variadics are enabled.
The 'n' tells which tuple element to return. How can it be ignored ?
CHAOS_PP_TUPLE_ELEM(?, 0, (a, b, c)) => a
OK, I see. The '?' is just a marker for the size of the tuple when there are variadics, and must be specified when there are not.
If this was added to the pp-lib, I would prefer that it be done the same way. However, another way would be to "overload" the macro (which has no direct analog in the pp-lib or your library). Basically, that ends up being something like:
#if (variadics enabled)
That is an interesting technique below. Bravo ! As long as it is documented, since it confused me until I took a few more looks and realized what you are doing.
#define BOOST_PP_TUPLE_ELEM(...) \
    BOOST_PP_CAT( \
        BOOST_PP_TUPLE_ELEM_, \
        BOOST_PP_VARIADIC_SIZE(__VA_ARGS__) \
    )(__VA_ARGS__) \
    /**/
#define BOOST_PP_TUPLE_ELEM_2(n, tuple) // ...
#define BOOST_PP_TUPLE_ELEM_3(size, n, tuple) \
    BOOST_PP_TUPLE_ELEM_2(n, tuple) \
    /**/
#else
#define BOOST_PP_TUPLE_ELEM(size, n, tuple) // old way
#endif
-----
BOOST_VMD_PP_TUPLE_REM_CTOR(tuple)
The direct analog of this in Chaos is CHAOS_PP_TUPLE_REM_CTOR (which, BTW, stands for "constructed parentheses removal"). The "constructed" refers to when the tuple is the direct result of a macro expansion: CHAOS_PP_TUPLE_REM_CTOR(?, MACRO()). Otherwise, if you already have a tuple 't', just use CHAOS_PP_REM t. ATM, Chaos has CHAOS_PP_TUPLE_REM, CHAOS_PP_TUPLE_REM_CTOR, and CHAOS_PP_REM. It does not have a CHAOS_PP_REM_CTOR.
----
BOOST_VMD_PP_TUPLE_{REVERSE,TO_{LIST,SEQ}}(tuple)
If added to the pp-lib, should be the same as BOOST_PP_TUPLE_ELEM above. Either ignored size argument or overload.
----
BOOST_VMD_PP_TUPLE_TO_DATA(tuple)
Isn't this just REM_CTOR?
Sure. But again I value regularity even when something is trivial.
----
BOOST_VMD_PP_{ARRAY,LIST,SEQ}_TO_DATA(ds)
I don't see the point of these--particularly with the pp-lib because of compiler issues. These already exist as BOOST_PP_LIST_ENUM and SEQ_ENUM. There isn't an ARRAY_ENUM currently, but it's easy to implement. The distinction between these names and the *_TO_DATA variety is that these are primary output macros.
That's just a name. Names can always be changed. My macros are the same as yours in that they are output macros which convert the pp-lib data types to variadic data.
If an attempt is made by a user to use the result as macro arguments, all of the issues with compilers (e.g. VC++) will be pulled into the user's domain.
The returned variadic data is no different from what the user will enter himself when invoking a variadic data macro. The only compiler issue I see is the empty variadic data one. I do see the point of these macros again as a simple matter of regularity.
----
To summarize what I would prefer:
BOOST_VMD_DATA_SIZE(...) -> BOOST_PP_VARIADIC_SIZE(...)
BOOST_VMD_DATA_ELEM(n, ...) -> BOOST_PP_VARIADIC_ELEM(n, ...)
BOOST_VMD_DATA_TO_PP_TUPLE(...) -> (nothing, unless workarounds are necessary)
I know it's trivial, but I still think it should exist.
BOOST_VMD_DATA_TO_PP_ARRAY(...) -> BOOST_PP_TUPLE_TO_ARRAY((...)) or BOOST_PP_TUPLE_TO_ARRAY(size, (...))
BOOST_VMD_DATA_TO_PP_LIST(...) -> BOOST_PP_TUPLE_TO_LIST((...)) or BOOST_PP_TUPLE_TO_LIST(size, (...))
BOOST_VMD_DATA_TO_PP_SEQ(...) -> BOOST_PP_TUPLE_TO_SEQ((...)) or BOOST_PP_TUPLE_TO_SEQ(size, (...))
For the previous three, see the above discussion about using SOME_MACRO(a,b,c) vs. SOME_MACRO((a,b,c)). I do understand your reason for this as a means of getting around the empty-variadics user error. But I still feel that treating variadic data here as tuples is wrong from the end-user point of view even though it elegantly solves the empty variadic data problem. In my internal code I am solving the problem in the exact same way, but I am keeping the syntax as SOME_MACRO(a,b,c) as opposed to SOME_MACRO((a,b,c)). So I would say, please consider using the SOME_MACRO(a,b,c) form instead, as I am doing. I would even say to change my names to:

BOOST_PP_ENUM_TUPLE(...)
BOOST_PP_ENUM_ARRAY(...)
BOOST_PP_ENUM_LIST(...)
BOOST_PP_ENUM_SEQ(...)

in order to match the ENUM names you already have in pp-lib for converting the other way and which you have below. But keep the SOME_MACRO(a,b,c)-like syntax.
BOOST_VMD_PP_TUPLE_SIZE(tuple) -> BOOST_PP_TUPLE_SIZE(tuple) or BOOST_PP_TUPLE_SIZE(size, tuple)
BOOST_VMD_PP_TUPLE_ELEM(n, tuple) -> BOOST_PP_TUPLE_ELEM(n, tuple) or BOOST_PP_TUPLE_ELEM(size, n, tuple)
BOOST_VMD_PP_TUPLE_REM_CTOR(tuple) -> BOOST_PP_TUPLE_REM_CTOR(tuple) or BOOST_PP_TUPLE_REM_CTOR(size, tuple)
BOOST_VMD_PP_TUPLE_REVERSE(tuple) -> BOOST_PP_TUPLE_REVERSE(tuple) or BOOST_PP_TUPLE_REVERSE(size, tuple)
BOOST_VMD_PP_TUPLE_TO_LIST(tuple) -> BOOST_PP_TUPLE_TO_LIST(tuple) or BOOST_PP_TUPLE_TO_LIST(size, tuple)
BOOST_VMD_PP_TUPLE_TO_SEQ(tuple) -> BOOST_PP_TUPLE_TO_SEQ(tuple) or BOOST_PP_TUPLE_TO_SEQ(size, tuple)
BOOST_VMD_PP_TUPLE_TO_DATA(tuple) -> BOOST_PP_TUPLE_REM_CTOR(tuple) or BOOST_PP_TUPLE_REM_CTOR(size, tuple)
Again I value the orthogonality of the pp-data to variadic data idea in common names. BOOST_PP_TUPLE_REM_CTOR does not suggest that to the end-user. How about:

#define BOOST_PP_TUPLE_ENUM(tuple) \
    BOOST_PP_TUPLE_REM_CTOR(tuple)

in order to mimic your three following names.
BOOST_VMD_PP_ARRAY_TO_DATA(array) -> BOOST_PP_ARRAY_ENUM(array)
BOOST_VMD_PP_LIST_TO_DATA(list) -> BOOST_PP_LIST_ENUM(list)
BOOST_VMD_PP_SEQ_TO_DATA(seq) -> BOOST_PP_SEQ_ENUM(seq)
also add: BOOST_PP_REM, BOOST_PP_EAT
OK.
The basic gist is to add the low-level variadic stuff and adapt the existing tuple stuff to not require the size.
I think our only real disagreements can be summed up as:

I want the end-user to view variadic data as such from a perceptual point of view, even with the empty-variadics-is-an-error-which-can-not-be-caught problem. That is why I supply the various conversions from variadic sequences to pp-lib types and back explicitly, and I want some regularity in names reflecting that, although I do not insist on my own names.

You feel that variadics as input for conversion should in general be treated as a pp-lib tuple, since creating a tuple from variadic macro data is trivial.

On Mon, 21 Feb 2011 12:57:05 -0500, Edward Diener wrote:
On 2/21/2011 3:57 AM, Paul Mensonides wrote:
Another way to provide comfort is via education. Hardcore pp-metaprogramming knowledge is not required for this.
Providing function-like syntax for invoking a macro with a variable number of parameters, as an alternative to pp-lib data syntax, is important to end-users and library developers if just for the sake of familiarity and regularity. A programmer using a "call" syntax which may be a macro or a function is not going to stop and say: this is a function so I can call it as 'somefunction(a,b,c)', this is a macro and therefore I must call it as 'somemacro((a,b,c))'. Instead he will ask that the same syntax be applied to both. You seem to feel this is wrong and that someone invoking a macro should realize that it is a macro ( and normally does because it is capital letters ) and therefore be prepared to use a different syntax, but I think that regularity in this respect is to be valued.
Most C/C++ developers perceive macro expansion mechanics to be similar to function call mechanics. I.e. where a user "calls" a macro A, and that macro "calls" the macro B, the macro B "returns" something, which is, in turn "returned" by A. That is fundamentally *not* how macro expansion behaves. The perceived similarity, where there is none (going all the way back to way before preprocessor metaprogramming) is how developers have gotten into so much trouble on account of macros. I take serious issue with anything that intentionally perpetuates this mentality. It is one thing if the syntax required is the same by coincidence. It's another thing altogether when something is done to intentionally make it so.
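A standard-mechanics illustration of the difference: during rescanning a macro name is "painted" inside its own expansion, so mutual references stop rather than recursing the way function calls would:

#define F(x) G(x)
#define G(x) F(x)

F(1) // => G(1) => F(1); F is painted there, so expansion stops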
----
BOOST_VMD_DATA_ELEM(n, ...)
The direct analog of this in Chaos is CHAOS_PP_VARIADIC_ELEM(n, ...). If I (or you) added it to the pp-lib, I would prefer it be called BOOST_PP_VARIADIC_SIZE(...).
Did you mean BOOST_PP_VARIADIC_ELEM(n,...) ?
Yes, sorry!
----
BOOST_VMD_DATA_TO_PP_TUPLE(...)
Chaos has no direct analog of this because (as you know), it's pointless (unless you're doing something internally for compiler workarounds).
I do not think it is pointless. I am going variadics -> tuple. The end-user wants regularity, even if it's a no-brainer to write '( __VA_ARGS__ )'.
I think this is where we may differ, not technically, but in our view of what should be presented to an end-user. I really value regularity ( and orthogonality ) even when something is trivial. My view is generally that if it costs the library developer little and makes the end-user see a design as "regular", it is worth implementing as long as it is part of the design even if it is utterly trivial. You may feel I am cosseting an end-user, and perhaps you are right. But as an end-user myself I generally want things as regular as possible even if it causes some very small extra compile time.
It isn't the end of the world to provide it for the sake of symmetry.
The primary reason that there is no direct conversion (other than (__VA_ARGS__)) to the other data types is that with variadic data there is no such thing as an empty sequence and there is no safe way to define a rogue value for it without artificially limiting what the data can contain.
I understand this.
My point of view is that although it's theoretically a user error to use an empty sequence for a variadic macro even if a corresponding pp-lib data type can be empty, in reality it should be allowed since there is no way to detect it. I understand your point of view that you want to do everything possible to eliminate user error, and I agree with it, but sometimes nothing one can do in a language is going to work ( as you have pointed out with variadics and an empty parameter ).
That is not what I'm referring to. To clarify, using the STL as an example, a typical algorithm processes a finite sequence by progressively closing a range of iterators [i,j). Effectively, this iterator range is a "view" of a sequence of elements. (The underlying data structure also has its own natural view.) However, with the preprocessor, there is no indirection and there is no imperative functionality (i.e. assignment). Because of that, you cannot form views (in whatever form). Instead, you have to embed the entire data structure.

At that point, you can do one of two things. Either use the data structure itself as your "view" and progressively make it smaller as you "iterate", or embed the data structure into another data structure which lets you add an arbitrary "terminal" state--the equivalent of the iterator range [j,j). For variadic content as a sequence, you cannot directly use the variadic content as the view because you cannot encode this terminal state. Instead, you'd have to go with option two, but then you pay the price for it elsewhere.
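A sketch of the first option (the data structure as its own shrinking "view"); SEQ_TAIL and SEQ_TAIL_EAT are illustrative names:

#define SEQ_TAIL(seq) SEQ_TAIL_EAT seq
#define SEQ_TAIL_EAT(x) // expands to nothing, swallowing the first element

SEQ_TAIL((a)(b)(c)) // => (b)(c)
SEQ_TAIL((c))       // => nothing left; with raw variadic content there is
                    //    no token sequence that could represent this state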
Going the other way from a pp data type to variadics, I admit I did not consider the case in my current library where the pp data type is empty. Since this can be detected on the pp data type side, I think I can put out a BOOST_PP_ASSERT_MSG(cond, msg) in that case, or I can choose to ignore it if that is what I decide to do. But in either case it is a detectable problem.
For the most part, however, these macros already exist. They are named (e.g.) BOOST_PP_SEQ_ENUM. However, some do not exist, such as BOOST_PP_ARRAY_ENUM, and others have different naming conventions, such as BOOST_PP_TUPLE_REM_CTOR. For the sake of symmetry, BOOST_PP_ARRAY_ENUM and BOOST_PP_TUPLE_ENUM could be added. However, having them use the ENUM nomenclature is preferable for several reasons. First, it expresses a distinction between a comma-separated list of arguments (variadic content) and a comma-separated list of elements, which provides a definition that avoids the zero-element vs. single-empty-element problem. Second, if something like BOOST_PP_SEQ_ENUM is used to attempt to create a comma-separated list of _arguments_, the user is in for a world of hurt trying to write portable code.
BOOST_VMD_PP_TUPLE_ELEM(n, tuple)
The direct analog of this in Chaos is CHAOS_PP_TUPLE_ELEM(n, tuple) where it ignores the 'n' if variadics are enabled.
The 'n' tells which tuple element to return. How can it be ignored ?
CHAOS_PP_TUPLE_ELEM(?, 0, (a, b, c)) => a
OK, I see. The '?' is just a marker for the size of the tuple when there are variadics, and must be specified when there are not.
Sorry, I was mixing up the arguments. Without variadics, you must have a size. With variadics, you don't need it, so you can leave it there for compatibility with the non-variadic scenario and ignore it, and additionally provide an "overload" that doesn't have it at all. Except for compiler workarounds (which I'm sure you know how to solve in this case), detecting the difference between two and three arguments (where it must be either 2 or 3 arguments) is simple:

#if VARIADICS

#define TUPLE_ELEM(...) \
    CAT( \
        TUPLE_ELEM_, \
        TEST_23(__VA_ARGS__, 3, 2,) \
    )(__VA_ARGS__) \
    /**/
#define TEST_23(_1, _2, _3, n, ...) n
#define TUPLE_ELEM_2(n, tuple) // ...
#define TUPLE_ELEM_3(size, n, tuple) TUPLE_ELEM_2(n, tuple)

#else

#define TUPLE_ELEM(size, n, tuple) // ...

#endif

This is a very fast dispatch.
That is an interesting technique below. Bravo ! As long as it is documented, since it confused me until I took a few more looks and realized what you are doing.
It is just a dispatcher to emulate overloading on the number of arguments. You'd actually do something that makes the dispatch as fast as possible (as above), which is easy with a small set of possibilities (like 1|2 or 2|3).
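With that dispatcher in place, both call forms select the same element (continuing the sketch above, and assuming TUPLE_ELEM_2 actually extracts the n-th element):

TUPLE_ELEM(1, (a, b, c))    // two arguments   -> TUPLE_ELEM_2 => b
TUPLE_ELEM(3, 1, (a, b, c)) // three arguments -> TUPLE_ELEM_3 => b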
BOOST_VMD_PP_{ARRAY,LIST,SEQ}_TO_DATA(ds)
I don't see the point of these--particularly with the pp-lib because of compiler issues. These already exist as BOOST_PP_LIST_ENUM and SEQ_ENUM. There isn't an ARRAY_ENUM currently, but it's easy to implement. The distinction between these names and the *_TO_DATA variety is that these are primary output macros.
That's just a name. Names can always be changed. My macros are the same as yours in that they are output macros which convert the pp-lib data types to variadic data.
The primary distinction is the perspective induced by the names. Users who attempt to use these macros to produce macro argument lists are in for portability problems. Particularly:

#define REM(...) __VA_ARGS__

#define A(im) B(im) // im stands for "intermediate"
                    // (chaos-pp nomenclature)
#define B(x, y) x + y

A(REM(1, 2)) // should work, most likely won't on many preprocessors

My understanding is that you want to take a list of macro arguments and convert it to something that can be processed as a sequential data structure. That's one concept. Converting from a sequential data structure to a list of comma-separated values is another concept. But converting from a sequential data structure to a list of macro arguments is another concept altogether--one that is fraught with portability issues that cannot be encapsulated by the library.
If an attempt is made by a user to use the result as macro arguments, all of the issues with compilers (e.g. VC++) will be pulled into the user's domain.
The returned variadic data is no different from what the user will enter himself when invoking a variadic data macro. The only compiler issue I see is the empty variadic data one.
I think the difference is conceptual. A list of comma-separated things (like function parameters, structure initializers, etc.) is conceptually different from a list of macro arguments. The going back to a list of arguments is where things go wrong.
BOOST_VMD_DATA_TO_PP_TUPLE(...) -> (nothing, unless workarounds are necessary)
I know it's trivial, but I still think it should exist.
It is quite possible that workarounds need to be applied anyway to (e.g.) force VC++ to "let go" of the variadic arguments as a single entity.
BOOST_VMD_DATA_TO_PP_ARRAY(...) -> BOOST_PP_TUPLE_TO_ARRAY((...)) or BOOST_PP_TUPLE_TO_ARRAY(size, (...))
BOOST_VMD_DATA_TO_PP_LIST(...) -> BOOST_PP_TUPLE_TO_LIST((...)) or BOOST_PP_TUPLE_TO_LIST(size, (...))
BOOST_VMD_DATA_TO_PP_SEQ(...) -> BOOST_PP_TUPLE_TO_SEQ((...)) or BOOST_PP_TUPLE_TO_SEQ(size, (...))
For the previous three, see above discussion about using SOME_MACRO(a,b,c) vs. SOME_MACRO((a,b,c)). I do understand your reason for this as a means of getting around the empty-variadics user error.
It isn't that. I don't like interface bloat. That's like not being able to decide on size() versus length() so providing both. If the use case is something like what you mentioned before:

#define MOC(...) /* yes, that's you, Qt */ \
    GENERATE_MOC_DATA(TUPLE_TO_SEQ((__VA_ARGS__))) \
    /**/

Then why does the TUPLE_TO_SEQ((__VA_ARGS__)) part matter to the developer who invokes MOC?
But I still feel that treating variadic data here as tuples is wrong from the end-user point of view even though it elegantly solves the empty variadic data problem. In my internal code I am solving the problem in the exact same way, but I am keeping the syntax as SOME_MACRO(a,b,c) as opposed to SOME_MACRO((a,b,c)).
So I would say, please consider using the SOME_MACRO(a,b,c) instead as I am doing.
I would even say to change my names to:
BOOST_PP_ENUM_TUPLE(...)
BOOST_PP_ENUM_ARRAY(...)
BOOST_PP_ENUM_LIST(...)
BOOST_PP_ENUM_SEQ(...)
I'm not terribly opposed to just BOOST_PP_TO_TUPLE(...), etc.:

#define BOOST_PP_TO_TUPLE(...) (__VA_ARGS__)

#define BOOST_PP_TO_ARRAY(...) \
    (BOOST_PP_VARIADIC_SIZE(__VA_ARGS__), BOOST_PP_TO_TUPLE(__VA_ARGS__)) \
    /**/
// BTW, an "array" is a pointless data structure
// when you have variadics, but whatever

#define BOOST_PP_TO_LIST(...) \
    BOOST_PP_TUPLE_TO_LIST((__VA_ARGS__)) \
    /**/

#define BOOST_PP_TO_SEQ(...) \
    BOOST_PP_TUPLE_TO_SEQ((__VA_ARGS__)) \
    /**/

I'm a lot more opposed to going back from a proper data structure to an "argument list".
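For concreteness, what the sketched converters would produce:

BOOST_PP_TO_TUPLE(a, b, c) // => (a, b, c)
BOOST_PP_TO_ARRAY(a, b, c) // => (3, (a, b, c))
BOOST_PP_TO_SEQ(a, b, c)   // => (a)(b)(c)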
Again I value the orthogonality of the pp-data to variadic data idea in common names. BOOST_PP_TUPLE_REM_CTOR does not suggest that to the end-user. How about:
#define BOOST_PP_TUPLE_ENUM(tuple) \
    BOOST_PP_TUPLE_REM_CTOR(tuple)
in order to mimic your three following names.
Sure, but with a better definition: #define BOOST_PP_TUPLE_ENUM BOOST_PP_TUPLE_REM_CTOR
BOOST_VMD_PP_ARRAY_TO_DATA(array) -> BOOST_PP_ARRAY_ENUM(array)
BOOST_VMD_PP_LIST_TO_DATA(list) -> BOOST_PP_LIST_ENUM(list)
BOOST_VMD_PP_SEQ_TO_DATA(seq) -> BOOST_PP_SEQ_ENUM(seq)
also add: BOOST_PP_REM, BOOST_PP_EAT
OK.
These latter two (REM and EAT) have nothing to do with data structures per se, but they are extremely useful macros.
The basic gist is to add the low-level variadic stuff and adapt the existing tuple stuff to not require the size.
I think our only real disagreements can be summed up as:
I want the end-user to view variadic data as such from a perceptual point of view, even with the empty-variadics-is-an-error-which-can-not-be-caught problem. That is why I supply the various conversions from variadic sequences to pp-lib types and back explicitly, and I want some regularity in names reflecting that although I do not insist on my own names.
You feel that variadics as input for conversion should in general be treated as a pp-lib tuple since creating a tuple from variadic macro data is trivial.
I don't like interface bloat, but if it is minor, it isn't the end of the world.

The one thing that I really don't like is the blending of what I consider two different concepts: output and return value (even though I'm going against my own diatribe about macros != functions above by calling it "return value").

Going from a data structure to a list of comma-separated values (like enumerators, function arguments, whatever) is output and is reflected by the name ENUM. Going from a data structure to a list of comma-separated macro arguments is return value (for input into other macros as disparate arguments). This latter use scenario is fraught with portability problems on the user end, and not necessarily ones that immediately show up.

Regards,
Paul Mensonides

On 2/21/2011 2:37 PM, Paul Mensonides wrote:
On Mon, 21 Feb 2011 12:57:05 -0500, Edward Diener wrote:
On 2/21/2011 3:57 AM, Paul Mensonides wrote:
Another way to provide comfort is via education. Hardcore pp-metaprogramming knowledge is not required for this.
Providing function-like syntax for invoking a macro with a variable number of parameters, as an alternative to pp-lib data syntax, is important to end-users and library developers, if only for the sake of familiarity and regularity. A programmer using a "call" syntax which may be a macro or a function is not going to stop and say: this is a function so I can call it as 'somefunction(a,b,c)', but this is a macro and therefore I must call it as 'somemacro((a,b,c))'. Instead he will ask that the same syntax be applied to both. You seem to feel this is wrong and that someone invoking a macro should realize that it is a macro ( and normally does, because it is in capital letters ) and therefore be prepared to use a different syntax, but I think that regularity in this respect is to be valued.
Most C/C++ developers perceive macro expansion mechanics to be similar to function call mechanics. I.e. where a user "calls" a macro A, and that macro "calls" the macro B, the macro B "returns" something, which is, in turn "returned" by A. That is fundamentally *not* how macro expansion behaves. The perceived similarity, where there is none (going all the way back to way before preprocessor metaprogramming) is how developers have gotten into so much trouble on account of macros.
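A tiny sketch of why the call/return intuition misleads (ADD here is purely illustrative, not a library macro):

#define ADD(a, b) a + b

int x = 2 * ADD(1, 2); // substitutes to: int x = 2 * 1 + 2; -- 4, not 6

There is no "returned value" to parenthesize; the macro's replacement tokens are simply spliced into the surrounding context and rescanned.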
OTOH users of macros are not concerned, as developers should be, with how the macro expands. They are just given a macro syntax to use which the developer supposes should feel natural to them.
I take serious issue with anything that intentionally perpetuates this mentality. It is one thing if the syntax required is the same by coincidence. It's another thing altogether when something is done to intentionally make it so.
I really feel you are stretching your case for why you do not like #define SOME_MACRO(...) as opposed to #define SOME_MACRO((...)). I do understand your feeling that variadics can be more easily misused than pp-lib data types. But to me that is a programmer problem and not your problem.
----
BOOST_VMD_DATA_ELEM(n, ...)
The direct analog of this in Chaos is CHAOS_PP_VARIADIC_ELEM(n, ...). If I (or you) added it to the pp-lib, I would prefer it be called BOOST_PP_VARIADIC_SIZE(...).
Did you mean BOOST_PP_VARIADIC_ELEM(n,...) ?
Yes, sorry!
----
BOOST_VMD_DATA_TO_PP_TUPLE(...)
Chaos has no direct analog of this because (as you know) it's pointless (unless you're doing something internally for compiler workarounds).
I do not think it is pointless. I am going variadics -> tuple. The end-user wants regularity, even if it's a no-brainer to write '( __VA_ARGS__ )'.
I think this is where we may differ, not technically, but in our view of what should be presented to an end-user. I really value regularity ( and orthogonality ) even when something is trivial. My view is generally that if it costs the library developer little and makes the end-user see a design as "regular", it is worth implementing as long as it is part of the design even if it is utterly trivial. You may feel I am cosseting an end-user, and perhaps you are right. But as an end-user myself I generally want things as regular as possible even if it causes some very small extra compile time.
It isn't the end of the world to provide it for the sake of symmetry.
The primary reason that there is no direct conversion (other than (__VA_ARGS__)) to the other data types is that with variadic data there is no such thing as an empty sequence and there is no safe way to define a rogue value for it without artificially limiting what the data can contain.
I understand this.
My point of view is that although it's theoretically a user error to use an empty sequence for a variadic macro even if a corresponding pp-lib data type can be empty, in reality it should be allowed since there is no way to detect it. I understand your point of view that you want to do everything possible to eliminate user error, and I agree with it, but sometimes nothing one can do in a language is going to work ( as you have pointed out with variadics and an empty parameter ).
That is not what I'm referring to. To clarify, using the STL as an example, a typical algorithm processes a finite sequence by progressively closing a range of iterators [i,j). Effectively, this iterator range is a "view" of a sequence of elements. (The underlying data structure also has its own natural view.) However, with the preprocessor, there is no indirection and there is no imperative functionality (i.e. assignment). Because of that, you cannot form views (in whatever form). Instead, you have to embed the entire data structure.
At that point, you can do one of two things. Either use the data structure itself as your "view" and progressively make it smaller as you "iterate", or you can embed the data structure into another data structure which lets you add an arbitrary "terminal" state--the equivalent of the iterator range [j,j). For variadic content as a sequence, you cannot directly use the variadic content as the view because you cannot encode this terminal state. Instead, you'd have to go with option two, but then you pay the price for it elsewhere.
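To illustrate the first option with existing Boost.PP macros (a minimal sketch): the seq itself serves as the "view", and advancing means replacing it with a smaller seq:

#include <boost/preprocessor/seq/seq.hpp>

BOOST_PP_SEQ_HEAD((a)(b)(c)) // a      -- the current element
BOOST_PP_SEQ_TAIL((a)(b)(c)) // (b)(c) -- the shrunken "view" for the next step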
I understand what you are saying. The two basic pieces of functionality ( size and element access ) and the back and forth conversions are certainly a very limited set of functionality. Anybody who wants to try doing further tricks with variadic sequences, as opposed to your own pp-lib data, is welcome to it, but my original goal was really just to provide a syntax interface for variadics to pp-lib.
Going the other way from a pp data type to variadics, I admit I did not consider the case in my current library where the pp data type is empty. Since this can be detected on the pp data type side, I think I can put out a BOOST_PP_ASSERT_MSG(cond, msg) in that case, or I can choose to ignore it if that is what I decide to do. But in either case it is a detectable problem.
For the most part, however, these macros already exist. They are named (e.g.) BOOST_PP_SEQ_ENUM. However, some do not exist such as BOOST_PP_ARRAY_ENUM, and others have different naming conventions such as BOOST_PP_TUPLE_REM_CTOR. For the sake of symmetry, BOOST_PP_ARRAY_ENUM and BOOST_PP_TUPLE_ENUM could be added. However, having them use the ENUM nomenclature is preferable for several reasons. First, because it expresses a distinction between a comma-separated list of arguments (variadic content) and a comma-separated list of elements, which provides a definition that avoids the zero-element vs. single-empty-element problem. Second, if something like BOOST_PP_SEQ_ENUM is used to attempt to create a comma-separated list of _arguments_, the user is in for a world of hurt trying to write portable code.
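For reference, BOOST_PP_SEQ_ENUM is the archetypal output macro in this sense; its result lands in ordinary C++ code rather than in another macro's argument list:

#include <boost/preprocessor/seq/enum.hpp>

enum rgb { BOOST_PP_SEQ_ENUM((red)(green)(blue)) }; // enum rgb { red, green, blue };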
Your names are fine with me, as I indicated further in my previous reply.
BOOST_VMD_PP_TUPLE_ELEM(n, tuple)
The direct analog of this in Chaos is CHAOS_PP_TUPLE_ELEM(n, tuple) where it ignores the 'n' if variadics are enabled.
The 'n' tells which tuple element to return. How can it be ignored ?
CHAOS_PP_TUPLE_ELEM(?, 0, (a, b, c)) => a
OK, I see. The '?' is just a marker for the size of the tuple when there are variadics and must be specified when there are not.
Sorry, I was mixing up the arguments. Without variadics, you must have a size. With variadics, you don't need it, so you can leave it there for compatibility with the non-variadic scenario and ignore it and additionally provide an "overload" that doesn't have it at all. Except for compiler workarounds (which I'm sure you know how to solve in this case), detecting the difference between two and three arguments (where it must be either 2 or 3 arguments) is simple:
#if VARIADICS
#define TUPLE_ELEM(...) \
    CAT( \
        TUPLE_ELEM_, \
        TEST_23(__VA_ARGS__, 3, 2,) \
    )(__VA_ARGS__) \
    /**/
#define TEST_23(_1, _2, _3, n, ...) n
#define TUPLE_ELEM_2(n, tuple) // ...
#define TUPLE_ELEM_3(size, n, tuple) TUPLE_ELEM_2(n, tuple)
#else
#define TUPLE_ELEM(size, n, tuple) // ...
#endif
This is a very fast dispatch.
That is an interesting technique. Bravo ! As long as it is documented, since it confused me until I took a few more looks and realized what you are doing.
It is just a dispatcher to emulate overloading on the number of arguments. You'd actually do something that makes the dispatch as fast as possible (as above), which is easy with a small set of possibilities (like 1|2 or 2|3).
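Concretely, with the sketch above both of the following would dispatch correctly; the macro names are the ones from the sketch, not Boost.PP, and the elided TUPLE_ELEM_2 body is assumed to do the actual element selection:

TUPLE_ELEM(1, (a, b, c))    // two arguments   -> TUPLE_ELEM_2(1, (a, b, c))
TUPLE_ELEM(3, 1, (a, b, c)) // three arguments -> TUPLE_ELEM_3(3, 1, (a, b, c))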
BOOST_VMD_PP_{ARRAY,LIST,SEQ}_TO_DATA(ds)
I don't see the point of these--particularly with the pp-lib, because of compiler issues. These already exist as BOOST_PP_LIST_ENUM and SEQ_ENUM. There isn't an ARRAY_ENUM currently, but it's easy to implement. The distinction between these names and the *_TO_DATA variety is that these are primary output macros.
That's just a name. Names can always be changed. My macros are the same as yours in that they are output macros which convert the pp-lib data types to variadic data.
The primary distinction is the perspective induced by the names. Users who attempt to use these macros to produce macro argument lists are in for portability problems. Particularly:
#define REM(...) __VA_ARGS__
#define A(im) B(im) // im stands for "intermediate"
                    // (chaos-pp nomenclature)
#define B(x, y) x + y
A(REM(1, 2)) // should work, most likely won't on many preprocessors
I understand your concerns. But I don't think you can do anything about how programmers use things. You provide functionality because it has its uses. If some of the uses lead to potential problems because of programmer misunderstanding or compiler weakness, you warn the programmer. That's the best you can do without removing decent functionality just because of programmer misuse or compiler fallibility. Of course, good docs about pitfalls always help.
My understanding is that you want to take a list of macro arguments and convert it to something that can be processed as a sequential data structure. That's one concept. Converting from a sequential data structure to a list of comma-separated values is another concept. But converting from a sequential data structure to a list of macro arguments is another concept altogether--one that is fraught with portability issues that cannot be encapsulated by the library.
I am more than aware of that.
If an attempt is made by a user to use the result as macro arguments, all of the issues with compilers (e.g. VC++) will be pulled into the user's domain.
The returned variadic data is no different from what the user will enter himself when invoking a variadic data macro. The only compiler issue I see is the empty variadic data one.
I think the difference is conceptual. A list of comma-separated things (like function parameters, structure initializers, etc.) is conceptually different from a list of macro arguments. The going back to a list of arguments part is where things go wrong.
BOOST_VMD_DATA_TO_PP_TUPLE(...) -> (nothing, unless workarounds are necessary)
I know it's trivial but I still think it should exist.
It is quite possible that workarounds need to be applied anyway to (e.g.) force VC++ to "let go" of the variadic arguments as a single entity.
I will look further into this issue. I did have a couple of VC++ workarounds I had to use, which I was able to solve thanks to your own previous cleverness dealing with VC++.
BOOST_VMD_DATA_TO_PP_ARRAY(...) -> BOOST_PP_TUPLE_TO_ARRAY((...)) or BOOST_PP_TUPLE_TO_ARRAY(size, (...))
BOOST_VMD_DATA_TO_PP_LIST(...) -> BOOST_PP_TUPLE_TO_LIST((...)) or BOOST_PP_TUPLE_TO_LIST(size, (...))
BOOST_VMD_DATA_TO_PP_SEQ(...) -> BOOST_PP_TUPLE_TO_SEQ((...)) or BOOST_PP_TUPLE_TO_SEQ(size, (...))
For the previous three, see above discussion about using SOME_MACRO(a,b,c) vs. SOME_MACRO((a,b,c)). I do understand your reason for this as a means of getting around the empty-variadics user error.
It isn't that. I don't like interface bloat. That's like not being able to decide on size() versus length() so providing both.
If the use case is something like what you mentioned before:
#define MOC(...) /* yes, that's you, Qt */ \
    GENERATE_MOC_DATA(TUPLE_TO_SEQ((__VA_ARGS__))) \
    /**/
Then why does the TUPLE_TO_SEQ((__VA_ARGS__)) part matter to the developer who invokes MOC?
Because I am not converting a tuple to a seq but a variadic sequence to a seq, and I feel the syntax should support that idea.
But I still feel that treating variadic data here as tuples is wrong from the end-user point of view even though it elegantly solves the empty variadic data problem. In my internal code I am solving the problem in the exact same way, but I am keeping the syntax as SOME_MACRO(a,b,c) as opposed to SOME_MACRO((a,b,c)).
So I would say, please consider using the SOME_MACRO(a,b,c) instead as I am doing.
I would even say to change my names to:
BOOST_PP_ENUM_TUPLE(...)
BOOST_PP_ENUM_ARRAY(...)
BOOST_PP_ENUM_LIST(...)
BOOST_PP_ENUM_SEQ(...)
I'm not terribly opposed to just BOOST_PP_TO_TUPLE(...), etc..
#define BOOST_PP_TO_TUPLE(...) (__VA_ARGS__)

#define BOOST_PP_TO_ARRAY(...) \
    (BOOST_PP_VARIADIC_SIZE(__VA_ARGS__), BOOST_PP_TO_TUPLE(__VA_ARGS__)) \
    /**/
// BTW, an "array" is a pointless data structure
// when you have variadics, but whatever

#define BOOST_PP_TO_LIST(...) \
    BOOST_PP_TUPLE_TO_LIST((__VA_ARGS__)) \
    /**/

#define BOOST_PP_TO_SEQ(...) \
    BOOST_PP_TUPLE_TO_SEQ((__VA_ARGS__)) \
    /**/
I'm a lot more opposed to going back from a proper data structure to an "argument list".
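Given the sketched definitions above, the expansions would be, e.g.:

BOOST_PP_TO_TUPLE(a, b, c) // (a, b, c)
BOOST_PP_TO_ARRAY(a, b, c) // (3, (a, b, c))
BOOST_PP_TO_LIST(a, b, c)  // (a, (b, (c, BOOST_PP_NIL)))
BOOST_PP_TO_SEQ(a, b, c)   // (a)(b)(c)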
Then I will go back to an 'element list' <g>. If the end-user uses it as an "argument list" you can sue me but not for too much because I am poor. <g><g>
Again I value the orthogonality of the pp-data to variadic data idea in common names. BOOST_PP_TUPLE_REM_CTOR does not suggest that to the end-user. How about:
#define BOOST_PP_TUPLE_ENUM(tuple) \
    BOOST_PP_TUPLE_REM_CTOR(tuple)
in order to mimic your three following names.
Sure, but with a better definition:
#define BOOST_PP_TUPLE_ENUM BOOST_PP_TUPLE_REM_CTOR
Yes, that's better.
BOOST_VMD_PP_ARRAY_TO_DATA(array) -> BOOST_PP_ARRAY_ENUM(array)
BOOST_VMD_PP_LIST_TO_DATA(list) -> BOOST_PP_LIST_ENUM(list)
BOOST_VMD_PP_SEQ_TO_DATA(seq) -> BOOST_PP_SEQ_ENUM(seq)
also add: BOOST_PP_REM, BOOST_PP_EAT
OK.
These latter two (REM and EAT) have nothing to do with data structures per se, but they are extremely useful macros.
I agree.
The basic gist is to add the low-level variadic stuff and adapt the existing tuple stuff to not require the size.
I think our only real disagreements can be summed up as:
I want the end-user to view variadic data as such from a perceptual point of view, even with the empty-variadics-is-an-error-which-can-not-be-caught problem. That is why I supply the various conversions from variadic sequences to pp-lib types and back explicitly, and I want some regularity in names reflecting that although I do not insist on my own names.
You feel that variadics as input for conversion should in general be treated as a pp-lib tuple since creating a tuple from variadic macro data is trivial.
I don't like interface bloat, but if it is minor, it isn't the end of the world.
The one thing that I really don't like is the blending of what I consider two different concepts: output and return value (even though I'm going against my own diatribe about macros != functions above by calling it "return value").
Going from a data structure to a list of comma-separated values (like enumerators, function arguments, whatever) is output and is reflected by the name ENUM. Going from a data structure to a list of comma-separated macro arguments is return value (for input into other macros as disparate arguments). This latter use scenario is fraught with portability problems on the user end, and not necessarily ones that immediately show up.
Then let's use the ENUM name for output. And I will add a section to the docs explaining the danger of using such output as arguments to other macros.

On Mon, 21 Feb 2011 18:29:46 -0500, Edward Diener wrote:
On 2/21/2011 2:37 PM, Paul Mensonides wrote:
Most C/C++ developers perceive macro expansion mechanics to be similar to function call mechanics. I.e. where a user "calls" a macro A, and that macro "calls" the macro B, the macro B "returns" something, which is, in turn "returned" by A. That is fundamentally *not* how macro expansion behaves. The perceived similarity, where there is none (going all the way back to way before preprocessor metaprogramming) is how developers have gotten into so much trouble on account of macros.
OTOH users of macros are not concerned, as developers should be, with how the macro expands. They are just given a macro syntax to use which the developer supposes should feel natural to them.
Natural for the domain, not natural because it matches the underlying language.
I take serious issue with anything that intentionally perpetuates this mentality. It is one thing if the syntax required is the same by coincidence. It's another thing altogether when something is done to intentionally make it so.
I really feel you are stretching your case for why you do not like #define SOME_MACRO(...) as opposed to #define SOME_MACRO((...)). I do understand your feeling that variadics can be more easily misused than pp-lib data types. But to me that is a programmer problem and not your problem.
1) I don't like the lack of revision control available with the MACRO(...) form. That form simply doesn't scale.

2) I don't like duplicating interfaces to remove a pair of parentheses--particularly to make it look like a C++ function call.

3) I don't like libraries that choose to increase compile time for no functional gain and only a perceived syntactic gain (and a minor one at that).

My dislike really has very little to do with misusing variadic content.
#define REM(...) __VA_ARGS__
#define A(im) B(im) // im stands for "intermediate"
                    // (chaos-pp nomenclature)
#define B(x, y) x + y
A(REM(1, 2)) // should work, most likely won't on many preprocessors
I understand your concerns. But I don't think you can do anything about how programmers use things. You provide functionality because it has its uses. If some of the uses lead to potential problems because of programmer misunderstanding or compiler weakness, you warn the programmer. That's the best you can do without removing decent functionality just because of programmer misuse or compiler fallibility. Of course, good docs about pitfalls always help.
Unfortunately, that's not the way it works. When a library doesn't work on a compiler, that ends up being the library's problem, not the toolchain vendor's problem. Look at VC++, for example, after all these years the pp-lib *still* needs all of the blatant hacks put in place to support it and MS still basically says #@!$-off. And (unfortunately) it *has* to be supported.
BOOST_VMD_DATA_TO_PP_TUPLE(...) -> (nothing, unless workarounds are necessary)
I know it's trivial but I still think it should exist.
It is quite possible that workarounds need to be applied anyway to (e.g.) force VC++ to "let go" of the variadic arguments as a single entity.
I will look further into this issue. I did have a couple of VC++ workarounds I had to use, which I was able to solve thanks to your own previous cleverness dealing with VC++.
The main issue is that a problem may not surface until somewhere "far away". E.g. it may get passed around through tons of other stuff before causing a failure.
If the use case is something like what you mentioned before:
#define MOC(...) /* yes, that's you, Qt */ \
    GENERATE_MOC_DATA(TUPLE_TO_SEQ((__VA_ARGS__))) \
    /**/
Then why does the TUPLE_TO_SEQ((__VA_ARGS__)) part matter to the developer who invokes MOC?
Because I am not converting a tuple to a seq but a variadic sequence to a seq, and I feel the syntax should support that idea.
Right, but in this context, you're the intermediate library developer. You are providing some domain-specific functionality (e.g. generating Qt bloat), but you're doing preprocessor metaprogramming to do it. So, in this case, you're giving your users the "pretty" syntax that you think is important and encapsulating the significant metaprogramming behind that interface. When you're doing significant metaprogramming, what is syntax? Why is there an expectation that a DSEL like a metaprogramming library have any particular syntax?
I'm a lot more opposed to going back from a proper data structure to an "argument list".
Then I will go back to an 'element list' <g>. If the end-user uses it as an "argument list" you can sue me but not for too much because I am poor. <g><g>
What ultimately happens is that users try to do something, and they run into problems (often due to compilers), but they don't have the know-how to work around the problems or do something in a different way. So, they go to the library author for help. Providing that help can be a lot of work, require a lot of explanation, implementation help, etc.. Like most other Boost developers, I've had to do that countless times. Failing to do that, however, fails to achieve (what I consider to be) one of the fundamental purposes of publishing the library as open-source to begin with. This is particularly true in the Boost context, which is largely about advancing idioms and techniques and "improving the world". I.e., though not exclusively, there is a lot of altruism involved.

Regards,
Paul Mensonides

On Mon, Feb 21, 2011 at 2:37 PM, Paul Mensonides <pmenso57@comcast.net> wrote:
On Mon, 21 Feb 2011 12:57:05 -0500, Edward Diener wrote:
On 2/21/2011 3:57 AM, Paul Mensonides wrote:
Another way to provide comfort is via education. Hardcore pp-metaprogramming knowledge is not required for this.
Providing function-like syntax for invoking a macro with a variable number of parameters, as an alternative to pp-lib data syntax, is important to end-users and library developers, if only for the sake of familiarity and regularity. A programmer using a "call" syntax which may be a macro or a function is not going to stop and say: this is a function so I can call it as 'somefunction(a,b,c)', but this is a macro and therefore I must call it as 'somemacro((a,b,c))'. Instead he will ask that the same syntax be applied to both. You seem to feel this is wrong and that someone invoking a macro should realize that it is a macro ( and normally does, because it is in capital letters ) and therefore be prepared to use a different syntax, but I think that regularity in this respect is to be valued.
Most C/C++ developers perceive macro expansion mechanics to be similar to function call mechanics. I.e. where a user "calls" a macro A, and that macro "calls" the macro B, the macro B "returns" something, which is, in turn "returned" by A. That is fundamentally *not* how macro expansion behaves. The perceived similarity, where there is none (going all the way back to way before preprocessor metaprogramming) is how developers have gotten into so much trouble on account of macros.
I take serious issue with anything that intentionally perpetuates this mentality. It is one thing if the syntax required is the same by coincidence. It's another thing altogether when something is done to intentionally make it so.
It might be useful to discuss this topic using the Boost.Local `PARAMS` macro as an example.

IMO, the following syntax is better from a pp metaprogramming perspective because the arguments are a proper pp data structure (i.e., a sequence) and it is very clear from the syntax that you are invoking a macro:

int BOOST_LOCAL_FUNCTION_PARAMS( (int x) (const bind this) ) { // [1]
    ...
} BOOST_LOCAL_FUNCTION_NAME(l)

l(-1);

However, from the users' perspective the following syntax is preferred because it looks more like a C++ function parameter declaration so they are more familiar with it:

int BOOST_LOCAL_FUNCTION_PARAMS(int x, const bind this) { // [2]
    ...
} BOOST_LOCAL_FUNCTION_NAME(l)

l(-1);

Therefore, as a pp metaprogrammer I'd prefer [1] but as the Boost.Local library developer I must keep in mind my users' preference and also provide [2] when variadics are available.

On Mon, Feb 21, 2011 at 2:37 PM, Paul Mensonides <pmenso57@comcast.net> wrote:
I'm not terribly opposed to just BOOST_PP_TO_TUPLE(...), etc..
#define BOOST_PP_TO_TUPLE(...) (__VA_ARGS__)

#define BOOST_PP_TO_ARRAY(...) \
    (BOOST_PP_VARIADIC_SIZE(__VA_ARGS__), BOOST_PP_TO_TUPLE(__VA_ARGS__)) \
    /**/
// BTW, an "array" is a pointless data structure
// when you have variadics, but whatever

#define BOOST_PP_TO_LIST(...) \
    BOOST_PP_TUPLE_TO_LIST((__VA_ARGS__)) \
    /**/

#define BOOST_PP_TO_SEQ(...) \
    BOOST_PP_TUPLE_TO_SEQ((__VA_ARGS__)) \
    /**/
I'm a lot more opposed to going back from a proper data structure to an "argument list".
IMO, it would be nice if Boost.Preprocessor supported variadics to make metaprogramming [2] easy. However, that does not necessarily mean providing:

BOOST_PP_VARIADIC_TUPLE(...)

I would find having these two macros just as useful (and perhaps more correct):

#define BOOST_PP_TO_TUPLE(...) (__VA_ARGS__)
BOOST_PP_TUPLE((...))

Then at some point in my pp metaprogram, I will have `BOOST_PP_TUPLE(BOOST_PP_TO_TUPLE(__VA_ARGS__))` which would be as convenient for me (a pp metaprogrammer) to use as `BOOST_PP_TUPLE(__VA_ARGS__)` directly. Of course, the `BOOST_PP_TO_TUPLE(__VA_ARGS__)` invocation will be hidden inside `BOOST_LOCAL_FUNCTION_PARAMS(...)` expansion to respect my library users' request that the `PARAMS` macro invocation should look like a normal C++ function parameter declaration as much as possible.

In summary, I would think that providing `BOOST_PP_TO_TUPLE(...)` and `BOOST_PP_TUPLE((...))` is a good approach.

-- Lorenzo

On Mon, Feb 21, 2011 at 6:40 PM, Lorenzo Caminiti <lorcaminiti@gmail.com> wrote:
On Mon, Feb 21, 2011 at 2:37 PM, Paul Mensonides <pmenso57@comcast.net> wrote:
On Mon, 21 Feb 2011 12:57:05 -0500, Edward Diener wrote:
On 2/21/2011 3:57 AM, Paul Mensonides wrote:
Another way to provide comfort is via education. Hardcore pp-metaprogramming knowledge is not required for this.
Providing function-like syntax for invoking a macro with a variable number of parameters, as an alternative to pp-lib data syntax, is important to end-users and library developers, if only for the sake of familiarity and regularity. A programmer using a "call" syntax which may be a macro or a function is not going to stop and say: this is a function so I can call it as 'somefunction(a,b,c)', but this is a macro and therefore I must call it as 'somemacro((a,b,c))'. Instead he will ask that the same syntax be applied to both. You seem to feel this is wrong and that someone invoking a macro should realize that it is a macro ( and normally does, because it is in capital letters ) and therefore be prepared to use a different syntax, but I think that regularity in this respect is to be valued.
Most C/C++ developers perceive macro expansion mechanics to be similar to function call mechanics. I.e. where a user "calls" a macro A, and that macro "calls" the macro B, the macro B "returns" something, which is, in turn "returned" by A. That is fundamentally *not* how macro expansion behaves. The perceived similarity, where there is none (going all the way back to way before preprocessor metaprogramming) is how developers have gotten into so much trouble on account of macros.
I take serious issue with anything that intentionally perpetuates this mentality. It is one thing if the syntax required is the same by coincidence. It's another thing altogether when something is done to intentionally make it so.
It might be useful to discuss this topic using the Boost.Local `PARAMS` macro as an example.
IMO, the following syntax is better from a pp metaprogramming perspective because the arguments are a proper pp data structure (i.e., a sequence) and it is very clear from the syntax that you are invoking a macro:
int BOOST_LOCAL_FUNCTION_PARAMS( (int x) (const bind this) ) { // [1]
    ...
} BOOST_LOCAL_FUNCTION_NAME(l)

l(-1);
However, from the users' perspective the following syntax is preferred because it looks more like a C++ function parameter declaration so they are more familiar with it:
int BOOST_LOCAL_FUNCTION_PARAMS(int x, const bind this) { // [2]
    ...
} BOOST_LOCAL_FUNCTION_NAME(l)

l(-1);
Therefore, as a pp metaprogrammer I'd prefer [1] but as the Boost.Local library developer I must keep in mind my users' preference and also provide [2] when variadics are available.
On Mon, Feb 21, 2011 at 2:37 PM, Paul Mensonides <pmenso57@comcast.net> wrote:
I'm not terribly opposed to just BOOST_PP_TO_TUPLE(...), etc..
#define BOOST_PP_TO_TUPLE(...) (__VA_ARGS__)

#define BOOST_PP_TO_ARRAY(...) \
    (BOOST_PP_VARIADIC_SIZE(__VA_ARGS__), BOOST_PP_TO_TUPLE(__VA_ARGS__)) \
    /**/
// BTW, an "array" is a pointless data structure
// when you have variadics, but whatever

#define BOOST_PP_TO_LIST(...) \
    BOOST_PP_TUPLE_TO_LIST((__VA_ARGS__)) \
    /**/

#define BOOST_PP_TO_SEQ(...) \
    BOOST_PP_TUPLE_TO_SEQ((__VA_ARGS__)) \
    /**/
I'm a lot more opposed to going back from a proper data structure to an "argument list".
IMO, it would be nice if Boost.Preprocessor supported variadics to make metaprogramming [2] easy. However, that does not necessarily mean providing:
BOOST_PP_VARIADIC_TUPLE(...)
I would find having these two macros just as useful (and perhaps more correct):
#define BOOST_PP_TO_TUPLE(...) (__VA_ARGS__)
BOOST_PP_TUPLE((...))
Then at some point in my pp metaprogram, I will have `BOOST_PP_TUPLE(BOOST_PP_TO_TUPLE(__VA_ARGS__))` which would be as convenient for me (a pp metaprogrammer) to use as `BOOST_PP_TUPLE(__VA_ARGS__)` directly. Of course, the `BOOST_PP_TO_TUPLE(__VA_ARGS__)` invocation will be hidden inside `BOOST_LOCAL_FUNCTION_PARAMS(...)` expansion to respect my library users' request that the `PARAMS` macro invocation should look like a normal C++ function parameter declaration as much as possible.
In summary, I would think that providing `BOOST_PP_TO_TUPLE(...)` and `BOOST_PP_TUPLE((...))` is a good approach.
Sorry, when I said `BOOST_PP_VARIADIC_TUPLE(...)` and `BOOST_PP_TUPLE((...))` I meant of course `BOOST_PP_VARIADIC_TUPLE_TO_SEQ(...)` and `BOOST_PP_TUPLE_TO_SEQ((...))` (or some other pp tuple macro but without the size argument). -- Lorenzo

On 2/21/2011 6:47 PM, Lorenzo Caminiti wrote:
On Mon, Feb 21, 2011 at 6:40 PM, Lorenzo Caminiti<lorcaminiti@gmail.com> wrote:
IMO, it would be nice if Boost.Preprocessor supported variadics to make metaprogramming [2] easy. However, that does not necessarily mean providing:
BOOST_PP_VARIADIC_TUPLE(...)
I would find having these two macros just as useful (and perhaps more correct):
#define BOOST_PP_TO_TUPLE(...) (__VA_ARGS__)
BOOST_PP_TUPLE((...))
Then at some point in my pp metaprogram, I will have `BOOST_PP_TUPLE(BOOST_PP_TO_TUPLE(__VA_ARGS__))` which would be as convenient for me (a pp metaprogrammer) to use as `BOOST_PP_TUPLE(__VA_ARGS__)` directly. Of course, the `BOOST_PP_TO_TUPLE(__VA_ARGS__)` invocation will be hidden inside `BOOST_LOCAL_FUNCTION_PARAMS(...)` expansion to respect my library users' request that the `PARAMS` macro invocation should look like a normal C++ function parameter declaration as much as possible.
In summary, I would think that providing `BOOST_PP_TO_TUPLE(...)` and `BOOST_PP_TUPLE((...))` is a good approach.
Sorry, when I said `BOOST_PP_VARIADIC_TUPLE(...)` and `BOOST_PP_TUPLE((...))` I meant of course `BOOST_PP_VARIADIC_TUPLE_TO_SEQ(...)` and `BOOST_PP_TUPLE_TO_SEQ((...))` (or some other pp tuple macro but without the size argument).
I am opposed to such redundancy. I think BOOST_PP_TUPLE_TO_SEQ(...) is enough.

On Mon, 21 Feb 2011 18:40:17 -0500, Lorenzo Caminiti wrote:
On Mon, Feb 21, 2011 at 2:37 PM, Paul Mensonides <pmenso57@comcast.net>
I take serious issue with anything that intentionally perpetuates this mentality. It is one thing if the syntax required is the same by coincidence. It's another thing altogether when something is done to intentionally make it so.
It might be useful to discuss this topic using the Boost.Local `PARAMS` macro as an example.
IMO, the following syntax is better from a pp metaprogramming perspective because the arguments are a proper pp data structure (i.e., a sequence) and it is very clear from the syntax that you are invoking a macro:
int BOOST_LOCAL_FUNCTION_PARAMS( (int x) (const bind this) ) { // [1]
    ...
} BOOST_LOCAL_FUNCTION_NAME(l)

l(-1);
However, from the users' prospective the following syntax is preferred because it looks more like a C++ function parameter declaration so they are more familiar with it:
int BOOST_LOCAL_FUNCTION_PARAMS(int x, const bind this) { // [2]
    ...
} BOOST_LOCAL_FUNCTION_NAME(l)

l(-1);
Therefore, as a pp metaprogrammer I'd prefer [1] but as the Boost.Local library developer I must keep in mind my users' preference and also provide [2] when variadics are available.
There is such a thing as good/bad regardless of preference. In the second case above, you've traded compilation efficiency for a minor syntactic difference. As a library developer, particularly in the case of a general purpose library, that's a bad call to make. If you provide both of the above, at least you still have the efficient version that doesn't waste time doing a format conversion, but now you've introduced bloat into the interface. You have two ways to do the same thing--one of which is (apparently, given the existence of the former) non-portable as well as brittle as far as revision is concerned.

What happens when you want to add some option--LOCAL_FUNCTION_PARAMS(option, (int x)...)? In the latter case, because the preprocessor has no type system, that change is radical. In the former case, you can do what I've been talking about with BOOST_PP_TUPLE_ELEM (et al) and produce a smooth, non-breaking revision path. Instead, you end up with yet another interface point LOCAL_FUNCTION_PARAMS_WITH_OPTION that avoids the problem while adding yet more interface bloat.

Perhaps this LOCAL_FUNCTION_PARAMS is a bad example for potential revision, but the above is referring to the general case. As I said previously, the only way that I would ever provide a macro interface (in a non-preprocessor metaprogramming library) such as the latter above, is if I was absolutely certain that no arguments would be added or removed.
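A sketch of the non-breaking revision path referred to above, reusing the argument-count dispatcher shown earlier in the thread (every name here is hypothetical):

#define CAT(a, b) CAT_(a, b)
#define CAT_(a, b) a ## b
#define TEST_12(_1, _2, n, ...) n

#define PARAMS(...) \
    CAT(PARAMS_, TEST_12(__VA_ARGS__, 2, 1,))(__VA_ARGS__) \
    /**/
#define PARAMS_1(seq)         PARAMS_2(default_option, seq)
#define PARAMS_2(option, seq) // ... process option + seq

PARAMS((int x)(int y))         // original interface still works
PARAMS(option, (int x)(int y)) // revised interface, same macro name

A seq argument contains no top-level commas, so counting arguments reliably separates the old one-argument form from the revised two-argument form (modulo the usual VC++ workarounds). The variadic-tuple interface has no such anchor: a newly added leading option is indistinguishable from a first parameter declaration.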
IMO, it would be nice if Boost.Preprocessor supported variadics to make metaprogramming [2] easy. However, that does not necessarily mean providing:
BOOST_PP_VARIADIC_TUPLE(...)
I would find having these two macros just as useful (and perhaps more correct):
#define BOOST_PP_TO_TUPLE(...) (__VA_ARGS__)
BOOST_PP_TUPLE((...))
What is BOOST_PP_TUPLE((...))?
Then at some point in my pp metaprogram, I will have `BOOST_PP_TUPLE(BOOST_PP_TO_TUPLE(__VA_ARGS__))` which would be as convenient for me (a pp metaprogrammer) to use as `BOOST_PP_TUPLE(__VA_ARGS__)` directly. Of course, the `BOOST_PP_TO_TUPLE(__VA_ARGS__)` invocation will be hidden inside `BOOST_LOCAL_FUNCTION_PARAMS(...)` expansion to respect my library users' request that the `PARAMS` macro invocation should look like a normal C++ function parameter declaration as much as possible.
It is a DSEL. By definition it has its own syntax--as it obviously already does.
In summary, I would think that providing `BOOST_PP_TO_TUPLE(...)` and `BOOST_PP_TUPLE((...))` is a good approach.
In the case of a tuple, why would any of the above be better than simply (__VA_ARGS__)? Again, though, I don't know what you're referring to by BOOST_PP_TUPLE((...)). What is that macro supposed to do?

Regards,
Paul Mensonides

On Mon, Feb 21, 2011 at 7:18 PM, Paul Mensonides <pmenso57@comcast.net> wrote:
On Mon, 21 Feb 2011 18:40:17 -0500, Lorenzo Caminiti wrote:
On Mon, Feb 21, 2011 at 2:37 PM, Paul Mensonides <pmenso57@comcast.net>
I take serious issue with anything that intentionally perpetuates this mentality. It is one thing if the syntax required is the same by coincidence. It's another thing altogether when something is done to intentionally make it so.
It might be useful to discuss this topic using the Boost.Local `PARAMS` macro as an example.
IMO, the following syntax is better from a pp metaprogramming perspective because the arguments are a proper pp data structure (i.e., a sequence) and it is very clear from the syntax that you are invoking a macro:
int BOOST_LOCAL_FUNCTION_PARAMS( (int x) (const bind this) ) { // [1]
    ...
} BOOST_LOCAL_FUNCTION_NAME(l)

l(-1);
However, from the users' perspective the following syntax is preferred because it looks more like a C++ function parameter declaration so they are more familiar with it:
int BOOST_LOCAL_FUNCTION_PARAMS(int x, const bind this) { // [2]
    ...
} BOOST_LOCAL_FUNCTION_NAME(l)

l(-1);
Therefore, as a pp metaprogrammer I'd prefer [1] but as the Boost.Local library developer I must keep in mind my users' preference and also provide [2] when variadics are available.
There is such a thing as good/bad regardless of preference. In the second case above, you've traded compilation efficiency for a minor syntactic difference. As a library developer, particularly in the case of a general purpose library, that's a bad call to make. If you provide both of the above, at least you still have the efficient version that doesn't waste time doing a format conversion, but now you've introduced bloat into the interface. You have two ways to do the same thing--one of which is (apparently, given the existence of the former) non-portable as well as brittle as far as revision is concerned.

What happens when you want to add some option--LOCAL_FUNCTION_PARAMS(option, (int x)...)? In the latter case, because the preprocessor has no type system, that change is radical. In the former case, you can do what I've been talking about with BOOST_PP_TUPLE_ELEM (et al) and produce a smooth, non-breaking revision path. Instead, you end up with yet another interface point LOCAL_FUNCTION_PARAMS_WITH_OPTION that avoids the problem while adding yet more interface bloat.
I do understand your arguments about flexibility and extensibility of the pp-sequence interface but my library's users want the variadic-tuple syntax and that's essentially a fact for me as the library developer... The argument that the sequence is more flexible, etc. just didn't work because at the end of the day the extra sequence parentheses look ugly and people don't even want to try to type them (I wonder if that simply means that local functions are not really a wanted feature, because I would type the extra parentheses if I needed the functionality...).
Perhaps this LOCAL_FUNCTION_PARAMS is a bad example for potential revision, but the above is referring to the general case. As I said previously, the only way that I would ever provide a macro interface (in a non-preprocessor metaprogramming library) such as the latter above, is if I was absolutely certain that no arguments would be added or removed.
IMO, it would be nice if Boost.Preprocessor supported variadics to make metaprogramming [2] easy. However, that does not necessarily mean providing:
BOOST_PP_VARIADIC_TUPLE(...)
I would find having these two macros just as useful (and perhaps more correct):
#define BOOST_PP_TO_TUPLE(...) (__VA_ARGS__)
BOOST_PP_TUPLE((...))
What is BOOST_PP_TUPLE((...))?
Yes, sorry for the confusion... with BOOST_PP_TUPLE((...)) I meant BOOST_PP_TUPLE_TO_SEQ((...)). And with BOOST_PP_VARIADIC_TUPLE(...), I meant BOOST_PP_VARIADIC_TUPLE_TO_SEQ(...).
Then at some point in my pp metaprogram, I will have `BOOST_PP_TUPLE(BOOST_PP_TO_TUPLE(__VA_ARGS__))` which would be as convenient for me (a pp metaprogrammer) to use as `BOOST_PP_TUPLE(__VA_ARGS__)` directly. Of course, the `BOOST_PP_TO_TUPLE(__VA_ARGS__)` invocation will be hidden inside `BOOST_LOCAL_FUNCTION_PARAMS(...)` expansion to respect my library users' request that the `PARAMS` macro invocation should look like a normal C++ function parameter declaration as much as possible.
It is a DSEL. By definition it has its own syntax--as it obviously already does.
In summary, I would think that providing `BOOST_PP_TO_TUPLE(...)` and `BOOST_PP_TUPLE((...))` is a good approach.
In the case of a tuple, why would any of the above be better than simply (__VA_ARGS__)? Again, though I don't know what you're referring to by BOOST_PP_TUPLE((...)). What is that macro supposed to do?
So:

In summary, I would think that providing `BOOST_PP_TO_TUPLE(...)` and `BOOST_PP_TUPLE_TO_SEQ((...))` is a good approach.

And I could personally also live with doing `(__VA_ARGS__)` directly before invoking `BOOST_PP_TUPLE_TO_SEQ((...))` instead of using `BOOST_PP_TO_TUPLE(...)`.

On Mon, Feb 21, 2011 at 7:18 PM, Edward Diener <eldiener@tropicsoft.com> wrote:
On 2/21/2011 6:47 PM, Lorenzo Caminiti wrote:
In summary, I would think that providing `BOOST_PP_TO_TUPLE(...)` and `BOOST_PP_TUPLE((...))` is a good approach.
Sorry, when I said `BOOST_PP_VARIADIC_TUPLE(...)` and `BOOST_PP_TUPLE((...))` I meant of course `BOOST_PP_VARIADIC_TUPLE_TO_SEQ(...)` and `BOOST_PP_TUPLE_TO_SEQ((...))` (or some other pp tuple macro but without the size argument).
I am opposed to such redundancy. I think BOOST_PP_TUPLE_TO_SEQ(...) is enough.
-- Lorenzo

On 2/19/2011 10:48 AM, Lorenzo Caminiti wrote:
On Fri, Feb 18, 2011 at 9:58 PM, Edward Diener<eldiener@tropicsoft.com> wrote:
On 2/18/2011 7:27 PM, Lorenzo Caminiti wrote:
On Thu, Feb 17, 2011 at 5:13 PM, Edward Diener<eldiener@tropicsoft.com> wrote:
I am requesting that my library, the Variadic Macro Data library, which is in the sandbox in the variadic_macro_data directory, be reviewed for inclusion into Boost.
The variadic_macro_data library adds support and functionality for variadic macros to Boost as well as integrating variadic macros with the Boost PP library without changing the latter library in any way.
I believe others have used my library, can attest to its quality and that it does what it is supposed to do, and have found it useful when using variadic macros with Boost PP. I myself have used its functionality in my own TTI library in the sandbox. Support for variadic macros is implemented in nearly all modern C++ compilers and the syntax is natural for an end-user. The library is finalized as far as functionality is concerned and I would like to see it in Boost and am willing to maintain it as a Boost library.
Is it possible to use variadic macros to detect empty parameters?
My understanding of variadic macro data is that at least one parameter must be specified.
For example:
#include<boost/variadic_macro_data/VariadicMacroData.hpp> // Proposed lib.
VMD_DATA_SIZE(1, 2) // 2
VMD_DATA_SIZE(1) // 1
VMD_DATA_SIZE() // 1 not 0 :((
But I would like the last size to expand to 0 (or have a different macro that would expand to 0 in that case).
With a real C99 preprocessor (e.g., GCC) I can do the following because empty macro parameters are supported:
Are they for variadic macro data in C++ ?
I think variadics and empty macro parameters are different things. C99 preprocessor (e.g., GCC) supports both while MSVC only supports variadics. That is why I was wondering if variadics can be used to detect empty macro parameters so I can do so also on MSVC.
On Mon, Sep 6, 2010 at 3:29 PM, Paul Mensonides<pmenso57@comcast.net> wrote:
... However, IS_EMPTY is _not_ a macro for general-purpose emptiness detection. Its implementation requires the concatenation of an identifier to the front of the argument which rules out all arguments for which that isn't valid. For example, IS_EMPTY(+) is undefined behavior according to all revisions of both the C and C++ standards (including the forthcoming C++0x). Thus, at minimum, the argument must be an identifier (or keyword--same thing at this point) or a numeric literal that doesn't contain a decimal point.
It is valid (and has been since C90) to pass something that expands to nothing as an argument to a macro. However, it is not valid to pass nothing. E.g.
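A minimal illustration of the distinction, assuming a strict C90/C++98 preprocessor:

#define EMPTY()
#define M(x) [x]

M(EMPTY()) // valid since C90: the argument is the token sequence 'EMPTY()',
           // which merely expands to nothing
M()        // undefined in C90/C++98: the argument itself contains no tokens
           // (C99 and C++0x make this valid via placemarkers)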
See http://lists.boost.org/Archives/boost/2010/09/170639.php
I will look at that and see what I can come up with. If variadic macros support an empty parameter list, I should provide a correct size of 0. If it does not I should indicate an error. So either way I will look to make a correction. Thanks for pointing this out.
This works on both MSVC and GCC :) Does it work on other preprocessors? Can anyone please check?
#include<boost/variadic_macro_data/VariadicMacroData.hpp> // Proposed lib in Boost's sandbox.
#include<boost/preprocessor.hpp>
#include<boost/preprocessor/facilities/is_empty.hpp>
VMD_DATA_SIZE(1, 2) // 2
VMD_DATA_SIZE(1) // 1
VMD_DATA_SIZE() // 1 not 0 :((
#define PP_VA_EAT(...) /* must expand to nothing */
#define PP_VA_SIZE_1OR0_(maybe_empty) \
    BOOST_PP_IIF(BOOST_PP_IS_EMPTY(maybe_empty (/* expand empty */) ), 0, 1)
#define PP_VA_SIZE_(size, ...) \
    BOOST_PP_IIF(BOOST_PP_EQUAL(size, 1), \
        PP_VA_SIZE_1OR0_ \
    , \
        size PP_VA_EAT \
    )(__VA_ARGS__ BOOST_PP_EMPTY)
#define PP_VA_SIZE(...) PP_VA_SIZE_(VMD_DATA_SIZE(__VA_ARGS__), __VA_ARGS__)
PP_VA_SIZE(1, 2) // 2
PP_VA_SIZE(1) // 1
PP_VA_SIZE() // 0 :))
This does not work under gcc or msvc:

#include <boost/variadic_macro_data/vmd.hpp>
#include <boost/preprocessor.hpp>
#include <boost/preprocessor/facilities/is_empty.hpp>

#if !defined(BOOST_NO_VARIADIC_MACROS)

#define PP_VA_EAT(...) /* must expand to nothing */

#define PP_VA_SIZE_1OR0_(maybe_empty) \
    BOOST_PP_IIF(BOOST_PP_IS_EMPTY(maybe_empty (/* expand empty */) ), 0, 1)

#define PP_VA_SIZE_(size, ...) \
    BOOST_PP_IIF(BOOST_PP_EQUAL(size, 1), \
        PP_VA_SIZE_1OR0_ \
    , \
        size PP_VA_EAT \
    )(__VA_ARGS__ BOOST_PP_EMPTY)

#define PP_VA_SIZE(...) \
    PP_VA_SIZE_(VMD_DATA_SIZE(__VA_ARGS__), __VA_ARGS__)

#endif

int main()
{
    int j = PP_VA_SIZE(2);
    return 0;
}

gcc:

test_data_try.cpp: In function 'int main()':
test_data_try.cpp:23:11: error: 'BOOST_PP_IIF_BOOST_PP_COMPL_BOOST_PP_NOT_EQUAL_CHECK_BOOST_PP_NOT_EQUAL_VMD_DATA_SIZE' was not declared in this scope
test_data_try.cpp:23:1: error: 'BOOST_PP_NOT_EQUAL_1' was not declared in this scope
test_data_try.cpp:12:9: error: 'PP_VA_SIZE_1OR0_' was not declared in this scope
test_data_try.cpp:23:11: error: 'VMD_DATA_SIZE' was not declared in this scope
test_data_try.cpp:23:11: error: expected ')' before 'BOOST_PP_EMPTY'
test_data_try.cpp:23:7: warning: unused variable 'j'
...failed gcc.compile.c++

msvc:

test_data_try.cpp(23) : error C2065: 'BOOST_PP_NOT_EQUAL_1' : undeclared identifier
test_data_try.cpp(23) : error C2065: 'PP_VA_SIZE_1OR0_' : undeclared identifier
test_data_try.cpp(23) : error C2146: syntax error : missing ')' before identifier 'PP_VA_EAT'
test_data_try.cpp(23) : error C3861: 'BOOST_PP_IIF_BOOST_PP_COMPL_BOOST_PP_NOT_EQUAL_CHECK_BOOST_PP_NOT_EQUAL_VMD_DATA_SIZE': identifier not found
test_data_try.cpp(23) : error C3861: 'VMD_DATA_SIZE': identifier not found
test_data_try.cpp(23) : error C2059: syntax error : ')'
...failed compile-c-c++

My own best try at a CHECK_EMPTY macro so far is:

#include <boost/variadic_macro_data/vmd.hpp>
#include <boost/preprocessor.hpp>

#if !defined(BOOST_NO_VARIADIC_MACROS)

#define CAT_ONE(...) VMD_DETAIL_CAT(1,__VA_ARGS__)

#define CHECK_EMPTY(...) \
    BOOST_PP_IIF \
    ( \
        BOOST_PP_EQUAL(1,BOOST_VMD_DATA_SIZE(CAT_ONE(__VA_ARGS__))), \
        BOOST_PP_EQUAL(1,BOOST_VMD_DATA_ELEM(0,CAT_ONE(__VA_ARGS__))), \
        0 \
    ) \
    /**/

#endif

int main()
{
    int i = CHECK_EMPTY();
    int j = CHECK_EMPTY(1);
    int k = CHECK_EMPTY(a,b,c);
    // int m = CHECK_EMPTY(a);
    return 0;
}

The idea is to paste '1' onto the front of the variadic macro data sequence; if the resulting size is just 1 and that single token is '1', the sequence must have been empty to begin with. This works correctly, returning 1 for i and 0 for j or k, until I uncomment the 'int m' line. Evidently one cannot use BOOST_PP_EQUAL if one of the parameters is not a number. I get on msvc:

test_data_try.cpp(24) : error C2065: 'BOOST_PP_NIL' : undeclared identifier
test_data_try.cpp(24) : error C3861: 'BOOST_PP_COMPL_BOOST_PP_NOT_EQUAL_CHECK_BOOST_PP_NOT_EQUAL_1a': identifier not found
...failed compile-c-c++

and on gcc:

test_data_try.cpp:21:23: warning: invoking macro CHECK_EMPTY argument 1: empty macro arguments are undefined in ISO C90 and ISO C++98
test_data_try.cpp:21:23: warning: invoking macro CAT_ONE argument 1: empty macro arguments are undefined in ISO C90 and ISO C++98
test_data_try.cpp:21:23: warning: invoking macro VMD_DETAIL_CAT argument 2: empty macro arguments are undefined in ISO C90 and ISO C++98
test_data_try.cpp:21:23: warning: invoking macro VMD_DETAIL_PRIMITIVE_CAT argument 2: empty macro arguments are undefined in ISO C90 and ISO C++98
test_data_try.cpp: In function 'int main()':
test_data_try.cpp:24:1: error: 'BOOST_PP_NIL' was not declared in this scope
test_data_try.cpp:24:1: error: 'BOOST_PP_COMPL_BOOST_PP_NOT_EQUAL_CHECK_BOOST_PP_NOT_EQUAL_1a' was not declared in this scope
...failed gcc.compile.c++

I think Paul Mensonides may be right and there is no foolproof way to check for a completely empty parameter list even using variadic macros. Further ideas ?

On Sun, 20 Feb 2011 20:25:43 -0500, Edward Diener wrote:
I think Paul Mensonides may be right and there is no foolproof way to check for a completely empty parameter list even using variadic macros. Further ideas ?
Trust me, I am right. About the best you can do is prohibit input that terminates in a function-like macro name. You can generally detect emptiness *except* for that case.

However, DATA_SIZE() => 0 is ill-conceived. An empty argument is still an argument to the preprocessor. A better correlation is:

DATA_SIZE(,,) => 3
DATA_SIZE(,) => 2
DATA_SIZE() => 1

This is yet another reason why variadic content doesn't make for a good data structure. There is no way to denote the empty sequence. For all others, there are ways:

(,,)   // 3-element tuple
(,)    // 2-element tuple
()     // 1-element tuple
       // 0-element tuple (blank)

()()() // 3-element seq
()()   // 2-element seq
()     // 1-element seq
       // 0-element seq (blank)

Regards,
Paul Mensonides

On 2/21/2011 4:05 AM, Paul Mensonides wrote:
On Sun, 20 Feb 2011 20:25:43 -0500, Edward Diener wrote:
I think Paul Mensonides may be right and there is no foolproof way to check for a completely empty parameter list even using variadic macros. Further ideas ?
Trust me, I am right. About the best you can do is prohibit input that terminates in a function-like macro name. You can generally detect emptiness *except* for that case.
However, DATA_SIZE() => 0 is ill-conceived. An empty argument is still an argument to the preprocessor. A better correlation is:
DATA_SIZE(,,) => 3
DATA_SIZE(,) => 2
DATA_SIZE() => 1
Thanks ! I will just have to further document that the data size returned can never be 0, even when the variadic macro is invoked with an empty argument.
This is yet another reason why variadic content doesn't make for a good data structure.
Agreed.
There is no way to denote the empty sequence. For all others, there are ways:
(,,)   // 3-element tuple
(,)    // 2-element tuple
()     // 1-element tuple
       // 0-element tuple (blank)
()()() // 3-element seq
()()   // 2-element seq
()     // 1-element seq
       // 0-element seq (blank)

On Mon, Feb 21, 2011 at 9:19 AM, Edward Diener <eldiener@tropicsoft.com> wrote:
On 2/21/2011 4:05 AM, Paul Mensonides wrote:
On Sun, 20 Feb 2011 20:25:43 -0500, Edward Diener wrote:
I think Paul Mensonides may be right and there is no foolproof way to check for a completely empty parameter list even using variadic macros. Further ideas ?
Trust me, I am right. About the best you can do is prohibit input that terminates in a function-like macro name. You can generally detect emptiness *except* for that case.
However, DATA_SIZE() => 0 is ill-conceived. An empty argument is still an argument to the preprocessor. A better correlation is:
DATA_SIZE(,,) => 3
DATA_SIZE(,) => 2
DATA_SIZE() => 1
Thanks ! I will just have to further document that the data size returned can never be 0, even when the variadic macro is invoked with an empty argument.
IMO, that makes sense -- so documentation is a good option.

However, I still don't understand why MSVC accepts this DATA_SIZE() invocation:

#define DATA_SIZE(...)
DATA_SIZE(1)
DATA_SIZE() // No error -- why??

#define SIZE(x)
SIZE(1)
SIZE() // Error -- as it should!

E:\sandbox\boost-sandbox\local\libs\local\example>cl /EHs /I"c:\Program Files\boost\boost_1_45_0" /I..\..\.. 01.cpp /EP
Microsoft (R) 32-bit C/C++ Optimizing Compiler Version 14.00.50727.42 for 80x86
Copyright (C) Microsoft Corporation. All rights reserved.

01.cpp
01.cpp(8) : warning C4003: not enough actual parameters for macro 'SIZE'

I would think that DATA_SIZE() should error (like SIZE() correctly does) because it is invoked with an empty macro parameter... Why does the variadic DATA_SIZE() not error on MSVC given that MSVC does not support empty macro parameters?

-- Lorenzo

On Mon, 21 Feb 2011 12:12:15 -0500, Lorenzo Caminiti wrote:
On Mon, Feb 21, 2011 at 9:19 AM, Edward Diener <eldiener@tropicsoft.com> wrote:
However, DATA_SIZE() => 0 is ill-conceived. An empty argument is still an argument to the preprocessor. A better correlation is:
DATA_SIZE(,,) => 3
DATA_SIZE(,)  => 2
DATA_SIZE()   => 1
Thanks ! I will just have to further document that the data size returned can never be 0, even when the variadic macro is invoked with an empty argument.
IMO, that makes sense -- so documentation is a good option.
However, I still don't understand why MSVC accepts this DATA_SIZE() invocation:
#define DATA_SIZE(...)
DATA_SIZE(1)
DATA_SIZE() // No error -- why??
#define SIZE(x)
SIZE(1)
SIZE() // Error -- as it should!
Neither case should be an error. According to the C99 standard (and C++0x), the sequence of tokens (and whitespace separations) that makes up a macro argument may be empty or contain no tokens. If the sequence of tokens (and whitespace separations) contains no tokens, the formal parameters in the replacement list are replaced by a "placemarker", which is a sort of virtual token. So, in the following scenarios:

#define MACRO(x) [x]

MACRO(123)     // [123]
MACRO( 123 )   // [ 123 ]
MACRO(  123  ) // [ 123 ] -- adjacent whitespace is combined
               // in an earlier phase of translation
MACRO( )       // [<placemarker>] -> []
MACRO()        // [<placemarker>] -> []

For variadics, for the purposes of substitution, token-pasting, and stringizing, all of the variadic arguments act as one argument:

#define MACRO(...) [__VA_ARGS__]

MACRO()         // [<placemarker>] -> []
MACRO( )        // [<placemarker>] -> []
MACRO(a)        // [a]
MACRO(a,b)      // [a,b]
MACRO(a, b)     // [a, b]
MACRO( a,b, c ) // [ a,b, c ]

* Note that I don't know of a single preprocessor that actually handles whitespace correctly in all cases.

Regards, Paul Mensonides
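The placemarker rules matter most when token-pasting is involved. A small example of what C99 (and C++0x) guarantee when pasting against an empty argument:

#define CAT(a, b) a ## b

CAT(x, y) // xy
CAT(x, )  // x -- x ## <placemarker> yields x
CAT(, y)  // y -- <placemarker> ## y yields y
CAT(, )   //   -- <placemarker> ## <placemarker> yields a placemarker,
          //      i.e. nothing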

So, in the following scenarios:
#define MACRO(x) [x]
MACRO(123)     // [123]
MACRO( 123 )   // [ 123 ]
MACRO(  123  ) // [ 123 ] -- adjacent whitespace is combined
               // in an earlier phase of translation

MACRO( )       // [<placemarker>] -> []
MACRO()        // [<placemarker>] -> []
For variadics, for the purposes of substitution, token-pasting, and stringizing, all of the variadic arguments act as one argument:
#define MACRO(...) [__VA_ARGS__]
MACRO()         // [<placemarker>] -> []
MACRO( )        // [<placemarker>] -> []
MACRO(a)        // [a]
MACRO(a,b)      // [a,b]
MACRO(a, b)     // [a, b]
MACRO( a,b, c ) // [ a,b, c ]
* Note that I don't know of a single preprocessor that actually handles whitespace correctly in all cases.
Damn, I know I had that right in Wave at some point. Must have broken it later... And apparently there are no tests in Wave verifying this is functioning as expected. I'll fix both things asap.

Regards Hartmut
---------------
http://boost-spirit.com

On Mon, 21 Feb 2011 13:22:41 -0600, Hartmut Kaiser wrote:
#define MACRO(...) [__VA_ARGS__]
MACRO()         // [<placemarker>] -> []
MACRO( )        // [<placemarker>] -> []
MACRO(a)        // [a]
MACRO(a,b)      // [a,b]
MACRO(a, b)     // [a, b]
MACRO( a,b, c ) // [ a,b, c ]
* Note that I don't know of a single preprocessor that actually handles whitespace correctly in all cases.
Damn, I know I had that right in Wave at some point. Must have broken it later... And apparently there are no tests in Wave verifying this is functioning as expected.
I'll fix both things asap.
Note that there is only one way that the lack of proper handling can affect the semantics of a program: stringizing. Further, it must be internal whitespace, because stringizing removes leading and trailing whitespace and condenses adjacent internal whitespace by definition -- which you can get because the normal whitespace condensation happens in an earlier, separate phase:

#define A(x) 1 x 3

A(2)   // 1 2 3
A( 2 ) // 1 2 3

When stringized these both result in "1 2 3". However, something like this is where things change:

#define B(x) (1)x(3)

STRINGIZE(B(3))   // "(1)3(3)"
STRINGIZE(B( 3 )) // "(1) 3 (3)" -- aha! different semantics

Other than scenarios like that, it makes no effective difference.

Regards, Paul Mensonides
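STRINGIZE is not defined in the message; the results above assume the usual two-level definition, under which the argument is macro-expanded before it reaches the # operator:

#define STRINGIZE(x) STRINGIZE_I(x) /* extra indirection expands x first */
#define STRINGIZE_I(x) #x

With a plain one-level #define STRINGIZE(x) #x, the argument would be stringized unexpanded (yielding "B( 3 )") -- which is the caveat Hartmut raises below.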

* Note that I don't know of a single preprocessor that actually handles whitespace correctly in all cases.
Damn, I know I had that right in Wave at some point. Must have broken it later... And apparently there are no tests in Wave verifying this is functioning as expected.
I'll fix both things asap.
Note that there is only one way that the lack of proper handling can affect the semantics of a program: stringizing. Further, it must be internal whitespace, because stringizing removes leading and trailing whitespace and condenses adjacent internal whitespace by definition -- which you can get because the normal whitespace condensation happens in an earlier, separate phase:
#define A(x) 1 x 3
A(2)   // 1 2 3
A( 2 ) // 1 2 3
This is the only place in Wave I found so far where the whitespace is not preserved; I get

A( 2 ) // 1 2 3

I'll see what I can do, though.
When stringized these both result in "1 2 3". However, something like this is where things change:
#define B(x) (1)x(3)
STRINGIZE(B(3))   // "(1)3(3)"
STRINGIZE(B( 3 )) // "(1) 3 (3)" -- aha! different semantics
Well, that depends on how you define the STRINGIZE() macro. It might not expand the B() at all :-P In any case, this is handled properly in Wave.
Other than scenarios like that, it makes no effective difference.
Sure.

Regards Hartmut
---------------
http://boost-spirit.com

On Sun, Feb 20, 2011 at 8:25 PM, Edward Diener <eldiener@tropicsoft.com> wrote:
On 2/19/2011 10:48 AM, Lorenzo Caminiti wrote:
On Fri, Feb 18, 2011 at 9:58 PM, Edward Diener<eldiener@tropicsoft.com> wrote:
On 2/18/2011 7:27 PM, Lorenzo Caminiti wrote:
On Thu, Feb 17, 2011 at 5:13 PM, Edward Diener<eldiener@tropicsoft.com> wrote:
I am requesting that my library, the Variadic Macro Data library, which is in the sandbox in the variadic_macro_data directory, be reviewed for inclusion into Boost.
The variadic_macro_data library adds support and functionality for variadic macros to Boost as well as integrating variadic macros with the Boost PP library without changing the latter library in any way.
I believe others have used my library, can attest to its quality and that it does what it is supposed to do. and have found it useful when using variadic macros with Boost PP. I myself have used its functionality in my own TTI library in the sandbox. Support for variadic macros is implemented in nearly all modern C++ compilers and the syntax is natural for an end-user. The library is finalized as far as functionality is concerned and I would like to see it in Boost and am willing to maintain it as a Boost library.
Is it possible to use variadic macros to detect empty parameters?
My understanding of variadic macro data is that at least one parameter must be specified.
For example:
#include<boost/variadic_macro_data/VariadicMacroData.hpp> // Proposed lib.
VMD_DATA_SIZE(1, 2) // 2
VMD_DATA_SIZE(1)    // 1
VMD_DATA_SIZE()     // 1 not 0 :((
But I would like the last size to expand to 0 (or have a different macro that would expand to 0 in that case).
With a real C99 preprocessor (e.g., GCC) I can do the following because empty macro parameters are supported:
Are they for variadic macro data in C++ ?
I think variadics and empty macro parameters are different things. C99 preprocessor (e.g., GCC) supports both while MSVC only supports variadics. That is why I was wondering if variadics can be used to detect empty macro parameters so I can do so also on MSVC.
On Mon, Sep 6, 2010 at 3:29 PM, Paul Mensonides<pmenso57@comcast.net> wrote:
... However, IS_EMPTY is _not_ a macro for general-purpose emptiness detection. Its implementation requires the concatenation of an identifier to the front of the argument which rules out all arguments for which that isn't valid. For example, IS_EMPTY(+) is undefined behavior according to all revisions of both the C and C++ standards (including the forthcoming C++0x). Thus, at minimum, the argument must be an identifier (or keyword--same thing at this point) or a numeric literal that doesn't contain a decimal point.
It is valid (and has been since C90) to pass something that expands to nothing as an argument to a macro. However, it is not valid to pass nothing. E.g.
See http://lists.boost.org/Archives/boost/2010/09/170639.php
I will look at that and see what I can come up with. If variadic macros support an empty parameter list, I should provide a correct size of 0. If it does not I should indicate an error. So either way I will look to make a correction. Thanks for pointing this out.
This works on both MSVC and GCC :) Does it work on other preprocessors? Can anyone please check?
#include <boost/variadic_macro_data/VariadicMacroData.hpp> // Proposed lib in Boost's sandbox.
#include <boost/preprocessor.hpp>
#include <boost/preprocessor/facilities/is_empty.hpp>
VMD_DATA_SIZE(1, 2) // 2
VMD_DATA_SIZE(1)    // 1
VMD_DATA_SIZE()     // 1 not 0 :((
#define PP_VA_EAT(...) /* must expand to nothing */
#define PP_VA_SIZE_1OR0_(maybe_empty) \
    BOOST_PP_IIF(BOOST_PP_IS_EMPTY(maybe_empty (/* expand empty */) ), 0, 1)
#define PP_VA_SIZE_(size, ...) \
    BOOST_PP_IIF(BOOST_PP_EQUAL(size, 1), \
        PP_VA_SIZE_1OR0_ \
    , \
        size PP_VA_EAT \
    )(__VA_ARGS__ BOOST_PP_EMPTY)
#define PP_VA_SIZE(...) PP_VA_SIZE_(VMD_DATA_SIZE(__VA_ARGS__), __VA_ARGS__)
PP_VA_SIZE(1, 2) // 2
PP_VA_SIZE(1)    // 1
PP_VA_SIZE()     // 0 :))
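A walkthrough of why the BOOST_PP_EMPTY placement makes this work (my reading of the snippet above, not part of the original post):

/* PP_VA_SIZE() -- the empty case:
 *   VMD_DATA_SIZE() yields 1, so PP_VA_SIZE_ dispatches to
 *   PP_VA_SIZE_1OR0_ applied to "__VA_ARGS__ BOOST_PP_EMPTY", which
 *   with empty __VA_ARGS__ is just "BOOST_PP_EMPTY".  Inside,
 *   "maybe_empty ()" becomes "BOOST_PP_EMPTY ()", which expands to
 *   nothing, so BOOST_PP_IS_EMPTY sees emptiness and the result is 0.
 *
 * PP_VA_SIZE(1) -- the non-empty size-1 case:
 *   PP_VA_SIZE_1OR0_ receives "1 BOOST_PP_EMPTY"; appending "()"
 *   gives "1 BOOST_PP_EMPTY ()", which expands to "1", not empty,
 *   so the result is 1.
 *
 * Sizes >= 2 never reach PP_VA_SIZE_1OR0_: "size PP_VA_EAT" swallows
 * the trailing arguments and leaves just the size.
 */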
This does not work under gcc or msvc:
As I said in my prev email, please try again after adding the BOOST_ prefixes to your lib macros, given that you are probably using the later rev of your lib where you added such prefixes. I did test this code on both the GCC and MSVC revs below:

E:\sandbox\boost-sandbox\local\libs\local\example>cl /?
Microsoft (R) 32-bit C/C++ Optimizing Compiler Version 14.00.50727.42 for 80x86
Copyright (C) Microsoft Corporation. All rights reserved.
...

$ g++ --version
g++ (GCC) 4.3.4 20090804 (release)
...

-- Lorenzo

On 2/21/2011 11:32 AM, Lorenzo Caminiti wrote:
On Sun, Feb 20, 2011 at 8:25 PM, Edward Diener<eldiener@tropicsoft.com> wrote:
On 2/19/2011 10:48 AM, Lorenzo Caminiti wrote:
On Fri, Feb 18, 2011 at 9:58 PM, Edward Diener<eldiener@tropicsoft.com> wrote:
On 2/18/2011 7:27 PM, Lorenzo Caminiti wrote:
On Thu, Feb 17, 2011 at 5:13 PM, Edward Diener<eldiener@tropicsoft.com> wrote:
I am requesting that my library, the Variadic Macro Data library, which is in the sandbox in the variadic_macro_data directory, be reviewed for inclusion into Boost.
The variadic_macro_data library adds support and functionality for variadic macros to Boost as well as integrating variadic macros with the Boost PP library without changing the latter library in any way.
I believe others have used my library, can attest to its quality and that it does what it is supposed to do. and have found it useful when using variadic macros with Boost PP. I myself have used its functionality in my own TTI library in the sandbox. Support for variadic macros is implemented in nearly all modern C++ compilers and the syntax is natural for an end-user. The library is finalized as far as functionality is concerned and I would like to see it in Boost and am willing to maintain it as a Boost library.
Is it possible to use variadic macros to detect empty parameters?
My understanding of variadic macro data is that at least one parameter must be specified.
For example:
#include<boost/variadic_macro_data/VariadicMacroData.hpp> // Proposed lib.
VMD_DATA_SIZE(1, 2) // 2
VMD_DATA_SIZE(1)    // 1
VMD_DATA_SIZE()     // 1 not 0 :((
But I would like the last size to expand to 0 (or have a different macro that would expand to 0 in that case).
With a real C99 preprocessor (e.g., GCC) I can do the following because empty macro parameters are supported:
Are they for variadic macro data in C++ ?
I think variadics and empty macro parameters are different things. C99 preprocessor (e.g., GCC) supports both while MSVC only supports variadics. That is why I was wondering if variadics can be used to detect empty macro parameters so I can do so also on MSVC.
On Mon, Sep 6, 2010 at 3:29 PM, Paul Mensonides<pmenso57@comcast.net> wrote:
... However, IS_EMPTY is _not_ a macro for general-purpose emptiness detection. Its implementation requires the concatenation of an identifier to the front of the argument which rules out all arguments for which that isn't valid. For example, IS_EMPTY(+) is undefined behavior according to all revisions of both the C and C++ standards (including the forthcoming C++0x). Thus, at minimum, the argument must be an identifier (or keyword--same thing at this point) or a numeric literal that doesn't contain a decimal point.
It is valid (and has been since C90) to pass something that expands to nothing as an argument to a macro. However, it is not valid to pass nothing. E.g.
See http://lists.boost.org/Archives/boost/2010/09/170639.php
I will look at that and see what I can come up with. If variadic macros support an empty parameter list, I should provide a correct size of 0. If it does not I should indicate an error. So either way I will look to make a correction. Thanks for pointing this out.
This works on both MSVC and GCC :) Does it work on other preprocessors? Can anyone please check?
#include <boost/variadic_macro_data/VariadicMacroData.hpp> // Proposed lib in Boost's sandbox.
#include <boost/preprocessor.hpp>
#include <boost/preprocessor/facilities/is_empty.hpp>
VMD_DATA_SIZE(1, 2) // 2
VMD_DATA_SIZE(1)    // 1
VMD_DATA_SIZE()     // 1 not 0 :((
#define PP_VA_EAT(...) /* must expand to nothing */
#define PP_VA_SIZE_1OR0_(maybe_empty) \
    BOOST_PP_IIF(BOOST_PP_IS_EMPTY(maybe_empty (/* expand empty */) ), 0, 1)
#define PP_VA_SIZE_(size, ...) \
    BOOST_PP_IIF(BOOST_PP_EQUAL(size, 1), \
        PP_VA_SIZE_1OR0_ \
    , \
        size PP_VA_EAT \
    )(__VA_ARGS__ BOOST_PP_EMPTY)
#define PP_VA_SIZE(...) PP_VA_SIZE_(VMD_DATA_SIZE(__VA_ARGS__), __VA_ARGS__)
PP_VA_SIZE(1, 2) // 2
PP_VA_SIZE(1)    // 1
PP_VA_SIZE()     // 0 :))
This does not work under gcc or msvc:
As I said in my prev email, please try again after adding the BOOST_ prefixes to your lib macros given that you are probably using the later rev of your lib where you added such prefixes.
I did test this code on both the GCC and MSVC revs below:
E:\sandbox\boost-sandbox\local\libs\local\example>cl /?
Microsoft (R) 32-bit C/C++ Optimizing Compiler Version 14.00.50727.42 for 80x86
Copyright (C) Microsoft Corporation. All rights reserved.
...
$ g++ --version
g++ (GCC) 4.3.4 20090804 (release)
...
Yes, it was my error. If I try:

#include <boost/variadic_macro_data/vmd.hpp>
#include <boost/preprocessor.hpp>
#include <boost/preprocessor/facilities/is_empty.hpp>

#if !defined(BOOST_NO_VARIADIC_MACROS)

#define PP_VA_EAT(...) /* must expand to nothing */

#define PP_VA_SIZE_1OR0_(maybe_empty) \
    BOOST_PP_IIF(BOOST_PP_IS_EMPTY(maybe_empty (/* expand empty */) ), 0, 1)

#define PP_VA_SIZE_(size, ...) \
    BOOST_PP_IIF(BOOST_PP_EQUAL(size, 1), \
        PP_VA_SIZE_1OR0_ \
    , \
        size PP_VA_EAT \
    )(__VA_ARGS__ BOOST_PP_EMPTY)

#define PP_VA_SIZE(...) PP_VA_SIZE_(BOOST_VMD_DATA_SIZE(__VA_ARGS__), __VA_ARGS__)

#endif

int main()
  {
  int z = PP_VA_SIZE(+);
  return 0;
  }

On gcc:

"gcc.compile.c++ ..\..\..\bin.v2\libs\variadic_macro_data\test\test_data_try.test\gcc-mingw-4.5.2\debug\test_data_try.o
test_data_try.cpp:22:1: error: pasting "BOOST_PP_IS_EMPTY_DEF_" and "+" does not give a valid preprocessing token
test_data_try.cpp: In function 'int main()':
test_data_try.cpp:22:7: warning: unused variable 'z'
"g++" -ftemplate-depth-128 -O0 -fno-inline -Wall -pedantic -g -Wno-variadic-macros -I"..\..\.." -I"C:\Programming\VersionControl\boost" -c -o "..\..\..\bin.v2\libs\variadic_macro_data\test\test_data_try.test\gcc-mingw-4.5.2\debug\test_data_try.o" "test_data_try.cpp"
...failed gcc.compile.c++ ..\..\..\bin.v2\libs\variadic_macro_data\test\test_data_try.test\gcc-mingw-4.5.2\debug\test_data_try.o..."

On MSVC it passes.
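For reference, the gcc error is the IS_EMPTY limitation Paul described made concrete: the detection pastes an identifier onto the front of its argument, and pasting an identifier against '+' does not form a valid preprocessing token. A simplified sketch of the mechanism (illustrative names, not the library's actual detail macros):

#define IS_EMPTY_PROBE(x) IS_EMPTY_DEF_ ## x

IS_EMPTY_PROBE()  /* IS_EMPTY_DEF_  -- a valid identifier, fine */
IS_EMPTY_PROBE(a) /* IS_EMPTY_DEF_a -- a valid identifier, fine */
IS_EMPTY_PROBE(+) /* IS_EMPTY_DEF_ ## + is not a valid preprocessing
                     token: undefined behavior, which gcc reports as
                     the pasting error above */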

On 2/18/2011 7:27 PM, Lorenzo Caminiti wrote:
On Thu, Feb 17, 2011 at 5:13 PM, Edward Diener<eldiener@tropicsoft.com> wrote:
I am requesting that my library, the Variadic Macro Data library, which is in the sandbox in the variadic_macro_data directory, be reviewed for inclusion into Boost.
The variadic_macro_data library adds support and functionality for variadic macros to Boost as well as integrating variadic macros with the Boost PP library without changing the latter library in any way.
I believe others have used my library, can attest to its quality and that it does what it is supposed to do. and have found it useful when using variadic macros with Boost PP. I myself have used its functionality in my own TTI library in the sandbox. Support for variadic macros is implemented in nearly all modern C++ compilers and the syntax is natural for an end-user. The library is finalized as far as functionality is concerned and I would like to see it in Boost and am willing to maintain it as a Boost library.
Is it possible to use variadic macros to detect empty parameters?
For example:
#include<boost/variadic_macro_data/VariadicMacroData.hpp> // Proposed lib.
VMD_DATA_SIZE(1, 2) // 2
VMD_DATA_SIZE(1)    // 1
VMD_DATA_SIZE()     // 1 not 0 :((
But I would like the last size to expand to 0 (or have a different macro that would expand to 0 in that case).
With a real C99 preprocessor (e.g., GCC) I can do the following because empty macro parameters are supported:
#include <boost/variadic_macro_data/VariadicMacroData.hpp> // Proposed lib.
#include <boost/preprocessor.hpp>
#include <boost/preprocessor/facilities/is_empty.hpp>
#define PP_VA_EAT(...) /* must expand to nothing */
#define PP_VA_SIZE_1OR0_(x) BOOST_PP_IIF(BOOST_PP_IS_EMPTY(x), 0, 1)
#define PP_VA_SIZE_(size, ...) \
    BOOST_PP_IIF(BOOST_PP_EQUAL(size, 1), \
        PP_VA_SIZE_1OR0_ \
    , \
        size PP_VA_EAT \
    )(__VA_ARGS__)
#define PP_VA_SIZE(...) PP_VA_SIZE_(VMD_DATA_SIZE(__VA_ARGS__), __VA_ARGS__)
PP_VA_SIZE(1, 2) // 2
PP_VA_SIZE(1)    // 1
PP_VA_SIZE()     // 0 :))
This does not work for me under gcc.

#include <boost/variadic_macro_data/vmd.hpp>
#include <boost/preprocessor.hpp>
#include <boost/preprocessor/facilities/is_empty.hpp>

#if !defined(BOOST_NO_VARIADIC_MACROS)

#define PP_VA_EAT(...) /* must expand to nothing */

#define PP_VA_SIZE_1OR0_(x) BOOST_PP_IIF(BOOST_PP_IS_EMPTY(x), 0, 1)

#define PP_VA_SIZE_(size, ...) \
    BOOST_PP_IIF(BOOST_PP_EQUAL(size, 1), \
        PP_VA_SIZE_1OR0_ \
    , \
        size PP_VA_EAT \
    )(__VA_ARGS__)

#define PP_VA_SIZE(...) PP_VA_SIZE_(VMD_DATA_SIZE(__VA_ARGS__), __VA_ARGS__)

#endif

int main()
  {
  int j = PP_VA_SIZE(2);
  return 0;
  }

Gcc:

"...patience...
...found 319 targets...
...updating 4 targets...
gcc.compile.c++ ..\..\..\bin.v2\libs\variadic_macro_data\test\test_data_try.test\gcc-mingw-4.5.2\debug\test_data_try.o
test_data_try.cpp: In function 'int main()':
test_data_try.cpp:25:11: error: 'BOOST_PP_IIF_BOOST_PP_COMPL_BOOST_PP_NOT_EQUAL_CHECK_BOOST_PP_NOT_EQUAL_VMD_DATA_SIZE' was not declared in this scope
test_data_try.cpp:25:1: error: 'BOOST_PP_NOT_EQUAL_1' was not declared in this scope
test_data_try.cpp:13:9: error: 'PP_VA_SIZE_1OR0_' was not declared in this scope
test_data_try.cpp:25:11: error: 'VMD_DATA_SIZE' was not declared in this scope
test_data_try.cpp:25:7: warning: unused variable 'j'
"g++" -ftemplate-depth-128 -O0 -fno-inline -Wall -pedantic -g -Wno-variadic-macros -I"..\..\.." -I"C:\Programming\VersionControl\boost" -c -o "..\..\..\bin.v2\libs\variadic_macro_data\test\test_data_try.test\gcc-mingw-4.5.2\debug\test_data_try.o" "test_data_try.cpp"
...failed gcc.compile.c++ ..\..\..\bin.v2\libs\variadic_macro_data\test\test_data_try.test\gcc-mingw-4.5.2\debug\test_data_try.o...
...skipped <p..\..\..\bin.v2\libs\variadic_macro_data\test\test_data_try.test\gcc-mingw-4.5.2\debug>test_data_try.exe for lack of <p..\..\..\bin.v2\libs\variadic_macro_data\test\test_data_try.test\gcc-mingw-4.5.2\debug>test_data_try.o...
...skipped <p..\..\..\bin.v2\libs\variadic_macro_data\test\test_data_try.test\gcc-mingw-4.5.2\debug>test_data_try.run for lack of <p..\..\..\bin.v2\libs\variadic_macro_data\test\test_data_try.test\gcc-mingw-4.5.2\debug>test_data_try.exe...
...failed updating 1 target...
...skipped 3 targets..."

On Sun, Feb 20, 2011 at 7:56 PM, Edward Diener <eldiener@tropicsoft.com> wrote:
On 2/18/2011 7:27 PM, Lorenzo Caminiti wrote:
On Thu, Feb 17, 2011 at 5:13 PM, Edward Diener<eldiener@tropicsoft.com> wrote:
I am requesting that my library, the Variadic Macro Data library, which is in the sandbox in the variadic_macro_data directory, be reviewed for inclusion into Boost.
The variadic_macro_data library adds support and functionality for variadic macros to Boost as well as integrating variadic macros with the Boost PP library without changing the latter library in any way.
I believe others have used my library, can attest to its quality and that it does what it is supposed to do. and have found it useful when using variadic macros with Boost PP. I myself have used its functionality in my own TTI library in the sandbox. Support for variadic macros is implemented in nearly all modern C++ compilers and the syntax is natural for an end-user. The library is finalized as far as functionality is concerned and I would like to see it in Boost and am willing to maintain it as a Boost library.
Is it possible to use variadic macros to detect empty parameters?
For example:
#include<boost/variadic_macro_data/VariadicMacroData.hpp> // Proposed lib.
VMD_DATA_SIZE(1, 2) // 2
VMD_DATA_SIZE(1)    // 1
VMD_DATA_SIZE()     // 1 not 0 :((
But I would like the last size to expand to 0 (or have a different macro that would expand to 0 in that case).
With a real C99 preprocessor (e.g., GCC) I can do the following because empty macro parameters are supported:
#include <boost/variadic_macro_data/VariadicMacroData.hpp> // Proposed lib.
#include <boost/preprocessor.hpp>
#include <boost/preprocessor/facilities/is_empty.hpp>
#define PP_VA_EAT(...) /* must expand to nothing */
#define PP_VA_SIZE_1OR0_(x) BOOST_PP_IIF(BOOST_PP_IS_EMPTY(x), 0, 1)
#define PP_VA_SIZE_(size, ...) \
    BOOST_PP_IIF(BOOST_PP_EQUAL(size, 1), \
        PP_VA_SIZE_1OR0_ \
    , \
        size PP_VA_EAT \
    )(__VA_ARGS__)
#define PP_VA_SIZE(...) PP_VA_SIZE_(VMD_DATA_SIZE(__VA_ARGS__),
Note that I am using the rev of your lib before you added the BOOST_ prefix (see VMD_... instead of BOOST_VMD_...).
__VA_ARGS__)
PP_VA_SIZE(1, 2) // 2
PP_VA_SIZE(1)    // 1
PP_VA_SIZE()     // 0 :))
This does not work for me under gcc.
It should work if either you add the BOOST_ prefix to your lib macros or you use the older rev of your lib without such prefixes.
#include <boost/variadic_macro_data/vmd.hpp>
#include <boost/preprocessor.hpp>
#include <boost/preprocessor/facilities/is_empty.hpp>
#if !defined(BOOST_NO_VARIADIC_MACROS)

#define PP_VA_EAT(...) /* must expand to nothing */

#define PP_VA_SIZE_1OR0_(x) BOOST_PP_IIF(BOOST_PP_IS_EMPTY(x), 0, 1)

#define PP_VA_SIZE_(size, ...) \
    BOOST_PP_IIF(BOOST_PP_EQUAL(size, 1), \
        PP_VA_SIZE_1OR0_ \
    , \
        size PP_VA_EAT \
    )(__VA_ARGS__)

#define PP_VA_SIZE(...) PP_VA_SIZE_(VMD_DATA_SIZE(__VA_ARGS__), __VA_ARGS__)

#endif
int main()
  {
  int j = PP_VA_SIZE(2);
  return 0;
  }
Gcc:
"...patience... ...found 319 targets... ...updating 4 targets... gcc.compile.c++ ..\..\..\bin.v2\libs\variadic_macro_data\test\test_data_try.test\gcc-mingw-4.5.2\debug\test_data_try.o test_data_try.cpp: In function 'int main()': test_data_try.cpp:25:11: error: 'BOOST_PP_IIF_BOOST_PP_COMPL_BOOST_PP_NOT_EQUAL_CHECK_BOOST_PP_NOT_EQUAL_VMD_DATA_SIZE'
Again, see that the error is that the PP_CAT expanded into the undefined symbol ...EQUAL_VMD_DATA_SIZE because VMD_DATA_SIZE did not expand, but BOOST_VMD_DATA_SIZE will expand here.
was not declared in this scope
test_data_try.cpp:25:1: error: 'BOOST_PP_NOT_EQUAL_1' was not declared in this scope
test_data_try.cpp:13:9: error: 'PP_VA_SIZE_1OR0_' was not declared in this scope
test_data_try.cpp:25:11: error: 'VMD_DATA_SIZE' was not declared in this scope
test_data_try.cpp:25:7: warning: unused variable 'j'
"g++" -ftemplate-depth-128 -O0 -fno-inline -Wall -pedantic -g -Wno-variadic-macros -I"..\..\.." -I"C:\Programming\VersionControl\boost" -c -o "..\..\..\bin.v2\libs\variadic_macro_data\test\test_data_try.test\gcc-mingw-4.5.2\debug\test_data_try.o" "test_data_try.cpp"
...failed gcc.compile.c++ ..\..\..\bin.v2\libs\variadic_macro_data\test\test_data_try.test\gcc-mingw-4.5.2\debug\test_data_try.o...
...skipped <p..\..\..\bin.v2\libs\variadic_macro_data\test\test_data_try.test\gcc-mingw-4.5.2\debug>test_data_try.exe for lack of <p..\..\..\bin.v2\libs\variadic_macro_data\test\test_data_try.test\gcc-mingw-4.5.2\debug>test_data_try.o...
...skipped <p..\..\..\bin.v2\libs\variadic_macro_data\test\test_data_try.test\gcc-mingw-4.5.2\debug>test_data_try.run for lack of <p..\..\..\bin.v2\libs\variadic_macro_data\test\test_data_try.test\gcc-mingw-4.5.2\debug>test_data_try.exe...
...failed updating 1 target...
...skipped 3 targets..."
-- Lorenzo

On 2/21/2011 11:27 AM, Lorenzo Caminiti wrote:
On Sun, Feb 20, 2011 at 7:56 PM, Edward Diener<eldiener@tropicsoft.com> wrote:
On 2/18/2011 7:27 PM, Lorenzo Caminiti wrote:
On Thu, Feb 17, 2011 at 5:13 PM, Edward Diener<eldiener@tropicsoft.com> wrote:
I am requesting that my library, the Variadic Macro Data library, which is in the sandbox in the variadic_macro_data directory, be reviewed for inclusion into Boost.
The variadic_macro_data library adds support and functionality for variadic macros to Boost as well as integrating variadic macros with the Boost PP library without changing the latter library in any way.
I believe others have used my library, can attest to its quality and that it does what it is supposed to do. and have found it useful when using variadic macros with Boost PP. I myself have used its functionality in my own TTI library in the sandbox. Support for variadic macros is implemented in nearly all modern C++ compilers and the syntax is natural for an end-user. The library is finalized as far as functionality is concerned and I would like to see it in Boost and am willing to maintain it as a Boost library.
Is it possible to use variadic macros to detect empty parameters?
For example:
#include<boost/variadic_macro_data/VariadicMacroData.hpp> // Proposed lib.
VMD_DATA_SIZE(1, 2) // 2
VMD_DATA_SIZE(1)    // 1
VMD_DATA_SIZE()     // 1 not 0 :((
But I would like the last size to expand to 0 (or have a different macro that would expand to 0 in that case).
With a real C99 preprocessor (e.g., GCC) I can do the following because empty macro parameters are supported:
#include <boost/variadic_macro_data/VariadicMacroData.hpp> // Proposed lib.
#include <boost/preprocessor.hpp>
#include <boost/preprocessor/facilities/is_empty.hpp>
#define PP_VA_EAT(...) /* must expand to nothing */
#define PP_VA_SIZE_1OR0_(x) BOOST_PP_IIF(BOOST_PP_IS_EMPTY(x), 0, 1)
#define PP_VA_SIZE_(size, ...) \
    BOOST_PP_IIF(BOOST_PP_EQUAL(size, 1), \
        PP_VA_SIZE_1OR0_ \
    , \
        size PP_VA_EAT \
    )(__VA_ARGS__)
#define PP_VA_SIZE(...) PP_VA_SIZE_(VMD_DATA_SIZE(__VA_ARGS__),
Note that I am using the rev of your lib before you added the BOOST_ prefix (see VMD_... instead of BOOST_VMD_...).
__VA_ARGS__)
PP_VA_SIZE(1, 2) // 2
PP_VA_SIZE(1)    // 1
PP_VA_SIZE()     // 0 :))
This does not work for me under gcc.
It should work if either you add the BOOST_ prefix to your lib macros or you use the older rev of your lib without such prefixes.
Yes, that was my error. Your example now works for gcc. But if you try:

PP_VA_SIZE(+)

it does not work.

"gcc.compile.c++ ..\..\..\bin.v2\libs\variadic_macro_data\test\test_data_try.test\gcc-mingw-4.5.2\debug\test_data_try.o
test_data_try.cpp:36:1: error: pasting "BOOST_PP_IS_EMPTY_DEF_" and "+" does not give a valid preprocessing token
test_data_try.cpp: In function 'int main()':
test_data_try.cpp:36:7: warning: unused variable 'z'
"g++" -ftemplate-depth-128 -O0 -fno-inline -Wall -pedantic -g -Wno-variadic-macros -I"..\..\.." -I"C:\Programming\VersionControl\boost" -c -o "..\..\..\bin.v2\libs\variadic_macro_data\test\test_data_try.test\gcc-mingw-4.5.2\debug\test_data_try.o" "test_data_try.cpp"
...failed gcc.compile.c++ ..\..\..\bin.v2\libs\variadic_macro_data\test\test_data_try.test\gcc-mingw-4.5.2\debug\test_data_try.o..."

On Mon, Feb 21, 2011 at 1:13 PM, Edward Diener <eldiener@tropicsoft.com> wrote:
On 2/21/2011 11:27 AM, Lorenzo Caminiti wrote:
On Sun, Feb 20, 2011 at 7:56 PM, Edward Diener<eldiener@tropicsoft.com> wrote:
On 2/18/2011 7:27 PM, Lorenzo Caminiti wrote:
On Thu, Feb 17, 2011 at 5:13 PM, Edward Diener<eldiener@tropicsoft.com> wrote:
I am requesting that my library, the Variadic Macro Data library, which is in the sandbox in the variadic_macro_data directory, be reviewed for inclusion into Boost.
The variadic_macro_data library adds support and functionality for variadic macros to Boost as well as integrating variadic macros with the Boost PP library without changing the latter library in any way.
I believe others have used my library, can attest to its quality and that it does what it is supposed to do. and have found it useful when using variadic macros with Boost PP. I myself have used its functionality in my own TTI library in the sandbox. Support for variadic macros is implemented in nearly all modern C++ compilers and the syntax is natural for an end-user. The library is finalized as far as functionality is concerned and I would like to see it in Boost and am willing to maintain it as a Boost library.
Is it possible to use variadic macros to detect empty parameters?
For example:
#include<boost/variadic_macro_data/VariadicMacroData.hpp> // Proposed lib.
VMD_DATA_SIZE(1, 2) // 2
VMD_DATA_SIZE(1)    // 1
VMD_DATA_SIZE()     // 1 not 0 :((
But I would like the last size to expand to 0 (or have a different macro that would expand to 0 in that case).
With a real C99 preprocessor (e.g., GCC) I can do the following because empty macro parameters are supported:
#include <boost/variadic_macro_data/VariadicMacroData.hpp> // Proposed lib.
#include <boost/preprocessor.hpp>
#include <boost/preprocessor/facilities/is_empty.hpp>
#define PP_VA_EAT(...) /* must expand to nothing */
#define PP_VA_SIZE_1OR0_(x) BOOST_PP_IIF(BOOST_PP_IS_EMPTY(x), 0, 1)
#define PP_VA_SIZE_(size, ...) \
    BOOST_PP_IIF(BOOST_PP_EQUAL(size, 1), \
        PP_VA_SIZE_1OR0_ \
    , \
        size PP_VA_EAT \
    )(__VA_ARGS__)
#define PP_VA_SIZE(...) PP_VA_SIZE_(VMD_DATA_SIZE(__VA_ARGS__),
Note that I am using the rev of your lib before you added the BOOST_ prefix (see VMD_... instead of BOOST_VMD_...).
__VA_ARGS__)
PP_VA_SIZE(1, 2) // 2
PP_VA_SIZE(1)    // 1
PP_VA_SIZE()     // 0 :))
This does not work for me under gcc.
It should work if either you add the BOOST_ prefix to your lib macros or you use the older rev of your lib without such prefixes.
Yes, that was my error. Your example now works for gcc. But if you try:
PP_VA_SIZE(+)
it does not work.
Yep, as I mentioned in my original email, Paul Mensonides indicated this a while back:

On Sat, Feb 19, 2011 at 10:48 AM, Lorenzo Caminiti <lorcaminiti@gmail.com> wrote:
On Mon, Sep 6, 2010 at 3:29 PM, Paul Mensonides <pmenso57@comcast.net> wrote:
... However, IS_EMPTY is _not_ a macro for general-purpose emptiness detection. Its implementation requires the concatenation of an identifier to the front of the argument which rules out all arguments for which that isn't valid. For example, IS_EMPTY(+) is undefined behavior according to all revisions of both the C and C++ standards (including the forthcoming C++0x). Thus, at minimum, the argument must be an identifier (or keyword--same thing at this point) or a numeric literal that doesn't contain a decimal point.
It is valid (and has been since C90) to pass something that expands to nothing as an argument to a macro. However, it is not valid to pass nothing. E.g.
See http://lists.boost.org/Archives/boost/2010/09/170639.php
I was just asking if:

On Fri, Feb 18, 2011 at 7:27 PM, Lorenzo Caminiti <lorcaminiti@gmail.com> wrote:
Is it possible to use variadic macros to detect empty parameters?
I know my code has limitations like the "+", and that is why I was asking to see if with variadics all these issues/limitations in detecting empty macro params could be worked around... From this email thread:

On Mon, Feb 21, 2011 at 4:05 AM, Paul Mensonides <pmenso57@comcast.net> wrote:
On Sun, 20 Feb 2011 20:25:43 -0500, Edward Diener wrote:
I think Paul Mensonides may be right and there is no foolproof way to check for a completely empty parameter list even using variadic macros. Further ideas ?
Trust me, I am right. About the best you can do is prohibit input that terminates in a function-like macro name. You can generally detect emptiness *except* for that case.
It appears that the answer is sadly "not" :(( But thanks for looking into this!

-- Lorenzo
participants (10)
- Daniel Larimer
- Edward Diener
- Gordon Woodhull
- Hartmut Kaiser
- Joachim Faulhaber
- Lorenzo Caminiti
- Paul Mensonides
- Robert Ramey
- Ronald Garcia
- Steven Watanabe