[fusion, phoenix, spirit2] C++ orientation
Hello. It's been a little while since I've gone into much depth in C++. I've been in the C# world for recent years and have a fairly advanced view of that world, through concepts like push-oriented dependency injection, pull-oriented inversion of control (less preferred), functional-style programming using anonymous actions, functions, lambdas, and so on.

I am working on a project now where we are adopting C++ into the mix, and I would like to get a much better sense of Spirit (likely Spirit2) for running sequences of complex data through various calculations, finds, counts, etc., probably needing to dynamically parse these calculations. Our data will be a series of spiked tuples (one or more) of n-dimensional packed values, and we'll probably need to dynamically parse calculations on the n dimensions to determine did we satisfy the solution.

I don't know if that makes any sense to anyone. It's all very conceptual, very abstract, at this stage of the game. I know, or have a sense, exactly how I might do it in C#, .NET 4, and so on. A little less so what's available in C++, much less boost, spirit, phoenix, fusion, etc.

Any insight where to begin would be helpful.

Regards,
Michael
On 24/02/13 04:56, Michael Powell wrote:
I don't know if that makes any sense to anyone. It's all very conceptual, very abstract, at this stage of the game. I know, or have a sense, exactly how I might do it in C#, .NET 4, and so on. A little less so what's available in C++, much less boost, spirit, phoenix, fusion, etc.
Boost is a collection of C++ libraries. Although there are libraries on various subjects, what it covers is fairly little compared to .NET, and it isn't as consistent.

Spirit is an LL(k) parser generator where you define the grammar directly with C++ expressions, in a way that resembles EBNF syntax. Phoenix is a mechanism to define lambda functions with very concise syntax. Fusion is a set of algorithms that operate on tuples. Spirit uses Fusion in its data structures, and you may use Phoenix to embed actions in the grammar.
On 2/24/2013 5:27 AM, Mathias Gaunard wrote:
On 24/02/13 04:56, Michael Powell wrote:
I don't know if that makes any sense to anyone. It's all very conceptual, very abstract, at this stage of the game. I know, or have a sense, exactly how I might do it in C#, .NET 4, and so on. A little less so what's available in C++, much less boost, spirit, phoenix, fusion, etc.
Boost is a collection of C++ libraries. Although there are libraries on various subjects, what it covers is fairly little compared to .NET, and it isn't as consistent.
Are you implying this of Boost libraries in general? If so, I beg to differ. Boost libraries cover more than the .NET libraries do in the realm of general-purpose programming. The .NET API very nicely covers APIs specific to Windows programming and some general-purpose APIs, but not nearly so much as Boost does of the latter. I do not mean to start an opinion war, but I think your reply above gives the OP the wrong idea about the strengths of Boost libraries, especially as he is new to Boost. And yes, I have programmed in .NET (both C# and C++/CLI) pretty extensively.
Perhaps I should clarify too.
On Sun, Feb 24, 2013 at 10:26 AM, Edward Diener
Are you implying this of Boost libraries in general? If so, I beg to differ. Boost libraries cover more than the .NET libraries do in the realm of general-purpose programming. The .NET API very nicely covers APIs specific to Windows programming and some general-purpose APIs, but not nearly so much as Boost does of the latter.
There's no opinion war I'm looking for here. I've had exposure to C++/CLI as well. There are a couple of potential problems I am charged with solving to do with .NET CodeDom-type issues: dynamic compilation, pluggable interfaces, that sort of thing, and potentially addressing those issues. However, the decision was made outside my control to go with Linux / C++, so my response to that was, "let's utilize Boost for the model's engine where the dynamic (and other) stuff is concerned." So my questions were too broad, I'll concede that. I am learning what's available, and can/will ask more informed questions later.
I do not mean to start an opinion war, but I think your reply above gives the OP the wrong idea about the strengths of Boost libraries, especially as he is new to Boost. And yes, I have programmed in .NET (both C# and C++/CLI) pretty extensively.
I do appreciate the feedback in the meantime, from all interested folks.
_______________________________________________
Boost-users mailing list
Boost-users@lists.boost.org
http://lists.boost.org/mailman/listinfo.cgi/boost-users
On 02/23/13 21:56, Michael Powell wrote:
Hi Michael, I've a few questions. [snip]
Our data will be a series of spiked tuples (one or more) of
What's a "spiked tuple"? Googling the term didn't help :(
n-dimensional packed values, and we'll probably need to dynamically
By "packed" do you mean what's described here: http://en.wikipedia.org/wiki/Densely_packed_decimal
parse calculations on the n-dimensions to determine did we satisfy the solution.
What does "dynamically parse" mean? Is your problem a numerical one, for example solving a system of differential equations?

-regards,
Larry
Hi Michael,

I have to agree. To me this sounds more like a description of a numerical problem in an n-dimensional vector space, and not a problem which uses heterogeneous-data transformations (as Fusion does) or parsing (as Spirit does). uBLAS could be a starting point for a linear algebra library. Odeint could help you integrate ODEs, but I don't know your problem. However, it is likely that for most mathematical problems there are better libraries out there than the ones provided by Boost.

Greetings,
Oswin

On 24.02.2013 16:06, Larry Evans wrote: [snip]
Will try to be more specific about what we're using Boost to accomplish.
On Sun, Feb 24, 2013 at 9:06 AM, Larry Evans
On 02/23/13 21:56, Michael Powell wrote: Hi Michael, I've a few questions. [snip]
Our data will be a series of spiked tuples (one or more) of
What's a "spiked tuple"? Googling the term didn't help:(
I expect I will have a tuple of different values: RGB color fields and perhaps one or two other fields of interest. There will be a sequence of these tuple sequences, more than likely. But I haven't completely figured that part out yet. So each item (sequence) in the main sequence could be spiked. Sorry if that wasn't clearer.
n-dimensional packed values, and we'll probably need to dynamically
By "packed" do you mean what's described here:
No, nothing like BCD. I don't know if "packed" is quite the right term. Each item in the spiked sequence will be a base RGB.
parse calculations on the n-dimensions to determine did we satisfy the solution.
What does "dynamically parse" mean? Is your problem a numerical one, for example solving a system of differential equations?
Yes, I expect there will be different sets of equations, questions we'll want to be asking the tuples, depending on the circumstances. This could be something that is specified by a parser grammar acting on the different tuple elements themselves, over the sequence perhaps, and so on. Again, this hasn't been fully thought through.
On Feb 24, 2013 2:50 PM, "Michael Powell"
I think I still do not understand "spiked". Also, where is the data coming from? Maybe Serialization could be useful for writing out these sequences and reading them in binary. If the writing app is not C++, Google protocol buffers could potentially be used. Unless you have a good reason for human-readable data, I would advocate binary.

Brian
On Mon, Feb 25, 2013 at 8:54 AM, Brian Budge
Like a spiked array: visually, something like this: {{#}, {##}, {#}, {}, {#####}}, where we have a sequence of sequences, each # represents one of the RGB-based tuples.

Not especially germane to this thread, although, yes, we will be serializing the data in some form. What I am focused on here is how to expose these details into the domain model for processing. For instance, potentially we have a parser facet as part of the processing algorithm that can run parsed calculations on each
Michael,
Since it looks like you are writing image-processing code, maybe Boost.GIL would be worth looking at.
Regards,
Rodrigo
Like a spiked array: visually, something like this: {{#}, {##}, {#}, {}, {#####}}, where we have a sequence of sequences, each # represents one of the RGB-based tuples.
Not especially germane to this thread, although, yes, we will be serializing the data in some form. What I am focused on here is how to expose these details into the domain model for processing. For instance, potentially we have a parser facet as part of the processing algorithm that can run parsed calculations on each
Okay, so between the link in another post in the thread and your description above, I'm thinking something like

typedef std::vector< std::vector<RGB> > spiked_array;

A type like this is likely trivially serializable using the Serialization library. If the data is coming out of a C# program, you might want to try another binary representation. It's probably also easy to write out a representation that can be parsed pretty simply with Spirit, and you could supply semantic actions if you want; however, if you are dumping and parsing text, you'll need to be careful not to lose precision if you are using floating point, and if you care about performance, I would shy away from text-based representations. I think there may be better ways to apply different "facet" calculators for different spiked_arrays by running over the data in a deserialized (in-data-structure) representation.

Perhaps it would help if you gave a bit more information on what kinds of different spiked arrays you expect to see, and what kinds of different compute you will want to run on the different spiked_arrays.

Brian
On Mon, Feb 25, 2013 at 12:00 PM, Brian Budge
Perhaps I should be clearer: I am not interested in parsing the serialization per se, although we may need to do that if core XML serialization is inadequate. Not as concerned about that right now. The spiked arrays themselves are not germane to the discussion. I am interested in doing parser-generated calculations on the R-G-B fields themselves, derivatives of them, or perhaps over the range of them, vector, or vector of vectors, etc. That's the vertical seam along which our design needs to be extensible.

The connection I am wanting to make is hooking into a pluggable framework, the data collections themselves, and some nominal run-time environment to capture parser-generated questions on the data.
Aha. I think I finally understand what you want. You will have some domain-specific language (DSL) with which you will be describing what to do to the data. This is like a simplified matlab-type interface? You load some set of data, and then you want to describe how to process the data. You're interested in parsing via Spirit, and then executing some kinds of instructions on all the data.

If this is what you want, you can indeed use Spirit, though there is also Boost.Proto, which is specifically for DSLs. I can't comment much on Proto, as I have not used the library. Can you confirm if I am on the right track?

Brian
On Mon, Feb 25, 2013 at 1:30 PM, Brian Budge
Note that Proto is for compile-time DSLs, i.e. C++ code that looks like a DSL, so the queries would be hard-coded into the C++ code. Spirit is for parsing text. Assuming the end-users will be building the queries on the fly, Spirit is probably the right choice.

Tony
On Mon, Feb 25, 2013 at 12:54 PM, Brian Budge
Don't know Proto, although possibly, perhaps yes, if necessary. I expect that we'll have some species that are configuration-driven; some of that configuration is for things like parser-generated calibration and testing. Basically, the testing bottom line says: did this series of data match the species question? Yes, potentially, enter Spirit. We do something akin to that using Lunula in the C# .NET world today, a crude DSL that yields those types of answers. Now we're migrating to a next-generation, Linux / C++ based solution, hence my interest in Boost, Spirit, etc.
Thanks Tony for the info.
On 2/26/13 2:20 AM, Michael Powell wrote:
I am interested in doing parser-generated calculations on the R-G-B fields themselves, derivatives of them, or perhaps over the range of them, vector, or vector of vectors, etc. That's the vertical seam along which our design needs to be extensible.
The connection I am wanting to make is hooking into a pluggable framework, the data collections themselves, and some nominal run time environment to capture parser-generated questions on the data.
Michael, there's always the Spirit list if you need anything. Seems like a good match for Spirit indeed.

Regards,
--
Joel de Guzman
http://www.ciere.com http://boost-spirit.com http://www.cycfi.com/
On 02/24/13 16:48, Michael Powell wrote:
I expect I will have a tuple of different values, RGB color fields and perhaps one or two other fields of interest. There will be a sequence of these tuple sequences more than likely. But I haven't completely figured that part out yet. So each item (sequence) in the main sequence could be spiked. Sorry if that wasn't clearer.

Hi Michael,
Sorry, but it still isn't clear what "spiked" means. Apparently this term is so familiar to you that it seems easy to explain; however, so far, none of your explanations are turning on any lights in my brain :( I've googled for "spiked array" and got several hits; however, I didn't find a definition (although I just looked at the first page of hits).

Could you please give a definition of "spiked array" for "dummies"? [snip]

-Regards,
Larry
On 2013-02-25 16:51, Larry Evans wrote:
Could you please give a definition of "spiked array" for "dummies"? [snip]
Sounds to me like he means "jagged" arrays. Sebastian
On 02/25/13 10:21, Sebastian Redl wrote:
Sounds to me like he means "jagged" arrays.
Sebastian

Ah, thanks Sebastian. Googling that led me to:
http://stackoverflow.com/questions/2576759/what-is-a-jagged-array

which turned on the light in my head. Thanks.

-Larry
participants (10)
- Brian Budge
- Edward Diener
- Gottlob Frege
- Joel de Guzman
- Larry Evans
- Mathias Gaunard
- Michael Powell
- oswin krause
- Rodrigo Madera
- Sebastian Redl