On Mon, Feb 25, 2013 at 12:00 PM, Brian Budge
<brian.budge@gmail.com> wrote:
> Like a spiked array: visually, something like this: {{#}, {##}, {#}, {},
> {#####}}, where we have a sequence of sequences, and each # represents one
> of the RGB-based tuples.
>
> Not especially germane to this thread, although, yes, we will be serializing
> the data in some form. What I am focused on here is how to expose these
> details to the domain model for processing. For instance, we might have a
> parser facet as part of the processing algorithm that can run parsed
> calculations on each element.
>
Okay, so between the link in another post in the thread and your
description above, I'm thinking something like

    typedef std::vector< std::vector<RGB> > spiked_array;

A type like this is likely trivially serializable using the
serialization library. If the data is coming out of a C# program,
though, you might want a different binary representation, since the
serialization library's archives aren't meant for cross-language use.
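For concreteness, a minimal sketch of what I mean (assuming a plain
struct RGB with float channels; your real tuple type may differ):

    #include <boost/archive/text_oarchive.hpp>
    #include <boost/serialization/vector.hpp>
    #include <fstream>
    #include <vector>

    // Hypothetical RGB tuple; substitute your actual type.
    struct RGB {
        float r, g, b;

        template <class Archive>
        void serialize(Archive& ar, const unsigned /*version*/) {
            ar & r & g & b;
        }
    };

    typedef std::vector< std::vector<RGB> > spiked_array;

    // boost/serialization/vector.hpp makes the nested vectors
    // serializable for free once RGB is:
    void save(const spiked_array& a, const char* path) {
        std::ofstream os(path);
        boost::archive::text_oarchive oa(os);
        oa << a;
    }

Swap text_oarchive for binary_oarchive if the text round-trip worries
you.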
It's probably also easy to write out a representation that can be
parsed pretty simply with spirit, and you could supply semantic
actions if you want. However, if you are dumping and parsing text,
you'll need to be careful not to lose precision on floating-point
values, and if you care about performance, I would shy away from
text-based representations altogether. I think there may be better
ways to apply different "facet" calculators to different
spiked_arrays by running over the data in a deserialized
(in-data-structure) representation.
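To make that concrete, one sketch (hypothetical names, reusing the
RGB/spiked_array above): a "facet" is just a callable applied per
tuple over the in-memory structure.

    #include <functional>
    #include <vector>

    typedef std::function<double(const RGB&)> facet_calculator;

    std::vector<double> apply_facet(const spiked_array& a,
                                    const facet_calculator& f) {
        std::vector<double> out;
        for (const std::vector<RGB>& seq : a)   // each inner sequence
            for (const RGB& px : seq)           // each RGB tuple
                out.push_back(f(px));
        return out;
    }

    // e.g. luminance as one facet:
    // apply_facet(a, [](const RGB& c)
    //     { return 0.2126*c.r + 0.7152*c.g + 0.0722*c.b; });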
Perhaps I should be clearer: I am not interested in parsing the
serialization per se, although we may need to do that if core Xml
serialization is inadequate. I'm not as concerned about that right
now, and the spiked arrays themselves are not germane to the
discussion.
I am interested in doing parser-generated calculations on the R-G-B
fields themselves, on derivatives of them, or perhaps over a range of
them: a vector, or a vector of vectors, etc. That's the vertical seam
along which our design needs to be extensible.
The connection I want to make is between a pluggable framework, the
data collections themselves, and some nominal run-time environment
that captures parser-generated questions about the data. Roughly the
shape I have in mind is sketched below.
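This is a hand-rolled sketch with hypothetical names (a real version
would likely build the calculator from an actual grammar, spirit or
otherwise): a textual question is parsed once into a calculator that
then runs over the fields.

    #include <functional>
    #include <stdexcept>
    #include <string>
    #include <vector>

    typedef std::function<double(const RGB&)> facet_calculator;

    // Parse a tiny "question" like "r+g" into a runtime calculator.
    // Only sums of channel names are supported in this sketch.
    facet_calculator parse_question(const std::string& expr) {
        std::vector<char> fields;
        for (char ch : expr) {
            if (ch == 'r' || ch == 'g' || ch == 'b')
                fields.push_back(ch);
            else if (ch != '+' && ch != ' ')
                throw std::runtime_error("unknown token in question");
        }
        return [fields](const RGB& c) {
            double sum = 0.0;
            for (char f : fields)
                sum += (f == 'r' ? c.r : f == 'g' ? c.g : c.b);
            return sum;
        };
    }

    // parse_question("r+g") then plugs into something like the
    // apply_facet you sketched.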
Perhaps it would help if you gave a bit more information on what kinds
of different spiked arrays you expect to see, and what kinds of
computations you will want to run on them.