Like a spiked array; visually, something like this: {{#}, {##}, {#}, {}, {#####}}, where we have a sequence of sequences and each # represents one of the RGB-based tuples.
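For concreteness, here is a minimal sketch of that shape in C++; RGB is just a placeholder for whatever the real tuple type turns out to be, and the row lengths mirror the picture above:

    #include <vector>

    // Placeholder element type; the real tuple may carry different fields.
    struct RGB { float r, g, b; };

    // Rows of differing length, including an empty one, matching
    // {{#}, {##}, {#}, {}, {#####}}.
    std::vector< std::vector<RGB> > make_example()
    {
        std::vector< std::vector<RGB> > rows(5);
        rows[0].resize(1);   // {#}
        rows[1].resize(2);   // {##}
        rows[2].resize(1);   // {#}
                             // rows[3] stays empty: {}
        rows[4].resize(5);   // {#####}
        return rows;
    }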
Not especially germane to this thread, although, yes, we will be serializing the data in some form. What I am focused on here is how to expose these details to the domain model for processing. For instance, we might have a parser facet as part of the processing algorithm that can run parsed calculations on each
Okay, so between the link in another post in the thread and your description above, I'm thinking something like

    typedef std::vector< std::vector<RGB> > spiked_array;

A type like this is likely trivially serializable using the serialization library. If the data is coming out of a C# program, you might want to try another binary representation. It's probably also easy to write out a representation that can be parsed pretty simply with Spirit, and you could supply semantic actions if you want. However, if you are dumping and parsing text, you'll need to be careful not to lose precision if you are using floating point, and if you care about performance, I would shy away from text-based representations.

I think there may be better ways to apply different "facet" calculators to different spiked_arrays, by running over the data in a deserialized (in-data-structure) representation. Perhaps it would help if you gave a bit more information on what kinds of different spiked arrays you expect to see, and what kinds of computation you will want to run on the different spiked_arrays.

Brian
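As a point of reference, a rough sketch of how the pieces above could hang together, assuming Boost.Serialization's text archives and an intrusive serialize member; the RGB field names, the mean_luminance facet, and the file name are purely illustrative:

    #include <cstddef>
    #include <fstream>
    #include <vector>
    #include <boost/archive/text_oarchive.hpp>
    #include <boost/archive/text_iarchive.hpp>
    #include <boost/serialization/vector.hpp>

    struct RGB
    {
        float r, g, b;

        // Intrusive serialization hook picked up by the archive classes.
        template<class Archive>
        void serialize(Archive & ar, const unsigned int /*version*/)
        {
            ar & r & g & b;
        }
    };

    typedef std::vector< std::vector<RGB> > spiked_array;

    // One hypothetical "facet": a callable applied per row of the data.
    struct mean_luminance
    {
        double operator()(const std::vector<RGB> & row) const
        {
            double sum = 0.0;
            for (std::size_t i = 0; i < row.size(); ++i)
                sum += 0.2126 * row[i].r + 0.7152 * row[i].g + 0.0722 * row[i].b;
            return row.empty() ? 0.0 : sum / row.size();
        }
    };

    // std::vector support comes from boost/serialization/vector.hpp,
    // so the nested structure round-trips without extra code.
    void save_to_file(const spiked_array & a, const char * filename)
    {
        std::ofstream ofs(filename);
        boost::archive::text_oarchive oa(ofs);
        oa << a;   // saving through a const reference, as the library prefers
    }

    spiked_array load_from_file(const char * filename)
    {
        spiked_array a;
        std::ifstream ifs(filename);
        boost::archive::text_iarchive ia(ifs);
        ia >> a;
        return a;
    }

    int main()
    {
        spiked_array original;   // ...filled in elsewhere
        save_to_file(original, "spiked.txt");
        spiked_array reloaded = load_from_file("spiked.txt");

        // Run a facet calculator row by row over the in-memory representation.
        mean_luminance facet;
        std::vector<double> results;
        for (std::size_t i = 0; i < reloaded.size(); ++i)
            results.push_back(facet(reloaded[i]));

        return 0;
    }

The facet here is just a function object applied per row of the deserialized structure; running a different computation over a different kind of spiked_array would mean passing a different callable, rather than baking the calculation into the parsing step.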