On Mon, Feb 25, 2013 at 8:54 AM, Brian Budge wrote:
On Feb 24, 2013 2:50 PM, "Michael Powell" wrote:
Will try to be more specific about what we're using Boost to accomplish.
On Sun, Feb 24, 2013 at 9:06 AM, Larry Evans wrote:
Hi Michael, I've a few questions.
On 02/23/13 21:56, Michael Powell wrote: [snip]
Our data will be a series of spiked tuples (one or more) of
What's a "spiked tuple"? Googling the term didn't help :(
I expect I will have a tuple of different values: RGB color fields and perhaps one or two other fields of interest. There will be a sequence of these tuple sequences, more than likely. But I haven't completely figured that part out yet. So each item (sequence) in the main sequence could be spiked. Sorry if that wasn't clearer.
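For illustration only, that shape might be sketched in C++ along these lines; the extra double/int fields are guesses at the "one or two other fields of interest", not anything decided in this thread:

#include <cstdint>
#include <tuple>
#include <vector>

// Purely illustrative aliases; field choices are assumptions.
using RgbTuple = std::tuple<std::uint8_t, std::uint8_t, std::uint8_t,
                            double, int>;

// "A sequence of these tuple sequences": the outer sequence's items are
// themselves sequences of tuples, which may turn out to be ragged ("spiked").
using TupleSequence       = std::vector<RgbTuple>;
using SequenceOfSequences = std::vector<TupleSequence>;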
n-dimensional packed values, and we'll probably need to dynamically
By "packed" do you mean what's described here:
No, nothing like BCD. I don't know if "packed" is quite the right term. Each item in the spiked sequence will be a base RGB value.
parse calculations on the n-dimensions to determine whether we satisfied the solution.
What does "dynamically parse" mean? Is your problem a numerical one, for example solving a system of differential equations?
Yes, I expect there will be different sets of equations: questions we'll want to be asking of the tuples, depending on the circumstances. This could be something that is specified by a parser grammar acting on the different tuple elements themselves, over the sequence perhaps, and so on.
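As a purely illustrative sketch (not the actual design under discussion), those "questions asked of the tuples" could be modelled as plain callables evaluated over a sequence before any grammar is involved; Sample, Question, and the thresholds below are all assumptions:

#include <cstdint>
#include <functional>
#include <numeric>
#include <vector>

// Hypothetical sample type: an RGB triple plus one extra field (a guess).
struct Sample { std::uint8_t r, g, b; double weight; };

// A "question" asked of a whole sequence of samples; which questions run
// could later be chosen by a parsed expression rather than hard-coded here.
using Question = std::function<bool(const std::vector<Sample>&)>;

int main() {
    const std::vector<Sample> seq{{255, 0, 0, 1.0}, {0, 255, 0, 0.5}};

    const std::vector<Question> questions{
        // "Is the average red channel above some threshold?"
        [](const std::vector<Sample>& s) {
            const double sum = std::accumulate(
                s.begin(), s.end(), 0.0,
                [](double acc, const Sample& x) { return acc + x.r; });
            return !s.empty() && sum / s.size() > 100.0;
        },
        // "Does every sample carry a positive weight?"
        [](const std::vector<Sample>& s) {
            for (const auto& x : s)
                if (x.weight <= 0.0) return false;
            return true;
        }};

    bool satisfied = true;
    for (const auto& q : questions)
        satisfied = satisfied && q(seq);
    return satisfied ? 0 : 1;  // "did we satisfy the solution"
}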
I think I still do not understand "spiked". Also, where is the data coming from? Maybe serialization could be useful for writing these sequences out and reading them back in binary. If the writing app is not C++, Google protocol buffers could potentially be used. Unless you have a good reason for human-readable data, I would advocate binary.
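If it helps, a minimal sketch of the binary Boost.Serialization route Brian mentions might look like the following; the record type and file name are assumptions for illustration, and a protocol buffers version would instead define a .proto schema and use the generated classes:

#include <cstdint>
#include <fstream>
#include <vector>

#include <boost/archive/binary_iarchive.hpp>
#include <boost/archive/binary_oarchive.hpp>
#include <boost/serialization/vector.hpp>

// Hypothetical record standing in for one of the RGB-based tuples.
struct Sample {
    std::uint8_t r, g, b;
    double weight;

    template <class Archive>
    void serialize(Archive& ar, const unsigned int /*version*/) {
        ar & r & g & b & weight;
    }
};

int main() {
    const std::vector<std::vector<Sample>> data{{{255, 0, 0, 1.0}}, {}};

    {   // Write the sequence of sequences out as a binary archive.
        std::ofstream ofs("samples.bin", std::ios::binary);
        boost::archive::binary_oarchive oa(ofs);
        oa << data;
    }

    std::vector<std::vector<Sample>> restored;
    {   // Read it back in.
        std::ifstream ifs("samples.bin", std::ios::binary);
        boost::archive::binary_iarchive ia(ifs);
        ia >> restored;
    }
    return restored.size() == data.size() ? 0 : 1;
}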
Like a spiked array: visually, something like this: {{#}, {##}, {#}, {}, {#####}}, where we have a sequence of sequences and each # represents one of the RGB-based tuples. Not especially germane to this thread, although, yes, we will be serializing the data in some form. What I am focused on here is how to expose these details into the domain model for processing. For instance, potentially we have a parser facet as part of the processing algorithm that can run parsed calculations on each
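Purely to pin down the "spiked array" shape described above (the names are illustrative and the choice of std::vector is an assumption, not something settled in the thread), the ragged layout could be written as nested containers:

#include <cstdint>
#include <vector>

// Hypothetical RGB-based tuple; any extra fields are omitted here.
struct Rgb { std::uint8_t r, g, b; };

int main() {
    // {{#}, {##}, {#}, {}, {#####}}: a ragged ("spiked") sequence of
    // sequences, where each # is one RGB-based tuple and the inner
    // sequences may be empty or of differing lengths.
    const std::vector<std::vector<Rgb>> spiked{
        {{255, 0, 0}},
        {{0, 255, 0}, {0, 0, 255}},
        {{10, 10, 10}},
        {},
        {{1, 1, 1}, {2, 2, 2}, {3, 3, 3}, {4, 4, 4}, {5, 5, 5}},
    };
    return spiked.size() == 5 ? 0 : 1;
}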
Brian
_______________________________________________ Boost-users mailing list Boost-users@lists.boost.org http://lists.boost.org/mailman/listinfo.cgi/boost-users