
Yes, I was thinking along those lines when I discovered variadic templates would be included in the new standard. The library as it stands was started before that discovery and progressed without attempting to incorporate them. From what I've seen in array_recur, it looks like you could easily provide such an implementation.

Regards
Brian

On 6/23/11, Larry Evans <cppljevans@suddenlink.net> wrote:
On 06/23/11 16:30, Brian Smith wrote:
On 6/23/11, Larry Evans <cppljevans@suddenlink.net> wrote:
On 06/23/11 09:09, Larry Evans wrote:
On 06/23/11 08:08, Frank Mori Hess wrote:
On Wednesday, June 22, 2011, Brian Smith wrote:
There is also Boost.MultiArray; has the overlap with that library been addressed already?
In terms of the way in which the two libraries create and manipulate storage, there's no overlap. Ultimately they serve the same purpose; however, Boost.MultiArray doesn't provide statically allocated arrays
I'm not sure that's true. I've never tried it, but multi_array_ref "provides the MultiArray interface over any contiguous block of elements" so it seems like you could create one of those which uses statically allocated storage.
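For example (untested, but based on the documented multi_array_ref interface), something along these lines should give the MultiArray interface over a statically allocated block:

  #include <boost/multi_array.hpp>

  int main()
  {
    // Statically allocated storage; multi_array_ref itself does no
    // dynamic allocation, it just adapts the block.
    static double storage[2 * 3 * 4];

    // View the block through the MultiArray interface with extents 2x3x4.
    boost::multi_array_ref<double, 3> a(storage, boost::extents[2][3][4]);

    a[1][2][3] = 1.0;
    return a[1][2][3] == 1.0 ? 0 : 1;
  }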
and generally provides poorer performance.
Is that assertion based on actual testing (with NDEBUG defined), or is there some by-design reason your performance would be better?
IIUC, Brian's templates provide something akin to:
template< typename T, std::size_t... Lengths >
struct array;
corresponding to multi_array's:
template< typename T, std::size_t NumDims >
struct multi_array;
where T serves the same purpose as multi_array's T, sizeof...(Lengths) corresponds to multi_array's NumDims, and the Lengths, instead of being runtime values passed to the CTOR as in multi_array, are specified as compile-time constants.
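For instance (the variadic array here is only the hypothetical declaration above, so its instantiation is sketched in a comment; the multi_array usage is as documented):

  #include <boost/multi_array.hpp>

  // Hypothetical static array; extents are template arguments.
  template< typename T, std::size_t... Lengths >
  struct array;

  int main()
  {
    // multi_array: extents are runtime values passed to the CTOR,
    // and the storage is allocated dynamically.
    boost::multi_array<double, 3> m(boost::extents[2][3][4]);

    // Hypothetical static counterpart: sizeof...(Lengths) == 3 plays
    // the role of NumDims, and 2, 3, 4 are compile-time constants.
    // array<double, 2, 3, 4> a;

    return m.num_elements() == 24 ? 0 : 1;
  }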
I should have been more explicit about why that *might* result in a speedup. Because Lengths... are compile-time constants, the strides and the num_elements could be calculated at compile time. Since num_elements is a compile-time constant, no dynamic allocation would be needed; hence array construction would be faster. IOW, the data could be a member variable like:
T array<T, Lengths...>::data[NumElements];
where NumElements is the compile-time constant calculated from Lengths.... Likewise, there could be a strides_t such as that shown here:
template< typename T, std::size_t... Lengths >
struct array
{
  ...
  typedef boost::mpl::vector_c< std::size_t, Strides... > strides_t;
  ...
};
where Strides... are calculated using mpl::fold or some such. Thus, since array indexing uses the strides to get at the indexed T, and since, in this case, the strides are compile-time constants, I'm guessing that array indexing would be faster.
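Just to sketch the compile-time calculation (this is only an illustration, using a recursive template in place of mpl::fold; it is not code from either library):

  #include <cstddef>

  // num_elements: the product of all extents, as a compile-time constant.
  template< std::size_t... Lengths >
  struct num_elements;

  template<>
  struct num_elements<>
  {
    static const std::size_t value = 1;
  };

  template< std::size_t Head, std::size_t... Tail >
  struct num_elements< Head, Tail... >
  {
    static const std::size_t value = Head * num_elements< Tail... >::value;
  };

  // In a row-major layout the stride of the first dimension is the number
  // of elements in the remaining dimensions; later strides follow by
  // peeling off one extent at a time.
  template< std::size_t Head, std::size_t... Tail >
  struct first_stride
  {
    static const std::size_t value = num_elements< Tail... >::value;
  };

  // Example: a 2x3x4 array has 24 elements; its first-dimension stride is 12.
  static_assert( num_elements< 2, 3, 4 >::value == 24, "num_elements" );
  static_assert( first_stride< 2, 3, 4 >::value == 12, "first_stride" );

  int main() { return 0; }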
Does that sound right?
-regards, Larry
From the above definition of the array, namely,
template< typename T, std::size_t... Lengths >
struct array;
the type of the generated data member, data say, in the static case is T[Lengths1][Lengths2]...[LengthsN], where N is the number of Lengths supplied. The indexing operator, which accepts a single std::size_t, s say, for data element access, returns an internally defined type, i.e., a T[Lengths2]...[LengthsN], via data[s], and simply relies on compiler-generated code for any remaining indexes.
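Roughly (this is only my sketch of the layout you describe, with made-up names; it is not the library's actual code), the recursion could look like:

  #include <cstddef>

  template< typename T, std::size_t... Lengths >
  struct static_array;

  // Recursive case: data has type T[Head][Tail1]...[TailN], built by
  // nesting the C-array type of the remaining dimensions inside T[Head].
  template< typename T, std::size_t Head, std::size_t... Tail >
  struct static_array< T, Head, Tail... >
  {
    typedef typename static_array< T, Tail... >::type value_type;
    typedef value_type type[Head];

    type data;

    // operator[] returns a reference to the nested C array; any further
    // indexing is plain built-in array indexing generated by the compiler.
    value_type& operator[]( std::size_t s ) { return data[s]; }
    value_type const& operator[]( std::size_t s ) const { return data[s]; }
  };

  // Base case: one dimension left.
  template< typename T, std::size_t Last >
  struct static_array< T, Last >
  {
    typedef T value_type;
    typedef T type[Last];

    type data;

    T& operator[]( std::size_t s ) { return data[s]; }
    T const& operator[]( std::size_t s ) const { return data[s]; }
  };

  int main()
  {
    static_array< int, 2, 3, 4 > a;   // a.data has type int[2][3][4]
    a[1][2][3] = 42;
    return a[1][2][3] == 42 ? 0 : 1;
  }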
Regards Brian
From reading your most recent reply to Frank and your above reply, I confess I was completely wrong about how the code worked. Hmmm. This part of your reply to Frank:
use recursion on a supplementary array supplied set of indexes
made me think of the attached, which, surprisingly, seems to work on the simple code provided; yet, it doesn't use anything like what's described here:
the number of pointer symbols appended being determined from the dimensionality argument passed in the template declaration,
from your reply to Frank. Maybe array_recur can spur some more implementation ideas. Hope so.
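The attachment isn't reproduced here, but just as a guess at what "recursion on a supplementary array supplied set of indexes" might look like (the names and layout below are mine, not array_recur's):

  #include <cstddef>

  // Guess: fold an array of indexes against a row-major stride table,
  // recursing over the dimensions to compute the flat offset.
  template< std::size_t NumDims >
  std::size_t offset( std::size_t const (&strides)[NumDims],
                      std::size_t const (&indexes)[NumDims],
                      std::size_t dim = 0 )
  {
    return dim == NumDims
      ? 0
      : indexes[dim] * strides[dim] + offset( strides, indexes, dim + 1 );
  }

  int main()
  {
    // A 2x3x4 array stored row-major has strides {12, 4, 1};
    // indexes {1, 2, 3} map to offset 1*12 + 2*4 + 3*1 == 23.
    std::size_t const strides[3] = { 12, 4, 1 };
    std::size_t const indexes[3] = { 1, 2, 3 };
    return offset( strides, indexes ) == 23 ? 0 : 1;
  }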
-regards Larry
-- www.maidsafe.net