Interest in multi-dimensional array class templates.

The file maps.zip in the containers directory in Boost's vault contains a library with classes for the development of multi-dimensional array applications. Also included are scalar and fixed-size vector and matrix class templates with an expression template implementation of operators. The multi-dimensional arrays can be fixed-size (statically or dynamically allocated) or dynamically allocated and resizable. Vectors and matrices can be statically or dynamically allocated and written in block form. The notation follows that provided by the STL and other Boost libraries. The classes lack tests using Boost's test framework and the documentation is rather sparse. The intention would be to remedy both issues if interest is sufficient to warrant it. Regards, Brian

-----Original Message----- From: boost-bounces@lists.boost.org [mailto:boost-bounces@lists.boost.org] On Behalf Of Brian Smith Sent: Saturday, June 11, 2011 8:06 PM To: boost@lists.boost.org Subject: [boost] Interest in multi-dimensional array class templates.
The file maps.zip in the containers directory in Boost's vault contains a library with classes for the development of multi-dimensional array applications. Also included are scalar and fixed-size vector and matrix class templates with an expression template implementation of operators.
The multi-dimensional arrays can be fixed-size (statically or dynamically allocated) or dynamically allocated and resizable. Vectors and matrices can be statically or dynamically allocated and written in block form. The notation follows that provided by the STL and other Boost libraries.
The classes lack tests using Boost's test framework and the documentation is rather sparse. The intention would be to remedy both issues if interest is sufficient to warrant it.
You've obviously put a lot of work into this and I'm sure some people will be interested in it (since arrays are such second- or third-class citizens in C++). But things that lurk in the vault rarely get to see much of the light of day ;-) I suggest you might like to put it in the Boost sandbox in the standard Boost folder structure so that more people can see it and get it using SVN. A jamfile would be useful too. (Ask a moderator for Boost sandbox write access.) Docs are looking quite good already, but obviously more could be done. And more examples will help in persuading people of its usefulness. How it is related to, or differs from, existing matrix libraries (uBLAS) will also be useful. HTH Paul --- Paul A. Bristow, Prizet Farmhouse, Kendal LA8 8AB UK +44 1539 561830 07714330204 pbristow@hetp.u-net.com

Hi Paul, Thanks for the suggestion and advice, I'll make a request for Boost sandbox write access. Kind Regards, Brian On 6/17/11, Paul A. Bristow <pbristow@hetp.u-net.com> wrote:
-----Original Message----- From: boost-bounces@lists.boost.org [mailto:boost-bounces@lists.boost.org] On Behalf Of Brian Smith Sent: Saturday, June 11, 2011 8:06 PM To: boost@lists.boost.org Subject: [boost] Interest in multi-dimensional array class templates.
The file maps.zip in the containers directory in Boost's vault contains a library with classes for the development of multi-dimensional array applications. Also included are scalar and fixed-size vector and matrix class templates with an expression template implementation of operators.
The multi-dimensional arrays can be fixed-size (statically or dynamically allocated) or dynamically allocated and resizable. Vectors and matrices can be statically or dynamically allocated and written in block form. The notation follows that provided by the STL and other Boost libraries.
The classes lack tests using Boost's test framework and the documentation is rather sparse. The intention would be to remedy both issues if interest is sufficient to warrant it.
You've obviously put a lot of work into this and I'm sure some people will be interested in it (since arrays are such second- or third-class citizens in C++).
But things that lurk in the vault rarely get to see much of the light of day ;-)
I suggest you might like to put it in the Boost sandbox in the standard Boost folder structure so that more people can see it and get it using SVN. A jamfile would be useful too.
(Ask a moderator for Boost sandbox write access).
Docs are looking quite good already, but obviously more could be done.
And more examples will help in persuading people of its usefulness.
How it is related to/different from existing matrix stuff (uBLAS) will also be useful.
HTH
Paul
--- Paul A. Bristow, Prizet Farmhouse, Kendal LA8 8AB UK +44 1539 561830 07714330204 pbristow@hetp.u-net.com
_______________________________________________ Unsubscribe & other changes: http://lists.boost.org/mailman/listinfo.cgi/boost
-- www.maidsafe.net

Some hints: - using proto should simplify the boilerplate of expression templates and give you flexible semantics over different types of containers. - I strongly recommend *not* rewriting linear algebra algorithms but relying on pattern matching around them to feed them to the proper BLAS/LAPACK implementation calls. Matrix/vector is a non-trivial amount of work (see our talk at BoostCon 2010) and you'll have to consider a lot of side issues if you want it to have decent performance.

The boilerplate expression templates have already been implemented for scalars, vectors and matrices. The original idea was to include the various matrix types and supply algorithms for the algebra that did not rely on LAPACK, since Fortran layout, or an adaptation of it, has not been implemented. At some point collaboration would probably have been necessary, either that or integration of a suitable library written in C++. Other commitments however have slowed progress and your observation about the work involved is well received. At the moment I'm not sure of the best way forward for the development of this part of the library. As far as the multi-dimensional arrays are concerned, expression templates have not been implemented; the rudiments were put in place for anyone to roll their own. In general though that will not be necessary, since the intention was for them to be useful as is. Regards, Brian On 6/18/11, Joel falcou <joel.falcou@gmail.com> wrote:
Some hints :
- using proto should simplify the boilerplate of expression templates and give you flexible semantics over different types of containers.
- I strongly recommend *not* rewriting linear algebra algorithms but relying on pattern matching around them to feed them to the proper BLAS/LAPACK implementation calls.
Matrix/vector is a non-trivial amount of work (see our talk at BoostCon 2010) and you'll have to consider a lot of side issues if you want it to have decent performance.

On 18/06/11 17:38, Brian Smith wrote:
The boilerplate expression templates have already been implemented for scalars, vectors and matrices. The original idea was to include the various matrix types and supply algorithms for the algebra that did not rely on LAPACK, since Fortran layout, or an adaptation of it, has not been implemented.
Not using BLAS/LAPACK when you can is really shooting yourself in the foot. No way you can beat those. The architectures where they are not available are nowadays very scarce. As for Fortran adaptation, being able to specify a custom storage order should be possible and is usually trivial once you get a proper abstraction of data block management, which is *not* the stupid old T* with polynomial index access.
At some point collaboration would probably have been necessary, either that or integration of a suitable library written in C++.
Without wanting to derail the thread, we have a large effort going on in nt2 at the moment.
Other commitments however have slowed progress and your observation about the work involved is well received. At the moment I'm not sure of the best way forward for the development of this part of the library. As far as the multi-dimensional arrays are concerned, expression templates have not been implemented; the rudiments were put in place for anyone to roll their own. In general though that will not be necessary, since the intention was for them to be useful as is.
What I wanted to say is that using proto lets you concentrate on the real deal. No need to reinvent ET for the 23458th time.

On 6/19/11, Joel falcou <joel.falcou@gmail.com> wrote:
On 18/06/11 17:38, Brian Smith wrote:
The boilerplate expression templates have already been implemented for scalars, vectors and matrices. The original idea was to include the various matrix types and supply algorithms for the algebra that did not rely on LAPACK, since Fortran layout, or an adaptation of it, has not been implemented.
Not using BLAS/LAPACK when you can is really shooting yourself in the foot. No way you can beat those. The architectures where they are not available are nowadays very scarce. As for Fortran adaptation, being able to specify a custom storage order should be possible and is usually trivial once you get a proper abstraction of data block management, which is *not* the stupid old T* with polynomial index access.
Nothing was completely ruled out.
At some point collaboration would probably have been necessary, either that or integration of a suitable library written in C++.
Without wanting to derail the thread, we have a large effort going on in nt2 at the moment.
Having had a brief look at nt2 my calculations suggest that in the main it is a result of the 23343rd reinvention of the wheel. Still worth the effort though.
Other commitments however have slowed progress and your observation about the work involved is well received. At the moment I'm not sure of the best way forward for the development of this part of the library. As far as the multi-dimensional arrays are concerned, expression templates have not been implemented; the rudiments were put in place for anyone to roll their own. In general though that will not be necessary, since the intention was for them to be useful as is.
What I wanted to say is that using proto lets you concentrate on the real deal. No need to reinvent ET for the 23458th time.
The main focus of the library was supposed to be the multi-dimensional arrays, and there was never really any intention to implement expression templates on top of them, since I've got no idea what an operator on a multi-dimensional array means to someone who uses them. While a fair bit of effort went into the matrices etc., and they will no doubt be useful in some applications, I realise a more significant effort would be needed in order to compete with, say, nt2. We'll call it a work in progress that at the moment is not making much progress. While I don't think this renders them ineffectual as far as inclusion in Boost is concerned, if the community's decision is to remove them until such time as they outperform, say, nt2, then so be it.

On 19/06/11 05:42, Brian Smith wrote:
Nothing was completely ruled out.
Good then :)
The main focus of the library was supposed to be the multi-dimensional arrays and there really was never any intention to implement expression templates on top of them, since I've got no idea what an operator on a multi-dimensional array means to someone who uses them.
? this sentence doesn't parse.
While a fair bit of effort went into the matrices etc., and they will no doubt be useful in some applications, I realise a more significant effort would be needed in order to compete with, say, nt2. We'll call it a work in progress that at the moment is not making much progress. While I don't think this renders them ineffectual as far as inclusion in Boost is concerned, if the community's decision is to remove them until such time as they outperform, say, nt2, then so be it.
Uh no, something came out wrong here... I was pointing out the fact that we have some stuff ready to use if needed, and that *instead* of going through figuring everything out again, you can salvage whatever design solutions we found instead of having to work them out yourself. Everything in nt2 could potentially be proposed as a Boost component at some point. That's what we are doing at the moment with Boost.SIMD, for example. So, I was merely pointing out that such a library is non-trivial to set up, that I know in advance you will face a very finite set of problems we have solved already, and that if you want to have a look at our solutions, then no problem.

On 6/19/11, Joel falcou <joel.falcou@gmail.com> wrote:
On 19/06/11 05:42, Brian Smith wrote:
The main focus of the library was supposed to be the multi-dimensional arrays and there really was never any intention to implement expression templates on top of them, since I've got no idea what an operator on a multi-dimensional array means to someone who uses them.
? this sentence doesn't parse.
As you point out in your next posting, a matrix that doesn't do linear algebra is a 2D array; the arrays in the library are N-dimensional. If somebody writes a tensor class using them, for example, then it's probably a good idea to provide some relevant operations, using expression templates or otherwise.
While a fair bit of effort went into the matrices etc., and they will no doubt be useful in some applications, I realise a more significant effort would be needed in order to compete with, say, nt2. We'll call it a work in progress that at the moment is not making much progress. While I don't think this renders them ineffectual as far as inclusion in Boost is concerned, if the community's decision is to remove them until such time as they outperform, say, nt2, then so be it.
Uh no, something came out wrong here... I was pointing out the fact that we have some stuff ready to use if needed, and that *instead* of going through figuring everything out again, you can salvage whatever design solutions we found instead of having to work them out yourself.
Everything in nt2 could potentially be proposed as a Boost component at some point. That's what we are doing at the moment with Boost.SIMD, for example. So, I was merely pointing out that such a library is non-trivial to set up, that I know in advance you will face a very finite set of problems we have solved already, and that if you want to have a look at our solutions, then no problem.
Ok, we agree a non-trivial amount of work is involved.

-----Original Message----- From: boost-bounces@lists.boost.org [mailto:boost-bounces@lists.boost.org] On Behalf Of Joel falcou Sent: Sunday, June 19, 2011 1:52 AM To: boost@lists.boost.org Subject: Re: [boost] Interest in multi-dimensional array class templates.
On 18/06/11 17:38, Brian Smith wrote:
The boilerplate expression templates have already been implemented for scalars, vectors and matrices. The original idea was to include the various matrix types and supply algorithms for the algebra that did not rely on LAPACK since Fortran layout or adaptation of it has not been implemented.
Not using BLAS/LAPACK when you can is really shooting yourself in the foot. No way you can beat those.
But are there applications where you do not need what LAPACK provides? Is this where this might still be a useful library? LAPACK may be a heavy addition if it is not used? (My previous experience some years ago was that it was troublesome for the non-cognoscenti to set up). Paul --- Paul A. Bristow, Prizet Farmhouse, Kendal LA8 8AB UK +44 1539 561830 07714330204 pbristow@hetp.u-net.com

On 19/06/11 06:50, Paul A. Bristow wrote:
But are there applications where you do not need what LAPACK provides? Is this where this might still be a useful library?
LAPACK may be a heavy addition if it is not used? (My previous experience some years ago was that it was troublesome for the non-cognoscenti to set up).
Well, you're dealing with matrices and vectors to do some linear algebra. At some point that means you'll want to do some algebra operations on them besides adding them and multiplying them by scalars. In this case, 99% of what you need exists in BLAS/LAPACK. If you never do a matrix product, an SVD or an LU decomposition, and never try to compute a dot or outer product, then you don't need a matrix, you need a 2D array without any special semantics. And such a type should be called array, not matrix nor vector.

On 19/06/2011 13:50, Paul A. Bristow wrote:
LAPACK may be a heavy addition if it is not used? (My previous experience some years ago was that it was troublesome for the non-cognoscenti to set up).
You mean like doing find_package(LAPACK)? Oh sorry, Boost.Build can't do that. (Sorry for the troll, but I hope it's one more example that shows that build systems in common usage are pretty much plug-and-play, while Boost.Build requires tedious and arcane work to make it work with anything outside of Boost)

On 11/06/2011 21:05, Brian Smith wrote:
The file maps.zip in the containers directory in Boost's vault contains a library with classes for the development of multi-dimensional array applications. Also included are scalar and fixed-size vector and matrix class templates with an expression template implementation of operators.
Can I look at an N-dimensional array as an M-dimensional array, with M > N? With M < N? Can I have a view of part of the array? Can I easily linearize or reshape it? Given another N-dimensional array whose values are positions, can I obtain the array of the values accessed at the given positions? Can this be a lazy view? How does it integrate with ranges and iterators? The standard syntax for those operations is that of Matlab (which is also what SciPy uses). Do you provide something similar?

On 6/19/11, Mathias Gaunard <mathias.gaunard@ens-lyon.org> wrote:
On 11/06/2011 21:05, Brian Smith wrote:
The file maps.zip in the containers directory in Boost's vault contains a library with classes for the development of multi-dimensional array applications. Also included are scalar and fixed-size vector and matrix class templates with an expression template implementation of operators.
Can I look at an N-dimensional array as an M-dimensional array, with M > N? With M < N?
An M-dimensional array is an N-dimensional array, for all M = N.
Can I have a view of part of the array? Can I easily linearize or reshape it?
Yes. If by linearization you mean 'can it be allocated with a single memory request', then no. I did write such a type and it was pretty efficient, but I removed it. Reshaping a view is possible though.
Given another N-dimensional array whose values are positions, can I obtain the array of the values accessed at the given positions?
With some effort on your behalf yes.
Can this be a lazy view?
Probably depends on how your program is structured.
How does it integrate with range and iterators?
Iterators were also implemented as separate classes, then removed in favour of the view. The reason being they proved detrimental to performance; maybe an implementation detail, but nevertheless for the time being they're gone. Ranges are a part of the view, and the iterators returned by classes are good old-fashioned pointers owned by the class that returns them.
The standard syntax for those operations is that of Matlab (which is also what SciPy uses). Do you provide something similar?
I'm not familiar with Matlab syntax so can't comment.

On 6/19/2011 4:57 PM, Brian Smith wrote:
On 6/19/11, Mathias Gaunard<mathias.gaunard@ens-lyon.org> wrote:
How does it integrate with range and iterators?
Iterators were also implemented as separate classes, then removed in favour of the view. The reason being they proved detrimental to performance; maybe an implementation detail, but nevertheless for the time being they're gone. Ranges are a part of the view, and the iterators returned by classes are good old-fashioned pointers owned by the class that returns them.
How is the iteration implemented? One iterator for each dimension, or one general iterator that goes through all elements sequentially? -Phil

On 6/20/11, Phil Bouchard <philippe@fornux.com> wrote:
On 6/19/2011 4:57 PM, Brian Smith wrote:
On 6/19/11, Mathias Gaunard<mathias.gaunard@ens-lyon.org> wrote:
How does it integrate with range and iterators?
Iterators were also implemented as separate classes, then removed in favour of the view. The reason being they proved detrimental to performance; maybe an implementation detail, but nevertheless for the time being they're gone. Ranges are a part of the view, and the iterators returned by classes are good old-fashioned pointers owned by the class that returns them.
How is the iteration implemented? One iterator for each dimension, or one general iterator that goes through all elements sequentially?
One general iterator that goes through the data elements sequentially. The begin and end methods return the address of the first and one past the last data element, as expected. The idea of the view is that the addresses of arbitrary sequences of data elements from an array can be stored and then iterated over sequentially using the iterator associated with the view, which is also general and accessible through its begin and end methods. Although a view is itself an array of the same dimensionality as the array we want to iterate over, views are relatively easy to set up, cost little in book-keeping, and performed better than the iterator classes I had implemented. Brian

On 19/06/2011 22:57, Brian Smith wrote:
Can I look at an N-dimensional array as an M-dimensional array, with M > N? With M < N?
An M-dimensional array is an N-dimensional array, for all M = N.
That's not what I asked.
Can I have a view of part of the array? Can I easily linearize or reshape it?
Yes. If by linearization you mean 'can it be allocated with a single memory request', then no. I did write such a type and it was pretty efficient, but I removed it. Reshaping a view is possible though.
No, I mean an operation that linearizes the elements into a 1-dimensional structure.
Given another N-dimensional array whose values are positions, can I obtain the array of the values accessed at the given positions?
With some effort on your behalf yes.
It should be easy to use.
Can this be a lazy view?
Probably depends on how your program is structured.
Why would it have to depend on it?
How does it integrate with range and iterators?
Iterators were also implemented as separate classes, then removed in favour of the view. The reason being they proved detrimental to performance; maybe an implementation detail, but nevertheless for the time being they're gone. Ranges are a part of the view, and the iterators returned by classes are good old-fashioned pointers owned by the class that returns them.
If you had performance problems, maybe you did it wrong. Iterators are important for integration with existing algorithms. Both outer dimension iteration and linear iteration are necessary to have.
I'm not familiar with Matlab syntax so can't comment.
You'll find here a list of a couple of reshaping and slicing operations with their Matlab and SciPy syntax. <http://www.scipy.org/NumPy_for_Matlab_Users#head-13d7391dd7e2c57d293809cff080260b46d8e664>

On 6/20/11, Mathias Gaunard <mathias.gaunard@ens-lyon.org> wrote:
On 19/06/2011 22:57, Brian Smith wrote:
Can I have a view of part of the array? Can I easily linearize or reshape it?
Yes. If by linearization you mean 'can it be allocated with a single memory request', then no. I did write such a type and it was pretty efficient, but I removed it. Reshaping a view is possible though.
No, I mean an operation that linearizes the elements into a 1-dimensional structure.
No.
Given another N-dimensional array whose values are positions, can I obtain the array of the values accessed at the given positions?
With some effort on your behalf yes.
It should be easy to use.
Ok then with a little bit of effort on your behalf.
How does it integrate with range and iterators?
Iterators were also implemented as separate classes, then removed in favour of the view. The reason being they proved detrimental to performance; maybe an implementation detail, but nevertheless for the time being they're gone. Ranges are a part of the view, and the iterators returned by classes are good old-fashioned pointers owned by the class that returns them.
If you had performance problems, maybe you did it wrong.
Well deduced, Sherlock, but I said that.
Both outer dimension iteration and linear iteration are necessary to have.
Why?

I would like to take the opportunity to re-reply to the questions you posed here. My initial thought was that the questions were asked in a rather disinterested manner, since a couple of examples using the library would have given you some if not all of the answers you sought. However, on reflection I now think that this was probably not the case, and I hope you will accept my apologies and allow me to give a more measured response. On 6/20/11, Mathias Gaunard <mathias.gaunard@ens-lyon.org> wrote:
On 19/06/2011 22:57, Brian Smith wrote:
Can I look at an N-dimensional array as an M-dimensional array, with M > N? With M < N?
An M-dimensional array is an N-dimensional array, for all M = N.
That's not what I asked.
It is possible to view lower dimensional parts of an N-dimensional array, i.e., for M < N.
Can I have a view of part of the array? Can I easily linearize or reshape it?
Yes. If by linearization you mean 'can it be allocated with a single memory request', then no. I did write such a type and it was pretty efficient, but I removed it. Reshaping a view is possible though.
No, I mean an operation that linearizes the elements into a 1-dimensional structure.
I'm still not quite sure I understand this correctly. It is possible to produce a view containing the addresses of sequences of data elements from an array that will be allocated in a contiguous block of memory. The values can be accessed and altered via the view, using indexing or linear iteration with the view's begin and end methods. Alternatively, instead of addresses, values from an array can be stored in a view and then worked on as desired, again via the view. The altered values can then be inserted back into the array, or into another suitably defined array, at the positions corresponding to the view's stored ranges over the array.
Given another N-dimensional array whose values are positions, can I obtain the array of the values accessed at the given positions?
With some effort on your behalf yes.
It should be easy to use.
It is.
Can this be a lazy view?
Probably depends on how your program is structured.
Why would it have to depend on it?
Again I don't entirely get what you mean here, could you elaborate?
How does it integrate with range and iterators?
Iterators were also implemented as separate classes, then removed in favour of the view. The reason being they proved detrimental to performance; maybe an implementation detail, but nevertheless for the time being they're gone. Ranges are a part of the view, and the iterators returned by classes are good old-fashioned pointers owned by the class that returns them.
If you had performance problems, maybe you did it wrong.
If you look back a few lines you will see that that is essentially what I said. The operative word though is maybe, since maybe the iterator implementation is inherently inferior to the implementation provided by the view. This was certainly the case when comparing the two. The reason 'maybe' is there follows from the fact that an alternative implementation of iterators may well have improved performance, although what I had at the time looked fine.
Iterators are important for integration with existing algorithms. Both outer dimension iteration and linear iteration are necessary to have.
Why?
I'm not familiar with Matlab syntax so can't comment.
You'll find here a list of a couple of reshaping and slicing operations with their Matlab and SciPy syntax.
<http://www.scipy.org/NumPy_for_Matlab_Users#head-13d7391dd7e2c57d293809cff080260b46d8e664>
I've had a quick look and nothing like the colon method of producing slices is available. Some of the other stuff is similar; I would suggest you try the library and see what is available. If any questions remain, feel free to ask.

On 11/6/2011 21:05, Brian Smith wrote:
The file maps.zip in the containers directory in Boost's vault contains a library with classes for the development of multi-dimensional array applications. Also included are scalar and fixed-size vector and matrix class templates with an expression template implementation of operators.
The multi-dimensional arrays can be fixed-size (statically or dynamically allocated) or dynamically allocated and resizable. Vectors and matrices can be statically or dynamically allocated and written in block form. The notation follows that provided by the STL and other Boost libraries.
How does this compare to boost GIL and/or to boost UBLAS? regards Fabio

The arrays are not specifically designed to deal with image data, although image and graphics libraries could make use of them. It's not clear whether this is the case for GIL, having never used it myself. Looking through GIL's tutorial, though, and ignoring image-specific terminology, there does appear to be some similarity. The main difference between the math arrays in the library and uBLAS is that they're defined as fixed-size arrays. The other differences at the moment are that only dense arrays and operations on them are available, and that no algebra has been implemented. On 6/21/11, Fabio Fracassi <f.fracassi@gmx.net> wrote:
On 11/6/2011 21:05, Brian Smith wrote:
The file maps.zip in the containers directory in Boost's vault contains a library with classes for the development of multi-dimensional array applications. Also included are scalar and fixed-size vector and matrix class templates with an expression template implementation of operators.
The multi-dimensional arrays can be fixed-size (statically or dynamically allocated) or dynamically allocated and resizable. Vectors and matrices can be statically or dynamically allocated and written in block form. The notation follows that provided by the STL and other Boost libraries.
How does this compare to boost GIL and/or to boost UBLAS?
regards
Fabio

On 21/6/2011 22:28, Brian Smith wrote:
The arrays are not specifically designed to deal with image data, although image and graphics libraries could make use of them. It's not clear whether this is the case for GIL, having never used it myself. Looking through GIL's tutorial, though, and ignoring image-specific terminology, there does appear to be some similarity.
The main difference between the math arrays in the library and uBLAS is that they're defined as fixed-size arrays. The other differences at the moment are that only dense arrays and operations on them are available, and that no algebra has been implemented.
AFAIK there are fixed-size arrays for Boost uBLAS in the most recent version(s?), but I am not entirely sure. I guess the question I am asking is: when should I use your arrays over uBLAS or GIL? Can your arrays be integrated? regards Fabio

On Wednesday, June 22, 2011, Fabio Fracassi wrote:
AFAIK there are fixed size arrays for boost ublas in the most recent version(s?), but I am not entirely sure.
I guess the question I am asking is when should I use your arrays over UBLAS or GIL? Can your arrays be integrated?
There is also Boost.MultiArray, has the overlap with that library been addressed already?

On 6/22/11, Frank Mori Hess <frank.hess@nist.gov> wrote:
On Wednesday, June 22, 2011, Fabio Fracassi wrote:
AFAIK there are fixed size arrays for boost ublas in the most recent version(s?), but I am not entirely sure.
I guess the question I am asking is when should I use your arrays over UBLAS or GIL? Can your arrays be integrated?
There is also Boost.MultiArray, has the overlap with that library been addressed already?
In terms of the way in which the two libraries create and manipulate storage there's no overlap. Ultimately they serve the same purpose, however, Boost.MultiArray doesn't provide statically allocated arrays and generally provides poorer performance.

On Wednesday, June 22, 2011, Brian Smith wrote:
There is also Boost.MultiArray, has the overlap with that library been addressed already?
In terms of the way in which the two libraries create and manipulate storage there's no overlap. Ultimately they serve the same purpose, however, Boost.MultiArray doesn't provide statically allocated arrays
I'm not sure that's true. I've never tried it, but multi_array_ref "provides the MultiArray interface over any contiguous block of elements" so it seems like you could create one of those which uses statically allocated storage.
and generally provides poorer performance.
Is that assertion based on actual testing (with NDEBUG defined), or is there some by-design reason your performance would be better?

On 06/23/11 08:08, Frank Mori Hess wrote:
On Wednesday, June 22, 2011, Brian Smith wrote:
There is also Boost.MultiArray, has the overlap with that library been addressed already?
In terms of the way in which the two libraries create and manipulate storage there's no overlap. Ultimately they serve the same purpose, however, Boost.MultiArray doesn't provide statically allocated arrays
I'm not sure that's true. I've never tried it, but multi_array_ref "provides the MultiArray interface over any contiguous block of elements" so it seems like you could create one of those which uses statically allocated storage.
and generally provides poorer performance.
Is that assertion based on actual testing (with NDEBUG defined), or is there some by-design reason your performance would be better?
IIUC, Brian's templates provide something akin to:
template < typename T, std::size_t... Lengths >
struct array;
corresponding to multi_array's:
template < typename T, std::size_t NumDims >
struct multi_array;
where T serves the same purpose as multi_array's T and sizeof...(Lengths) = multi_array's NumDims, and the Lengths, instead of being passed to the CTOR as runtime values, as in multi_array, are specified as compile-time constants. Is that *about* right, Brian? -regards, Larry

On 06/23/11 09:09, Larry Evans wrote:
On 06/23/11 08:08, Frank Mori Hess wrote:
On Wednesday, June 22, 2011, Brian Smith wrote:
There is also Boost.MultiArray, has the overlap with that library been addressed already?
In terms of the way in which the two libraries create and manipulate storage there's no overlap. Ultimately they serve the same purpose, however, Boost.MultiArray doesn't provide statically allocated arrays
I'm not sure that's true. I've never tried it, but multi_array_ref "provides the MultiArray interface over any contiguous block of elements" so it seems like you could create one of those which uses statically allocated storage.
and generally provides poorer performance.
Is that assertion based on actual testing (with NDEBUG defined), or is there some by-design reason your performance would be better?
IIUC, Brian's templates provide something akin to:
template < typename T, std::size_t... Lengths >
struct array;
corresponding to multi_array's:
template < typename T, std::size_t NumDims >
struct multi_array;
where T serves the same purpose as multi_array's T and sizeof...(Lengths) = multi_array's NumDims, and the Lengths, instead of being passed to the CTOR as runtime values, as in multi_array, are specified as compile-time constants.
I should have been more explicit about why that *might* result in a speedup. Because Lengths... are compile-time constants, the strides and the num_elements could be calculated at compile time. Since num_elements is a compile-time constant, no dynamic allocation would be needed; hence array construction would be faster. IOW, the data could be a member variable like:
T array<T, Lengths...>::data[NumElements];
where NumElements is the compile-time constant calculated from Lengths.... Likewise, there could be a strides_t such as that shown here:
template < typename T, std::size_t... Lengths >
struct array { ... typedef boost::mpl::vector_c< std::size_t, Strides... > strides_t; ... };
Where Strides... is calculated using mpl::fold or some such. Thus, since array indexing uses the strides to get at the indexed T, and since, in this case, the strides are compile-time constants, I'm guessing that array indexing would be faster. Does that sound right? -regards, Larry

On 6/23/11, Larry Evans <cppljevans@suddenlink.net> wrote:
On 06/23/11 09:09, Larry Evans wrote:
On 06/23/11 08:08, Frank Mori Hess wrote:
On Wednesday, June 22, 2011, Brian Smith wrote:
There is also Boost.MultiArray, has the overlap with that library been addressed already?
In terms of the way in which the two libraries create and manipulate storage there's no overlap. Ultimately they serve the same purpose, however, Boost.MultiArray doesn't provide statically allocated arrays
I'm not sure that's true. I've never tried it, but multi_array_ref "provides the MultiArray interface over any contiguous block of elements" so it seems like you could create one of those which uses statically allocated storage.
and generally provides poorer performance.
Is that assertion based on actual testing (with NDEBUG defined), or is there some by-design reason your performance would be better?
IIUC, Brian's templates provide something akin to:
template < typename T, std::size_t... Lengths >
struct array;
corresponding to multi_array's:
template < typename T, std::size_t NumDims >
struct multi_array;
where T serves the same purpose as multi_array's T and sizeof...(Lengths) = multi_array's NumDims, and the Lengths, instead of being passed to the CTOR as runtime values, as in multi_array, are specified as compile-time constants.
I should have been more explicit about why that *might* result in a speedup. Because Lengths... are compile-time constants, the strides and the num_elements could be calculated at compile time. Since num_elements is a compile-time constant, no dynamic allocation would be needed; hence array construction would be faster. IOW, the data could be a member variable like:
T array<T, Lengths...>::data[NumElements];
where NumElements is the compile time constant calculated from Lengths.... Likewise, there could be a strides_t such as that shown here:
template < typename T, std::size_t... Lengths >
struct array { ... typedef boost::mpl::vector_c< std::size_t, Strides... > strides_t; ... };
Where Strides... is calculated, using mpl::fold or some such. Thus, since array indexing uses the strides to get at the indexed T, and since, in this case, the strides are compile-time constants, I'm guessing that array indexing would be faster.
Does that sound right?
-regards, Larry
From the above definition of the array, namely,
template < typename T, std::size_t... Lengths >
struct array;
the generated member type, data say, in the static case is T[Lengths1][Lengths2]...[LengthsN], where N is the number of Lengths supplied. The indexing operator accepting a single std::size_t, s say, for data element access returns an internally defined type, i.e. a T[Lengths2]...[LengthsN], via data[s], and simply relies on compiler-generated code for any remaining indexes. Regards Brian

On 06/23/11 16:30, Brian Smith wrote:
On 6/23/11, Larry Evans <cppljevans@suddenlink.net> wrote:
On 06/23/11 09:09, Larry Evans wrote:
On 06/23/11 08:08, Frank Mori Hess wrote:
On Wednesday, June 22, 2011, Brian Smith wrote:
There is also Boost.MultiArray, has the overlap with that library been addressed already?
In terms of the way in which the two libraries create and manipulate storage there's no overlap. Ultimately they serve the same purpose, however, Boost.MultiArray doesn't provide statically allocated arrays
I'm not sure that's true. I've never tried it, but multi_array_ref "provides the MultiArray interface over any contiguous block of elements" so it seems like you could create one of those which uses statically allocated storage.
and generally provides poorer performance.
Is that assertion based on actual testing (with NDEBUG defined), or is there some by-design reason your performance would be better?
IIUC, Brian's templates provide something akin to:
template < typename T, std::size_t... Lengths >
struct array;
corresponding to multi_array's:
template < typename T, std::size_t NumDims >
struct multi_array;
where T serves the same purpose as multi_array's T and sizeof...(Lengths) = multi_array's NumDims, and the Lengths, instead of being passed to the CTOR as runtime values, as in multi_array, are specified as compile-time constants.
I should have been more explicit about why that *might* result in a speedup. Because Lengths... are compile-time constants, the strides and the num_elements could be calculated at compile time. Since num_elements is a compile-time constant, no dynamic allocation would be needed; hence array construction would be faster. IOW, the data could be a member variable like:
T array<T, Lengths...>::data[NumElements];
where NumElements is the compile time constant calculated from Lengths.... Likewise, there could be a strides_t such as that shown here:
template < typename T, std::size_t... Lengths >
struct array { ... typedef boost::mpl::vector_c< std::size_t, Strides... > strides_t; ... };
Where Strides... is calculated, using mpl::fold or some such. Thus, since array indexing uses the strides to get at the indexed T, and since, in this case, the strides are compile-time constants, I'm guessing that array indexing would be faster.
Does that sound right?
-regards, Larry
From the above definition of the array, namely,
template < typename T, std::size_t... Lengths >
struct array;
the generated member type, data say, in the static case is T[Lengths1][Lengths2]...[LengthsN], where N is the number of Lengths supplied. The indexing operator accepting a single std::size_t, s say, for data element access returns an internally defined type, i.e. a T[Lengths2]...[LengthsN], via data[s], and simply relies on compiler-generated code for any remaining indexes.
Regards Brian
From reading your most recent reply to Frank and your above reply, I confess I was completely wrong about how the code worked. Hmmm. This part of your reply to Frank:
use recursion on a supplementary array supplied set of indexes
made me think of the attached, which, surprisingly, seems to work on the simple code provided; yet, it doesn't use anything like what's described here:
the number of pointer symbols appended being determined from the dimensionality argument passed in the template declaration,
from your reply to Frank. Maybe array_recur can spur some more implementation ideas. Hope so. -regards Larry

Yes, I was thinking along those lines when I discovered variadic templates would be included in the new standard. The library as it stands was started before that discovery and progressed without attempting to incorporate them. From what I've seen in array_recur it looks like you could easily provide such an implementation. Regards Brian

On 6/23/11, Larry Evans <cppljevans@suddenlink.net> wrote:
On 06/23/11 16:30, Brian Smith wrote:
On 6/23/11, Larry Evans <cppljevans@suddenlink.net> wrote:
On 06/23/11 09:09, Larry Evans wrote:
On 06/23/11 08:08, Frank Mori Hess wrote:
On Wednesday, June 22, 2011, Brian Smith wrote:
There is also Boost.MultiArray, has the overlap with that library been addressed already?
In terms of the way in which the two libraries create and manipulate storage there's no overlap. Ultimately they serve the same purpose, however, Boost.MultiArray doesn't provide statically allocated arrays
I'm not sure that's true. I've never tried it, but multi_array_ref "provides the MultiArray interface over any contiguous block of elements" so it seems like you could create one of those which uses statically allocated storage.
and generally provides poorer performance.
Is that assertion based on actual testing (with NDEBUG defined), or is there some by-design reason your performance would be better?
IIUC, Brian's templates provide something akin to:
template < typename T, std::size_t... Lengths >
struct array;
corresponding to multi_array's:
template < typename T, std::size_t NumDims >
struct multi_array;
where T serves the same purpose as multi_array's T and sizeof...(Lengths) = multi_array's NumDims, and the Lengths, instead of being passed to the CTOR as runtime values, as in multi_array, are specified as compile-time constants.
I should have been more explicit about why that *might* result in a speedup. Because Lengths... are compile-time constants, the strides and the num_elements could be calculated at compile time. Since num_elements is a compile-time constant, no dynamic allocation would be needed; hence array construction would be faster. IOW, the data could be a member variable like:
T array<T, Lengths...>::data[NumElements];
where NumElements is the compile time constant calculated from Lengths.... Likewise, there could be a strides_t such as that shown here:
template < typename T, std::size_t... Lengths >
struct array { ... typedef boost::mpl::vector_c< std::size_t, Strides... > strides_t; ... };
Where Strides... is calculated, using mpl::fold or some such. Thus, since array indexing uses the strides to get at the indexed T, and since, in this case, the strides are compile-time constants, I'm guessing that array indexing would be faster.
Does that sound right?
-regards, Larry
From the above definition of the array, namely,
template < typename T, std::size_t... Lengths >
struct array;
the generated member type, data say, in the static case is T[Lengths1][Lengths2]...[LengthsN], where N is the number of Lengths supplied. The indexing operator accepting a single std::size_t, s say, for data element access returns an internally defined type, i.e. a T[Lengths2]...[LengthsN], via data[s], and simply relies on compiler-generated code for any remaining indexes.
Regards Brian
From reading your most recent reply to Frank and your above reply, I confess I was completely wrong about how the code worked. Hmmm. This part of your reply to Frank:
use recursion on a supplementary array supplied set of indexes
made me think of the attached, which, surprisingly, seems to work on the simple code provided; yet, it doesn't use anything like what's described here:
the number of pointer symbols appended being determined from the dimensionality argument passed in the template declaration,
from your reply to Frank. Maybe array_recur can spur some more implementation ideas. Hope so.
-regards Larry

On 6/23/11, Frank Mori Hess <frank.hess@nist.gov> wrote:
On Wednesday, June 22, 2011, Brian Smith wrote:
There is also Boost.MultiArray, has the overlap with that library been addressed already?
In terms of the way in which the two libraries create and manipulate storage there's no overlap. Ultimately they serve the same purpose, however, Boost.MultiArray doesn't provide statically allocated arrays
I'm not sure that's true. I've never tried it, but multi_array_ref "provides the MultiArray interface over any contiguous block of elements" so it seems like you could create one of those which uses statically allocated storage.
Fair point; the static storage passed would be 1-dimensional and not owned by the multi_array_ref object.
and generally provides poorer performance.
Is that assertion based on actual testing (with NDEBUG defined), or is there some by-design reason your performance would be better?
The design of the arrays closely follows that provided by the language. For fixed-size static arrays the data member is an intrinsic array whose type is determined from the template arguments passed; the obvious difference for non-static arrays is the data member's type, which for an N-dimensional array is the requested element type with N pointer symbols appended. For these arrays all the information necessary to construct the array is available at compile time.

The resizeable arrays' data member is also a pointer type, the number of pointer symbols appended being determined from the dimensionality argument passed in the template declaration; the size of each dimension is subsequently passed in a supplementary array to the constructor or resize method. Given the definition of the data member we can rely on compiler-generated pointer arithmetic for indexing expressions, implement indirection directly, or use recursion on a supplementary array supplied set of indexes if bounds checking is required.

Data element access is the main contributing factor in terms of performance, overwhelming the N memory requests required to construct a non-static N-dimensional array. Comparative testing against Boost.MultiArray bears this out: construction times are slightly better for Boost.MultiArray, but when data element access is included the library's non-static arrays perform significantly better. The performance of the statically allocated arrays is superior to both.

On 06/23/11 15:57, Brian Smith wrote: [snip]
The design of the arrays closely follows that provided by the language. For fixed-size static arrays the data member is an intrinsic array whose type is determined from the template arguments passed,
which type, AFAICT, is calculated by boost::maps::detail::array_type in boost/maps/support/generic.hpp (around line 84).
the obvious difference for non-static arrays is the data member's type, which for an N-dimensional array is the requested element type with N pointer symbols appended.
which type, AFAICT, is calculated by boost::maps::pointer_type, again in generic.hpp (around line 66). However, even after reading the docs at:
libs/maps/doc/html/maps/concepts.html#maps.concepts.fixed_size_arrays
I'm still unable to infer any advantage to using anything other than the default allocator and thereby avoid this:
runtime construction requires five memory allocations
which occurs with any non-default allocator. Brian, could you explain the advantage of using an allocator other than null::allocator? BTW, could you point out where the "five memory allocations" occur? I've looked at the code some without success so far, but it would be nice if you could just point it out since you know exactly where it is. TIA. [snip] -regards, Larry

If your program uses a relatively large number of static arrays all in scope at the same time, or if a few relatively large static arrays are in scope, the stack checker crashes the program when it's run; the heap-allocated arrays can be present in greater numbers or used when relatively large arrays are required. I suppose the fixed-size arrays could have been implemented as static only, but the compact notation, particularly with the underscored array (the naming of which I'm not too keen on), would be lost for large arrays. The assumption was that in most cases there wouldn't be a problem, so the null::allocator was set as the default.

The memory allocations occur when the function object array_constructor, at line 113 in the header <array.hpp>, calls array_allocate at line 123 on a type with more than one pointer symbol appended. The function object calls itself recursively, with the final allocation, or the only one for a one-dimensional array, being made at line 139 of the array_constructor that begins on line 130. array_allocate can be found at line 23 of the same file. The last line in the primary template part of array_constructor resets the previously allocated memory to point to the addresses appropriate for the requested dimensions. A similar scheme is present for the resizeable arrays, with the function object named pointer_constructor beginning at line 115 in the header <pointer.hpp>. Regards Brian

On 6/24/11, Larry Evans <cppljevans@suddenlink.net> wrote:
On 06/23/11 15:57, Brian Smith wrote: [snip]
The design of the arrays closely follows that provided by the language. For fixed-size static arrays the data member is an intrinsic array whose type is determined from the template arguments passed,
which type, AFAICT, is calculated by boost::maps::detail::array_type in boost/maps/support/generic.hpp (around line 84).
the obvious difference for non-static arrays is the data member's type, which for an N-dimensional array is the requested element type with N pointer symbols appended.
which type, AFAICT, is calculated by boost::maps::pointer_type, again in generic.hpp (around line 66).
However, even after reading the docs at:
libs/maps/doc/html/maps/concepts.html#maps.concepts.fixed_size_arrays
I'm still unable to infer any advantage to using anything other than the default allocator and thereby avoid this:
runtime construction requires five memory allocations
which occurs with any non-default allocator.
Brian, could you explain the advantage of using an allocator other than null::allocator?
BTW, could you point out where the "five memory allocations" occur? I've looked at the code some without success so far, but it would be nice if you could just point it out since you know exactly where it is.
TIA.
[snip]
-regards, Larry

On 06/24/11 20:29, Brian Smith wrote:
If your program uses a relatively large number of static arrays all in scope at the same time or if a few relatively large static arrays are in scope the stack checker crashes the program when it's run, the heap-allocated arrays can be present in greater numbers or used when relatively large arrays are required.
If stack space is limited, then why not just allocate the static array on the heap with something like:
typedef int elem_t;
typedef bounds3<2,3,4> bounds_t;
typedef array< elem_t, bounds_t, true, null::allocator >::type array_static_t;
array_static_t* p_static = new array_static_t;
? You would then have only 1 call to any allocator instead of 3 and save space on the heap besides. -regards, Larry

On 6/25/11, Larry Evans <cppljevans@suddenlink.net> wrote:
On 06/24/11 20:29, Brian Smith wrote:
If your program uses a relatively large number of static arrays all in scope at the same time or if a few relatively large static arrays are in scope the stack checker crashes the program when it's run, the heap-allocated arrays can be present in greater numbers or used when relatively large arrays are required.
If stack space is limited, then why not just allocate the static array on the heap with something like:
typedef int elem_t;
typedef bounds3<2,3,4> bounds_t;
typedef array< elem_t, bounds_t, true, null::allocator >::type array_static_t;
array_static_t* p_static = new array_static_t;
? You would then have only 1 call to any allocator instead of 3 and save space on the heap besides.
You're correct. Now for the excuses part. The problem materialised early in the development when I was working on a pretty old computer, and to be honest I never thought of setting up the problem the way you've suggested. The computer I have now, although still not too great, could handle the problem without resorting to the above. Initially the static and dynamic arrays were separate, and it was not too far in the distant past that I combined them using the base classes. All things considered that might have been a mistake, although the main reason for doing so was, and still is, the fact that the memory allocations are not the determining factor where performance is concerned, albeit they have an effect. And besides, it gives you more options. Thanks, Larry. Regards Brian

On 6/22/11, Fabio Fracassi <f.fracassi@gmx.net> wrote:
On 21/6/2011 22:28, Brian Smith wrote:
The arrays are not specifically designed to deal with image data, although image and graphics libraries could make use of them. It's not clear whether this is the case for GIL, as I've never used it myself. Looking through GIL's tutorial, though, and ignoring image-specific terminology, there does appear to be some similarity.
The main difference between the math arrays in the library and uBLAS is that they're defined as fixed-size arrays. The other differences at the moment are that only dense arrays and operations on them are available, and no algebra has been implemented.
AFAIK there are fixed size arrays for boost ublas in the most recent version(s?), but I am not entirely sure.
I guess the question I am asking is when should I use your arrays over UBLAS or GIL? Can your arrays be integrated?
If the array dimensions you'll be using are known at compile time, which they must be to use the library's math arrays, then the comparative tests I've done show better performance for the library's arrays over uBLAS, given the currently implemented operations. Blocked array expressions of the form matrix< matrix< etc. > > work as expected and can reduce the number of cache misses compared to a single matrix< > declaration for relatively large arrays. I've never used GIL so I really couldn't say when to use the library's arrays or that library. The storage type of uBLAS arrays, as far as I'm aware, is always a linear array of consecutive memory locations where the type implements special-purpose iterators, etc. As such I would presume integration is not possible with that library; however, I've never tried. Whether integration with GIL is possible is also unknown. Regards Brian

On 6/22/11, Brian Smith <bjs3141@gmail.com> wrote:
On 6/22/11, Fabio Fracassi <f.fracassi@gmx.net> wrote:
On 21/6/2011 22:28, Brian Smith wrote:
The arrays are not specifically designed to deal with image data, although image and graphics libraries could make use of them. It's not clear whether this is the case for GIL, as I've never used it myself. Looking through GIL's tutorial, though, and ignoring image-specific terminology, there does appear to be some similarity.
The main difference between the math arrays in the library and uBLAS is that they're defined as fixed-size arrays. The other differences at the moment are that only dense arrays and operations on them are available, and no algebra has been implemented.
AFAIK there are fixed size arrays for boost ublas in the most recent version(s?), but I am not entirely sure.
I guess the question I am asking is when should I use your arrays over UBLAS or GIL? Can your arrays be integrated?
The storage type of uBLAS arrays, as far as I'm aware, is always a linear array of consecutive memory locations where the type implements special purpose iterators, etc. As such I would presume integration is not possible with that library, however, I've never tried. Whether integration with GIL is possible is also unknown.
What I really meant to say here is... The storage type of uBLAS arrays is always a linear array of consecutive memory locations and can be a std::vector, or one of the storage types supplied by the library. As such I would presume a 1-dimensional array from the library could be used as the storage type, but I don't see any benefit in using it that way, and in any case I've not tried. Regards Brian
participants (8)
- Brian Smith
- Fabio Fracassi
- Frank Mori Hess
- Joel Falcou
- Larry Evans
- Mathias Gaunard
- Paul A. Bristow
- Phil Bouchard