
Greetings,

I have developed a C++ tensor library using template expressions which allows the Einstein summation convention to be used to describe mathematical operations on tensors in C++. This allows a great simplification in syntax, while retaining execution speeds identical to hand-optimized C code in most cases. An example of the syntax is:

// Declare three rank 2 tensors (rank = # of dimensions) of doubles with sizes of
// 6 x 5, 6 x 10, and 10 x 5 respectively
tensor<double, 2> A(6, 5);
tensor<double, 2> B(6, 10);
tensor<double, 2> C(10, 5);

// Name three index variables to be used in what follows
index_variable<0> i;
index_variable<1> j;
index_variable<2> k;

A[i][j] = 1.0;         // Initialize A to all 1's
B[i][j] = 4.5;         // Initialize B to all 4.5

boost::mt19937 rng;
C[i][j] = trand(rng);  // Initialize C to uniform random numbers 0-1
C[i][2] = 2.1;         // Set col 2 to 2.1
C[2][i] = 5.5;         // Set row 2 to 5.5
C[2][2] = 0.0;         // Zero out 2,2

A[i][j] = B[i][k] * C[k][j];  // Perform a matrix multiply

/* The above line translates roughly as:
for(size_t i = 0; i < A.size<0>(); i++) {
    for(size_t j = 0; j < A.size<1>(); j++) {
        double total = 0.0;
        for(size_t k = 0; k < B.size<1>(); k++)
            total += B[i][k] * C[k][j];
        A[i][j] = total;
    }
}
*/

An unlimited number of dimensions is supported and arbitrary operations are allowed. For cases where you wish to perform other types of cumulative functions, such as finding the maximum element, a syntax such as:

A[i] = max(j, B[i][j]);

is allowed. There are lots of other interesting features, but I don't wish to make this email too long.

As far as efficiency goes, Intel's compiler builds every example I've tried so far into code as good as doing the work by hand. GCC does so for expressions of reasonable complexity, and MSVC's optimizer does all right, but not nearly as well as the others.

I was interested in knowing what the level of interest would be in providing such a library as part of Boost.
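[Editorial note: the loop translation quoted above can be sketched as a self-contained program. This is an illustration of the described semantics only, not the library itself; the `matrix` class and the helper functions below are invented for the sketch, with plain functions standing in for the expression-template machinery.]

```cpp
#include <algorithm>
#include <cstddef>
#include <vector>

// Minimal row-major rank-2 "tensor" standing in for tensor<double, 2>.
// (Illustrative stand-in only; this is not the proposed library's class.)
struct matrix {
    std::size_t rows, cols;
    std::vector<double> data;
    matrix(std::size_t r, std::size_t c) : rows(r), cols(c), data(r * c, 0.0) {}
    double& at(std::size_t i, std::size_t j) { return data[i * cols + j]; }
    double at(std::size_t i, std::size_t j) const { return data[i * cols + j]; }
};

// The loop nest from the quoted comment: A[i][j] = B[i][k] * C[k][j].
// The repeated index k is summed over; i and j become output loops.
matrix matmul(const matrix& b, const matrix& c) {
    matrix a(b.rows, c.cols);
    for (std::size_t i = 0; i < a.rows; ++i)
        for (std::size_t j = 0; j < a.cols; ++j) {
            double total = 0.0;
            for (std::size_t k = 0; k < b.cols; ++k)
                total += b.at(i, k) * c.at(k, j);
            a.at(i, j) = total;
        }
    return a;
}

// The cumulative form A[i] = max(j, B[i][j]): the named index j is
// reduced with max instead of summation, leaving one value per row.
std::vector<double> row_max(const matrix& b) {
    std::vector<double> a(b.rows);
    for (std::size_t i = 0; i < b.rows; ++i) {
        double m = b.at(i, 0);
        for (std::size_t j = 1; j < b.cols; ++j)
            m = std::max(m, b.at(i, j));
        a[i] = m;
    }
    return a;
}
```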
I've tried to adhere to the Boost coding guidelines during construction, and I've been using this library myself for over six months for scientific computing. However, I've so far avoided sending a query to the Boost mailing lists, as the procedures for submission and review seem quite daunting. I'm usually crushingly busy, and I'm also especially bad at documentation. Still, I thought I would float the idea to gauge interest, and also ask if anyone would be interested in helping me through the process somewhat. Thank you for your time. -Jeremy Bruestle

Jeremy Bruestle said: (by the date of Fri, 29 Jun 2007 13:09:52 -0700)
I have developed a C++ tensor library
can you briefly summarize the differences between your library and Blitz++? Of course, only in the area of tensor calculations, since Blitz++ is not only about tensors. -- Janek Kozicki

Hello, that seems promising to me. Can I download that library somewhere on the net to have a closer look at it? I have to apologize that I probably won't find much time really soon, though. At least I would like to get a better idea of it. Yours, Martin. -- Dr. Martin Schulz (schulz@synopsys.com) Software Engineer Synopsys GmbH Karl-Hammerschmidt-Str. 34 D-85609 Dornach, Germany Phone: +49 (89) 993-20203 http://www.synopsys.com

Jeremy Bruestle wrote:
I have developed a C++ tensor library using template expressions which allows Einstein summation convention to be used to describe mathematical operations on tensors in C++. [...]
... I might be interested in this, but the application I'm thinking of would require primarily that it is *fast*, especially for large data sets where attention to the cache hierarchy is important. I.e., it would need to use BLAS as a back-end, while (implicitly) reordering indices in memory as appropriate (perhaps the FLAME library would be useful here? http://www.cs.utexas.edu/users/flame/). This might be a bit beyond what is realistically possible for a generic tensor library, but I thought I'd ask. Regards, Ian McCulloch
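[Editorial note: the cache effect Ian raises is visible even without BLAS. The sketch below is illustrative only; a real BLAS back-end adds blocking and vectorization on top of this. It compares the naive i-j-k loop order, whose inner loop strides down a column of the row-major right operand, with the reordered i-k-j order, where both inner-loop accesses are unit-stride; the two compute the same contraction.]

```cpp
#include <cstddef>
#include <vector>

// Naive i-j-k contraction over row-major data: the inner loop reads
// c[p * n + j] with stride n, touching a new cache line every step.
std::vector<double> matmul_ijk(const std::vector<double>& b,
                               const std::vector<double>& c,
                               std::size_t m, std::size_t k, std::size_t n) {
    std::vector<double> a(m * n, 0.0);
    for (std::size_t i = 0; i < m; ++i)
        for (std::size_t j = 0; j < n; ++j)
            for (std::size_t p = 0; p < k; ++p)
                a[i * n + j] += b[i * k + p] * c[p * n + j];
    return a;
}

// Reordered i-k-j contraction: both inner-loop accesses (into c and a)
// are now unit-stride, the kind of layout-aware index reordering a BLAS
// back-end performs far more aggressively.
std::vector<double> matmul_ikj(const std::vector<double>& b,
                               const std::vector<double>& c,
                               std::size_t m, std::size_t k, std::size_t n) {
    std::vector<double> a(m * n, 0.0);
    for (std::size_t i = 0; i < m; ++i)
        for (std::size_t p = 0; p < k; ++p)
            for (std::size_t j = 0; j < n; ++j)
                a[i * n + j] += b[i * k + p] * c[p * n + j];
    return a;
}
```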

-----Original Message----- From: boost-bounces@lists.boost.org [mailto:boost-bounces@lists.boost.org] On Behalf Of Jeremy Bruestle Sent: 29 June 2007 21:10 To: boost@lists.boost.org Subject: [boost] Interest in tensor library?
I have developed a C++ tensor library using template expressions which allows Einstein summation convention to be used to describe mathematical operations on tensors in C++. [...]
This looks very useful and I would welcome it as a Boost library. (I'm not clear how it fits/conflicts with uBLAS, or how it differs from Blitz++?) Before committing yourself to too much work, you could put what you have in the sandbox. Having some enthusiastic users is a big part of the battle of getting through the review process. Of course, some documentation and examples will be vital to selling it ;-) Paul --- Paul A Bristow Prizet Farmhouse, Kendal, Cumbria UK LA8 8AB +44 1539561830 & SMS, Mobile +44 7714 330204 & SMS pbristow@hetp.u-net.com
participants (5)
- Ian McCulloch
- Janek Kozicki
- Jeremy Bruestle
- Martin Schulz
- Paul A Bristow