
On Wed, 6 Jan 2021 at 23:19, Janek Kozicki via Boost <boost@lists.boost.org> wrote:
Cem Bassoy via Boost wrote (Wed, 6 Jan 2021 10:32:15 +0100):
Please consider using and contributing to *Boost.uBlas*
Hi,
Does it work with boost::multiprecision types? In particular I am interested in the types listed here:
It does work with *boost::multiprecision*; have a look at https://godbolt.org/z/cMKc9T
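Something along these lines works (a minimal sketch, not the exact code behind the godbolt link; cpp_dec_float_50 is picked here only as an example value type, and Boost >= 1.70 with the tensor extension is assumed):

  // Minimal sketch: a uBlas tensor whose value type is a boost::multiprecision number.
  // cpp_dec_float_50 is only an example; other multiprecision types should work the same way.
  #include <boost/numeric/ublas/tensor.hpp>
  #include <boost/multiprecision/cpp_dec_float.hpp>
  #include <algorithm>
  #include <iomanip>
  #include <iostream>

  int main()
  {
      namespace ub   = boost::numeric::ublas;
      using value_t  = boost::multiprecision::cpp_dec_float_50;
      using tensor_t = ub::tensor<value_t, ub::column_major>;

      // 3x4x2 tensor, every element initialized to 2
      auto A = tensor_t(ub::shape{3,4,2}, value_t(2));

      // standard algorithms work on the contiguous storage
      std::for_each(A.begin(), A.end(),
                    [](auto& a){ a += value_t("1e-29"); });

      // should print 2.00000000000000000000000000001
      std::cout << std::setprecision(50) << *A.data() << '\n';
  }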
Does it support FFT?
You can get a pointer (as for vector) to the underlying contiguous memory (you do not own the memory):

  using format_t = boost::numeric::ublas::column_major;
  using tensor_t = boost::numeric::ublas::tensor<float,format_t>;
  auto A  = tensor_t(shape{3,4,2},2);
  auto ap = A.data();

You can also use the standard C++ library for convenience:

  std::for_each(A.begin(), A.end(), [](auto& a){ ++a; });

If you do not want to use the Einstein notation, you can also use the prod function:

  // C3(i,l1,l2) = A(i,j,k)*T1(l1,j)*T2(l2,k);
  q = 3u;
  tensor_t C3 = prod(prod(A,matrix_t(m+1,n[q-2],1),q-1),matrix_t(m+2,n[q-1],1),q);

Internally, "prod" uses a C-like interface which will not allocate any memory at all:

  ttm(m, p,
      c.data(), c.extents().data(), c.strides().data(),
      a.data(), a.extents().data(), a.strides().data(),
      bb, nb.data(), wb.data());

Cheers,
Cem
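P.S. regarding FFT: uBlas itself does not provide an FFT; the point of the raw pointer is that you can hand the contiguous data to an external FFT library. A hypothetical sketch with FFTW3 (fftwf_* is FFTW's single-precision API, not part of uBlas; FFTW assumes row-major ordering, so for a column_major tensor with extents (3,4,2) the dimensions are passed in reverse order, and you link with -lfftw3f):

  // Hypothetical sketch: real-to-complex 3D FFT over the tensor's contiguous storage via FFTW3.
  #include <boost/numeric/ublas/tensor.hpp>
  #include <fftw3.h>
  #include <complex>
  #include <vector>

  int main()
  {
      namespace ub   = boost::numeric::ublas;
      using tensor_t = ub::tensor<float, ub::column_major>;

      auto A = tensor_t(ub::shape{3,4,2}, 2.0f);    // extents (3,4,2), column major

      // r2c output: with the reversed dims (2,4,3), FFTW yields 2*4*(3/2+1) complex values.
      std::vector<std::complex<float>> out(2 * 4 * (3/2 + 1));

      // std::complex<float> is layout-compatible with fftwf_complex.
      fftwf_plan plan = fftwf_plan_dft_r2c_3d(
          2, 4, 3,                                   // column-major (3,4,2) seen as row-major
          A.data(),
          reinterpret_cast<fftwf_complex*>(out.data()),
          FFTW_ESTIMATE);

      fftwf_execute(plan);
      fftwf_destroy_plan(plan);
  }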
In particular I am interested in replacing my very crude NDimTable class with your code:
https://gitlab.com/cosurgi/trunk/-/blob/addQuantumMechanics_FixEnum_FixRebas...
I was about to start refactoring this part of the code when I noticed your post.
best regards,
Janek
--
Janek Kozicki, PhD. DSc. Arch. Assoc. Prof.
Gdańsk University of Technology
Faculty of Applied Physics and Mathematics
Department of Theoretical Physics and Quantum Information
--
http://yade-dem.org/
http://pg.edu.pl/jkozicki (click English flag on top right)