Re: [boost] Reply to questions about FFT template class
Hi,

Just to answer the previous questions about my FFT code...

I'm afraid it's only a basic FFT implementation. It may not be up to the standard of Boost, and I was always prepared for that, so I may have to submit it to a less prestigious open-source forum somewhere on the web.

I made a mistake before about the precision. Mine agrees with the FFTW 2048-point results to a maximum error of 3 parts per million (the largest difference from the FFTW value is 2.53E-06). Most of the outputs show zero or infinitesimal differences. I've checked my math many times, and I can only think the remaining differences come from performance-related approximations in FFTW, as they seem too small to be algorithmic errors.

Unfortunately, the FFTW implementation is roughly 20 times faster than mine. FFTW can run a 2048-point FFT in 6 ms on my Ubuntu installation, whereas mine takes approximately 140 ms. On Windows, both implementations are an order of magnitude slower (5 s for mine versus 250 ms for FFTW). The Boost multi-threading appears to make no difference to the speed, even though I've run the code through GDB and with printfs to check that it automatically spawns the optimal number of threads up to the maximum given as the template parameter. Some compilers may be able to optimise the C++ code and multi-threading to improve performance, though I do not know whether it could ever approach FFTW's.

I have not implemented more advanced FFT features such as multi-dimensional FFTs or real-value-only optimisations, but I think the current API could help users extend it to include forward/reverse FFT, bit-reversal, multi-dimensional FFTs (by manipulating the input and output vectors), and so on. I've tried to make the code well organised, structured and commented so that users could hopefully customise it with their own optimisations for specific processor architectures.

I'll understand if the consensus is that it is not really good enough for Boost. Alternatively, I would be happy to share my code and share the credit if anyone else wants to help.

Kind regards,
Nathan
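For readers unfamiliar with the algorithm family under discussion, below is a minimal serial radix-2 decimation-in-time FFT with an explicit bit-reversal pass. It is an illustrative sketch only, not the submitted code: the function name and layout are hypothetical, and the submitted implementation is additionally templated on a thread count and uses Boost.Thread.

#include <complex>
#include <vector>
#include <cmath>
#include <cstddef>
#include <algorithm>

// Illustrative sketch of a radix-2 decimation-in-time FFT (in-place,
// size must be a power of two).  Not the proposed boost::FFT code.
template <typename T>
void fft_radix2_dit(std::vector<std::complex<T> >& a)
{
    const std::size_t n = a.size();

    // Bit-reversal permutation of the input ordering.
    for (std::size_t i = 1, j = 0; i < n; ++i) {
        std::size_t bit = n >> 1;
        for (; j & bit; bit >>= 1)
            j ^= bit;
        j |= bit;
        if (i < j)
            std::swap(a[i], a[j]);
    }

    // Butterfly stages: each pass doubles the length of the sub-transforms.
    const T pi = std::acos(T(-1));
    for (std::size_t len = 2; len <= n; len <<= 1) {
        const T angle = T(-2) * pi / static_cast<T>(len);
        const std::complex<T> wlen(std::cos(angle), std::sin(angle));
        for (std::size_t i = 0; i < n; i += len) {
            std::complex<T> w(1);
            for (std::size_t k = 0; k < len / 2; ++k) {
                const std::complex<T> u = a[i + k];
                const std::complex<T> v = a[i + k + len / 2] * w;
                a[i + k]           = u + v;
                a[i + k + len / 2] = u - v;
                w *= wlen;
            }
        }
    }
}

A threaded variant would typically split the butterfly work of each stage across worker threads, which matches the description above of spawning up to the template-parameter number of threads.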
From: boost-request@lists.boost.org Subject: Boost Digest, Vol 4012, Issue 1 To: boost@lists.boost.org Date: Wed, 29 May 2013 05:38:51 -0400
Send Boost mailing list submissions to boost@lists.boost.org
To subscribe or unsubscribe via the World Wide Web, visit http://lists.boost.org/mailman/listinfo.cgi/boost or, via email, send a message with subject or body 'help' to boost-request@lists.boost.org
You can reach the person managing the list at boost-owner@lists.boost.org
When replying, please edit your Subject line so it is more specific than "Re: Contents of Boost digest..."
The boost archives may be found at: http://lists.boost.org/Archives/boost/
Today's Topics:
1. Re: SIMD implementation of uBLAS (Aditya Avinash)
2. Re: SIMD implementation of uBLAS (Karsten Ahnert)
3. Re: Request to contribute boost::FFT (Karsten Ahnert)
4. Re: SIMD implementation of uBLAS (Aditya Avinash)
5. Boost on ARM using NEON (Aditya Avinash)
6. Re: Boost on ARM using NEON (Antony Polukhin)
7. Re: Boost on ARM using NEON (Aditya Avinash)
8. Re: Boost on ARM using NEON (Tim Blechmann)
9. Re: Request to contribute boost::FFT (Paul A. Bristow)
10. Re: Boost on ARM using NEON (Victor Hiairrassary)
11. Re: Boost on ARM using NEON (Andrey Semashev)
12. Re: Boost on ARM using NEON (Tim Blechmann)
13. Re: Boost on ARM using NEON (Andrey Semashev)
14. Re: Boost on ARM using NEON (David Bellot)
15. Re: SIMD implementation of uBLAS (Rob Stewart)
16. Re: SIMD implementation of uBLAS (Mathias Gaunard)
17. Re: SIMD implementation of uBLAS (Aditya Avinash)
18. Re: Request to contribute boost::FFT (Mathias Gaunard)
----------------------------------------------------------------------
Message: 1 Date: Wed, 29 May 2013 11:03:50 +0530 From: Aditya Avinash
To: boost@lists.boost.org Subject: Re: [boost] SIMD implementation of uBLAS Message-ID: Content-Type: text/plain; charset=ISO-8859-1

@Gaetano: Thank you for the comments. I'll change accordingly and post it back. I am using T because the code needs to run with double-precision floats as well.
@Joel: Boost.SIMD is generalized; designing algorithms specific to uBLAS increases performance. odeint has its own SIMD backend.
On Wed, May 29, 2013 at 10:36 AM, Joel Falcou
wrote: On 29/05/2013 06:45, Gaetano Mendola wrote:
On 29/05/2013 06.13, Aditya Avinash wrote:
Hi, I have developed a vector addition algorithm which exploits hardware parallelism (an SSE implementation).
A few comments:
- That is not C++ but just C in disguise of C++ code:
  . the SSE1 CTOR doesn't use an initialization list
  . SSE1 doesn't have a DTOR, so the user has to explicitly call the Free method
- const-correctness is not in place
- The SSE namespace should have been put in a "detail" namespace
- Use memcpy instead of an explicit for loop
- Why is SSE1 a template when it works only when T is a single-precision, floating-point value?

Also, I believe a nicer interface would have been:
SSE1::vector A(1024); SSE1::vector B(1024); SSE1::vector C(1024);
C = A + B;
Regards Gaetano Mendola
See our work on Boost.SIMD ...
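A minimal sketch of the kind of interface suggested above: RAII storage acquired in the constructor and released in the destructor (no explicit Free()), with operator+ doing the SSE work. The names here are hypothetical and the class is float-only for brevity; a generic, detail-namespaced design along the lines of Boost.SIMD would sit on top of something like this.

#include <xmmintrin.h>   // SSE intrinsics plus _mm_malloc/_mm_free
#include <cstddef>
#include <new>

// Hypothetical RAII float vector: aligned storage is acquired in the
// constructor and released in the destructor, so there is no Free() call.
class sse_vector
{
public:
    explicit sse_vector(std::size_t n)
        : size_(n)
        , data_(static_cast<float*>(_mm_malloc(n * sizeof(float), 16)))
    {
        if (!data_) throw std::bad_alloc();
    }

    sse_vector(const sse_vector& other)
        : size_(other.size_)
        , data_(static_cast<float*>(_mm_malloc(other.size_ * sizeof(float), 16)))
    {
        if (!data_) throw std::bad_alloc();
        for (std::size_t i = 0; i < size_; ++i) data_[i] = other.data_[i];
    }

    ~sse_vector() { _mm_free(data_); }

    std::size_t size() const { return size_; }
    float&       operator[](std::size_t i)       { return data_[i]; }
    const float& operator[](std::size_t i) const { return data_[i]; }

    // C = A + B : four floats per SSE register, scalar loop for the tail.
    friend sse_vector operator+(const sse_vector& a, const sse_vector& b)
    {
        sse_vector c(a.size_);
        std::size_t i = 0;
        for (; i + 4 <= a.size_; i += 4) {
            const __m128 va = _mm_load_ps(a.data_ + i);
            const __m128 vb = _mm_load_ps(b.data_ + i);
            _mm_store_ps(c.data_ + i, _mm_add_ps(va, vb));
        }
        for (; i < a.size_; ++i)
            c.data_[i] = a.data_[i] + b.data_[i];
        return c;
    }

private:
    sse_vector& operator=(const sse_vector&);  // assignment omitted in this sketch

    std::size_t size_;
    float*      data_;
};

Usage then reads close to the suggestion: sse_vector A(1024); sse_vector B(1024); sse_vector C = A + B; (copy-construction here, since the sketch leaves assignment out).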
-- ---------------- Atluri Aditya Avinash, India.
------------------------------
Message: 2 Date: Wed, 29 May 2013 08:27:40 +0200 From: Karsten Ahnert
To: boost@lists.boost.org Subject: Re: [boost] SIMD implementation of uBLAS Message-ID: <51A59FDC.5050409@googlemail.com> Content-Type: text/plain; charset=ISO-8859-1; format=flowed On 05/29/2013 07:33 AM, Aditya Avinash wrote:
@Gaetano: Thank you for the comments. I'll change accordingly and post it back. I am using T because the code needs to run with double-precision floats as well. @Joel: Boost.SIMD is generalized; designing algorithms specific to uBLAS increases performance. odeint has its own SIMD backend.
odeint has no SIMD backend; at least I am not aware of one. Having one would be really great.
On Wed, May 29, 2013 at 10:36 AM, Joel Falcou
wrote: On 29/05/2013 06:45, Gaetano Mendola wrote:
On 29/05/2013 06.13, Aditya Avinash wrote:
Hi, I have developed a vector addition algorithm which exploits hardware parallelism (an SSE implementation).
A few comments:
- That is not C++ but just C in disguise of C++ code:
  . the SSE1 CTOR doesn't use an initialization list
  . SSE1 doesn't have a DTOR, so the user has to explicitly call the Free method
- const-correctness is not in place
- The SSE namespace should have been put in a "detail" namespace
- Use memcpy instead of an explicit for loop
- Why is SSE1 a template when it works only when T is a single-precision, floating-point value?

Also, I believe a nicer interface would have been:
SSE1::vector A(1024); SSE1::vector B(1024); SSE1::vector C(1024);
C = A + B;
Regards Gaetano Mendola
See our work on Boost.SIMD ...
------------------------------
Message: 3 Date: Wed, 29 May 2013 08:32:15 +0200 From: Karsten Ahnert
To: boost@lists.boost.org Subject: Re: [boost] Request to contribute boost::FFT Message-ID: <51A5A0EF.9000806@googlemail.com> Content-Type: text/plain; charset=ISO-8859-1; format=flowed On 05/29/2013 12:12 AM, Nathan Bliss wrote:
Dear Boost Community Members, I am writing to ask if I could contribute a C++ FFT implementation to the boost library. I noticed that there is a boost::CRC so I thought it would be a good addition to have boost::FFT as well. MIT have an existing open-source FFTW library, but it is GPL-licensed which is much more restrictive than the Boost license. I should imagine a commercial FFTW license would be very expensive. I have working FFT code which I have tested on Ubuntu Linux GNU gcc and also Visual Studio Express 2012 (MS VC++) for Windows. My code, when run as a 2048-point FFT, agrees with the MIT FFTW one to a max error margin of 6 parts per million.
I would really like to see an FFT implementation in boost. It should of course focus on performance. I also think that interoperability with different vector and storage types would be really great. It saves the user from having to convert data between different formats (which can be really painful when you use different libraries, for example odeint, ublas, fftw, ...).
------------------------------
Message: 4 Date: Wed, 29 May 2013 12:05:24 +0530 From: Aditya Avinash
To: boost@lists.boost.org Subject: Re: [boost] SIMD implementation of uBLAS Message-ID: Content-Type: text/plain; charset=ISO-8859-1

Am sorry. My bad. It's boost.simd. Why isn't it included in boost? I have heard about it recently. Is there a chance that it will be added to boost in the near future?
On Wed, May 29, 2013 at 11:57 AM, Karsten Ahnert < karsten.ahnert@googlemail.com> wrote:
On 05/29/2013 07:33 AM, Aditya Avinash wrote:
@Gaetano: Thank you for the comments. I'll change accordingly and post it back. I am using T because the code needs to run with double-precision floats as well. @Joel: Boost.SIMD is generalized; designing algorithms specific to uBLAS increases performance. odeint has its own SIMD backend.
odeint has no SIMD backend; at least I am not aware of one. Having one would be really great.
On Wed, May 29, 2013 at 10:36 AM, Joel Falcou
wrote: On 29/05/2013 06:45, Gaetano Mendola wrote:
On 29/05/2013 06.13, Aditya Avinash wrote:
Hi, I have developed a vector addition algorithm which exploits hardware parallelism (an SSE implementation).
A few comments:
- That is not C++ but just C in disguise of C++ code:
  . the SSE1 CTOR doesn't use an initialization list
  . SSE1 doesn't have a DTOR, so the user has to explicitly call the Free method
- const-correctness is not in place
- The SSE namespace should have been put in a "detail" namespace
- Use memcpy instead of an explicit for loop
- Why is SSE1 a template when it works only when T is a single-precision, floating-point value?

Also, I believe a nicer interface would have been:
SSE1::vector A(1024); SSE1::vector B(1024); SSE1::vector C(1024);
C = A + B;
Regards Gaetano Mendola
See our work on Boost.SIMD ...
-- ---------------- Atluri Aditya Avinash, India.
------------------------------
Message: 5 Date: Wed, 29 May 2013 12:19:39 +0530 From: Aditya Avinash
To: boost@lists.boost.org Subject: [boost] Boost on ARM using NEON Message-ID: Content-Type: text/plain; charset=ISO-8859-1

Hi, I want to develop a boost.arm or boost.neon library so that Boost is implemented on ARM.
-- ---------------- Atluri Aditya Avinash, India.
------------------------------
Message: 6 Date: Wed, 29 May 2013 11:45:12 +0400 From: Antony Polukhin
To: boost@lists.boost.org Subject: Re: [boost] Boost on ARM using NEON Message-ID: Content-Type: text/plain; charset=ISO-8859-1 2013/5/29 Aditya Avinash
: Hi, I want to develop a boost.arm or boost.neon library so that Boost is implemented on ARM.
Hi,
Boost works well on ARM, so nothing really needs to be developed. But if you are talking about SIMD for ARM, then you should take a look at Boost.SIMD and maybe offer the library developers your help.
-- Best regards, Antony Polukhin
------------------------------
Message: 7 Date: Wed, 29 May 2013 13:20:15 +0530 From: Aditya Avinash
To: boost@lists.boost.org Subject: Re: [boost] Boost on ARM using NEON Message-ID: Content-Type: text/plain; charset=ISO-8859-1

Thank you! Can I develop a new kernel for uBLAS using NEON?
On Wed, May 29, 2013 at 1:15 PM, Antony Polukhin
wrote: 2013/5/29 Aditya Avinash
: Hi, I want to develop a boost.arm or boost.neon library so that Boost is implemented on ARM.
Hi,
Boost works well on ARM, so nothing really needs to be developed. But if you are talking about SIMD for ARM, then you should take a look at Boost.SIMD and maybe offer the library developers your help.
-- Best regards, Antony Polukhin
_______________________________________________ Unsubscribe & other changes: http://lists.boost.org/mailman/listinfo.cgi/boost
-- ---------------- Atluri Aditya Avinash, India.
------------------------------
Message: 8 Date: Wed, 29 May 2013 09:52:57 +0200 From: Tim Blechmann
To: boost@lists.boost.org Subject: Re: [boost] Boost on ARM using NEON Message-ID: <51A5B3D9.2010807@klingt.org> Content-Type: text/plain; charset=ISO-8859-1

Hi, I want to develop a boost.arm or boost.neon library so that Boost is implemented on ARM.
Hi,
Boost works well on ARM, so nothing really needs to be developed. But if you are talking about SIMD for ARM, then you should take a look at Boost.SIMD and maybe offer the library developers your help.
from https://github.com/MetaScale/nt2/issues/180: "It is quite unsure at this stage whether NEON will be provided as an open-source module"
tim
------------------------------
Message: 9 Date: Wed, 29 May 2013 08:55:41 +0100 From: "Paul A. Bristow"
To: Subject: Re: [boost] Request to contribute boost::FFT Message-ID: <001f01ce5c41$ea88da30$bf9a8e90$@hetp.u-net.com> Content-Type: text/plain; charset="us-ascii" -----Original Message----- From: Boost [mailto:boost-bounces@lists.boost.org] On Behalf Of Nathan Bliss Sent: Tuesday, May 28, 2013 11:12 PM To: boost@lists.boost.org Subject: [boost] Request to contribute boost::FFT
I am writing to ask if I could contribute a C++ FFT implementation to the boost library.
Definitely, a good templated C++ FFT would be very welcome.
Would/does it work with Boost.Multiprecision to give much higher precision? (at a snail's pace of course).
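As an illustration of the higher-precision question only (it makes no claim about the proposed class's own interface), the twiddle-factor arithmetic an FFT relies on, cos and sin of 2*pi*k/N, is available for a Boost.Multiprecision type, so a fully templated FFT could in principle be instantiated at around 50 decimal digits:

#include <boost/multiprecision/cpp_dec_float.hpp>
#include <iostream>

// Self-contained check that the cos/sin twiddle-factor arithmetic an FFT
// needs works with a Boost.Multiprecision type (~50 decimal digits).
// This says nothing about the proposed boost::FFT class itself.
int main()
{
    typedef boost::multiprecision::cpp_dec_float_50 real_t;

    const real_t   pi    = acos(real_t(-1));   // found via argument-dependent lookup
    const unsigned n     = 2048;               // the FFT size discussed in the thread
    const unsigned k     = 1;
    const real_t   angle = -2 * pi * k / n;

    const real_t re = cos(angle);
    const real_t im = sin(angle);

    std::cout.precision(50);
    std::cout << "W(" << k << "/" << n << ") = " << re << " + " << im << "i\n";
    return 0;
}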
Paul
--- Paul A. Bristow, Prizet Farmhouse, Kendal LA8 8AB UK +44 1539 561830 07714330204 pbristow@hetp.u-net.com
------------------------------
Message: 10 Date: Wed, 29 May 2013 09:15:36 +0200 From: Victor Hiairrassary
To: "boost@lists.boost.org" Cc: "boost@lists.boost.org" Subject: Re: [boost] Boost on ARM using NEON Message-ID: <22F316D0-E311-4BE3-94FB-11DE688C9CAB@gmail.com> Content-Type: text/plain; charset=us-ascii boost already works very well on ARM !
If you want to use Neon extension, look at boost simd (I do not know if Neon is implemented yet, feel free to do it) !
https://github.com/MetaScale/nt2
On 29 mai 2013, at 08:49, Aditya Avinash
wrote: Hi, I want to develop a boost.arm or boost.neon library so that Boost is implemented on ARM.
-- ---------------- Atluri Aditya Avinash, India.
_______________________________________________ Unsubscribe & other changes: http://lists.boost.org/mailman/listinfo.cgi/boost
------------------------------
Message: 11 Date: Wed, 29 May 2013 12:11:31 +0400 From: Andrey Semashev
To: boost@lists.boost.org Subject: Re: [boost] Boost on ARM using NEON Message-ID: Content-Type: text/plain; charset=UTF-8 On Wed, May 29, 2013 at 11:52 AM, Tim Blechmann
wrote: Hi, I want to develop a boost.arm or boost.neon library so that Boost is implemented on ARM.
Hi,
Boost works well on ARM, so nothing really needs to be developed. But if you are talking about SIMD for ARM, then you should take a look at Boost.SIMD and maybe offer the library developers your help.
from https://github.com/MetaScale/nt2/issues/180: "It is quite unsure at this stage whether NEON will be provided as an open-source module"
Even if it's not openly provided by developers of NT2, nothing prevents you from implementing it yourself.
------------------------------
Message: 12 Date: Wed, 29 May 2013 10:37:48 +0200 From: Tim Blechmann
To: boost@lists.boost.org Subject: Re: [boost] Boost on ARM using NEON Message-ID: <51A5BE5C.6000705@klingt.org> Content-Type: text/plain; charset=ISO-8859-1

Hi, I want to develop a boost.arm or boost.neon library so that Boost is implemented on ARM.
Hi,
Boost works well on ARM, so nothing really needs to be developed. But if you are talking about SIMD for ARM, then you should take a look at Boost.SIMD and maybe offer the library developers your help.
from https://github.com/MetaScale/nt2/issues/180: "It is quite unsure at this stage whether NEON will be provided as an open-source module"
Even if it's not openly provided by developers of NT2, nothing prevents you from implementing it yourself.
yes and no ... if the nt2 devs submit boost.simd to become an official part of boost, the question is whether they would merge independently developed ARM/NEON support if it conflicts with their business interests ... the situation is a bit unfortunate ...
tim
------------------------------
Message: 13 Date: Wed, 29 May 2013 12:52:29 +0400 From: Andrey Semashev
To: boost@lists.boost.org Subject: Re: [boost] Boost on ARM using NEON Message-ID: Content-Type: text/plain; charset=UTF-8 On Wed, May 29, 2013 at 12:37 PM, Tim Blechmann
wrote: Hi, I want to develop a boost.arm or boost.neon library so that Boost is implemented on ARM.
Hi,
Boost works well on ARM, so nothing really needs to be developed. But if you are talking about SIMD for ARM, then you should take a look at Boost.SIMD and maybe offer the library developers your help.
from https://github.com/MetaScale/nt2/issues/180: "It is quite unsure at this stage whether NEON will be provided as an open-source module"
Even if it's not openly provided by developers of NT2, nothing prevents you from implementing it yourself.
yes and no ... if the nt2 devs submit boost.simd to become an official part of boost, the question is whether they would merge independently developed ARM/NEON support if it conflicts with their business interests ... the situation is a bit unfortunate ...
I realize that it may be inconvenient for them to expose their implementation of NEON module (if there is one) for various reasons. But as long as Boost.SIMD is licensed under BSL, anyone can use and improve this code if he likes to, even if it means implementing functionality similar to the proprietary solution.
It doesn't necessarily mean that the open solution will take the market share of the proprietary one.
------------------------------
Message: 14 Date: Wed, 29 May 2013 09:59:04 +0100 From: David Bellot
To: boost@lists.boost.org Subject: Re: [boost] Boost on ARM using NEON Message-ID: Content-Type: text/plain; charset=ISO-8859-1

Again, as I said, you develop and propose a patch to the ublas mailing-list. We're happy to see contributions from anybody.
Now I must say that we are interested in Neon instructions for ublas. It has been on the todo list for quite a long time too: http://ublas.sf.net
What I want to say is, apart from the 2 GSoC students and myself (general maintenance, official releases), nobody has a specific task assigned to them.
So if you want to contribute, you just work on it and talk about it on the mailing list so that people can be involved and help you.
If little by little you contribute with amazing ARM Neon code, then people will naturally take for granted that you are the ARM Neon specialist for ublas. As simple as that.
If someone comes with a better code than you then we will choose the other code. If you come with a better code than someone else, then we will choose your code.
So please, contribute.
Are you testing your code on a specific machine or a virtual one? What about things like the Raspberry Pi? I'd like to see benchmarks on these little things. Maybe you could start by benchmarking ublas on a tiny machine like that and/or an Android device and see how well gcc is able to generate auto-vectorized code for it. Check the assembly code to see if NEON instructions have been correctly generated.
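One concrete way to follow this up is to compile a trivial float loop for ARM and inspect the generated assembly for NEON instructions (vadd.f32, vld1.32, ...). The flags below are the usual ones for ARM gcc of that era and are an assumption about the reader's toolchain, not anything specific to ublas; -ffast-math (or -funsafe-math-optimizations) may also be needed before gcc will vectorize floats with NEON.

// Build for ARM and look at the generated assembly, e.g.:
//   g++ -O3 -mfpu=neon -mfloat-abi=hard -S -o add.s add.cpp
// then search add.s for NEON instructions such as vadd.f32 / vld1.32.
#include <cstddef>

void add(const float* __restrict a, const float* __restrict b,
         float* __restrict c, std::size_t n)
{
    for (std::size_t i = 0; i < n; ++i)   // simple enough for gcc to auto-vectorize
        c[i] = a[i] + b[i];
}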
Best, David
On Wed, May 29, 2013 at 8:50 AM, Aditya Avinash
wrote: Thank you! Can I develop a new kernel for uBLAS using NEON?
On Wed, May 29, 2013 at 1:15 PM, Antony Polukhin
wrote: 2013/5/29 Aditya Avinash
: Hi, I want to develop a boost.arm or boost.neon library so that Boost is implemented on ARM.
Hi,
Boost works well on ARM, so nothing really needs to be developed. But if you are talking about SIMD for ARM, then you should take a look at Boost.SIMD and maybe offer the library developers your help.
-- Best regards, Antony Polukhin
_______________________________________________ Unsubscribe & other changes: http://lists.boost.org/mailman/listinfo.cgi/boost
-- ---------------- Atluri Aditya Avinash, India.
_______________________________________________ Unsubscribe & other changes: http://lists.boost.org/mailman/listinfo.cgi/boost
------------------------------
Message: 15 Date: Wed, 29 May 2013 05:27:02 -0400 From: Rob Stewart
To: "boost@lists.boost.org" Subject: Re: [boost] SIMD implementation of uBLAS Message-ID: <8ED4CF8D-139A-4232-9C18-968AD38161E7@comcast.net> Content-Type: text/plain; charset=us-ascii On May 29, 2013, at 2:35 AM, Aditya Avinash
wrote: Am sorry. My bad. It's boost.simd. Why isn't it included in boost? I have heard about it recently. Is there a chance that it will be added to boost in the near future?
On Wed, May 29, 2013 at 11:57 AM, Karsten Ahnert < karsten.ahnert@googlemail.com> wrote:
On 05/29/2013 07:33 AM, Aditya Avinash wrote:
[snip lots of quoted text]
On Wed, May 29, 2013 at 10:36 AM, Joel Falcou
wrote: On 29/05/2013 06:45, Gaetano Mendola wrote:
On 29/05/2013 06.13, Aditya Avinash wrote:
[snip even more quoted text]
Regards Gaetano Mendola
See our work on Boost.SIMD ...
[snip multiple sigs and ML footers]
Please read http://www.boost.org/community/policy.html#quoting before posting.
___ Rob
(Sent from my portable computation engine)
------------------------------
Message: 16 Date: Wed, 29 May 2013 11:34:14 +0200 From: Mathias Gaunard
To: boost@lists.boost.org Subject: Re: [boost] SIMD implementation of uBLAS Message-ID: <51A5CB96.3040404@ens-lyon.org> Content-Type: text/plain; charset=ISO-8859-1; format=flowed On 29/05/13 06:13, Aditya Avinash wrote:
Hi, I have developed a vector addition algorithm which exploits hardware parallelism (an SSE implementation).
That's something trivial to do, and unfortunately even that trivial code is broken (it's written for a generic T but clearly does not work for any T besides float). It still has nothing to do with uBLAS.
Bringing SIMD to uBLAS could be fairly difficult. Is this part of the GSoC projects? Who's in charge of this? I'd like to know what the plan is: optimize very specific operations with SIMD or try to provide a framework to use SIMD in expression templates?
The former is better addressed by simply binding BLAS; the latter is certainly not as easy as it sounds.
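To make the contrast concrete, "simply binding BLAS" for one specific operation looks roughly like the sketch below (assuming some CBLAS implementation such as OpenBLAS or ATLAS is installed; the wrapper name is made up). An expression-template SIMD framework would instead have to generate comparable code for arbitrary expressions, not just this one pattern.

#include <vector>
#include <cstddef>
#include <cblas.h>   // any CBLAS implementation (OpenBLAS, ATLAS, ...)

// y <- alpha*x + y, delegated to the vendor-tuned BLAS kernel.
inline void axpy(double alpha, const std::vector<double>& x, std::vector<double>& y)
{
    if (x.empty()) return;
    cblas_daxpy(static_cast<int>(x.size()), alpha, &x[0], 1, &y[0], 1);
}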
------------------------------
Message: 17 Date: Wed, 29 May 2013 15:04:51 +0530 From: Aditya Avinash
To: boost@lists.boost.org Subject: Re: [boost] SIMD implementation of uBLAS Message-ID: Content-Type: text/plain; charset=ISO-8859-1

Am sorry. My bad. It's boost.simd. Why isn't it included in boost? I have heard about it recently. Is there a chance that it will be added to boost in the near future?
------------------------------
Message: 18 Date: Wed, 29 May 2013 11:38:45 +0200 From: Mathias Gaunard
To: boost@lists.boost.org Subject: Re: [boost] Request to contribute boost::FFT Message-ID: <51A5CCA5.80605@ens-lyon.org> Content-Type: text/plain; charset=ISO-8859-1; format=flowed On 29/05/13 00:12, Nathan Bliss wrote:
Dear Boost Community Members,

I am writing to ask if I could contribute a C++ FFT implementation to the boost library. I noticed that there is a boost::CRC so I thought it would be a good addition to have boost::FFT as well. MIT have an existing open-source FFTW library, but it is GPL-licensed, which is much more restrictive than the Boost license. I should imagine a commercial FFTW license would be very expensive. I have working FFT code which I have tested on Ubuntu Linux with GNU gcc and also Visual Studio Express 2012 (MS VC++) for Windows. My code, when run as a 2048-point FFT, agrees with the MIT FFTW one to a max error margin of 6 parts per million.

It is implemented using templates in a .hpp file and is very easy to use:

------------------------------------------------------------------------------------------------------------
[fft.hpp]
template
class FFT;
///////////////////////////////////////////////////////////////////////////////
[Invocation example]
typedef double My_type;
FFT user_fft;
user_fft.initialise_FFT();
get_input_samples(*user_fft.input_value_array, user_fft.m_fft_size);
user_fft.execute_FFT();
print_output_csv(user_fft.output_value_array, 2048);
------------------------------------------------------------------------------------------------------------

Its uniqueness compared to other FFT implementations is that it fully utilises the boost::thread library, to the extent that the user can give the number of threads they want to use as a parameter to the class template (the above case is for 4 parallel threads). It is structured and organised so that users could customise/optimise it for specific processor architectures. I've also tried to develop it in the spirit of Boost in that all class members which users should not access are private, only making public what is necessary for the API. My code is a decimation-in-time radix-2 FFT, and users could in theory use the existing API as a basis to extend it to more complex implementations such as the reverse/inverse FFT, bit-reversal of inputs/outputs and multi-dimensional FFTs.

I look forward to your reply.

Kind regards,
Nathan Bliss

You may want to give a look to the FFT functions bundled with NT2, courtesy of Domagoj Saric.
They also generate an FFT for a given compile-time size and use Boost.SIMD for vectorization (but no threads). Unfortunately the code is not so generic that it can work with arbitrary vector sizes, so it limits portability somewhat.
https://github.com/MetaScale/nt2/blob/master/modules/core/signal/include/nt2...
------------------------------
Subject: Digest Footer
_______________________________________________ Unsubscribe & other changes: http://lists.boost.org/mailman/listinfo.cgi/boost
------------------------------
End of Boost Digest, Vol 4012, Issue 1 **************************************