Hi,
Just to answer the previous questions about my FFT code...
I'm afraid it's only a basic FFT implementation. It may not be up to the standard of Boost, and I was always prepared for that, so I may have to submit it to a less prestigious open-source forum somewhere on the web instead.
I made a mistake earlier about the precision. Mine agrees with the FFTW 2048-point results to a maximum error of 3 parts per million (the largest difference from the FFTW value is 2.53E-06). Most of the outputs show zero or infinitesimal differences. I've checked my math many times, and I can only think it's due to performance-related approximations in FFTW, as the differences seem too small to be algorithmic errors.
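For reference, this is one way such a comparison could be computed; it is only an illustrative sketch, not code from my submission, and the container types and helper names are placeholders:

------------------------------------------------------------ [example: comparing against FFTW output]
#include <algorithm>
#include <complex>
#include <cstddef>
#include <vector>

// Largest absolute difference between one output and a reference (e.g. FFTW) output.
double max_abs_diff(const std::vector<std::complex<double> >& mine,
                    const std::vector<std::complex<double> >& reference)
{
    double max_diff = 0.0;
    for (std::size_t i = 0; i < mine.size(); ++i)
        max_diff = std::max(max_diff, std::abs(mine[i] - reference[i]));
    return max_diff;
}

// The same difference expressed in parts per million of the largest reference magnitude.
double error_ppm(const std::vector<std::complex<double> >& mine,
                 const std::vector<std::complex<double> >& reference)
{
    double peak = 0.0;
    for (std::size_t i = 0; i < reference.size(); ++i)
        peak = std::max(peak, std::abs(reference[i]));
    return peak > 0.0 ? 1.0e6 * max_abs_diff(mine, reference) / peak : 0.0;
}
------------------------------------------------------------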
Unfortunately, the FFTW implementation is roughly 20 times faster than mine. FFTW can run a 2048-point transform in 6 ms on my Ubuntu installation, whereas mine takes approximately 140 ms. On Windows, both implementations are an order of magnitude slower (5 seconds for mine versus 250 ms for FFTW).
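A simple harness along these lines is enough to reproduce such timings (illustrative only; it assumes an already-initialised object exposing an execute_FFT() member, as in the invocation example quoted further down):

------------------------------------------------------------ [example: timing a transform]
#include <boost/date_time/posix_time/posix_time.hpp>

// Average wall-clock time per transform, in milliseconds. FFT_T is assumed to
// expose execute_FFT(); everything else here is illustrative.
template<class FFT_T>
double time_fft_ms(FFT_T& fft, int repeats)
{
    namespace pt = boost::posix_time;
    pt::ptime start = pt::microsec_clock::universal_time();
    for (int i = 0; i < repeats; ++i)
        fft.execute_FFT();
    pt::time_duration elapsed = pt::microsec_clock::universal_time() - start;
    return elapsed.total_microseconds() / (1000.0 * repeats);
}
------------------------------------------------------------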
The Boost multi-threading appears to make no difference to the speed, even though I've run it through GDB and added printfs to check that my code automatically spawns the optimal number of threads, up to the maximum given as the template parameter.
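To make the threading point concrete, the spawning logic is conceptually along these lines. This is a simplified sketch, not the submitted code, and do_stage_slice() is a hypothetical stand-in for the per-thread butterfly loop:

------------------------------------------------------------ [example: capping threads at a template parameter]
#include <boost/bind.hpp>
#include <boost/thread.hpp>
#include <algorithm>
#include <cstddef>

// Hypothetical per-thread worker: processes butterflies [begin, end) of the
// current FFT stage. Body omitted; it only stands in for this sketch.
void do_stage_slice(std::size_t begin, std::size_t end) { (void)begin; (void)end; }

template<int NUM_THREADS>
void run_stage_in_parallel(std::size_t num_butterflies)
{
    // Spawn at most NUM_THREADS workers, but never more than the hardware
    // usefully supports (hardware_concurrency() may return 0 if unknown).
    unsigned hw = boost::thread::hardware_concurrency();
    unsigned n  = std::min<unsigned>(NUM_THREADS, hw ? hw : 1u);

    boost::thread_group workers;
    for (unsigned t = 0; t < n; ++t)
    {
        std::size_t begin = num_butterflies * t / n;
        std::size_t end   = num_butterflies * (t + 1) / n;
        workers.create_thread(boost::bind(&do_stage_slice, begin, end));
    }
    workers.join_all();  // every butterfly of this stage must finish before the next stage
}
------------------------------------------------------------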
Some compilers may be able to optimise the C++ code and the multi-threading to improve performance, though I do not know whether it could ever approach FFTW's.
I have not implemented more advanced FFT features such as multi-dimensional FFTs or real-input-only optimisations, but I think the current API could help users extend it to include forward and reverse/inverse FFTs, bit-reversal, multi-dimensional FFTs (by manipulating the input and output vectors), and so on. I've tried to keep the code well organised, structured and commented so that users could hopefully customise it with their own optimisations for specific processor architectures.
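As one example of the kind of extension I mean, a standard in-place bit-reversal permutation for a power-of-two, radix-2 decimation-in-time FFT could look like this (again only an illustration, not code lifted from my submission):

------------------------------------------------------------ [example: bit-reversal permutation]
#include <algorithm>
#include <complex>
#include <cstddef>
#include <vector>

// In-place bit-reversal reordering of the input, as used by radix-2
// decimation-in-time FFTs. data.size() is assumed to be a power of two.
void bit_reverse_permute(std::vector<std::complex<double> >& data)
{
    const std::size_t n = data.size();
    std::size_t j = 0;
    for (std::size_t i = 0; i < n; ++i)
    {
        if (i < j)
            std::swap(data[i], data[j]);
        // Increment j in bit-reversed order: ripple the carry down from the top bit.
        std::size_t mask = n >> 1;
        while (mask != 0 && (j & mask) != 0)
        {
            j ^= mask;   // clear this bit (the carry propagates)
            mask >>= 1;
        }
        j |= mask;       // set the first clear bit (or do nothing if mask is 0)
    }
}
------------------------------------------------------------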
I'll understand if the consensus is that it is not really good enough for Boost. Alternatively, I would be happy to share my code and share the credit if anyone else wants to help.

Kind regards,
Nathan
> From: boost-request(a)lists.boost.org
> Subject: Boost Digest, Vol 4012, Issue 1
> To: boost(a)lists.boost.org
> Date: Wed, 29 May 2013 05:38:51 -0400
>
> Send Boost mailing list submissions to
> boost(a)lists.boost.org
>
> To subscribe or unsubscribe via the World Wide Web, visit
> http://lists.boost.org/mailman/listinfo.cgi/boost
> or, via email, send a message with subject or body 'help' to
> boost-request(a)lists.boost.org
>
> You can reach the person managing the list at
> boost-owner(a)lists.boost.org
>
> When replying, please edit your Subject line so it is more specific
> than "Re: Contents of Boost digest..."
>
>
> The boost archives may be found at: http://lists.boost.org/Archives/boost/
>
> Today's Topics:
>
> 1. Re: SIMD implementation of uBLAS (Aditya Avinash)
> 2. Re: SIMD implementation of uBLAS (Karsten Ahnert)
> 3. Re: Request to contribute boost::FFT (Karsten Ahnert)
> 4. Re: SIMD implementation of uBLAS (Aditya Avinash)
> 5. Boost on ARM using NEON (Aditya Avinash)
> 6. Re: Boost on ARM using NEON (Antony Polukhin)
> 7. Re: Boost on ARM using NEON (Aditya Avinash)
> 8. Re: Boost on ARM using NEON (Tim Blechmann)
> 9. Re: Request to contribute boost::FFT (Paul A. Bristow)
> 10. Re: Boost on ARM using NEON (Victor Hiairrassary)
> 11. Re: Boost on ARM using NEON (Andrey Semashev)
> 12. Re: Boost on ARM using NEON (Tim Blechmann)
> 13. Re: Boost on ARM using NEON (Andrey Semashev)
> 14. Re: Boost on ARM using NEON (David Bellot)
> 15. Re: SIMD implementation of uBLAS (Rob Stewart)
> 16. Re: SIMD implementation of uBLAS (Mathias Gaunard)
> 17. Re: SIMD implementation of uBLAS (Aditya Avinash)
> 18. Re: Request to contribute boost::FFT (Mathias Gaunard)
>
>
> ----------------------------------------------------------------------
>
> Message: 1
> Date: Wed, 29 May 2013 11:03:50 +0530
> From: Aditya Avinash <adityaavinash143(a)gmail.com>
> To: boost(a)lists.boost.org
> Subject: Re: [boost] SIMD implementation of uBLAS
> Message-ID:
> <CABocMVpiKTED=tPZ4QtXBZAVdbwHMv+26LE+YXx9u2+HyWAFag(a)mail.gmail.com>
> Content-Type: text/plain; charset=ISO-8859-1
>
> @Gaetano: Thank you for the comments. I'll change accordingly and post it
> back. I am using T because, the code need to run double precision float
> also.
> @Joel: The Boost.SIMD is generalized. Designing algorithms specific to
> uBLAS increases the performance. Odeint have their own simd backend.
>
> On Wed, May 29, 2013 at 10:36 AM, Joel Falcou <joel.falcou(a)gmail.com> wrote:
>
> > On 29/05/2013 06:45, Gaetano Mendola wrote:
> >
> >> On 29/05/2013 06.13, Aditya Avinash wrote:
> >>
> >>> Hi, i have developed vector addition algorithm which exploits the
> >>> hardware
> >>> parallelism (SSE implementation).
> >>>
> >>
> >> A few comments:
> >>
> >> - That is not C++ but just C in disguise of C++ code
> >> . SSE1 CTOR doesn't use initialization list
> >> . SSE1 doesn't have a DTOR and the user has to
> >> explicit call the Free method
> >>
> >> - const-correctness is not in place
> >> - The SSE namespace should have been put in a "detail"
> >> namespace
> >> - Use memcpy instead of explicit for
> >> - Why is SSE1 template when it works only when T is a
> >> single-precision, floating-point value ?
> >>
> >>
> >> Also I believe a nice interface whould have been:
> >>
> >> SSE1::vector A(1024);
> >> SSE1::vector B(1024);
> >> SSE1::vector C(1024);
> >>
> >> C = A + B;
> >>
> >>
> >> Regards
> >> Gaetano Mendola
> >>
> >>
> > See our work on Boost.SIMD ...
> >
> >
> >
> > _______________________________________________
> > Unsubscribe & other changes: http://lists.boost.org/mailman/listinfo.cgi/boost
> >
>
>
>
> --
> ----------------
> Atluri Aditya Avinash,
> India.
>
>
> ------------------------------
>
> Message: 2
> Date: Wed, 29 May 2013 08:27:40 +0200
> From: Karsten Ahnert <karsten.ahnert(a)googlemail.com>
> To: boost(a)lists.boost.org
> Subject: Re: [boost] SIMD implementation of uBLAS
> Message-ID: <51A59FDC.5050409(a)googlemail.com>
> Content-Type: text/plain; charset=ISO-8859-1; format=flowed
>
> On 05/29/2013 07:33 AM, Aditya Avinash wrote:
> > @Gaetano: Thank you for the comments. I'll change accordingly and post it
> > back. I am using T because, the code need to run double precision float
> > also.
> > @Joel: The Boost.SIMD is generalized. Designing algorithms specific to
> > uBLAS increases the performance. Odeint have their own simd backend.
>
> odeint has no simd backend, At least i am not aware of an simd backend.
> Having one would be really great.
>
>
> >
> > On Wed, May 29, 2013 at 10:36 AM, Joel Falcou <joel.falcou(a)gmail.com> wrote:
> >
> >> On 29/05/2013 06:45, Gaetano Mendola wrote:
> >>
> >>> On 29/05/2013 06.13, Aditya Avinash wrote:
> >>>
> >>>> Hi, i have developed vector addition algorithm which exploits the
> >>>> hardware
> >>>> parallelism (SSE implementation).
> >>>>
> >>>
> >>> A few comments:
> >>>
> >>> - That is not C++ but just C in disguise of C++ code
> >>> . SSE1 CTOR doesn't use initialization list
> >>> . SSE1 doesn't have a DTOR and the user has to
> >>> explicit call the Free method
> >>>
> >>> - const-correctness is not in place
> >>> - The SSE namespace should have been put in a "detail"
> >>> namespace
> >>> - Use memcpy instead of explicit for
> >>> - Why is SSE1 template when it works only when T is a
> >>> single-precision, floating-point value ?
> >>>
> >>>
> >>> Also I believe a nice interface whould have been:
> >>>
> >>> SSE1::vector A(1024);
> >>> SSE1::vector B(1024);
> >>> SSE1::vector C(1024);
> >>>
> >>> C = A + B;
> >>>
> >>>
> >>> Regards
> >>> Gaetano Mendola
> >>>
> >>>
> >> See our work on Boost.SIMD ...
> >>
> >>
> >>
> >> _______________________________________________
> >> Unsubscribe & other changes: http://lists.boost.org/mailman/listinfo.cgi/boost
> >>
> >
> >
> >
>
>
>
> ------------------------------
>
> Message: 3
> Date: Wed, 29 May 2013 08:32:15 +0200
> From: Karsten Ahnert <karsten.ahnert(a)googlemail.com>
> To: boost(a)lists.boost.org
> Subject: Re: [boost] Request to contribute boost::FFT
> Message-ID: <51A5A0EF.9000806(a)googlemail.com>
> Content-Type: text/plain; charset=ISO-8859-1; format=flowed
>
> On 05/29/2013 12:12 AM, Nathan Bliss wrote:
> > Dear Boost Community Members,
> > I am writing to ask if I could contribute a C++ FFT implementation to the boost library. I noticed that there is a boost::CRC so I thought it would be a good addition to have boost::FFT as well. MIT have an existing open-source FFTW library, but it is GPL-licensed which is much more restrictive than the Boost license. I should imagine a commercial FFTW license would be very expensive.
> > I have working FFT code which I have tested on Ubuntu Linux GNU gcc and also Visual Studio Express 2012 (MS VC++) for Windows. My code, when run as a 2048-point FFT, agrees with the MIT FFTW one to a max error margin of 6 parts per million.
>
> I would really like to see an FFT implementation in boost. It should of
> course focus on performance. I also think that interoperability with
> different vector and storage types would be really great. It prevents
> that the user has to convert its data types in different formats (which
> can be really painful if you use different libraries, for example
> odeint, ublas, fftw, ...).
>
>
>
>
>
> ------------------------------
>
> Message: 4
> Date: Wed, 29 May 2013 12:05:24 +0530
> From: Aditya Avinash <adityaavinash143(a)gmail.com>
> To: boost(a)lists.boost.org
> Subject: Re: [boost] SIMD implementation of uBLAS
> Message-ID:
> <CABocMVpBesKaSTznHmKACxNpPUwYtTs50vUaWeZo5qUUaK9K+Q(a)mail.gmail.com>
> Content-Type: text/plain; charset=ISO-8859-1
>
> Am sorry. My bad. It's boost.simd. Why isn't it included in boost? I have
> heard about it recently. Is there a chance that it is added to boost in the
> near future?
>
> On Wed, May 29, 2013 at 11:57 AM, Karsten Ahnert <
> karsten.ahnert(a)googlemail.com> wrote:
>
> > On 05/29/2013 07:33 AM, Aditya Avinash wrote:
> >
> >> @Gaetano: Thank you for the comments. I'll change accordingly and post it
> >> back. I am using T because, the code need to run double precision float
> >> also.
> >> @Joel: The Boost.SIMD is generalized. Designing algorithms specific to
> >> uBLAS increases the performance. Odeint have their own simd backend.
> >>
> >
> > odeint has no simd backend, At least i am not aware of an simd backend.
> > Having one would be really great.
> >
> >
> >
> >> On Wed, May 29, 2013 at 10:36 AM, Joel Falcou <joel.falcou(a)gmail.com>
> >> wrote:
> >>
> >> On 29/05/2013 06:45, Gaetano Mendola wrote:
> >>>
> >>> On 29/05/2013 06.13, Aditya Avinash wrote:
> >>>>
> >>>> Hi, i have developed vector addition algorithm which exploits the
> >>>>> hardware
> >>>>> parallelism (SSE implementation).
> >>>>>
> >>>>>
> >>>> A few comments:
> >>>>
> >>>> - That is not C++ but just C in disguise of C++ code
> >>>> . SSE1 CTOR doesn't use initialization list
> >>>> . SSE1 doesn't have a DTOR and the user has to
> >>>> explicit call the Free method
> >>>>
> >>>> - const-correctness is not in place
> >>>> - The SSE namespace should have been put in a "detail"
> >>>> namespace
> >>>> - Use memcpy instead of explicit for
> >>>> - Why is SSE1 template when it works only when T is a
> >>>> single-precision, floating-point value ?
> >>>>
> >>>>
> >>>> Also I believe a nice interface whould have been:
> >>>>
> >>>> SSE1::vector A(1024);
> >>>> SSE1::vector B(1024);
> >>>> SSE1::vector C(1024);
> >>>>
> >>>> C = A + B;
> >>>>
> >>>>
> >>>> Regards
> >>>> Gaetano Mendola
> >>>>
> >>>>
> >>>> See our work on Boost.SIMD ...
> >>>
> >>>
> >>>
> >>> _______________________________________________
> >>> Unsubscribe & other changes: http://lists.boost.org/mailman/listinfo.cgi/boost
> >>>
> >>>
> >>
> >>
> >>
> >
> > _______________________________________________
> > Unsubscribe & other changes: http://lists.boost.org/mailman/listinfo.cgi/boost
> >
>
>
>
> --
> ----------------
> Atluri Aditya Avinash,
> India.
>
>
> ------------------------------
>
> Message: 5
> Date: Wed, 29 May 2013 12:19:39 +0530
> From: Aditya Avinash <adityaavinash143(a)gmail.com>
> To: boost(a)lists.boost.org
> Subject: [boost] Boost on ARM using NEON
> Message-ID:
> <CABocMVpQqBGqduE6jj-KG=O5faV08GcoV=TOS_bXFLYWTkzKjA(a)mail.gmail.com>
> Content-Type: text/plain; charset=ISO-8859-1
>
> Hi, i want to develop boost.arm or boost.neon library so that boost is
> implemented on ARM.
>
> --
> ----------------
> Atluri Aditya Avinash,
> India.
>
>
> ------------------------------
>
> Message: 6
> Date: Wed, 29 May 2013 11:45:12 +0400
> From: Antony Polukhin <antoshkka(a)gmail.com>
> To: boost(a)lists.boost.org
> Subject: Re: [boost] Boost on ARM using NEON
> Message-ID:
> <CAKqmYPbt+wq92BvLLn-5GUhFfmvWiWsAoqfpXK6XOmnRK7EtwQ(a)mail.gmail.com>
> Content-Type: text/plain; charset=ISO-8859-1
>
> 2013/5/29 Aditya Avinash <adityaavinash143(a)gmail.com>:
> > Hi, i want to develop boost.arm or boost.neon library so that boost is
> > implemented on ARM.
>
> Hi,
>
> Boost works well on arm, so nothing should be really developed.
> But if you ate talking about SIMD for ARM, that you shall take a look
> at Boost.SIMD and maybe propose library developer your help.
>
>
> --
> Best regards,
> Antony Polukhin
>
>
> ------------------------------
>
> Message: 7
> Date: Wed, 29 May 2013 13:20:15 +0530
> From: Aditya Avinash <adityaavinash143(a)gmail.com>
> To: boost(a)lists.boost.org
> Subject: Re: [boost] Boost on ARM using NEON
> Message-ID:
> <CABocMVo_DbCaxRPDOK4_1KMw8m7O54gXKgOqX+cmYRcBndNYDA(a)mail.gmail.com>
> Content-Type: text/plain; charset=ISO-8859-1
>
> Thank you!
> Can i develop a new kernel for uBLAS using NEON?
>
> On Wed, May 29, 2013 at 1:15 PM, Antony Polukhin <antoshkka(a)gmail.com>wrote:
>
> > 2013/5/29 Aditya Avinash <adityaavinash143(a)gmail.com>:
> > > Hi, i want to develop boost.arm or boost.neon library so that boost is
> > > implemented on ARM.
> >
> > Hi,
> >
> > Boost works well on arm, so nothing should be really developed.
> > But if you ate talking about SIMD for ARM, that you shall take a look
> > at Boost.SIMD and maybe propose library developer your help.
> >
> >
> > --
> > Best regards,
> > Antony Polukhin
> >
> > _______________________________________________
> > Unsubscribe & other changes:
> > http://lists.boost.org/mailman/listinfo.cgi/boost
> >
>
>
>
> --
> ----------------
> Atluri Aditya Avinash,
> India.
>
>
> ------------------------------
>
> Message: 8
> Date: Wed, 29 May 2013 09:52:57 +0200
> From: Tim Blechmann <tim(a)klingt.org>
> To: boost(a)lists.boost.org
> Subject: Re: [boost] Boost on ARM using NEON
> Message-ID: <51A5B3D9.2010807(a)klingt.org>
> Content-Type: text/plain; charset=ISO-8859-1
>
> >> Hi, i want to develop boost.arm or boost.neon library so that boost is
> >> implemented on ARM.
> >
> > Hi,
> >
> > Boost works well on arm, so nothing should be really developed.
> > But if you ate talking about SIMD for ARM, that you shall take a look
> > at Boost.SIMD and maybe propose library developer your help.
>
> from https://github.com/MetaScale/nt2/issues/180: "It is quite unsure at
> this stage whether NEON will be provided as an open-source module"
>
> tim
>
>
>
> ------------------------------
>
> Message: 9
> Date: Wed, 29 May 2013 08:55:41 +0100
> From: "Paul A. Bristow" <pbristow(a)hetp.u-net.com>
> To: <boost(a)lists.boost.org>
> Subject: Re: [boost] Request to contribute boost::FFT
> Message-ID: <001f01ce5c41$ea88da30$bf9a8e90$(a)hetp.u-net.com>
> Content-Type: text/plain; charset="us-ascii"
>
> > -----Original Message-----
> > From: Boost [mailto:boost-bounces@lists.boost.org] On Behalf Of Nathan Bliss
> > Sent: Tuesday, May 28, 2013 11:12 PM
> > To: boost(a)lists.boost.org
> > Subject: [boost] Request to contribute boost::FFT
> >
> > I am writing to ask if I could contribute a C++ FFT implementation to the boost library.
>
> Definitely, a good templated C++ FFT would be very welcome.
>
> Would/does it work with Boost.Multiprecision to give much higher precision ? (at a snail's pace of
> course).
>
> Paul
>
> ---
> Paul A. Bristow,
> Prizet Farmhouse, Kendal LA8 8AB UK
> +44 1539 561830 07714330204
> pbristow(a)hetp.u-net.com
>
>
>
>
>
>
>
>
>
>
>
>
> ------------------------------
>
> Message: 10
> Date: Wed, 29 May 2013 09:15:36 +0200
> From: Victor Hiairrassary <victor.hiairrassary.ml(a)gmail.com>
> To: "boost(a)lists.boost.org" <boost(a)lists.boost.org>
> Cc: "boost(a)lists.boost.org" <boost(a)lists.boost.org>
> Subject: Re: [boost] Boost on ARM using NEON
> Message-ID: <22F316D0-E311-4BE3-94FB-11DE688C9CAB(a)gmail.com>
> Content-Type: text/plain; charset=us-ascii
>
> boost already works very well on ARM !
>
> If you want to use Neon extension, look at boost simd (I do not know if Neon is implemented yet, feel free to do it) !
>
> https://github.com/MetaScale/nt2
>
> On 29 mai 2013, at 08:49, Aditya Avinash <adityaavinash143(a)gmail.com> wrote:
>
> > Hi, i want to develop boost.arm or boost.neon library so that boost is
> > implemented on ARM.
> >
> > --
> > ----------------
> > Atluri Aditya Avinash,
> > India.
> >
> > _______________________________________________
> > Unsubscribe & other changes: http://lists.boost.org/mailman/listinfo.cgi/boost
>
>
> ------------------------------
>
> Message: 11
> Date: Wed, 29 May 2013 12:11:31 +0400
> From: Andrey Semashev <andrey.semashev(a)gmail.com>
> To: boost(a)lists.boost.org
> Subject: Re: [boost] Boost on ARM using NEON
> Message-ID:
> <CAEhD+6BfavRaGgok9eMbDebCP1Hdg2T3Z3be7BWZTAZ1EDpfzg(a)mail.gmail.com>
> Content-Type: text/plain; charset=UTF-8
>
> On Wed, May 29, 2013 at 11:52 AM, Tim Blechmann <tim(a)klingt.org> wrote:
>
> > >> Hi, i want to develop boost.arm or boost.neon library so that boost is
> > >> implemented on ARM.
> > >
> > > Hi,
> > >
> > > Boost works well on arm, so nothing should be really developed.
> > > But if you ate talking about SIMD for ARM, that you shall take a look
> > > at Boost.SIMD and maybe propose library developer your help.
> >
> > from https://github.com/MetaScale/nt2/issues/180: "It is quite unsure at
> > this stage whether NEON will be provided as an open-source module"
> >
>
> Even if it's not openly provided by developers of NT2, nothing prevents you
> from implementing it yourself.
>
>
> ------------------------------
>
> Message: 12
> Date: Wed, 29 May 2013 10:37:48 +0200
> From: Tim Blechmann <tim(a)klingt.org>
> To: boost(a)lists.boost.org
> Subject: Re: [boost] Boost on ARM using NEON
> Message-ID: <51A5BE5C.6000705(a)klingt.org>
> Content-Type: text/plain; charset=ISO-8859-1
>
> >>>> Hi, i want to develop boost.arm or boost.neon library so that boost is
> >>>> implemented on ARM.
> >>>
> >>> Hi,
> >>>
> >>> Boost works well on arm, so nothing should be really developed.
> >>> But if you ate talking about SIMD for ARM, that you shall take a look
> >>> at Boost.SIMD and maybe propose library developer your help.
> >>
> >> from https://github.com/MetaScale/nt2/issues/180: "It is quite unsure at
> >> this stage whether NEON will be provided as an open-source module"
> >
> > Even if it's not openly provided by developers of NT2, nothing prevents you
> > from implementing it yourself.
>
> yes and no ... if the nt2 devs submit boost.simd to become an official
> part of boost, it is the question if they'd merge an independently
> developed arm/neon support, if it conflicts with their business
> interests ... the situation is a bit unfortunate ...
>
> tim
>
>
> ------------------------------
>
> Message: 13
> Date: Wed, 29 May 2013 12:52:29 +0400
> From: Andrey Semashev <andrey.semashev(a)gmail.com>
> To: boost(a)lists.boost.org
> Subject: Re: [boost] Boost on ARM using NEON
> Message-ID:
> <CAEhD+6D8h43AyTvg65ntZZm804R9p0fiOtARPp2G8cOJC64zog(a)mail.gmail.com>
> Content-Type: text/plain; charset=UTF-8
>
> On Wed, May 29, 2013 at 12:37 PM, Tim Blechmann <tim(a)klingt.org> wrote:
>
> > >>>> Hi, i want to develop boost.arm or boost.neon library so that boost is
> > >>>> implemented on ARM.
> > >>>
> > >>> Hi,
> > >>>
> > >>> Boost works well on arm, so nothing should be really developed.
> > >>> But if you ate talking about SIMD for ARM, that you shall take a look
> > >>> at Boost.SIMD and maybe propose library developer your help.
> > >>
> > >> from https://github.com/MetaScale/nt2/issues/180: "It is quite unsure
> > at
> > >> this stage whether NEON will be provided as an open-source module"
> > >
> > > Even if it's not openly provided by developers of NT2, nothing prevents
> > you
> > > from implementing it yourself.
> >
> > yes and no ... if the nt2 devs submit boost.simd to become an official
> > part of boost, it is the question if they'd merge an independently
> > developed arm/neon support, if it conflicts with their business
> > interests ... the situation is a bit unfortunate ...
> >
>
> I realize that it may be inconvenient for them to expose their
> implementation of NEON module (if there is one) for various reasons. But as
> long as Boost.SIMD is licensed under BSL, anyone can use and improve this
> code if he likes to, even if it means implementing functionality similar to
> the proprietary solution.
>
> It doesn't necessarily mean that the open solution will take the market
> share of the proprietary one.
>
>
> ------------------------------
>
> Message: 14
> Date: Wed, 29 May 2013 09:59:04 +0100
> From: David Bellot <david.bellot(a)gmail.com>
> To: boost(a)lists.boost.org
> Subject: Re: [boost] Boost on ARM using NEON
> Message-ID:
> <CAOE6ZJEVebMZF90SHC_yHNxmBWy=uC3K1OFokcPrF2RM7=uYrQ(a)mail.gmail.com>
> Content-Type: text/plain; charset=ISO-8859-1
>
> again, as I said, you develop and propose a patch to the ublas
> mailing-list. We're happy to see contributions from anybody.
>
> Now I must say that we are interested in Neon instructions for ublas.
> It has been on the todo list for quite a long time too:
> http://ublas.sf.net
>
> What I want to say is, apart from the 2 GSOC students and myself
> (general maintenance, official releases), nobody has a specific task
> assigned to.
>
> So if you want to contribute, you just work on it and talk about it on
> the mailing list so that people can be involved and help you.
>
> If little by little you contribute with amazing ARM Neon code, then
> people will naturally take for granted that you are the ARM Neon
> specialist for ublas. As simple as that.
>
> If someone comes with a better code than you then we will choose the
> other code. If you come with a better code than someone else, then we
> will choose your code.
>
> So please, contribute.
>
>
> Are you testing your code on a specific machine or a virtual one ?
> What's about things like Raspberry Pi ? I'd like to see benchmark on
> this little things. Maybe you can start benchmarking ublas on a tiny
> machine like that and/or an Android device and see how gcc is able to
> generate auto-vectorized code for this machine. Check the assembly
> code to see if Neon instructions have been correctly generated.
>
> Best,
> David
>
>
>
> On Wed, May 29, 2013 at 8:50 AM, Aditya Avinash
> <adityaavinash143(a)gmail.com> wrote:
> > Thank you!
> > Can i develop a new kernel for uBLAS using NEON?
> >
> > On Wed, May 29, 2013 at 1:15 PM, Antony Polukhin <antoshkka(a)gmail.com>wrote:
> >
> >> 2013/5/29 Aditya Avinash <adityaavinash143(a)gmail.com>:
> >> > Hi, i want to develop boost.arm or boost.neon library so that boost is
> >> > implemented on ARM.
> >>
> >> Hi,
> >>
> >> Boost works well on arm, so nothing should be really developed.
> >> But if you ate talking about SIMD for ARM, that you shall take a look
> >> at Boost.SIMD and maybe propose library developer your help.
> >>
> >>
> >> --
> >> Best regards,
> >> Antony Polukhin
> >>
> >> _______________________________________________
> >> Unsubscribe & other changes:
> >> http://lists.boost.org/mailman/listinfo.cgi/boost
> >>
> >
> >
> >
> > --
> > ----------------
> > Atluri Aditya Avinash,
> > India.
> >
> > _______________________________________________
> > Unsubscribe & other changes: http://lists.boost.org/mailman/listinfo.cgi/boost
>
>
> ------------------------------
>
> Message: 15
> Date: Wed, 29 May 2013 05:27:02 -0400
> From: Rob Stewart <robertstewart(a)comcast.net>
> To: "boost(a)lists.boost.org" <boost(a)lists.boost.org>
> Subject: Re: [boost] SIMD implementation of uBLAS
> Message-ID: <8ED4CF8D-139A-4232-9C18-968AD38161E7(a)comcast.net>
> Content-Type: text/plain; charset=us-ascii
>
> On May 29, 2013, at 2:35 AM, Aditya Avinash <adityaavinash143(a)gmail.com> wrote:
>
> > Am sorry. My bad. It's boost.simd. Why isn't it included in boost? I have heard about it recently. Is there a chance that it is added to boost in the near future?
> >
> > On Wed, May 29, 2013 at 11:57 AM, Karsten Ahnert <
> > karsten.ahnert(a)googlemail.com> wrote:
> >
> >> On 05/29/2013 07:33 AM, Aditya Avinash wrote:
>
> [snip lots of quoted text]
>
> >>> On Wed, May 29, 2013 at 10:36 AM, Joel Falcou <joel.falcou(a)gmail.com>
> >>> wrote:
> >>>
> >>> On 29/05/2013 06:45, Gaetano Mendola wrote:
> >>>>
> >>>> On 29/05/2013 06.13, Aditya Avinash wrote:
>
> [snip even more quoted text]
>
> >>>>> Regards
> >>>>> Gaetano Mendola
> >>>>>
> >>>>>
> >>>>> See our work on Boost.SIMD ...
>
> [snip multiple sigs and ML footers]
>
>
> Please read http://www.boost.org/community/policy.html#quoting before posting.
>
> ___
> Rob
>
> (Sent from my portable computation engine)
>
> ------------------------------
>
> Message: 16
> Date: Wed, 29 May 2013 11:34:14 +0200
> From: Mathias Gaunard <mathias.gaunard(a)ens-lyon.org>
> To: boost(a)lists.boost.org
> Subject: Re: [boost] SIMD implementation of uBLAS
> Message-ID: <51A5CB96.3040404(a)ens-lyon.org>
> Content-Type: text/plain; charset=ISO-8859-1; format=flowed
>
> On 29/05/13 06:13, Aditya Avinash wrote:
> > Hi, i have developed vector addition algorithm which exploits the hardware
> > parallelism (SSE implementation).
>
> That's something trivial to do, and unfortunately even that trivial code
> is broken (it's written for a generic T but clearly does not work for
> any T beside float).
> It still has nothing to do with uBLAS.
>
> Bringing SIMD to uBLAS could be fairly difficult. Is this part of the
> GSoC projects? Who's in charge of this?
> I'd like to know what the plan is: optimize very specific operations
> with SIMD or try to provide a framework to use SIMD in expression templates?
>
> The former is better adressed by simply binding BLAS, the latter is
> certainly not as easy as it sounds.
>
>
> ------------------------------
>
> Message: 17
> Date: Wed, 29 May 2013 15:04:51 +0530
> From: Aditya Avinash <adityaavinash143(a)gmail.com>
> To: boost(a)lists.boost.org
> Subject: Re: [boost] SIMD implementation of uBLAS
> Message-ID:
> <CABocMVr1k13N6iBK7VOR4-G8ZPavf6eZL5qd=8cEFLGxDJ_waw(a)mail.gmail.com>
> Content-Type: text/plain; charset=ISO-8859-1
>
> Am sorry. My bad. It's boost.simd. Why isn't it included in boost? I have
> heard about it recently. Is there a chance that it is added to boost in the
> near future?
>
>
> ------------------------------
>
> Message: 18
> Date: Wed, 29 May 2013 11:38:45 +0200
> From: Mathias Gaunard <mathias.gaunard(a)ens-lyon.org>
> To: boost(a)lists.boost.org
> Subject: Re: [boost] Request to contribute boost::FFT
> Message-ID: <51A5CCA5.80605(a)ens-lyon.org>
> Content-Type: text/plain; charset=ISO-8859-1; format=flowed
>
> On 29/05/13 00:12, Nathan Bliss wrote:
> > Dear Boost Community Members,
> > I am writing to ask if I could contribute a C++ FFT implementation to the boost library. I noticed that there is a boost::CRC so I thought it would be a good addition to have boost::FFT as well. MIT have an existing open-source FFTW library, but it is GPL-licensed which is much more restrictive than the Boost license. I should imagine a commercial FFTW license would be very expensive.
> > I have working FFT code which I have tested on Ubuntu Linux GNU gcc and also Visual Studio Express 2012 (MS VC++) for Windows. My code, when run as a 2048-point FFT, agrees with the MIT FFTW one to a max error margin of 6 parts per million.
> > It is implemented using templates in a .hpp file and is very easy to use:
> > ------------------------------------------------------------ [fft.hpp]
> >
> > template<class FLOAT_TYPE, int FFT_SIZE, int NUM_THREADS> class FFT;
> >
> > ------------------------------------------------------------ [Invocation example]
> >
> > typedef double My_type;
> > FFT<My_type, 2048, 4> user_fft;
> > user_fft.initialise_FFT();
> > get_input_samples(*user_fft.input_value_array, user_fft.m_fft_size);
> > user_fft.execute_FFT();
> > print_output_csv(user_fft.output_value_array, 2048);
> >
> > ------------------------------------------------------------
> > It's uniqueness compared to other FFT implementations is that it fully utilises the boost::thread library, to the extent that the user can give the number of threads they want to use as a parameter to the class template (above case is for 4 parallel threads).
> > It is structured and organised so that users could customise/optimise it to specific processor architectures.
> > I've also tried to develop it in the spirit of Boost in that all class members which users should not access are private, only making public what is necessary to the API. My code is a decimation-in-time radix-2 FFT, and users could in theory use the existing API as a basis to extend it to more complex implementations such as the Reverse/Inverse FFT, bit-reversal of inputs/outputs and multi-dimensional FFTs.
> > I look forward to your reply.
> > Kind regards,Nathan Bliss
>
> You may want to give a look to the FFT functions bundled with NT2
> courtesy of Domagoj Saric.
>
> They also generate a FFT for a given compile-time size and use
> Boost.SIMD for vectorization (but no threads). Unfortunately the code is
> not so generic that it can work with arbitrary vector sizes, so it
> limits portability somewhat.
>
> <https://github.com/MetaScale/nt2/blob/master/modules/core/signal/include/nt…>
>
>
> ------------------------------
>
> Subject: Digest Footer
>
> _______________________________________________
> Unsubscribe & other changes: http://lists.boost.org/mailman/listinfo.cgi/boost
>
> ------------------------------
>
> End of Boost Digest, Vol 4012, Issue 1
> **************************************
1
0
Hi,
Just to answer the previous questions about my FFT code...
I'm afraid it's only a basic FFT implementation. It may not be up to the standard of Boost and I was always prepared for this, so I may then have to submit it to a less prestigious open source forum on the web somewhere.
I made a mistake before about the precision. Mine agrees with the FFTW 2048-point results to a max error of 3 parts per million (max diff with FFTW value is 2.53E-06). Most of the outputs show zero or infinitesimal diffs. I've checked by math many times and I can only think it's because of performance-related approximations in FFTW, as the diffs seem too small to be algorithmic errors.
Unfortunately, the FFTW implementation is ~20 times faster than mine. FFTW can run a 2048-point in 6ms on my Ubuntu installation whereas mine takes approx. 140ms. On Windows, both implementations are an order of magnitude slower (5secs for mine versus 250ms for FFTW).
The boost multi-threading appears to make no difference to the speed, even though I've run it through GDB and with printfs to check that my code automatically spawns the optimal number of threads up to the max value given as the template parameter.
Some compilers may be able to optimise the C++ code and multi-threading to increase performance, though I do not know if it could ever approach FFTW's.
I have not implemented more advanced FFT features such as multi-dimensional FFTs or real-value-only optimisations, but I think the current API could facilitate users extending it to include forward/reverse FFT, bit-reversal, multi-dimensional FFTS (by manipulating the input and output vectors), etc. I've tried to make the code well organised, structured and commented so that hopefully users could customise it with their own optimisations for specific processor architectures.
I'll understand if the consensus is that it is not really good enough for Boost. Alternatively, I would be happy to share my code and share the credit if anyone else wants to help. Kind regards,Nathan
> From: boost-request(a)lists.boost.org
> Subject: Boost Digest, Vol 4012, Issue 1
> To: boost(a)lists.boost.org
> Date: Wed, 29 May 2013 05:38:51 -0400
>
> Send Boost mailing list submissions to
> boost(a)lists.boost.org
>
> To subscribe or unsubscribe via the World Wide Web, visit
> http://lists.boost.org/mailman/listinfo.cgi/boost
> or, via email, send a message with subject or body 'help' to
> boost-request(a)lists.boost.org
>
> You can reach the person managing the list at
> boost-owner(a)lists.boost.org
>
> When replying, please edit your Subject line so it is more specific
> than "Re: Contents of Boost digest..."
>
>
> The boost archives may be found at: http://lists.boost.org/Archives/boost/
>
> Today's Topics:
>
> 1. Re: SIMD implementation of uBLAS (Aditya Avinash)
> 2. Re: SIMD implementation of uBLAS (Karsten Ahnert)
> 3. Re: Request to contribute boost::FFT (Karsten Ahnert)
> 4. Re: SIMD implementation of uBLAS (Aditya Avinash)
> 5. Boost on ARM using NEON (Aditya Avinash)
> 6. Re: Boost on ARM using NEON (Antony Polukhin)
> 7. Re: Boost on ARM using NEON (Aditya Avinash)
> 8. Re: Boost on ARM using NEON (Tim Blechmann)
> 9. Re: Request to contribute boost::FFT (Paul A. Bristow)
> 10. Re: Boost on ARM using NEON (Victor Hiairrassary)
> 11. Re: Boost on ARM using NEON (Andrey Semashev)
> 12. Re: Boost on ARM using NEON (Tim Blechmann)
> 13. Re: Boost on ARM using NEON (Andrey Semashev)
> 14. Re: Boost on ARM using NEON (David Bellot)
> 15. Re: SIMD implementation of uBLAS (Rob Stewart)
> 16. Re: SIMD implementation of uBLAS (Mathias Gaunard)
> 17. Re: SIMD implementation of uBLAS (Aditya Avinash)
> 18. Re: Request to contribute boost::FFT (Mathias Gaunard)
>
>
> ----------------------------------------------------------------------
>
> Message: 1
> Date: Wed, 29 May 2013 11:03:50 +0530
> From: Aditya Avinash <adityaavinash143(a)gmail.com>
> To: boost(a)lists.boost.org
> Subject: Re: [boost] SIMD implementation of uBLAS
> Message-ID:
> <CABocMVpiKTED=tPZ4QtXBZAVdbwHMv+26LE+YXx9u2+HyWAFag(a)mail.gmail.com>
> Content-Type: text/plain; charset=ISO-8859-1
>
> @Gaetano: Thank you for the comments. I'll change accordingly and post it
> back. I am using T because, the code need to run double precision float
> also.
> @Joel: The Boost.SIMD is generalized. Designing algorithms specific to
> uBLAS increases the performance. Odeint have their own simd backend.
>
> On Wed, May 29, 2013 at 10:36 AM, Joel Falcou <joel.falcou(a)gmail.com> wrote:
>
> > On 29/05/2013 06:45, Gaetano Mendola wrote:
> >
> >> On 29/05/2013 06.13, Aditya Avinash wrote:
> >>
> >>> Hi, i have developed vector addition algorithm which exploits the
> >>> hardware
> >>> parallelism (SSE implementation).
> >>>
> >>
> >> A few comments:
> >>
> >> - That is not C++ but just C in disguise of C++ code
> >> . SSE1 CTOR doesn't use initialization list
> >> . SSE1 doesn't have a DTOR and the user has to
> >> explicit call the Free method
> >>
> >> - const-correctness is not in place
> >> - The SSE namespace should have been put in a "detail"
> >> namespace
> >> - Use memcpy instead of explicit for
> >> - Why is SSE1 template when it works only when T is a
> >> single-precision, floating-point value ?
> >>
> >>
> >> Also I believe a nice interface whould have been:
> >>
> >> SSE1::vector A(1024);
> >> SSE1::vector B(1024);
> >> SSE1::vector C(1024);
> >>
> >> C = A + B;
> >>
> >>
> >> Regards
> >> Gaetano Mendola
> >>
> >>
> > See our work on Boost.SIMD ...
> >
> >
> >
> > ______________________________**_________________
> > Unsubscribe & other changes: http://lists.boost.org/**
> > mailman/listinfo.cgi/boost<http://lists.boost.org/mailman/listinfo.cgi/boost>
> >
>
>
>
> --
> ----------------
> Atluri Aditya Avinash,
> India.
>
>
> ------------------------------
>
> Message: 2
> Date: Wed, 29 May 2013 08:27:40 +0200
> From: Karsten Ahnert <karsten.ahnert(a)googlemail.com>
> To: boost(a)lists.boost.org
> Subject: Re: [boost] SIMD implementation of uBLAS
> Message-ID: <51A59FDC.5050409(a)googlemail.com>
> Content-Type: text/plain; charset=ISO-8859-1; format=flowed
>
> On 05/29/2013 07:33 AM, Aditya Avinash wrote:
> > @Gaetano: Thank you for the comments. I'll change accordingly and post it
> > back. I am using T because, the code need to run double precision float
> > also.
> > @Joel: The Boost.SIMD is generalized. Designing algorithms specific to
> > uBLAS increases the performance. Odeint have their own simd backend.
>
> odeint has no simd backend, At least i am not aware of an simd backend.
> Having one would be really great.
>
>
> >
> > On Wed, May 29, 2013 at 10:36 AM, Joel Falcou <joel.falcou(a)gmail.com> wrote:
> >
> >> On 29/05/2013 06:45, Gaetano Mendola wrote:
> >>
> >>> On 29/05/2013 06.13, Aditya Avinash wrote:
> >>>
> >>>> Hi, i have developed vector addition algorithm which exploits the
> >>>> hardware
> >>>> parallelism (SSE implementation).
> >>>>
> >>>
> >>> A few comments:
> >>>
> >>> - That is not C++ but just C in disguise of C++ code
> >>> . SSE1 CTOR doesn't use initialization list
> >>> . SSE1 doesn't have a DTOR and the user has to
> >>> explicit call the Free method
> >>>
> >>> - const-correctness is not in place
> >>> - The SSE namespace should have been put in a "detail"
> >>> namespace
> >>> - Use memcpy instead of explicit for
> >>> - Why is SSE1 template when it works only when T is a
> >>> single-precision, floating-point value ?
> >>>
> >>>
> >>> Also I believe a nice interface whould have been:
> >>>
> >>> SSE1::vector A(1024);
> >>> SSE1::vector B(1024);
> >>> SSE1::vector C(1024);
> >>>
> >>> C = A + B;
> >>>
> >>>
> >>> Regards
> >>> Gaetano Mendola
> >>>
> >>>
> >> See our work on Boost.SIMD ...
> >>
> >>
> >>
> >> ______________________________**_________________
> >> Unsubscribe & other changes: http://lists.boost.org/**
> >> mailman/listinfo.cgi/boost<http://lists.boost.org/mailman/listinfo.cgi/boost>
> >>
> >
> >
> >
>
>
>
> ------------------------------
>
> Message: 3
> Date: Wed, 29 May 2013 08:32:15 +0200
> From: Karsten Ahnert <karsten.ahnert(a)googlemail.com>
> To: boost(a)lists.boost.org
> Subject: Re: [boost] Request to contribute boost::FFT
> Message-ID: <51A5A0EF.9000806(a)googlemail.com>
> Content-Type: text/plain; charset=ISO-8859-1; format=flowed
>
> On 05/29/2013 12:12 AM, Nathan Bliss wrote:
> > Dear Boost Community Members,
> > I am writing to ask if I could contribute a C++ FFT implementation to the boost library. I noticed that there is a boost::CRC so I thought it would be a good addition to have boost::FFT as well. MIT have an existing open-source FFTW library, but it is GPL-licensed which is much more restrictive than the Boost license. I should imagine a commercial FFTW license would be very expensive.
> > I have working FFT code which I have tested on Ubuntu Linux GNU gcc and also Visual Studio Express 2012 (MS VC++) for Windows. My code, when run as a 2048-point FFT, agrees with the MIT FFTW one to a max error margin of 6 parts per million.
>
> I would really like to see an FFT implementation in boost. It should of
> course focus on performance. I also think that interoperability with
> different vector and storage types would be really great. It prevents
> that the user has to convert its data types in different formats (which
> can be really painful if you use different libraries, for example
> odeint, ublas, fftw, ...).
>
>
>
>
>
> ------------------------------
>
> Message: 4
> Date: Wed, 29 May 2013 12:05:24 +0530
> From: Aditya Avinash <adityaavinash143(a)gmail.com>
> To: boost(a)lists.boost.org
> Subject: Re: [boost] SIMD implementation of uBLAS
> Message-ID:
> <CABocMVpBesKaSTznHmKACxNpPUwYtTs50vUaWeZo5qUUaK9K+Q(a)mail.gmail.com>
> Content-Type: text/plain; charset=ISO-8859-1
>
> Am sorry. My bad. It's boost.simd. Why isn't it included in boost? I have
> heard about it recently. Is there a chance that it is added to boost in the
> near future?
>
> On Wed, May 29, 2013 at 11:57 AM, Karsten Ahnert <
> karsten.ahnert(a)googlemail.com> wrote:
>
> > On 05/29/2013 07:33 AM, Aditya Avinash wrote:
> >
> >> @Gaetano: Thank you for the comments. I'll change accordingly and post it
> >> back. I am using T because, the code need to run double precision float
> >> also.
> >> @Joel: The Boost.SIMD is generalized. Designing algorithms specific to
> >> uBLAS increases the performance. Odeint have their own simd backend.
> >>
> >
> > odeint has no simd backend, At least i am not aware of an simd backend.
> > Having one would be really great.
> >
> >
> >
> >> On Wed, May 29, 2013 at 10:36 AM, Joel Falcou <joel.falcou(a)gmail.com>
> >> wrote:
> >>
> >> On 29/05/2013 06:45, Gaetano Mendola wrote:
> >>>
> >>> On 29/05/2013 06.13, Aditya Avinash wrote:
> >>>>
> >>>> Hi, i have developed vector addition algorithm which exploits the
> >>>>> hardware
> >>>>> parallelism (SSE implementation).
> >>>>>
> >>>>>
> >>>> A few comments:
> >>>>
> >>>> - That is not C++ but just C in disguise of C++ code
> >>>> . SSE1 CTOR doesn't use initialization list
> >>>> . SSE1 doesn't have a DTOR and the user has to
> >>>> explicit call the Free method
> >>>>
> >>>> - const-correctness is not in place
> >>>> - The SSE namespace should have been put in a "detail"
> >>>> namespace
> >>>> - Use memcpy instead of explicit for
> >>>> - Why is SSE1 template when it works only when T is a
> >>>> single-precision, floating-point value ?
> >>>>
> >>>>
> >>>> Also I believe a nice interface whould have been:
> >>>>
> >>>> SSE1::vector A(1024);
> >>>> SSE1::vector B(1024);
> >>>> SSE1::vector C(1024);
> >>>>
> >>>> C = A + B;
> >>>>
> >>>>
> >>>> Regards
> >>>> Gaetano Mendola
> >>>>
> >>>>
> >>>> See our work on Boost.SIMD ...
> >>>
> >>>
> >>>
> >>> ______________________________****_________________
> >>> Unsubscribe & other changes: http://lists.boost.org/**
> >>> mailman/listinfo.cgi/boost<htt**p://lists.boost.org/mailman/**
> >>> listinfo.cgi/boost <http://lists.boost.org/mailman/listinfo.cgi/boost>>
> >>>
> >>>
> >>
> >>
> >>
> >
> > ______________________________**_________________
> > Unsubscribe & other changes: http://lists.boost.org/**
> > mailman/listinfo.cgi/boost<http://lists.boost.org/mailman/listinfo.cgi/boost>
> >
>
>
>
> --
> ----------------
> Atluri Aditya Avinash,
> India.
>
>
> ------------------------------
>
> Message: 5
> Date: Wed, 29 May 2013 12:19:39 +0530
> From: Aditya Avinash <adityaavinash143(a)gmail.com>
> To: boost(a)lists.boost.org
> Subject: [boost] Boost on ARM using NEON
> Message-ID:
> <CABocMVpQqBGqduE6jj-KG=O5faV08GcoV=TOS_bXFLYWTkzKjA(a)mail.gmail.com>
> Content-Type: text/plain; charset=ISO-8859-1
>
> Hi, i want to develop boost.arm or boost.neon library so that boost is
> implemented on ARM.
>
> --
> ----------------
> Atluri Aditya Avinash,
> India.
>
>
> ------------------------------
>
> Message: 6
> Date: Wed, 29 May 2013 11:45:12 +0400
> From: Antony Polukhin <antoshkka(a)gmail.com>
> To: boost(a)lists.boost.org
> Subject: Re: [boost] Boost on ARM using NEON
> Message-ID:
> <CAKqmYPbt+wq92BvLLn-5GUhFfmvWiWsAoqfpXK6XOmnRK7EtwQ(a)mail.gmail.com>
> Content-Type: text/plain; charset=ISO-8859-1
>
> 2013/5/29 Aditya Avinash <adityaavinash143(a)gmail.com>:
> > Hi, i want to develop boost.arm or boost.neon library so that boost is
> > implemented on ARM.
>
> Hi,
>
> Boost works well on arm, so nothing should be really developed.
> But if you ate talking about SIMD for ARM, that you shall take a look
> at Boost.SIMD and maybe propose library developer your help.
>
>
> --
> Best regards,
> Antony Polukhin
>
>
> ------------------------------
>
> Message: 7
> Date: Wed, 29 May 2013 13:20:15 +0530
> From: Aditya Avinash <adityaavinash143(a)gmail.com>
> To: boost(a)lists.boost.org
> Subject: Re: [boost] Boost on ARM using NEON
> Message-ID:
> <CABocMVo_DbCaxRPDOK4_1KMw8m7O54gXKgOqX+cmYRcBndNYDA(a)mail.gmail.com>
> Content-Type: text/plain; charset=ISO-8859-1
>
> Thank you!
> Can i develop a new kernel for uBLAS using NEON?
>
> On Wed, May 29, 2013 at 1:15 PM, Antony Polukhin <antoshkka(a)gmail.com>wrote:
>
> > 2013/5/29 Aditya Avinash <adityaavinash143(a)gmail.com>:
> > > Hi, i want to develop boost.arm or boost.neon library so that boost is
> > > implemented on ARM.
> >
> > Hi,
> >
> > Boost works well on arm, so nothing should be really developed.
> > But if you ate talking about SIMD for ARM, that you shall take a look
> > at Boost.SIMD and maybe propose library developer your help.
> >
> >
> > --
> > Best regards,
> > Antony Polukhin
> >
> > _______________________________________________
> > Unsubscribe & other changes:
> > http://lists.boost.org/mailman/listinfo.cgi/boost
> >
>
>
>
> --
> ----------------
> Atluri Aditya Avinash,
> India.
>
>
> ------------------------------
>
> Message: 8
> Date: Wed, 29 May 2013 09:52:57 +0200
> From: Tim Blechmann <tim(a)klingt.org>
> To: boost(a)lists.boost.org
> Subject: Re: [boost] Boost on ARM using NEON
> Message-ID: <51A5B3D9.2010807(a)klingt.org>
> Content-Type: text/plain; charset=ISO-8859-1
>
> >> Hi, i want to develop boost.arm or boost.neon library so that boost is
> >> implemented on ARM.
> >
> > Hi,
> >
> > Boost works well on arm, so nothing should be really developed.
> > But if you ate talking about SIMD for ARM, that you shall take a look
> > at Boost.SIMD and maybe propose library developer your help.
>
> from https://github.com/MetaScale/nt2/issues/180: "It is quite unsure at
> this stage whether NEON will be provided as an open-source module"
>
> tim
>
>
>
> ------------------------------
>
> Message: 9
> Date: Wed, 29 May 2013 08:55:41 +0100
> From: "Paul A. Bristow" <pbristow(a)hetp.u-net.com>
> To: <boost(a)lists.boost.org>
> Subject: Re: [boost] Request to contribute boost::FFT
> Message-ID: <001f01ce5c41$ea88da30$bf9a8e90$(a)hetp.u-net.com>
> Content-Type: text/plain; charset="us-ascii"
>
> > -----Original Message-----
> > From: Boost [mailto:boost-bounces@lists.boost.org] On Behalf Of Nathan Bliss
> > Sent: Tuesday, May 28, 2013 11:12 PM
> > To: boost(a)lists.boost.org
> > Subject: [boost] Request to contribute boost::FFT
> >
> > I am writing to ask if I could contribute a C++ FFT implementation to the boost library.
>
> Definitely, a good templated C++ FFT would be very welcome.
>
> Would/does it work with Boost.Multiprecision to give much higher precision ? (at a snail's pace of
> course).
>
> Paul
>
> ---
> Paul A. Bristow,
> Prizet Farmhouse, Kendal LA8 8AB UK
> +44 1539 561830 07714330204
> pbristow(a)hetp.u-net.com
>
>
>
>
>
>
>
>
>
>
>
>
> ------------------------------
>
> Message: 10
> Date: Wed, 29 May 2013 09:15:36 +0200
> From: Victor Hiairrassary <victor.hiairrassary.ml(a)gmail.com>
> To: "boost(a)lists.boost.org" <boost(a)lists.boost.org>
> Cc: "boost(a)lists.boost.org" <boost(a)lists.boost.org>
> Subject: Re: [boost] Boost on ARM using NEON
> Message-ID: <22F316D0-E311-4BE3-94FB-11DE688C9CAB(a)gmail.com>
> Content-Type: text/plain; charset=us-ascii
>
> boost already works very well on ARM !
>
> If you want to use Neon extension, look at boost simd (I do not know if Neon is implemented yet, feel free to do it) !
>
> https://github.com/MetaScale/nt2
>
> On 29 mai 2013, at 08:49, Aditya Avinash <adityaavinash143(a)gmail.com> wrote:
>
> > Hi, i want to develop boost.arm or boost.neon library so that boost is
> > implemented on ARM.
> >
> > --
> > ----------------
> > Atluri Aditya Avinash,
> > India.
> >
> > _______________________________________________
> > Unsubscribe & other changes: http://lists.boost.org/mailman/listinfo.cgi/boost
>
>
> ------------------------------
>
> Message: 11
> Date: Wed, 29 May 2013 12:11:31 +0400
> From: Andrey Semashev <andrey.semashev(a)gmail.com>
> To: boost(a)lists.boost.org
> Subject: Re: [boost] Boost on ARM using NEON
> Message-ID:
> <CAEhD+6BfavRaGgok9eMbDebCP1Hdg2T3Z3be7BWZTAZ1EDpfzg(a)mail.gmail.com>
> Content-Type: text/plain; charset=UTF-8
>
> On Wed, May 29, 2013 at 11:52 AM, Tim Blechmann <tim(a)klingt.org> wrote:
>
> > >> Hi, i want to develop boost.arm or boost.neon library so that boost is
> > >> implemented on ARM.
> > >
> > > Hi,
> > >
> > > Boost works well on arm, so nothing should be really developed.
> > > But if you ate talking about SIMD for ARM, that you shall take a look
> > > at Boost.SIMD and maybe propose library developer your help.
> >
> > from https://github.com/MetaScale/nt2/issues/180: "It is quite unsure at
> > this stage whether NEON will be provided as an open-source module"
> >
>
> Even if it's not openly provided by developers of NT2, nothing prevents you
> from implementing it yourself.
>
>
> ------------------------------
>
> Message: 12
> Date: Wed, 29 May 2013 10:37:48 +0200
> From: Tim Blechmann <tim(a)klingt.org>
> To: boost(a)lists.boost.org
> Subject: Re: [boost] Boost on ARM using NEON
> Message-ID: <51A5BE5C.6000705(a)klingt.org>
> Content-Type: text/plain; charset=ISO-8859-1
>
> >>>> Hi, i want to develop boost.arm or boost.neon library so that boost is
> >>>> implemented on ARM.
> >>>
> >>> Hi,
> >>>
> >>> Boost works well on arm, so nothing should be really developed.
> >>> But if you ate talking about SIMD for ARM, that you shall take a look
> >>> at Boost.SIMD and maybe propose library developer your help.
> >>
> >> from https://github.com/MetaScale/nt2/issues/180: "It is quite unsure at
> >> this stage whether NEON will be provided as an open-source module"
> >
> > Even if it's not openly provided by developers of NT2, nothing prevents you
> > from implementing it yourself.
>
> yes and no ... if the nt2 devs submit boost.simd to become an official
> part of boost, it is the question if they'd merge an independently
> developed arm/neon support, if it conflicts with their business
> interests ... the situation is a bit unfortunate ...
>
> tim
>
>
> ------------------------------
>
> Message: 13
> Date: Wed, 29 May 2013 12:52:29 +0400
> From: Andrey Semashev <andrey.semashev(a)gmail.com>
> To: boost(a)lists.boost.org
> Subject: Re: [boost] Boost on ARM using NEON
> Message-ID:
> <CAEhD+6D8h43AyTvg65ntZZm804R9p0fiOtARPp2G8cOJC64zog(a)mail.gmail.com>
> Content-Type: text/plain; charset=UTF-8
>
> On Wed, May 29, 2013 at 12:37 PM, Tim Blechmann <tim(a)klingt.org> wrote:
>
> > >>>> Hi, i want to develop boost.arm or boost.neon library so that boost is
> > >>>> implemented on ARM.
> > >>>
> > >>> Hi,
> > >>>
> > >>> Boost works well on arm, so nothing should be really developed.
> > >>> But if you ate talking about SIMD for ARM, that you shall take a look
> > >>> at Boost.SIMD and maybe propose library developer your help.
> > >>
> > >> from https://github.com/MetaScale/nt2/issues/180: "It is quite unsure
> > at
> > >> this stage whether NEON will be provided as an open-source module"
> > >
> > > Even if it's not openly provided by developers of NT2, nothing prevents
> > you
> > > from implementing it yourself.
> >
> > yes and no ... if the nt2 devs submit boost.simd to become an official
> > part of boost, it is the question if they'd merge an independently
> > developed arm/neon support, if it conflicts with their business
> > interests ... the situation is a bit unfortunate ...
> >
>
> I realize that it may be inconvenient for them to expose their
> implementation of NEON module (if there is one) for various reasons. But as
> long as Boost.SIMD is licensed under BSL, anyone can use and improve this
> code if he likes to, even if it means implementing functionality similar to
> the proprietary solution.
>
> It doesn't necessarily mean that the open solution will take the market
> share of the proprietary one.
>
>
> ------------------------------
>
> Message: 14
> Date: Wed, 29 May 2013 09:59:04 +0100
> From: David Bellot <david.bellot(a)gmail.com>
> To: boost(a)lists.boost.org
> Subject: Re: [boost] Boost on ARM using NEON
> Message-ID:
> <CAOE6ZJEVebMZF90SHC_yHNxmBWy=uC3K1OFokcPrF2RM7=uYrQ(a)mail.gmail.com>
> Content-Type: text/plain; charset=ISO-8859-1
>
> again, as I said, you develop and propose a patch to the ublas
> mailing-list. We're happy to see contributions from anybody.
>
> Now I must say that we are interested in Neon instructions for ublas.
> It has been on the todo list for quite a long time too:
> http://ublas.sf.net
>
> What I want to say is, apart from the 2 GSOC students and myself
> (general maintenance, official releases), nobody has a specific task
> assigned to.
>
> So if you want to contribute, you just work on it and talk about it on
> the mailing list so that people can be involved and help you.
>
> If little by little you contribute with amazing ARM Neon code, then
> people will naturally take for granted that you are the ARM Neon
> specialist for ublas. As simple as that.
>
> If someone comes with a better code than you then we will choose the
> other code. If you come with a better code than someone else, then we
> will choose your code.
>
> So please, contribute.
>
>
> Are you testing your code on a specific machine or a virtual one ?
> What's about things like Raspberry Pi ? I'd like to see benchmark on
> this little things. Maybe you can start benchmarking ublas on a tiny
> machine like that and/or an Android device and see how gcc is able to
> generate auto-vectorized code for this machine. Check the assembly
> code to see if Neon instructions have been correctly generated.
>
> Best,
> David
>
>
>
> On Wed, May 29, 2013 at 8:50 AM, Aditya Avinash
> <adityaavinash143(a)gmail.com> wrote:
> > Thank you!
> > Can i develop a new kernel for uBLAS using NEON?
> >
> > On Wed, May 29, 2013 at 1:15 PM, Antony Polukhin <antoshkka(a)gmail.com>wrote:
> >
> >> 2013/5/29 Aditya Avinash <adityaavinash143(a)gmail.com>:
> >> > Hi, i want to develop boost.arm or boost.neon library so that boost is
> >> > implemented on ARM.
> >>
> >> Hi,
> >>
> >> Boost works well on arm, so nothing should be really developed.
> >> But if you ate talking about SIMD for ARM, that you shall take a look
> >> at Boost.SIMD and maybe propose library developer your help.
> >>
> >>
> >> --
> >> Best regards,
> >> Antony Polukhin
> >>
> >> _______________________________________________
> >> Unsubscribe & other changes:
> >> http://lists.boost.org/mailman/listinfo.cgi/boost
> >>
> >
> >
> >
> > --
> > ----------------
> > Atluri Aditya Avinash,
> > India.
> >
> > _______________________________________________
> > Unsubscribe & other changes: http://lists.boost.org/mailman/listinfo.cgi/boost
>
>
> ------------------------------
>
> Message: 15
> Date: Wed, 29 May 2013 05:27:02 -0400
> From: Rob Stewart <robertstewart(a)comcast.net>
> To: "boost(a)lists.boost.org" <boost(a)lists.boost.org>
> Subject: Re: [boost] SIMD implementation of uBLAS
> Message-ID: <8ED4CF8D-139A-4232-9C18-968AD38161E7(a)comcast.net>
> Content-Type: text/plain; charset=us-ascii
>
> On May 29, 2013, at 2:35 AM, Aditya Avinash <adityaavinash143(a)gmail.com> wrote:
>
> > I'm sorry, my bad. It's Boost.SIMD. Why isn't it included in Boost? I have only heard about it recently. Is there a chance that it will be added to Boost in the near future?
> >
> > On Wed, May 29, 2013 at 11:57 AM, Karsten Ahnert <
> > karsten.ahnert(a)googlemail.com> wrote:
> >
> >> On 05/29/2013 07:33 AM, Aditya Avinash wrote:
>
> [snip lots of quoted text]
>
> >>> On Wed, May 29, 2013 at 10:36 AM, Joel Falcou <joel.falcou(a)gmail.com>
> >>> wrote:
> >>>
> >>> On 29/05/2013 06:45, Gaetano Mendola wrote:
> >>>>
> >>>> On 29/05/2013 06.13, Aditya Avinash wrote:
>
> [snip even more quoted text]
>
> >>>>> Regards
> >>>>> Gaetano Mendola
> >>>>>
> >>>>>
> >>>>> See our work on Boost.SIMD ...
>
> [snip multiple sigs and ML footers]
>
>
> Please read http://www.boost.org/community/policy.html#quoting before posting.
>
> ___
> Rob
>
> (Sent from my portable computation engine)
>
> ------------------------------
>
> Message: 16
> Date: Wed, 29 May 2013 11:34:14 +0200
> From: Mathias Gaunard <mathias.gaunard(a)ens-lyon.org>
> To: boost(a)lists.boost.org
> Subject: Re: [boost] SIMD implementation of uBLAS
> Message-ID: <51A5CB96.3040404(a)ens-lyon.org>
> Content-Type: text/plain; charset=ISO-8859-1; format=flowed
>
> On 29/05/13 06:13, Aditya Avinash wrote:
> > Hi, I have developed a vector addition algorithm which exploits hardware
> > parallelism (an SSE implementation).
>
> That's something trivial to do, and unfortunately even that trivial code
> is broken (it's written for a generic T but clearly does not work for
> any T besides float).
> It still has nothing to do with uBLAS.
>
> Bringing SIMD to uBLAS could be fairly difficult. Is this part of the
> GSoC projects? Who's in charge of this?
> I'd like to know what the plan is: optimize very specific operations
> with SIMD or try to provide a framework to use SIMD in expression templates?
>
> The former is better addressed by simply binding BLAS; the latter is
> certainly not as easy as it sounds.
>
>
> ------------------------------
>
> Message: 17
> Date: Wed, 29 May 2013 15:04:51 +0530
> From: Aditya Avinash <adityaavinash143(a)gmail.com>
> To: boost(a)lists.boost.org
> Subject: Re: [boost] SIMD implementation of uBLAS
> Message-ID:
> <CABocMVr1k13N6iBK7VOR4-G8ZPavf6eZL5qd=8cEFLGxDJ_waw(a)mail.gmail.com>
> Content-Type: text/plain; charset=ISO-8859-1
>
> I'm sorry, my bad. It's Boost.SIMD. Why isn't it included in Boost? I have
> only heard about it recently. Is there a chance that it will be added to
> Boost in the near future?
>
>
> ------------------------------
>
> Message: 18
> Date: Wed, 29 May 2013 11:38:45 +0200
> From: Mathias Gaunard <mathias.gaunard(a)ens-lyon.org>
> To: boost(a)lists.boost.org
> Subject: Re: [boost] Request to contribute boost::FFT
> Message-ID: <51A5CCA5.80605(a)ens-lyon.org>
> Content-Type: text/plain; charset=ISO-8859-1; format=flowed
>
> On 29/05/13 00:12, Nathan Bliss wrote:
> > Dear Boost Community Members,
> > I am writing to ask if I could contribute a C++ FFT implementation to the boost library. I noticed that there is a boost::CRC so I thought it would be a good addition to have boost::FFT as well. MIT has an existing open-source FFTW library, but it is GPL-licensed, which is much more restrictive than the Boost license. I should imagine a commercial FFTW license would be very expensive.
> > I have working FFT code which I have tested with GNU gcc on Ubuntu Linux and also Visual Studio Express 2012 (MS VC++) on Windows. My code, when run as a 2048-point FFT, agrees with the MIT FFTW one to a max error margin of 6 parts per million.
> > It is implemented using templates in a .hpp file and is very easy to use:
> > [fft.hpp]
> > template<class FLOAT_TYPE, int FFT_SIZE, int NUM_THREADS> class FFT;
> >
> > [Invocation example]
> > typedef double My_type;
> > FFT<My_type, 2048, 4> user_fft;
> > user_fft.initialise_FFT();
> > get_input_samples(*user_fft.input_value_array, user_fft.m_fft_size);
> > user_fft.execute_FFT();
> > print_output_csv(user_fft.output_value_array, 2048);
> > Its uniqueness compared to other FFT implementations is that it fully utilises the boost::thread library, to the extent that the user can give the number of threads they want to use as a parameter to the class template (the above case uses 4 parallel threads).
> > It is structured and organised so that users could customise/optimise it to specific processor architectures.
> > I've also tried to develop it in the spirit of Boost in that all class members which users should not access are private, only making public what is necessary to the API. My code is a decimation-in-time radix-2 FFT, and users could in theory use the existing API as a basis to extend it to more complex implementations such as the Reverse/Inverse FFT, bit-reversal of inputs/outputs and multi-dimensional FFTs.
> > I look forward to your reply.
> > Kind regards,Nathan Bliss
>
> You may want to take a look at the FFT functions bundled with NT2,
> courtesy of Domagoj Saric.
>
> They also generate an FFT for a given compile-time size and use
> Boost.SIMD for vectorization (but no threads). Unfortunately, the code is
> not so generic that it can work with arbitrary vector sizes, so it
> limits portability somewhat.
>
> <https://github.com/MetaScale/nt2/blob/master/modules/core/signal/include/nt…>
>
>
> ------------------------------
>
> Subject: Digest Footer
>
> _______________________________________________
> Unsubscribe & other changes: http://lists.boost.org/mailman/listinfo.cgi/boost
>
> ------------------------------
>
> End of Boost Digest, Vol 4012, Issue 1
> **************************************
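To make David's suggestion above (benchmarking ublas on a Raspberry Pi or an Android device and inspecting the assembly) concrete, here is a minimal, self-contained kernel one might compile to see whether gcc auto-vectorizes it with NEON. This is an ordinary illustrative example, not code from the thread; the file name and compile flags are assumptions to adapt to your own toolchain.

// saxpy.cpp -- tiny loop to test gcc's NEON auto-vectorization.
// Possible compile command (assumed cross toolchain; adjust as needed):
//   arm-linux-gnueabihf-g++ -O3 -mfpu=neon -mfloat-abi=hard -ftree-vectorize -S saxpy.cpp
// Note: older gcc releases may also need -ffast-math before they will use NEON
// for float loops, since NEON arithmetic is not fully IEEE-conformant.
// Then inspect saxpy.s for NEON instructions such as vld1.32, vmla.f32 or vadd.f32.
#include <cstddef>

void saxpy(float* y, const float* x, float a, std::size_t n)
{
    for (std::size_t i = 0; i < n; ++i)
        y[i] += a * x[i];   // simple enough for the auto-vectorizer to handle
}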
Hi, I want to develop a boost.arm or boost.neon library so that Boost is
implemented on ARM.
--
----------------
Atluri Aditya Avinash,
India.
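Since the question above is about writing NEON kernels (for example for uBLAS), here is a minimal sketch of a hand-written NEON vector addition using the standard arm_neon.h intrinsics. It is an ordinary intrinsics example, not code from Boost.SIMD, NT2 or uBLAS; the function name and the assumption that n is a multiple of 4 are illustrative only.

// neon_add.cpp -- the kind of hand-written kernel discussed in this thread.
// Build for an ARM target with NEON enabled, e.g. -O2 -mfpu=neon (assumption).
#include <arm_neon.h>
#include <cstddef>

// c = a + b; assumes n is a multiple of 4 and the arrays do not overlap.
void add_f32(float* c, const float* a, const float* b, std::size_t n)
{
    for (std::size_t i = 0; i < n; i += 4)
    {
        float32x4_t va = vld1q_f32(a + i);    // load 4 floats from a
        float32x4_t vb = vld1q_f32(b + i);    // load 4 floats from b
        vst1q_f32(c + i, vaddq_f32(va, vb));  // add lanewise and store
    }
}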
Hi,
I saw that the destructor ~mutex no longer has
a BOOST_VERIFY on the return value of
pthread_mutex_destroy. From the SVN logs I can see it was
removed in commit 75882 to handle the EINTR returned
by some buggy POSIX implementations.
I will reintroduce the BOOST_VERIFY like this:
~mutex()
{
    int ret;
    do
    {
        ret = pthread_mutex_destroy(&m);
    } while (ret == EINTR);
    BOOST_VERIFY(!ret);
}
While we are at it, consider that, for the same
reason ~mutex needs to check for that EINTR
return value, timed_mutex should do the same.
Regards
Gaetano Mendola
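As a self-contained illustration of the retry-on-EINTR destroy idiom proposed above: this is a toy RAII wrapper written only for this note, not Boost's actual mutex code, and the class and member names are invented.

#include <cerrno>
#include <pthread.h>
#include <boost/assert.hpp>   // for BOOST_VERIFY

class raw_mutex
{
    pthread_mutex_t m;
public:
    raw_mutex() { BOOST_VERIFY(!pthread_mutex_init(&m, 0)); }
    ~raw_mutex()
    {
        int ret;
        do
        {
            ret = pthread_mutex_destroy(&m);  // may return EINTR on some
        } while (ret == EINTR);               // buggy POSIX implementations
        BOOST_VERIFY(!ret);                   // anything else is a real bug
    }
};

int main()
{
    raw_mutex mx;   // constructing and destroying once exercises the destructor
    (void)mx;
    return 0;
}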
Hi everybody,
I am writing a prototype of the JSON parser for GSoC with the support
of Bjorn Reese. When discussing the proposal, Michael Marcin suggested an
interface that allows the user to define a struct reflecting the JSON data
structure. I have been looking at Boost.Fusion to enable the parser to
"fill in" the values in the struct.
It looks like I'm after a solution described in the following post:
http://stackoverflow.com/questions/13830792/simulate-compile-time-reflectio…
Could anyone tell me if this is a good starting point, and/or if there are
other ways of achieving this?
Thank you,
Stephan.
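For reference, a minimal sketch of the Boost.Fusion approach mentioned above: adapting a user-defined struct so that generic code can visit its members. The person struct and the print_member visitor are invented for this example; a real parser would assign parsed values instead of printing them.

#include <iostream>
#include <string>
#include <boost/fusion/include/adapt_struct.hpp>
#include <boost/fusion/include/for_each.hpp>

// Hypothetical user-defined struct mirroring a JSON object.
struct person
{
    std::string name;
    int age;
};

// Make the struct a Fusion sequence so generic code can iterate over its members.
BOOST_FUSION_ADAPT_STRUCT(
    person,
    (std::string, name)
    (int, age)
)

// Toy visitor standing in for the parser: it just prints each member.
struct print_member
{
    template <typename T>
    void operator()(const T& value) const
    {
        std::cout << value << '\n';
    }
};

int main()
{
    person p = { "Ada", 36 };
    boost::fusion::for_each(p, print_member());
    return 0;
}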
The function make_fcontext takes a stack pointer. However it is not
clear at all from the documentation that the stack pointer should
actually point to the end of the stack buffer. AFAICT it takes close
examination of the simple_stack_allocator helper in the examples folder
to figure this out. This led to several hours of debugging when I
naively passed in the pointer to the start of the stack buffer and
random application memory got stomped.
I guess you're supposed to glean it from the note at the bottom of the
stack allocation page [1].
This should at least be mentioned in the make_fcontext reference [2].
[1]
http://www.boost.org/doc/libs/1_53_0/libs/context/doc/html/context/stack.ht…
[2]
http://www.boost.org/doc/libs/1_53_0/libs/context/doc/html/context/context/…
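A minimal sketch of the point being made: the sp argument must be the end (top) of the buffer, because the stack grows downwards on the supported platforms. The make_fcontext signature below follows the Boost 1.53 Context documentation as I recall it and should be treated as an assumption; only the pointer arithmetic is the point.

#include <cstddef>
#include <stdint.h>
#include <vector>
#include <boost/context/fcontext.hpp>

static void fiber_fn(intptr_t) { /* fiber body would go here */ }

int main()
{
    std::size_t const size = 64 * 1024;
    std::vector<char> buffer(size);

    // Pass the END of the buffer, not &buffer[0].
    void* sp = &buffer[0] + buffer.size();

    boost::context::fcontext_t* ctx =
        boost::context::make_fcontext(sp, size, fiber_fn);
    (void)ctx;   // jump_fcontext(...) would actually enter the fiber
    return 0;
}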
> Then there's boost.mirror, which hasn't had much news recently, but which
> got one thing right: you need something to generate the metadata to avoid
> the pitfalls of writing the macro by hand. I had in mind a tool based on
> clang to do that but never had time to work on it, since my very naive
> perl code was enough for my small needs.
Easiest of all would be to expose the compiler's internal state via a magic
namespace in the form of pseudo-template definitions. Then one has, at the
metaprogramming stage, access to everything the compiler knows at that
point.
I posited that idea to Chandler Carruth for clang at C++ Now. He didn't seem
adamantly opposed.
Third party libraries (e.g. a Boost one) could wrap up each compiler's
internal magic namespace into something portable and therefore useful.
Niall
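A purely hypothetical illustration of the magic-namespace idea above. No compiler implements this; every name under __reflect is invented here solely to show what metaprogramming against such compiler-provided definitions might look like.

#include <cstddef>

struct point { int x; int y; };

namespace __reflect   // imagined to be populated by the compiler itself
{
    // Pseudo-template declarations the compiler would "define" internally,
    // e.g. member_count<point>::value == 2, member_name<point, 0>() == "x".
    template <class T> struct member_count;
    template <class T, std::size_t I> const char* member_name();
}

// A portable wrapper library (the Boost-style layer mentioned above) would
// then map __reflect into ordinary, documented traits:
template <class T>
struct reflected
{
    // static const std::size_t members = __reflect::member_count<T>::value;
};

int main() { return 0; }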
[MSM] Compile-time error using SubMachines and events that carry some data
by Fernando Pelliccioni 28 May '13
Hi,
I have a Main State Machine (state_machine_) and two SubMachines.
I want to start the first SubMachine, and when it finishes I want to start
the second SubMachine.
Here is the code.
http://pastebin.com/sBdS1wvJ
I have a compile-time error on some States of the second SubMachine.
error: no member named 'element' in 'exit_introduction'
    std::cout << "event.element(): " << event.element() << std::endl;
                                        ~~~~~ ^
This is on the state "StaringGame", and according to the Transition Table of
GameSM, only the event "character_selected" is used.
I don't know why it is trying to use the event "exit_introduction".
Is this a bug in MSM?
Am I doing something wrong?
Thanks and regards,
Fernando Pelliccioni.
Google just announced the seven proposals which were accepted for Boost.
The (entire) list can be found here:
<http://www.google-melange.com/gsoc/projects/list/google/gsoc2013>
Congratulations to the successful students! Good luck and much fun in this
summer's Boost C++ projects! :)
For those students who were not selected: Feel free to contact me for the
details! We actually asked Google for more than seven slots because we got
more than seven amazing proposals. Unfortunately the number of slots is
just limited. And so we had to make some tough decisions. :/
I also thank everyone who had volunteered to mentor a project! I know that
not everyone could get the project they wanted. But maybe there is still
enough motivation to move forward with some projects outside of the Google
Summer of Code program. :)
Boris