[Beast] Downloading multiple urls in parallel
Hi, Are there functions within Beast that allow me to start downloading multiple URLs in parallel and then handle each returned result with a different continuation function? Any help appreciated. Kind regards, Sean.
On Thu, Sep 13, 2018 at 10:49 AM Sean Farrow via Boost-users
Are there functions within Beast that allow me to start downloading multiple URLs in parallel and then handle each returned result with a different continuation function?
Yes. This is accomplished by calling `boost::beast::http::async_read` on two or more different instances of connected sockets. Regards
Hi,
Are there any examples in the docs as to how to do this?
Does the http client do this under the hood?
Kind regards
Sean.
On Thu, Sep 13, 2018 at 12:13 PM Sean Farrow via Boost-users
Are there any examples in the docs as to how to do this?
This program crawls the 10,000 most popular websites indexed by Alexa, making many connections simultaneously: https://github.com/boostorg/beast/tree/develop/example/http/client/crawl Regards
participants (2)
- Sean Farrow
- Vinnie Falco