On 28 Aug 2013 at 12:48, microcai wrote:
> After digging into the proposed Boost.Afio library, I am afraid that Boost.Afio *does not* meet my standard of what a *good* library is.
> Boost.Afio seems to support both async and sync operations, but obviously they failed to design a proper API. Boost.Afio relies on Boost.Asio, but they seem to ignore the elegant Asio style. They didn't even understand what makes Boost.Asio a good library.

"They" would be me. I designed the AFIO API. I also implemented most of AFIO before the port to Boost by Paul under this year's Google Summer of Code funding.

> A good library is not about good features (although they are important) but about an elegant API.
> For async file I/O, an Asio-style API looks like this:
> boost::asio::fstream file(io_service);
> file.async_open("test.txt", "r", &handle_open);
> That is way more powerful than the complex
> auto mkfile(dispatcher->file(async_path_op_req(mkdir, "testdir/foo", file_flags::Create|file_flags::ReadWrite)));
> The API that Boost.Afio chose is too complex and too stupid. You might argue that the problem Boost.Afio tries to solve is complex, but that is not a good excuse for designing such a stupid API.
> Boost.Afio is stupid.

I designed the AFIO API. I also implemented most of AFIO. You are therefore implying that I am stupid. You are of course entitled to your opinion, but I would personally say that AFIO has an intuitive, Intellisense-friendly, elegant API which really leverages C++11 to achieve an ease of programming very hard to achieve (with performance) in C++03, and unparalleled cleanliness and elegance compared to any other async file i/o implementation. For comparison I would refer you to the libuv C library, which is the nearest equivalent, and to the Windows IOCP implementation. I think you will find that AFIO compares *extremely* favourably to its nearest equivalents.

It is more verbose in the single-issue API than it could be, but that is because AFIO is 100% a pure asynchronous batch API. The single-issue APIs quite literally create a batch of one item and call the batch API. The reason it is a pure asynchronous batch API is because AFIO is designed for exceptional file i/o performance: if you only need to write a few files mostly sequentially, STL iostreams are pretty good. If you need to scatter-gun random writes and reads across dozens of files simultaneously and memory-mapped file i/o isn't an option, STL iostreams - or Boost.Iostreams for that matter - will max out the CPU long before maxing out the storage.

The reason AFIO "ignores the elegant Asio Style" is because the ASIO design is intended for non-seekable devices such as sockets and pipes. With file i/o, you very specifically need to specify the *ordering* *constraints* of reads and writes or else you will lose data (equally, not being able to say "I don't care about the commit order of this particular batch, so write it as fast as possible" is bad for performance). That can be done by hand with ASIO, but it involves lots of error- and race-condition-prone verbose ordering control logic. AFIO relieves the programmer of all that marshalling work: you simply tell AFIO your dependencies, and AFIO figures out the ideal execution graph for you.
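
To give a concrete flavour of that, here is a sketch of the dependency chaining behind the mkfile example you quoted, written from memory, so treat the header path and factory name as approximate rather than as the final API:

#include "boost/afio/afio.hpp"   // header path from memory

int main()
{
    using namespace boost::afio;
    auto dispatcher = make_async_file_io_dispatcher();
    // Schedule creating the directory.
    auto mkdir(dispatcher->dir(async_path_op_req("testdir", file_flags::Create)));
    // Schedule creating a file inside it: passing mkdir as the precondition
    // tells AFIO the file creation may only begin once the directory exists.
    auto mkfile(dispatcher->file(async_path_op_req(mkdir, "testdir/foo",
        file_flags::Create|file_flags::ReadWrite)));
    // Block until the end of the dependency chain has completed.
    when_all(mkfile).wait();
    return 0;
}

Anything not linked by a precondition is free to execute in whatever order the storage finds fastest, which is exactly the "I don't care about commit order" case above.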
As the "closure execution engine", Boost.Asio already have that, there is absolutly no point in investigating yet another Boost.Asio for what ever reasons.
I think you don't understand what a closure execution engine does. AFIO implements a superset of Microsoft's closure engine (see http://www.open-std.org/jtc1/sc22/wg21/docs/papers/2013/n3558.pdf), most specifically in the form of dependency-tied completions in addition to independent completions. Most closure engines presented to WG21 to date only implement the latter, mainly because independent completions make avoiding race conditions far easier, especially if you are maintaining the Abrahams exception safety guarantees. AFIO gets away with the former mainly through being so simple a design that I can walk it line by line for logic errors, and as of last week I believe it to be now race condition free except in the case where one runs out of memory (unit testing for that is coming).

As an example of what AFIO's unusual flexibility makes possible, I believe it makes crazy-but-sane ideas like Google's "C++ pipelines" (http://www.open-std.org/jtc1/sc22/wg21/docs/papers/2013/n3534.html) very tractable. It ought to be straightforward to combine AFIO and Boost.Iostreams to implement N3534, though I admit I have no idea whether the performance would be acceptable. It would be extremely cool, however.
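
To make the dependency-tied part concrete: the same precondition mechanism used for file operations can also gate arbitrary closures. The following is only a sketch from memory, so treat the call() signature as approximate, not final:

// Open a file, then schedule a closure that may only run once the open
// has completed; call() is assumed here to return a pair of (future for
// the closure's result, op that later work can depend upon).
auto opened(dispatcher->file(async_path_op_req("testdir/foo",
    file_flags::ReadWrite)));
auto work(dispatcher->call(opened, []() -> int {
    // Executes only after the file open completes.
    return 42;
}));
int result = work.first.get();   // work.second is the chainable op

Independent completions are then simply the case where no precondition is supplied.

Niall

--
Currently unemployed and looking for work.
Work Portfolio: http://careers.stackoverflow.com/nialldouglas/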