Boost.Fiber review January 6-15

Hi all, The review of Boost.Fiber by Oliver Kowalke begins today, Monday January 6th, and closes Wednesday January 15th. ----------------------------------------------------- About the library: Boost.Fiber provides a framework for micro-/userland-threads (fibers) scheduled cooperatively. The API contains classes and functions to manage and synchronize fibers similar to Boost.Thread. Each fiber has its own stack. A fiber can save the current execution state, including all registers and CPU flags, the instruction pointer, and the stack pointer and later restore this state. The idea is to have multiple execution paths running on a single thread using a sort of cooperative scheduling (versus threads, which are preemptively scheduled). The running fiber decides explicitly when it should yield to allow another fiber to run (context switching). Boost.Fiber internally uses coroutines from Boost.Coroutine; the classes in this library manage, schedule and, when needed, synchronize those coroutines. A context switch between threads usually costs thousands of CPU cycles on x86, compared to a fiber switch with a few hundred cycles. A fiber can only run on a single thread at any point in time. docs: http://olk.github.io/libs/fiber/doc/html/ git: https://github.com/olk/boost-fiber src: http://ok73.ok.funpic.de/boost.fiber.zip The documentation has been moved to another site; see the link above. If you have already downloaded the source, please refresh it; Oliver has added some new material. --------------------------------------------------- Please always state in your review whether you think the library should be accepted as a Boost library! Additionally please consider giving feedback on the following general topics: - What is your evaluation of the design? - What is your evaluation of the implementation? - What is your evaluation of the documentation? - What is your evaluation of the potential usefulness of the library? - Did you try to use the library? With what compiler? 
Did you have any problems? - How much effort did you put into your evaluation? A glance? A quick reading? In-depth study? - Are you knowledgeable about the problem domain? Nat Goodspeed Boost.Fiber Review Manager ________________________________

On Mon, Jan 6, 2014 at 6:07 AM, Nat Goodspeed <nat@lindenlab.com> wrote:
Boost.Fiber provides a framework for micro-/userland-threads (fibers) scheduled cooperatively.
A few questions and a few comments...

What are the most typical use cases for a fiber library?

Were there any alternatives to the following behavior? If there were, what were the benefit/drawback tradeoffs that led to this decision?

    { boost::fibers::fiber f( some_fn); } // std::terminate() will be called

What happens operationally to a detached fiber? Will it ever continue execution, or is it for all practical purposes destroyed?

Did you consider making algorithm-specific fiber members, such as 'thread_affinity' and 'priority', controllable via template arguments?

If I wanted to create a new scheduler algorithm that required per-fiber information, how would I implement that with this library?

Did you consider giving some more explanation or code for the publish-subscribe application? It was a bit difficult to follow that example without knowing what reg_ and cond_ were.

I love the functionality provided by 'fiber_group'. I like the convenience of the heap-allocated default scheduler as an alternative to a defaulted template parameter (like std::vector's allocator).

Best Regards,

David Sankel

2014/1/7 David Sankel <camior@gmail.com>
What are the most typical use cases for a fiber library?
I would say that the most common use cases are task-related applications (as with boost.thread). The interface and classes of boost.fiber are similar to boost.thread (this was intended: you can use patterns well known from multi-threaded programming). The difference between the two libraries is that a thread waiting (for instance on a condition-variable) is blocked, while a fiber waiting on a condition_variable is suspended and the thread running the fiber is not (e.g. other code can be executed in the meantime). For instance, in the context of network applications which have to serve many clients at the same time (known as the C10K problem - see http://olk.github.io/libs/fiber/doc/html/fiber/asio.html), fibers prevent overloading the operating system with too many threads while the code stays easy to read/understand (no scattering the code with callbacks etc. - see the publisher-subscriber example in directory examples/asio). boost.fiber uses coroutines (from boost.coroutine) internally - but boost.coroutine does not provide classes to synchronize coroutines. On the developer list such synchronization primitives were requested several times; they are now available with boost.fiber.

Were any alternatives to the following behavior? If there were, what were
the benefit/drawback tradeoffs that led to this decision?
In the context of async I/O (boost.asio) you could use callbacks (asio's previous strategy), but then you scatter your code with many callbacks, which makes the code hard to read, to follow and to debug. You could use fibers in a thread pool too - with the specialized fiber-scheduler (already provided by boost.fiber) you can implement work-stealing/work-sharing easily.
{ boost::fibers::fiber f( some_fn); } // std::terminate() will be called
What happens operationally to a detached fiber? Will it ever continue execution or is it for all practical purposes destroyed?
Same as for std::thread - the fiber instance can no longer be joined, but the fiber continues executing inside the fiber-scheduler.

Did you consider making algorithm specific fiber members, such as
'thread_affinity' and 'priority', controllable via. template arguments for threads?
Sorry, I don't understand your question. thread_affinity() and priority() are already member-functions of class fiber - both are controlled at runtime. I don't know which template you are referring to, or what the purpose of making both attributes a template argument would be.
If I wanted to create a new scheduler algorithm that required per-fiber information, how would I implement that with this library?
Derive from the interface algorithm and install your scheduler at the top of the thread:

    class my_scheduler : public boost::fibers::algorithm {
        ...
    };

    void thread_fn() {
        my_scheduler ms;
        boost::fibers::set_scheduling_algorithm( & ms);
        ...
    }
Did you consider giving some more explanation or code for the publish-subscribe application? It was a bit difficult to follow that example without knowing what reg_ and cond_ were.
You mean more comments? Yes, I will! The code is similar to what you would write for threads - the difference is that the example runs in one thread (the main thread).
I like the convenience of the heap allocated default scheduler as an alternative to a defaulted template parameter (like std::vector's allocator).
The lib allocates (via the new-operator) a default scheduler - if you don't want this you can simply call set_scheduling_algorithm():

    void thread_fn() {
        boost::fibers::round_robin rr; // allocated on the thread's stack
        boost::fibers::set_scheduling_algorithm( & rr); // prevents allocating round_robin on the heap
        ...
    }

On 6 Jan 2014 at 8:07, Nat Goodspeed wrote:
Please always state in your review whether you think the library should be accepted as a Boost library!
Currently I am undecided. My biggest issue is in the (lack of) documentation. Without very significant improvements in the docs I'd have to recommend no right now.
Additionally please consider giving feedback on the following general topics:
- What is your evaluation of the design?
The design overall is fine. I agree with the decision to replicate std::thread closely, including all the support classes. My only qualm with the design really is I wish there were combined fibre/thread primitives i.e. say a future implementation which copes with both fibers and threads. I appreciate it probably wouldn't be particularly performant, but it sure would ease partially converting existing threaded code over to fibers, which is probably a very large majority use case. I accept this can be labelled as a future feature, and shouldn't impede entering Boost now.
- What is your evaluation of the implementation?
The quality of the implementation is generally excellent. My only issue is that the Boost.Thread mirror classes do not fully mirror those in Boost.Thread. For example, where is future::get_exception_ptr()? I think mirroring Boost.Thread rather than std::thread is wise for better porting and future proofing. I'd recommend finishing matching Boost.Thread before entering Boost. Another suggestion is that the spinlock implementation add memory transactions support. You can find a suitable spinlock implementation in AFIO at https://github.com/BoostGSoC/boost.afio/blob/master/boost/afio/detail/MemoryTransactions.hpp.
- What is your evaluation of the documentation?
My biggest concerns with admitting this library now are with the documentation:

1. There is no formal reference section. What bits of reference section there are do not contain reference documentation for all the useful classes (e.g. the ASIO support).

2. I see no real world benchmarks. How are people supposed to know if adopting fibers is worthwhile without some real world benchmarks? I particularly want to know about time and space scalability as the number of execution contexts rises to 10,000 and beyond.

3. I see no listing of compatible architectures, compilers, etc. I want to see a list of test targets, ones regularly verified as definitely working. I also want to see a list of minimum necessary compiler versions.

4. I deeply dislike the documentation simply stating "Synchronization between a fiber running on one thread and a fiber running on a different thread is an advanced topic.". No it is not advanced, it's *exactly* what I need to do. So tell me a great deal more about it!

5. Can I transport fibers across schedulers/threads? swap() suggests I might be able to, but isn't clear. If not, why not?

6. What is the thread safety of fiber threading primitives? e.g. if a fibre waits on a fibre future on thread A, can thread B signal that fibre future safely? If not, what will it take to make this work, because again this is something I will badly need in AFIO.

7. I want to see worked tutorial examples in the documentation e.g. step me through implementing a fibre pool, and then show me how to implement a M:N threading solution. That sort of thing, because this is another majority use case.

8. There are quite a few useful code examples in the distribution in the examples directory not mentioned in the formal docs (which is bad, because people don't realise they are there), but they don't explain to me why and how they work. I find the ASIO examples particularly confusing, and I don't understand, without delving into the implementation code, why they work, which is a documentation failure. This is bad, and it needs fixing.

9. It would help a lot to understand missing features if there were a Design Rationale page explaining how and why the library looks the way it does.
- What is your evaluation of the potential usefulness of the library?
It's very useful, and I intend to add fiber support to proposed Boost.AFIO.
- Did you try to use the library? With what compiler? Did you have any problems?
Not yet. AFIO needs to be modified first (targeted thread_source support).
- How much effort did you put into your evaluation? A glance? A quick reading? In-depth study?
A few hours as I decided exactly how AFIO will add Fiber support. I reviewed both the source distribution and the docs.
- Are you knowledgeable about the problem domain?
Yes. I wrote one of these myself in assembler in the 1990s. Niall -- Currently unemployed and looking for work. Work Portfolio: http://careers.stackoverflow.com/nialldouglas/

2014/1/7 Niall Douglas <s_sourceforge@nedprod.com>
My only qualm with the design really is I wish there were combined fibre/thread primitives i.e. say a future implementation which copes with both fibers and threads.
my intention was that another library combines fibers and threads (some kind of thread-pool with worker-threads using fibers).
I appreciate it probably wouldn't be particularly performant, but it sure would ease partially converting existing threaded code over to fibers, which is probably a very large majority use case.
The problem is that a lock on a mutex from boost.thread would block the entire thread, while a mutex from boost.fiber keeps the thread running, e.g. other fibers are executed/resumed while the current fiber is suspended until the lock on the mutex is released. You could say that boost.fiber prevents the thread (running the fibers) from being blocked while calling wait-functions on the sync primitives.

I accept this can be labelled as a future feature, and shouldn't
impede entering Boost now.
OK
My only issue is that the Boost.Thread mirror classes do not fully mirror those in Boost.Thread.
Because I decided that the interface of std::thread should be the blue-print. Boost.thread has (at least for my taste) too many extensions/conditional compilations etc.
Another suggestion is that the spinlock implementation add memory transactions support. You can find a suitable spinlock implementation in AFIO at https://github.com/BoostGSoC/boost.afio/blob/master/boost/afio/detail/MemoryTransactions.hpp.
OK - I'll take a look at it.
1. There is no formal reference section. What bits of reference section there is does not contain reference documentation for all the useful classes (e.g. the ASIO support).
OK - I followed the style of other boost libraries (like boost.thread) - it seems there is no 'standard' regarding this topic.
2. I see no real world benchmarks. How are people supposed to know if adopting fibers is worthwhile without some real world benchmarks? I particularly want to know about time and space scalability as the number of execution contexts rises to 10,000 and beyond.
boost.fiber uses boost.coroutine (which itself uses boost.context), and those libraries provide some performance tests for context switching. Of course I could provide a benchmark for a simple task running a certain number of fibers - but what would be a good example for such a task?
3. I see no listing of compatible architectures, compilers, etc. I want to see a list of test targets, ones regularly verified as definitely working. I also want to see a list of minimum necessary compiler versions.
boost.fiber itself contains simple C++03 code - the only restriction on architectures comes from boost.context (because it uses assembler).
4. I deeply dislike the documentation simply stating "Synchronization between a fiber running on one thread and a fiber running on a different thread is an advanced topic.". No it is not advanced, it's *exactly* what I need to do. So tell me a great deal more about it!
Because fibers do not use the underlying frameworks (like pthreads) used by boost.thread. Usually you run dependent fibers concurrently in the same thread. If your code requires that a fiber in thread A waits on a fiber running in thread B, that is supported by boost.fiber using atomics (the sync primitives use atomics internally).
5. Can I transport fibers across schedulers/threads? swap() suggests I might be able to, but isn't clear. If not, why not?
Not swap() - you have to move the fiber from the fiber-scheduler scheduling fibers in thread A to the fiber-scheduler running in thread B. For this purpose boost.fiber provides the class round_robin_ws, which has member-functions steal_from() and migrate_to() to move a fiber between threads. Of course you are free to implement and use your own fiber-scheduler.
6. What is the thread safety of fiber threading primitives? e.g. if a fibre waits on a fibre future on thread A, can thread B signal that fibre future safely?
it's supported
7. I want to see worked tutorial examples in the documentation e.g. step me through implementing a fibre pool, and then show me how to implement a M:N threading solution. That sort of thing, because this is another majority use case.
This is an advanced topic and should not be part of this lib. I'm currently working on a library implementing a thread-pool which internally uses fibers. It maps to an M:N solution. boost.fiber itself should provide only the low-level functionality (the same role boost.context plays for boost.coroutine, and boost.coroutine plays for boost.fiber). The lib contains a unit test (test_migration) which shows how a fiber can be migrated between two threads (this code shows how work-stealing/work-sharing can be implemented in a thread-pool).
8. There are quite a few useful code examples in the distribution in the examples directory not mentioned in the formal docs (which is bad, because people don't realise they are there), but they don't explain to me why and how they work. I find the ASIO examples particularly confusing, and I don't understand without delving into the implementation code why they work which is a documentation failure. This is bad, and it needs fixing.
I've added comments to the publisher-subscriber example. Do the comments in the code explain the ideas better?
9. It would help a lot to understand missing features if there were a Design Rationale page explaining how and why the library looks the way it does.
OK

On 7 Jan 2014 at 13:21, Oliver Kowalke wrote:
My only qualm with the design really is I wish there were combined fibre/thread primitives i.e. say a future implementation which copes with both fibers and threads.
my intention was that another library combines fibers and threads (some kind of thread-pool with worker-threads using fibers).
Oh great! If you had a Design Rationale page which specifically says such a feature is out of scope for Boost.Fiber because it's a more complex additional layer which happens to be provided in another library X (preferably with link to it), then I would be very pleased.
My only issue is that the Boost.Thread mirror classes do not fully mirror those in Boost.Thread.
Because I decided that the interface of std::thread should be the blue-print. Boost.thread has (at least for my taste) too many extensions/conditional compilations etc.
Well ... I agree that thread cancellation support is probably a step too far, but I think there is also a reasonable happy medium between C++11 and Boost.Thread. Put it another way: which of Boost.Thread's additions look extremely likely to appear in C++17? future::get_exception_ptr() is a very good example: it saves a lot of overhead when you're transferring exception state from one future to another.
1. There is no formal reference section. What bits of reference section there is does not contain reference documentation for all the useful classes (e.g. the ASIO support).
OK - I followed the style of other boost libraries (like boost.thread) - it seams there is no 'standard' regarding to this topic.
I do think you need reference docs for the ASIO support classes. I don't understand what they do, and I think I ought to.
2. I see no real world benchmarks. How are people supposed to know if adopting fibers is worthwhile without some real world benchmarks? I particularly want to know about time and space scalability as the number of execution contexts rises to 10,000 and beyond.
boost.fiber uses boost.coroutine (which itself uses boost.context) and provides some performance tests for context switching.
Sure. But it's a "why should I use this library?" sort of thing? If I see real world benchmarks for a library, it tells me the author has probably done some performance tuning. That's a big tick for me in considering to use a library. It also suggests to me if refactoring my code is worth it. If Boost.Fiber provides only a 10x linear scaling improvement, that's very different from a log(N) scaling improvement. A graph making it very obvious what the win is on both CPU time and memory footprint makes decision making regarding Fiber support much easier. Put it another way: if I am asking my management for time to prototype adopting Fibers in the company's core software, a scaling graph makes me getting that time a cinch. Without that graph, I have to either make my own graph in my spare time, or hope that management understands the difference between fibers and threads (unlikely).
Of course I could provide a benchmark for a simple task running a certain amount of fibers - but what would be a good example for such a task?
You don't need much: a throughput test of null operations (i.e. a pure test of context switching) for total threads 1...100,000 on some reasonably specified 64 bit CPU e.g. an Intel Core 2 Quad or better. I generally would display in CPU cycles to eliminate clock speed differences. Extra bonus points for the same thing on some ARMv7 CPU.
3. I see no listing of compatible architectures, compilers, etc. I want to see a list of test targets, ones regularly verified as definitely working. I also want to see a list of minimum necessary compiler versions.
boost.fiber itself contains simple C++03 code - the only restriction to architectures is defined by boost.context (because it's using assembler).
Eh, well then I guess you need a link to the correct page in boost.context where it lists the architectures it works on. Certainly a big question for anyone considering Fiber is surely "will it work on my CPU"?
4. I deeply dislike the documentation simply stating "Synchronization between a fiber running on one thread and a fiber running on a different thread is an advanced topic.". No it is not advanced, it's *exactly* what I need to do. So tell me a great deal more about it!
because fibers do not use the underlying frameworks (like pthread) used by boost.thread. usually you run dependend fibers in the same thread concurrently. if your code requires that a fiber in thread A waits on a fiber running in thread B it is supported by boost.fiber using atomics (sync. primitves use atomics internally).
Sure, but I definitely won't be using Fibers that way. And I saw Antony had the same observation too. I think you need some more docs on this: just tell us what is possible, what works and how it works and we'll figure out the rest. I definitely need the ability to signal a fibre future belonging to thread A from some arbitrary thread B. I'll also need to boost::asio::io_service::post() to an ASIO io_service running fibers in thread A from some arbitrary thread B.
5. Can I transport fibers across schedulers/threads? swap() suggests I might be able to, but isn't clear. If not, why not?
not swap() - you have to move the fiber from fiber-scheduler scheduling fibers in thread A to fiber-scheduler ruinning in thread B. for this purpose boost.fiber proivides the class round_robin_ws which has member-functions steal_from() and migrate_to() to move a fiber between threads.
That's great news, I'll be needing that too.
6. What is the thread safety of fiber threading primitives? e.g. if a fibre waits on a fibre future on thread A, can thread B signal that fibre future safely?
it's supported
The docs need to explicitly say so then, and indeed thread safety for *every* API in the library. Complexity guarantees and exception safety statements for each API would also be nice. I know they're a real pain to do, but it hugely helps getting the library into a C++ standard later.
7. I want to see worked tutorial examples in the documentation e.g. step me through implementing a fibre pool, and then show me how to implement a M:N threading solution. That sort of thing, because this is another majority use case.
this an advanced topic and should not be part of this lib. I'm working currently on a library implementing a thread-pool which uses internally fibers. It maps to a M:N solution. boost.fiber itself should provide only the low-level functionality (same as boost.context does for boost.coroutine and boost.coroutine acts for boost.fiber). the lib contains a unit-tests (test_migration) which shows how a fiber can be migrated between two threads (this code shows how work-stealing/work-sharing can be implemented in a thread-pool).
If you have a section in a design rationale page saying this, I would be happy.
8. There are quite a few useful code examples in the distribution in the examples directory not mentioned in the formal docs (which is bad, because people don't realise they are there), but they don't explain to me why and how they work. I find the ASIO examples particularly confusing, and I don't understand without delving into the implementation code why they work which is a documentation failure. This is bad, and it needs fixing.
I've added comments to the publisher-subscriber example. Does the comments in the code explain the ideas better?
I'll get back to you on your comments later. I think, as a minimum, all the examples need to appear verbatim in the docs (quickbook makes this exceptionally easy, in fact it's almost trivial). This is because Boost docs tend to appear on Google searches far quicker than code in some git repo. Niall -- Currently unemployed and looking for work. Work Portfolio: http://careers.stackoverflow.com/nialldouglas/

2014/1/7 Niall Douglas <s_sourceforge@nedprod.com>
Oh great! If you had a Design Rationale page which specifically says such a feature is out of scope for Boost.Fiber because it's a more complex additional layer which happens to be provided in another library X (preferably with link to it), then I would be very pleased.
I'll add a rationale section to the documentation - the library is pre-alpha (e.g. I'm experimenting with some ideas - but if you are interested I can discuss it with you in a private email).

Well ... I agree that thread cancellation support is probably a step
too far, but I think there is also a reasonable happy medium between C++11 and Boost.Thread.
I'm not saying that I'm against this - I've added fiber interruption (I follow the suggestion from Herb Sutter that a thread/fiber should be cooperatively cancelable == interruptible). I try to keep the interface as small as possible - of course we can discuss which functions the interface should contain.
Put it another way: what out of Boost.Thread's additions would look extremely likely to appear to C++17? The future::get_exception_ptr() is a very good example, it saves a lot of overhead when you're transferring exception state from one future to another.
This is one of the items which could be discussed. For instance, I have concerns about adding future::get_exception_ptr() because I believe it is not really required; it is only for convenience. The exception thrown by the fiber-function (the function which is executed by the fiber) is re-thrown by future<>::get(). If you don't want the exception re-thrown, you can simply add a try-catch statement at the top-most level of your fiber-function and assign the caught exception to an exception_ptr passed to the fiber-function:

    void my_fiber_fn( boost::exception_ptr & ep) {
        try {
            ...
        } catch ( my_exception const& ex) {
            ep = boost::current_exception();
        }
    }

    boost::exception_ptr ep;
    boost::fibers::fiber( boost::bind( my_fiber_fn, boost::ref( ep) ) ).join();
    if ( ep) {
        boost::rethrow_exception( ep);
    }

I do think you need reference docs for the ASIO support classes. I'll do it.
I don't understand what they do, and I think I ought to.
It is pretty simple - the fiber-scheduler contains a reference to asio's io_service and uses it as an event-queue/event-dispatcher, e.g. it posts fibers that are ready to run (newly created fibers, yielded fibers, or signaled fibers previously suspended by a wait-operation on the sync primitives).
Sure. But it's a "why should I use this library?" sort of thing? If I see real world benchmarks for a library, it tells me the author has probably done some performance tuning. That's a big tick for me in considering to use a library.
Well, I did no performance tuning - my main focus was to get the library working correctly (optimizations will be done later).
It also suggests to me if refactoring my code is worth it. If Boost.Fiber provides only a 10x linear scaling improvement, that's very different from a log(N) scaling improvement. A graph making it very obvious what the win is on both CPU time and memory footprint makes decision making regarding Fiber support much easier.
I don't know the scaling factor of fibers - I would consider boost.fiber a way to simplify the code, e.g. it prevents scattering the code with callbacks (for instance in the context of boost.asio). My starting point was to solve problems like the C10K problem (Dan Kegel explains it in more detail on his webpage - I'm referring to it in boost.fiber's documentation - http://olk.github.io/libs/fiber/doc/html/fiber/asio.html).
Put it another way: if I am asking my management for time to prototype adopting Fibers in the company's core software, a scaling graph makes me getting that time a cinch. Without that graph, I have to either make my own graph in my spare time, or hope that management understands the difference between fibers and threads (unlikely).
Well, those are the problems we all face :) I've not done performance tests, sorry.
Of course I could provide a benchmark for a simple task running a certain amount of fibers - but what would be a good example for such a task?
You don't need much: a throughput test of null operations (i.e. a pure test of context switching) for total threads 1...100,000 on some reasonably specified 64 bit CPU e.g. an Intel Core 2 Quad or better. I generally would display in CPU cycles to eliminate clock speed differences.
CPU cycles for context switching are provided for boost.context and boost.coroutine - and adapting the code to boost.fiber isn't an issue - I could add it.

Extra bonus points for the same thing on some ARMv7 CPU.
Counting CPU cycles isn't that easy on ARM (at least as far as I have looked into it).
Eh, well then I guess you need a link to the correct page in boost.context where it lists the architectures it works on. Certainly a big question for anyone considering Fiber is surely "will it work on my CPU"?
The docs of boost.context are missing this info - I'll add it (could you file a bug-report for boost.context, please).
Sure, but I definitely won't be using Fibers that way. And I saw Antony had the same observation too. I think you need some more docs on this: just tell us what is possible, what works and how it works and we'll figure out the rest. I definitely need the ability to signal a fibre future belonging to thread A from some arbitrary thread B. I'll also need to boost::asio::io_service::post() to an ASIO io_service running fibers in thread A from some arbitrary thread B.
Yes, as I explained, it is supported. Maybe because I've written the code myself I don't see why the documentation is not enough for you. What works is that you create a fibers::packaged_task<> and execute it in a fiber on thread A, and wait on the fibers::future<> returned by packaged_task<>::get_future() in thread B.
The docs need to explicitly say so then, and indeed thread safety for *every* API in the library.
OK
Complexity guarantees and exception safety statements for each API would also be nice. I know they're a real pain to do, but it hugely helps getting the library into a C++ standard later.
OK
between two threads (this code shows how work-stealing/work-sharing can be implemented in a thread-pool).
If you have a section in a design rationale page saying this, I would be happy.
OK

I think, as a minimum, all the examples need to appear verbatim in
the docs
The complete code? I would prefer only code snippets - the complete code can be read in the example directory. Otherwise the documentation would be bloated (at least I would skip pages of code).

-----Original Message-----
From: Boost [mailto:boost-bounces@lists.boost.org] On Behalf Of Oliver Kowalke
Sent: Tuesday, January 07, 2014 6:35 PM
To: boost
Subject: Re: [boost] Boost.Fiber review January 6-15
I think, as a minimum, all the examples need to appear verbatim in the docs
the complete code? I would prefer only code snippets - the complete code can be read in the example directory. Otherwise the documentation would be bloated (at least I would skip pages of code).
It's easy to use Quickbook snippets for the key bits in the text *and* then a link to the complete code in /example.

[@../../example/my_example.cpp]

You can often have more than one snippet from the source code example.

//[my_library_example_1
code snippet 1 ...
//] [my_library_example_1] // This ends the 1st snippet.

//[my_library_example_2
more code snippet 2 ...
//] [my_library_example_2] // This ends the 2nd snippet.

Providing sample output from the example is also sometimes useful. I have done this by pasting (part of?) the output into a comment at the end of the example .cpp:

/*
//[my_library_example_output
//`[* Output from running my_library_example.cpp is:]
my_library_example.vcxproj -> J:\Cpp\my_library_example\Debug\my_library_example.exe
Hello World!
//] [my_library_example_output] // End of output snippet.
*/

HTH Paul

--- Paul A. Bristow, Prizet Farmhouse, Kendal LA8 8AB UK +44 1539 561830 07714330204 pbristow@hetp.u-net.com

On 7 Jan 2014 at 19:35, Oliver Kowalke wrote:
I'll add a rationale section to the documentation - the library is pre-alpha (e.g. I'm experimenting with some ideas - but if you are interested I can discuss it with you in a private email).
I know the feeling. Bjorn (Reese) has been doing a sort of private peer review of AFIO with me recently, and I keep telling him things are weird in AFIO because it's all very alpha and I want to keep my future design options open. Thing is, Bjorn is generally right, and bits of AFIO suck and need fixing. I've been trying my best to fix things, but it ain't easy compiling and debugging Boost on an Intel Atom 220 (all I have available right now).
This is one item which could be discussed. For instance, I have concerns about adding future::get_exception_ptr() because I believe it is not really required; it is only for convenience.
Ehhh not really ... it lets you avoid a catch(), which is one of the few places in C++ without a worst case complexity guarantee. Catching types with RTTI (anything deriving from std::exception) in a large application can be unpleasantly slow (and unpredictably slow), and if you're bouncing exception state across say five separate futures, that's five separate unnecessary try...catch() invocations. BTW, older compilers also tend to serialise parts of try...catch() processing across threads. If you fire off a thousand threads doing nothing but throwing and catching exceptions you'll see some fun on older compilers (e.g. pre-VS2012).
I don't understand what they do, and I think I ought to.
It is pretty simple - the fiber-scheduler contains a reference to asio's io_service and uses it as an event-queue/event-dispatcher, e.g. it pushes fibers ready to run (newly created fibers, yielded fibers, or fibers suspended by a wait-operation on sync primitives and then signaled).
Oh I see ... just like AFIO does with ASIO's io_service in fact. If it's that easy, adding Fiber support should be a real cinch.
I don't know the scaling factor of fibers - I would consider boost.fiber a way to simplify the code, e.g. it prevents scattering the code with callbacks (for instance in the context of boost.asio). My starting point was to solve problems like the C10K problem (Dan Kegel explains it in more detail on his webpage - I'm referring to it in boost.fiber's documentation - http://olk.github.io/libs/fiber/doc/html/fiber/asio.html).
Thing is ... the C10K problem *is* a performance problem. If you're going to suggest that Boost.Fiber helps to solve or does solve the C10K problem, I think you need to demonstrate at least a 100K socket processing capacity seeing as some useful work also needs to be done with the C10K problem. Otherwise best not mention the C10K problem, unless you're saying that you hope in the near future to be able to address that problem in which case the wording needs to be clarified.
well, those are the problems we are all faced with :) I've not done performance tests, sorry
I appreciate and understand this. However, I must then ask this: is your library ready to enter Boost if you have not done any performance testing or tuning?
Eh, well then I guess you need a link to the correct page in boost.context where it lists the architectures it works on. A big question for anyone considering Fiber is surely "will it work on my CPU"?
the docu of boost.context is missing this info - I'll add it (could you file a bug report for boost.context, please?).
https://svn.boost.org/trac/boost/ticket/9551
and we'll figure out the rest. I definitely need the ability to signal a fibre future belonging to thread A from some arbitrary thread B. I'll also need to boost::asio::io_service::post() to an ASIO io_service running fibers in thread A from some arbitrary thread B.
yes, as I explained it is supported. Maybe because I've written the code myself I don't know why the documentation is not enough for you.
It wasn't clear from the docs that your Fiber support primitives library is also threadsafe. I had assumed (like Antony seemed to as well) that for speed you wouldn't have done that, but it certainly is a lot easier if it's all threadsafe too.
what works is that you create a fibers::packaged_task<> and execute it in a fiber on thread A, and wait on the fibers::future<> returned by packaged_task<>::get_future() in thread B.
Yes I understand now. fibers::packaged_task<> is a strict superset of std::packaged_task<>, not an incommensurate alternative. That works for me, but others may have issue with it.
The docs need to explicitly say so then, and indeed thread safety for *every* API in the library.
OK
BTW, I did this in AFIO by creating a macro with the appropriate boilerplate text saying "this function is threadsafe and exception safe", and then doing a large scale regex find and replace :) I only wish I could do the same for complexity guarantees, but that requires studying each API's possible code paths individually.
I think, as a minimum, all the examples need to appear verbatim in the docs
the complete code? I would prefer only code snippets - the complete code can be read in the example directory. Otherwise the documentation would be bloated (at least I would skip pages of code).
You can stick them into an Appendix section at the bottom. They're purely there for Googlebot to find; no one is expecting humans to really go there. That said, if you want to do an annotated commentary on a broken-up set of snippets from the examples, do feel free :). But I think inline source comments are usually plenty enough.

You asked about the comments you added to https://github.com/olk/boost-fiber/blob/master/examples/asio/publish_subscribe/server.cpp. They do help, but I am still struggling somewhat. How about this? Can you do a side-by-side code example where on the left side is an old style ASIO callback based implementation, and on the right is an ASIO Fiber based implementation? Something like https://ci.nedprod.com/job/Boost.AFIO%20Build%20Documentation/Boost.AFIO_Documentation/doc/html/afio/quickstart/async_file_io/hello_world.html. It doesn't have to be anything more than a trivial Hello World style toy thing. I just need to map in my head what yield[ec] means, and how that interplays with boost::fibers::asio::spawn and io_service::post().

Also one last point: your fiber condvar seems to suffer no spurious wakeups, unlike pthread condvars? You're certainly not protecting it from spurious wakeups in the example code. If spurious wakeups can't happen, you *definitely* need to mention that in the docs, as that is a much tighter guarantee than standard condvars give.

Niall -- Currently unemployed and looking for work. Work Portfolio: http://careers.stackoverflow.com/nialldouglas/

On Tue, Jan 7, 2014 at 4:32 PM, Niall Douglas <s_sourceforge@nedprod.com> wrote:
On 7 Jan 2014 at 19:35, Oliver Kowalke wrote:
I've not done performance tests, sorry
I appreciate and understand this. However, I must then ask this: is your library ready to enter Boost if you have not done any performance testing or tuning?
I don't have a C10K problem, but I do have a code-organization problem. Much essential processing in a large old client app is structured (if that's the word I want) as chains of callbacks from asynchronous network I/O. Given the latency of the network requests, fibers would have to have ridiculous, shocking overhead before it would start to bother me. I think that's a valid class of use cases. I don't buy the argument that adoption of Fiber requires performance tuning first.

On 7 Jan 2014 at 17:05, Nat Goodspeed wrote:
I appreciate and understand this. However, I must then ask this: is your library ready to enter Boost if you have not done any performance testing or tuning?
I don't have a C10K problem, but I do have a code-organization problem. Much essential processing in a large old client app is structured (if that's the word I want) as chains of callbacks from asynchronous network I/O. Given the latency of the network requests, fibers would have to have ridiculous, shocking overhead before it would start to bother me.
I think that's a valid class of use cases. I don't buy the argument that adoption of Fiber requires performance tuning first.
I agree one doesn't need to performance tune to enter Boost. I disagree with not at least _measuring_ performance before entering Boost. One should at least have some minimal idea as to performance. I would really like to see at least one graph of scaling with the number of active contexts, both for CPU and RAM footprint. Niall

2014/1/7 Niall Douglas <s_sourceforge@nedprod.com>
I don't know the scaling factor of fibers - I would consider boost.fiber a way to simplify the code, e.g. it prevents scattering the code with callbacks (for instance in the context of boost.asio). My starting point was to solve problems like the C10K problem (Dan Kegel explains it in more detail on his webpage - I'm referring to it in boost.fiber's documentation - http://olk.github.io/libs/fiber/doc/html/fiber/asio.html).
Thing is ... the C10K problem *is* a performance problem. If you're going to suggest that Boost.Fiber helps to solve or does solve the C10K problem, I think you need to demonstrate at least a 100K socket processing capacity seeing as some useful work also needs to be done with the C10K problem. Otherwise best not mention the C10K problem, unless you're saying that you hope in the near future to be able to address that problem in which case the wording needs to be clarified.
my focus was to address the one-thread-per-client pattern used for C10K. The pattern makes code much more readable/reduces the complexity, but doesn't scale. If you create too many threads on your system your overall performance will shrink, because at a certain number of threads the overhead of the kernel scheduler starts to swamp the available cores.
I appreciate and understand this. However, I must then ask this: is your library ready to enter Boost if you have not done any performance testing or tuning?
is performance testing and tuning a precondition for a lib to be accepted? I did some tuning (for instance the spinlock implementation) but it is not finished.
How about this? Can you do a side-by-side code example where on the left side is an old style ASIO callback based implementation, and on the right is an ASIO Fiber based implementation? Something like https://ci.nedprod.com/job/Boost.AFIO%20Build%20Documentation/Boost.AFIO_Documentation/doc/html/afio/quickstart/async_file_io/hello_world.html.
OK
It doesn't have to be anything more than a trivial Hello World style toy thing. I just need to map in my head what yield[ec] means, and how that interplays with boost::fibers::asio::spawn and io_service::post().
- io_service::post() pushes a callable to io_service's internal queue (executed by io_service::run() and related functions)
- fibers::asio::spawn() creates a new fiber and adds it to the fiber-scheduler (specialized to use asio's io_service, hence asio's async-result feature)
- yield is an instance of boost::fibers::asio::yield_context which represents the fiber running this code; it is used by asio's async-result feature
- yield[ec] is passed to an async-operation in order to suspend the current fiber and pass an error (if one happened during execution of the async-op) back to the calling code, for instance EOF if the socket was closed
Also one last point: your fiber condvar seems to suffer from no spurious wakeups like pthread condvars? You're certainly not protecting it from spurious wakeups in the example code. If spurious wakeups can't happen, you *definitely* need to mention that in the docs as that is a much tighter guarantee over standard condvars.
OK

On 7 Jan 2014 at 23:07, Oliver Kowalke wrote:
my focus was to address the one-thread-per-client pattern used for C10K. The pattern makes code much more readable/reduces the complexity, but doesn't scale. If you create too many threads on your system your overall performance will shrink, because at a certain number of threads the overhead of the kernel scheduler starts to swamp the available cores.
If you replace "doesn't scale" with "doesn't scale linearly" then I agree. The linearity is important.
I appreciate and understand this. However, I must then ask this: is your library ready to enter Boost if you have not done any performance testing or tuning?
is performance testing and tuning a precondition for a lib to be accepted?
No. I do think the question ought to be asked though, even if the answer is no. It helps people decide the relative merit of a new library.
I did some tuning (for instance spinlock implementation) but it is not finished.
Even my own spinlock based on Intel TSX is unfinished - I had no TSX hardware to test it on, and had to rely on the Intel simulator which is dog slow. I mentioned this in another reply, but I really think you ought to at least measure performance and supply some results in the docs. You don't need to performance tune, but it sure is handy to know some idea of performance.
- io_service::post() pushes a callable to io_service's internal queue (executed by io_service::run() and related functions)
- fibers::asio::spawn() creates a new fiber and adds it to the fiber-scheduler (specialized to use asio's io_service, hence asio's async-result feature)
- yield is an instance of boost::fibers::asio::yield_context which represents the fiber running this code; it is used by asio's async-result feature
- yield[ec] is passed to an async-operation in order to suspend the current fiber and pass an error (if one happened during execution of the async-op) back to the calling code, for instance EOF if the socket was closed
Ok, let me try rewriting this description into my own words.

To supply fibers to an io_service, one creates N fibers each of which calls io_service::run() as if they were threads. If there is no work available, the fiber executing run() gives up context to the next fiber. If some fiber doing some work does an io_service::post(), that selects some fiber currently paused waiting inside run() for new work, and on the next context switch that fiber will be activated to do work. This part I think I understand okay.

The hard part for me is how yield() fits in. I get that one can call yield ordinarily like in Python

def foo():
    a = 0
    while 1:
        a = a + 1
        yield a

... and every time you call foo() you get back an increasing number. Where I am finding things harder is what yield means to ASIO, which takes a callback function of specification

void (*callable)(const boost::system::error_code& error, size_t bytes_transferred)

so fiber yield must supply a prototype matching that specification. You said above this:
- yield is an instance of boost::fibers::asio::yield_context which represents the fiber running this code; it is used by asio's async result feature
- yield[ec] is passed to an async-operation in order to suspend the current fiber and pass an error (if one happened during execution of the async-op) back to the calling code, for instance EOF if the socket was closed
So I figure that yield will store somewhere a new bit of context, do a std::bind() encapsulating that context, and hand off the functor to ASIO. ASIO then schedules the async i/o. When that completes, ASIO will do an io_service::post() with the bound functor, and so one of the fibers currently executing run() will get woken up and will invoke the bound functor.

So far so good. But here is where I fall down: the fibre receives an error state and bytes transferred, and obviously transmits that back to the fiber which called ASIO async_read() by switching back to the original context. I would understand just fine if yield() suspended the calling fiber, but it surely cannot, because yield() will get executed before ASIO async_read() does, as it's a parameter to async_read(). So I therefore must be missing something very important, and this is why I am confused. Is it possible that yield() takes a *copy* of the context like setjmp()?

It's this kind of stuff which documentation is for, and why at least one person, i.e. me, needs hand-holding through mentally grokking how the ASIO support in Fiber works. Sorry for being a bit stupid.

Niall

2014/1/10 Niall Douglas <s_sourceforge@nedprod.com>
Where I am finding things harder is what yield means to ASIO which takes a callback function of specification void (*callable)(const boost::system::error_code& error, size_t bytes_transferred) so fiber yield must supply a prototype matching that specification. You said above this:
- yield is an instance of boost::fibers::asio::yield_context which represents the fiber running this code; it is used by asio's async result feature
- yield[ec] is passed to an async-operation in order to suspend the current fiber and pass an error (if one happened during execution of the async-op) back to the calling code, for instance EOF if the socket was closed
So I figure that yield will store somewhere a new bit of context, do a std::bind() encapsulating that context and hand off the functor to ASIO. ASIO then schedules the async i/o. When that completes, ASIO will do an io_service::post() with the bound functor, and so one of the fibers currently executing run() will get woken up and will invoke the bound functor.
So far so good. But here is where I fall down: the fibre receives in an error state and bytes transferred, and obviously transmits that back to the fiber which called ASIO async_read() by switching back to the original context. I would understand just fine if yield() suspended the calling fiber, but it surely cannot because yield() will get executed before ASIO async_read() does as it's a parameter to async_read(). So I therefore must be missing something very important, and this is why I am confused. Is it possible that yield() takes a *copy* of the context like setjmp()?
It's this kind of stuff which documentation is for, and why at least one person i.e. me needs hand holding through mentally groking how the ASIO support in Fiber works. Sorry for being a bit stupid.
you have to distinguish between two 'yield' symbols.

In the context of fibers without ASIO, e.g. calling boost::this_fiber::yield(), the current fiber is suspended and added at the end of the ready-queue of the fiber-scheduler (the fiber will be resumed when dequeued from this list).

The other case you are referring to is the async feature of boost.asio. Chris has already implemented the async feature for coroutines (from boost.coroutine), so the best source on the internal workings is the docs of boost.asio. I've added support for asio's async feature for fibers, i.e. provided some classes required to use fibers in the context of asio and its async-operations (for instance async_read(), see the examples). Usually this code should belong to boost.asio instead of boost.fiber - anyway, because boost.fiber is new I've added it to my lib.

The yield instance you use with the async-ops of asio (async_read() for instance) is a global of type yield_t. This type holds an error_code as a member, and operator[error_code&] lets you retrieve the error_code from the async-op. boost::fibers::asio::spawn() is the function which starts a new fiber.

On 10 Jan 2014 at 18:46, Oliver Kowalke wrote:
It's this kind of stuff which documentation is for, and why at least one person i.e. me needs hand holding through mentally groking how the ASIO support in Fiber works. Sorry for being a bit stupid.
the other case you are referring to is the async feature of boost.asio. Chris has already implemented the async feature for coroutines (from boost.coroutine), so the best source on the internal workings is the docs of boost.asio. I've added support for asio's async feature for fibers, i.e. provided some classes required to use fibers in the context of asio and its async-operations (for instance async_read(), see the examples). Usually this code should belong to boost.asio instead of boost.fiber - anyway, because boost.fiber is new I've added it to my lib.
Aha! You're reusing ASIO's coroutines support as-is. For some reason my old brain didn't grok this, now it does. Thank you Oliver, and for putting up with my stupid questions.

Ok, I think I am now ready to give my final peer review:

Acceptance: I recommend yes, provided considerable improvements are done in the following areas:

* It needs to be explicitly documented per API which of the support classes are thread safe (I discovered above that's almost all of them).
* More identical replication of Boost.Thread's specific APIs. Others have gone into more detail on this than I, but I would add that if an extension API is trivial to add, I'd say it's very likely to appear in C++17 anyway.
* Basically condense this thread of discussion into docs on the ASIO support. Say explicitly you're reusing ASIO's coroutines support as-is, and link to ASIO's coroutines docs pages.
* Need to see performance scaling results as N fibers of execution rise. Need to see CPU cycles and memory consumption in those results.
* Need a link to Coroutine's list of supported architectures.
* C++11 support needs improving. Others have mentioned more on this than I.
* Include the code examples in the docs so Googlebot can find them, like ASIO does at http://www.boost.org/doc/libs/1_55_0/doc/html/boost_asio/examples.html
* You definitely absolutely need a Design Rationale page explaining why what's in Fiber is there, and why what isn't is not. Do link to external libraries extending Fiber with "missing" features if appropriate.

Nice to have:

* Intel TSX support to avoid locks.
* Complexity guarantees and exception safety statements per API.

Lastly: Oliver, you did a great job on Fiber and Coroutine. Please accept my personal thank you for all your hard work and making it available to the Boost community.

Niall

On Sat, Jan 11, 2014 at 6:13 AM, Niall Douglas <s_sourceforge@nedprod.com> wrote:
* C++11 support needs improving. Others have mentioned more on this than I.
For my purposes in collating results, though, I'd ask you for a bit more detail here. As far as I can tell, you might be alluding to 'explicit operator bool' rather than the C++03 'operator safe_bool' trick. If the review might end up requesting more work from Oliver, it's only fair to be as specific as we can about what work is required. Otherwise it's sort of a "too many notes" level of critique -- not really actionable.

On 11 Jan 2014 at 10:21, Nat Goodspeed wrote:
* C++11 support needs improving. Others have mentioned more on this than I.
For my purposes in collating results, though, I'd ask you for a bit more detail here.
Sorry, that was sloppy wording on my part. I meant to say something more like this:

* Direct support for C++11 needs improving.

And by that I mean three main items: (i) improved conformance with C++11 idioms; (ii) improved conformance with C++11 std::thread patterns; (iii) explicit #ifdef support with code for C++11 features. I didn't look closely enough to see if these are already in there, but by this I would mean move construction, initialiser lists, rvalue-this overloads, deleting operators where appropriate etc - the usual stuff.
As far as I can tell, you might be alluding to 'explicit operator bool' rather than the C++03 'operator safe_bool' trick. If the review might end up requesting more work from Oliver, it's only fair to be as specific as we can about what work is required. Otherwise it's sort of a "too many notes" level of critique -- not really actionable.
I understand entirely. I didn't go into detail because I didn't think I could improve on what others have said, and I don't have the time to contribute much more detail (I have maths coursework due next week). Hopefully the above clarifies my position. Niall

2014/1/11 Niall Douglas <s_sourceforge@nedprod.com>
Aha! You're reusing ASIO's coroutines support as-is. For some reason my old brain didn't grok this, now it does. Thank you Oliver, and for putting up with my stupid questions.
there's no such thing as a stupid question ;^)

* More identical replication of Boost.Thread's specific APIs. Others have gone into more detail on this than I, but I would add that if an extension API is trivial to add, I'd say it's very likely to appear in C++17 anyway.

but I'm uncertain which ones (at least I've already added support for interruption) - but what about the others (many of them are conditionally compiled)?

<snip>

Nice to have:
* Intel TSX support to avoid locks.
may I contact you for some info regarding to this issue (I'd like to benefit from your knowledge)?
Oliver: you did a great job on Fiber and Coroutine. Please accept my personal thank you for all your hard work and making it available to the Boost community.
thank you too

Le 11/01/14 17:29, Oliver Kowalke a écrit :
2014/1/11 Niall Douglas <s_sourceforge@nedprod.com>
Aha! You're reusing ASIO's coroutines support as-is. For some reason my old brain didn't grok this, now it does. Thank you Oliver, and for putting up with my stupid questions.
there's no such thing as a stupid question ;^)
* More identical replication of Boost.Thread's specific APIs. Others
have gone into more detail on this than I, but I would add that if an extension API is trivial to add, I'd say it's very likely to appear in C++17 anyway.
but I'm uncertain which ones (at least I've already added support for interruption) - but what about the others (many of them are conditionally compiled)?

Could you elaborate? What is the problem you find?
Vicente

Le 11/01/14 19:38, Oliver Kowalke a écrit :
2014/1/11 Vicente J. Botet Escriba <vicente.botet@wanadoo.fr>
Could you elaborate? What is the problem you find?
for instance future.hpp from boost.thread has many conditional compilations
boost::future was proposed to Boost before std::future was approved. For compatibility reasons we need to provide the original interface and the standard one. You don't have this problem, as your library has not been approved yet. Providing support for C++11 compilers seems to me a good point for a Boost library. Best, Vicente

On 11 Jan 2014 at 17:29, Oliver Kowalke wrote:
but I'm uncertain which one (at least if already added support for interruption) - but what about the others (many of them a conditionally compiled)?
Me personally, I am very cool about (existing) interruption support. I think thread interruption really ought to be part of an improved, more generic C++ exception handling mechanism, and not bolted on top by user code. In other words, thread interruption in my opinion really ought to be supported directly by the C++ runtime. But that's another discussion you can read about on the ISO study groups (if you're really bored).
* Intel TSX support to avoid locks.
may I contact you for some info regarding to this issue (I'd like to benefit from your knowledge)?
Of course! Though I fear I have little knowledge to share, except that of how ignorant I am. Niall

07.01.2014 20:50, Niall Douglas:

boost.fiber uses boost.coroutine (which itself uses boost.context) and provides some performance tests for context switching.

Sure. But it's a "why should I use this library?" sort of thing. If I see real world benchmarks for a library, it tells me the author has probably done some performance tuning. That's a big tick for me in considering to use a library.
I made a micro benchmark of Boost.Coroutine some time ago. I was able to run 400k coroutines in one thread, each with 10KiB of stack (~3.8GiB was consumed by stacks alone). I used a simple allocator for stacks:

return last_free += allocation_size;

Environment was: Windows 7 x64, MSVC2010SP1 x64, Boost 1.53. The system had 6GiB of RAM, part of which was consumed by the OS and background programs. I think with proper tuning it is possible to launch more coroutines on that system.

-- Evgeny Panasyuk

2014/1/6 Nat Goodspeed <nat@lindenlab.com>
The review of Boost.Fiber by Oliver Kowalke begins today, Monday January 6th, and closes Wednesday January 15th.
I've got some questions about the library:

* Are the mutexes of Boost.Fiber thread safe? If not, what purpose do they serve (as I understand it, two fibers can not access the same resource simultaneously)?
* Can fibers migrate between threads?
* Can Boost.Asio use multithreading and fibers (I wish to use fibers to have clear/readable code and multithreading to use all the CPUs)? Is there an example of such a program?

-- Best regards, Antony Polukhin

2014/1/7 Antony Polukhin <antoshkka@gmail.com>
* Are mutexes of Boost.Fiber thread safe?
you can use the mutexes from boost.fiber in a multi-threaded env (but you have to use fibers in your code).
* Can fibers migrate between threads?
yes - unit-test test_migration shows how it is done
* Can Boost.Asio use multithreading and fibers (I wish to use fibers to have a clear/readable code and multithreading to use all the CPUs)? Is there an example of such program?
fibers run concurrently in one thread - if you want to use more CPUs you have multiple choices:

1.) you assign one io_service to each CPU (fiber migration could be done via a specific fiber-scheduler)
2.) you have one io_service and each CPU executes io_service::run() - you have to execute the fibers in one strand per CPU (not tested yet)

2014/1/7 Oliver Kowalke <oliver.kowalke@gmail.com>
2014/1/7 Antony Polukhin <antoshkka@gmail.com>
* Are mutexes of Boost.Fiber thread safe?
you can use the mutexes from boost.fiber in a multi-threaded env (but you have to use fibers in your code).
Thanks, already trying them. What will happen on a fiber::mutex.lock() call if all the fibers on the current thread are suspended? Will the mutex call `this_thread::yield()`?
* Can fibers migrate between threads?
yes - unit-test test_migration shows how it is done
Another implementation question: Why are there so many atomics in fiber_base? It looks like a fiber is usually used in a single thread, and in situations when a fiber is moved from one thread to another a memory barrier would be sufficient. -- Best regards, Antony Polukhin

2014/1/7 Antony Polukhin <antoshkka@gmail.com>
Thanks, already trying them. What will happen on fiber::mutex.lock() call if all the fibers on current thread are suspended? Will the mutex call `this_thread::yield()`?
yes. btw, you could take a look at unit-test test_mutex_mt.cpp
Another implementation question: Why there are so many atomics in fiber_base? Looks like fiber is usually used in a single thread, and in situations when fiber is moved from one thread to another memory barrier would be sufficient.
yes, synchronization is done via atomics. In a single-threaded environment the lib wouldn't require atomics because all fibers run in one thread. A multithreaded environment requires a memory barrier - boost.fiber uses a spinlock which yields the fiber if the lock is already taken.
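To illustrate the idea of a spinlock that yields instead of busy-waiting, here is a minimal stdlib-only sketch. In Boost.Fiber the yield would suspend the current fiber; `std::this_thread::yield()` stands in here purely as an analogy, and this is not the library's actual implementation.

```cpp
#include <atomic>
#include <thread>

// Sketch: a lock that yields the current execution context when the lock
// is already held, instead of spinning hot. In Boost.Fiber the yield
// would suspend the running fiber so another fiber can run.
class yielding_spinlock {
    std::atomic<bool> locked_{false};
public:
    void lock() {
        // acquire ordering: reads/writes after lock() see prior critical sections
        while (locked_.exchange(true, std::memory_order_acquire))
            std::this_thread::yield(); // in Boost.Fiber: yield the fiber
    }
    void unlock() {
        // release ordering: publish this critical section's writes
        locked_.store(false, std::memory_order_release);
    }
};
```

The acquire/release pair is the memory barrier Oliver mentions; the yield is what keeps other fibers on the same thread runnable while one fiber waits.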

One of the main questions arising for me when looking through the code is why doesn't the fiber class expose the same API as std::thread (or boost::thread for that matter)? This would make fibers so much more usable, even more so if the rest of the library were aligned with the C++11 standard library. In fact, in my book a fiber _is_ a thread-like construct and having it expose a new interface is just confusing and unnecessary. Regards Hartmut --------------- http://boost-spirit.com http://stellar.cct.lsu.edu

On Wed, Jan 8, 2014 at 9:02 AM, Hartmut Kaiser <hartmut.kaiser@gmail.com> wrote:
One of the main questions arising for me when looking through the code is why doesn't the fiber class expose the same API as std::thread (or boost::thread for that matter)? This would make fibers so much more usable, even more so if the rest of the library were aligned with the C++11 standard library.
To the best of my knowledge, Oliver is indeed trying to mimic the std::thread API. It would be very helpful if you would point out the deltas that you find distressing.

On 08/01/2014 12:20 p.m., Nat Goodspeed wrote:
On Wed, Jan 8, 2014 at 9:02 AM, Hartmut Kaiser <hartmut.kaiser@gmail.com> wrote:
One of the main questions arising for me when looking through the code is why doesn't the fiber class expose the same API as std::thread (or boost::thread for that matter)? This would make fibers so much more usable, even more so if the rest of the library were aligned with the C++11 standard library.
To the best of my knowledge, Oliver is indeed trying to mimic the std::thread API. It would be very helpful if you would point out the deltas that you find distressing.
Just from looking at the documentation:
- The constructor takes a nullary function, instead of a callable and an arbitrary number of parameters. It is not clear whether Fn has to be a function object or if it can be a callable as with std::thread. The order of parameters of the overload taking attributes does not match that of boost::thread.
- No notion of native_handle (this may not make sense for fibers, I haven't looked at the implementation).
- There is no notion of explicit operator bool in either boost::thread or std::thread.
- There is an operator < for fibers and none for id. There should be no relational operators for fiber, and the full set for fiber::id as well as hash support and ostream insertion.
- Several functions take a fixed time_point type instead of a chrono one.
- There is no indication whether the futures support void (I assume they do) and R& (I assume they don't). The return type for shared_future::get is wrong. Again, there are additional explicit operator bools.
- The documentation for promise doesn't seem to support void, and it is unclear whether they support references. Another explicit operator bool.
- I saw mentions of async in the documentation, but I couldn't find the actual documentation for it. It's not clear whether deferred futures are supported; at least they appear not to be from future's reference.
Regards, -- Agustín K-ballo Bergé.- http://talesofcpp.fusionfenix.com

2014/1/9 Agustín K-ballo Bergé <kaballo86@hotmail.com>
- The constructor takes a nullary function, instead of a callable and an arbitrary number of parameters.
this would collide with the additional parameters required by fibers, which are not part of boost::thread/std::thread; for an arbitrary number of parameters bind() could be used
It is not clear whether Fn has to be a function object or if it can be a callable as with std::thread.
it can be any callable
The order of parameters of the overload taking attributes does not match that of boost::thread.
the attributes control things like the stack size, which boost::thread does not require
- No notion of native_handle (this may not make sense for fibers, I haven't looked at the implementation).
native_handle in the context of threads refers to handles of the underlying framework (for instance pthreads); for fibers it is not applicable
- There is no notion of explicit operator bool in neither boost::thread nor std::thread.
added for convenience, to check whether a fiber instance is valid or not (== check for not-a-fiber)
- There is an operator < for fibers and none for id. There should be no relational operators for fiber, and the full set for fiber::id as well as hash support and ostream insertion.
id has 'operator<' etc.
- Several functions take a fixed time_point type instead of a chrono one.
it is a typedef of chrono::steady_clock/system_clock - otherwise all scheduler classes would have to be templates
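A fixed-clock API can still accept time points from other clocks by converting at the boundary. The following is a generic chrono sketch of that idea, not code from Boost.Fiber itself; `sched_clock` and `to_sched_time` are hypothetical names.

```cpp
#include <chrono>

// The one clock a non-template scheduler would use internally.
using sched_clock = std::chrono::steady_clock;

// Convert a time_point from any clock to the scheduler's clock by
// measuring how far in the future it is on the caller's clock and
// re-anchoring that duration on sched_clock.
template <typename Clock, typename Duration>
sched_clock::time_point to_sched_time(std::chrono::time_point<Clock, Duration> tp) {
    auto remaining = tp - Clock::now();   // measured on the caller's clock
    return sched_clock::now() +
           std::chrono::duration_cast<sched_clock::duration>(remaining);
}
```

This keeps the scheduler non-templated while letting user-facing wait functions accept arbitrary clocks, at the cost of a small conversion error between the two `now()` calls.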
- There is no indication whether the futures support void (I assume they do) and R& (I assume they don't).
future supports future< R >, future< R & > and future< void > - the problem was how to express this in a comfortable way in the docs
The return type for shared_future::get is wrong.
OK - this is a copy-and-paste error from future<>::get. I'll fix it!
- The documentation for promise doesn't seem to support void, it is unclear whether they support references. Another explicit operator bool.
promise supports promise< R >, promise< R& > and promise< void > - suggestions how to write the docs without repeating the interface for the specializations?
- I saw mentions of async in the documentation, but I couldn't find the actual documentation for it.
the docs about futures contain a short reference to async(), but I'll add an explicit section for async() too.
It's not clear whether deferred futures are supported, at least they appear not to be from future's reference.
not supported

On 09/01/2014 05:06 a.m., Oliver Kowalke wrote:
2014/1/9 Agustín K-ballo Bergé <kaballo86@hotmail.com>
- The constructor takes a nullary function, instead of a callable and an arbitrary number of parameters.
this would collide with the additional parameters required by fibers, which are not part of boost::thread/std::thread; for an arbitrary number of parameters bind() could be used
The order of parameters of the overload taking attributes does not match that of boost::thread.
the attributes control things like stack-size which is not required by boost::thread
boost::thread has a constructor taking attributes too; it is the first argument, so there would be no collision. This should be adjusted not only to be coherent with boost::thread, but also to implement the general constructor, since matching the same semantics otherwise is tricky. Consider:
boost::fiber fib{ [f = std::move(f), a0 = std::move(a0), ...]() mutable { std::move(f)(std::move(a0), ...); } };
against the standard conformant constructor with the same semantics:
boost::fiber fib{std::move(f), std::move(a0), ...};
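The difference between the two constructor styles can be sketched with a plain callable holder standing in for boost::fibers::fiber. All names here are hypothetical, and `std::bind` is used as a simplification of std::thread's decay-copy-and-invoke semantics.

```cpp
#include <functional>
#include <utility>

// Current Boost.Fiber style: the constructor takes only a nullary callable,
// so the caller must do the argument binding.
struct nullary_task {
    std::function<void()> fn;
    explicit nullary_task(std::function<void()> f) : fn(std::move(f)) {}
    void run() { fn(); }
};

// std::thread style: callable plus an arbitrary number of arguments,
// bound inside the library.
struct variadic_task {
    std::function<void()> fn;
    template <typename Fn, typename... Args>
    explicit variadic_task(Fn&& f, Args&&... args)
        : fn(std::bind(std::forward<Fn>(f), std::forward<Args>(args)...)) {}
    void run() { fn(); }
};
```

With the nullary API the caller writes `nullary_task t{std::bind(f, a0, a1)};`; the variadic one moves that burden into the library, which is what makes it coherent with std::thread.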
- There is no notion of explicit operator bool in neither boost::thread nor std::thread.
add for convenience to the if a fiber instance is valid or not (== check for not-a-fiber)
I understand the perceived convenience of semantic sugar; to me it is an unnecessary divergence from std/boost::thread. However, I can always avoid its use completely.
- There is an operator < for fibers and none for id. There should be no relational operators for fiber, and the full set for fiber::id as well as hash support and ostream insertion.
id has 'operator<' etc.
Please add reference documentation for fiber::id.
- There is no indication whether the futures support void (I assume they do) and R& (I assume they don't).
future supports future< R >, future< R & > and future< void > - the problem was how to express this in a comfortable way in the docs
- The documentation for promise doesn't seem to support void, it is unclear whether they support references. Another explicit operator bool.
promise supports promise< R >, promise< R& > and promise< void > - suggestions how to write the docs without repeating the interface for the specializations?
Refer to the standard for a concise definition of future/promise. It basically defines the specializations only when they differ. Regards, -- Agustín K-ballo Bergé.- http://talesofcpp.fusionfenix.com

2014/1/9 Agustín K-ballo Bergé <kaballo86@hotmail.com>
boost::thread has a constructor taking attributes too, this is the first argument so there would be no collision.
This should be adjusted not only to be coherent with boost::thread, but also to implement the general constructor, since matching the same semantics otherwise is tricky. Consider:
boost::fiber fib{ [f = std::move(f), a0 = std::move(a0), ...]() mutable { std::move(f)(std::move(a0), ...); } };
against the standard conformant constructor with the same semantics:
boost::fiber fib{std::move(f), std::move(a0), ...};
OK - agreed, it shouldn't be too complicated to add it
I understand the perceived convenience of semantic sugar, to me it is an unnecessary divergence from std/boost::thread.
However, I can always avoid their use completely.
it could be removed, but I found it useful in the case of fiber-stealing, e.g.
boost::fibers::fiber f( other_rr->steal_from() );
if ( f) {
    // migrate stolen fiber to scheduler running in this thread
    rr.migrate_to( f);
    ...
}
of course, I could let steal_from() return a bool
Please add reference documentation for fiber::id.
OK
- There is no indication whether the futures support void (I assume they do) and R& (I assume they don't).
future supports future< R >, future< R & > and future< void > - the problem was how to express this in a comfortable way in the docs
- The documentation for promise doesn't seem to support void, it is unclear whether they support references. Another explicit operator bool.
promise supports promise< R >, promise< R& > and promise< void > - suggestions how to write the docs without repeating the interface for the specializations?
Refer to the standard for a concise definition of future/promise. It basically defines the specializations only when they differ.
I had tried something like http://en.cppreference.com/w/cpp/thread/future but I wasn't satisfied with the result of the quickbook-generated html. I'll follow your advice.

2014/1/8 Hartmut Kaiser <hartmut.kaiser@gmail.com>
One of the main questions arising for me when looking through the code is why doesn't the fiber class expose the same API as std::thread (or boost::thread for that matter)?
I thought it does expose the same interface (with some small additions) as std::thread does. Could you point out what you are missing or what differences you are not comfortable with?
This would make using fibers so much more usable, even more as the rest of the library was aligned with the C++11 standard library.
it was my intention to make boost.fiber usable like std::thread/boost::thread
In fact, in my book a fiber _is_ a thread-like construct
agreed
and having it expose a new interface is just confusing and unnecessary.
could you be more specific? I thought boost.fiber has a similar interface to std::thread/boost::thread

On Mon, Jan 6, 2014 at 7:07 AM, Nat Goodspeed <nat@lindenlab.com> wrote:
> Please always state in your review whether you think the library should be accepted as a Boost library!
I think the library should be accepted but would prefer some changes made as outlined below.
> Additionally please consider giving feedback on the following general topics:
> - What is your evaluation of the design?
The overall design is good. I like the parallel between the thread interface and the fiber interface. Some suggestions:
- Maybe this is something that should be handled in a separate fiber pool library, but I'd like to be able to specify multiple threads for affinity. This is useful for when a fiber should be bound to any one of the threads local to a NUMA domain or physical CPU (for cache sharing).
- I dislike fiber_group. I understand that it parallels thread_group. I don't know the whole story behind thread_group, but I think it predates move support, and a quick search through the mailing list archives shows there were objections about its design as well. I specifically don't like having to new up a fiber object to pass ownership to add_fiber. A fiber object is just a handle (holds a pointer) so it's a great candidate for move semantics. It may be best to take out fiber_group altogether. What's a use case for it?
> - What is your evaluation of the implementation?
- Can I suggest replacing the use of std::auto_ptr with boost::scoped_ptr? It leads to deprecation warnings on GCC.
- While I understand that the scheduling algorithm is more internal than the rest of the library, I still don't like the detail namespace leaking out. Perhaps these classes should be moved out of the detail namespace.
- The algorithm interface seems to do too much. I think a scheduling algorithm is something that just manages the run queue -- selects which fiber to run next (e.g. the Linux kernel scheduler interface works like this). As a result, the implemented scheduling algorithms have much overlap. Indeed, round_robin and round_robin_ws are almost identical code.
> - What is your evaluation of the documentation?
Ok but, like others have said, it could be improved. ASIO integration is poorly documented.
> - What is your evaluation of the potential usefulness of the library?
Very useful. I've used Python's greenlets (via gevent) and they were very useful for doing concurrent I/O. Would love to see equivalent functionality in C++.
> - Did you try to use the library? With what compiler? Did you have any problems?
Yes, I used it but very little. Used g++ 4.8.
> - How much effort did you put into your evaluation? A glance? A quick reading? In-depth study?
Spent about 3 hours studying the code and writing toy examples.
> - Are you knowledgeable about the problem domain?
Not really, but I've spent many years writing server code with completion routines so I know the value of not having to do that.

2014/1/8 Eugene Yakubovich <eyakubovich@gmail.com>
> Some suggestions:
> - Maybe this is something that should be handled in a separate fiber pool library, but I'd like to be able to specify multiple threads for affinity. This is useful for when a fiber should be bound to any one of the threads local to a NUMA domain or physical CPU (for cache sharing).
yes, that's what I implement in another library (for instance thread-pinning, used in the performance tests of boost.context)
> - I dislike fiber_group. I understand that it parallels thread_group. I don't know the whole story behind thread_group, but I think it predates move support, and a quick search through the mailing list archives shows there were objections about its design as well. I specifically don't like having to new up a fiber object to pass ownership to add_fiber. A fiber object is just a handle (holds a pointer) so it's a great candidate for move semantics. It may be best to take out fiber_group altogether. What's a use case for it?
I've added fiber_group only to mimic the boost.thread API - I've never used thread_group, so I've no problem removing fiber_group from the API (if desired).
> - Can I suggest replacing the use of std::auto_ptr with boost::scoped_ptr? It leads to deprecation warnings on GCC.
OK
> - While I understand that the scheduling algorithm is more internal than the rest of the library, I still don't like the detail namespace leaking out. Perhaps these classes should be moved out of the detail namespace.
hmm - the algorithm interface is already in namespace fibers. Could you tell me which class you are referring to?
> - The algorithm interface seems to do too much. I think a scheduling algorithm is something that just manages the run queue -- selects which fiber to run next (e.g. the Linux kernel scheduler interface works like this). As a result, the implemented scheduling algorithms have much overlap.
maybe manager would be a better word? The implementations of algorithm (schedulers in the docs) own the fibers' internal data structures. The fibers are stored internally while waiting, or put in a ready-queue when ready to be resumed. What would you suggest?
> Indeed, round_robin and round_robin_ws are almost identical code.
the difference is that round_robin_ws enables fiber-stealing and owns two additional member-functions for this purpose. The internal ready-queue is made thread-safe (concurrent access by different threads is required for fiber-stealing).
> - What is your evaluation of the documentation?
> Ok but, like others have said, it could be improved. ASIO integration is poorly documented.
OK

On Thu, Jan 9, 2014 at 1:11 AM, Oliver Kowalke <oliver.kowalke@gmail.com> wrote:
- While I understand that scheduling algorithm is more internal than the rest of the library, I still don't like detail namespace leaking out. Perhaps these classes should be moved out of the detail namespace.
hmm - the algorithm interface is already in namespace fibers. Could you tell me which class you are referring to?
algorithm is in namespace fibers but the arguments and return types are often in detail namespace. For example, spawn() takes detail::fiber_base::ptr_t and get_main_notifier() returns detail::notify::ptr_t.
- algorithm interface seems to do too much. I think a scheduling algorithm is something that just manages the run queue -- selects which fiber to run next (e.g. Linux kernel scheduler interface works like this). As a result, implemented scheduling algorithms have much overlap.
maybe manager would be a better word? The implementations of algorithm (schedulers in the docs) own the fibers' internal data structures. The fibers are stored internally while waiting, or put in a ready-queue when ready to be resumed. What would you suggest?
I think there are a couple of things going on inside current schedulers. One is selecting the next fiber to run (that's the round robin portion). Another is managing the suspension, resumption, and waiting of fibers -- that would be a manager portion. I forked the repo and I'm trying to see if it's possible to separate it. That would allow someone to write a priority based scheduler and then use it with both "regular" fibers and asio managed ones. I probably won't be done with it until next week.
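The split Eugene describes, an algorithm that only manages the ready queue while a separate manager handles suspension and waiting, might look like this minimal sketch. All names here are hypothetical, not Boost.Fiber's actual interface.

```cpp
#include <deque>

// Stand-in for detail::fiber_base::ptr_t: just an opaque handle.
struct fiber_handle { int id; };

// The only responsibility of a scheduling algorithm in this design:
// accept fibers that became ready, and pick the next one to run.
struct sched_algorithm {
    virtual ~sched_algorithm() = default;
    virtual void awakened(fiber_handle f) = 0;        // f became ready
    virtual bool pick_next(fiber_handle& out) = 0;    // false if none ready
};

// Round-robin is then just a FIFO over the ready queue; a priority
// scheduler would differ only in the container/ordering used here.
struct round_robin_algo : sched_algorithm {
    std::deque<fiber_handle> ready_;
    void awakened(fiber_handle f) override { ready_.push_back(f); }
    bool pick_next(fiber_handle& out) override {
        if (ready_.empty()) return false;
        out = ready_.front();
        ready_.pop_front();
        return true;
    }
};
```

With this split, the same picker could be reused by both a "regular" fiber manager and an asio-driven one, which is exactly the duplication Eugene wants to remove.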
Indeed, round_robin and round_robin_ws are almost identical code.
the difference is that round_robin_ws enables fiber-stealing and owns two additional member-functions for this purpose. The internal ready-queue is made thread-safe (concurrent access by different threads is required for fiber-stealing).
I agree, but my point was that because the class does too much, there's a bunch of copy/pasted code (not related to the run queue).

Le 06/01/14 14:07, Nat Goodspeed a écrit :
Hi all,
The review of Boost.Fiber by Oliver Kowalke begins today, Monday January 6th, and closes Wednesday January 15th.
- What is your evaluation of the design?
Hi Oliver, glad to see that your Fibers library is under review. I have some questions related to the design.
The interface must at least follow the interface of the standard thread library, and if there are some limitations, they must be explicitly documented. Any difference with respect to Boost.Thread must also be documented and the rationale explained.
std::thread is not copyable by design, that is, there is only one owner. Why is boost::fibers::fiber copyable?
Why are the exceptions thrown by the function given to a fiber consumed by the framework instead of terminating the program as std::thread does?
Which exception is thrown when the error conditions resource_deadlock_would_occur and invalid_argument are signaled?
Why are priority and thread_affinity not part of the fiber attributes? The interface lets me think that the affinity can be changed by the owner of the thread and the thread itself. Is this by design? Please don't document the get and set functions for thread_affinity together as a single entry.
The safe_bool idiom should be replaced by an explicit operator bool.
Why is the scheduling algorithm global? Could it be thread specific? BTW, is there an example showing the thread_specific_ptr trick mentioned in the documentation?
Why are the time-related functions limited to a specific clock?
The interface of fiber_group, based on the old and deprecated thread_group, is not based on move semantics. Have you taken a look at the proposal N3711 <http://www.open-std.org/jtc1/sc22/wg21/docs/papers/2013/n3711.pdf> Task Groups As a Lower Level C++ Library Solution To Fork-Join Parallelism? Maybe it is worth adapting it to fibers.
Boost.Thread has deprecated the use of the nested type scoped_lock as it introduces unnecessary dependencies. Do you think it is worth maintaining it?
I made some adaptations to boost::barrier that could also make sense for fibers. I don't know if a single class could be defined that takes care of both contexts for high-level classes such as the barrier?
Boost.Thread will soon deliver two synchronized queues, bounded and unbounded, based on N3533 <http://www.open-std.org/jtc1/sc22/wg21/docs/papers/2013/n3533.html> C++ Concurrent Queues. Have you tried to follow the same interface?
Best, Vicente

2014/1/8 Vicente J. Botet Escriba <vicente.botet@wanadoo.fr>
Hi Oliver,
hello Vicente
The interface must at least follow the interface of the standard thread library, and if there are some limitations, they must be explicitly documented. Any difference with respect to Boost.Thread must also be documented and the rationale explained.
OK - I'll add a rationale section
std::thread is not copyable by design, that is, there is only one owner. Why is boost::fibers::fiber copyable?
boost::fibers::fiber should be movable only - it is derived from boost::noncopyable and uses BOOST_MOVABLE_BUT_NOT_COPYABLE
Why are the exceptions thrown by the function given to a fiber consumed by the framework instead of terminating the program as std::thread does?
the trampoline-function used for the context does the following:
- in a try-catch-block, execute the fiber code (the fiber-function given by the user)
- catch exception forced_unwind from boost.coroutine -> release the fiber and continue unwinding the stack
- catch fiber_interrupted -> store it inside a boost::exception_ptr (might be re-thrown in fiber::join())
- catch all other exceptions and call std::terminate()
I thought this would lead to behaviour equivalent to std::thread
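The trampoline structure described above can be sketched with the standard library alone. `forced_unwind` and `fiber_interrupted` here are stand-in types for the real Boost.Coroutine/Boost.Fiber exceptions, and the `stored` pointer stands in for the per-fiber state that join() would consult.

```cpp
#include <exception>
#include <functional>

// Placeholders for the Boost exception types mentioned in the thread.
struct forced_unwind {};
struct fiber_interrupted {};

// Stand-in for per-fiber storage; in the library this would later be
// re-thrown in fiber::join().
std::exception_ptr stored;

void trampoline(const std::function<void()>& fiber_fn) {
    try {
        fiber_fn();                          // run the user's fiber function
    } catch (const forced_unwind&) {
        throw;                               // let stack unwinding continue
    } catch (const fiber_interrupted&) {
        stored = std::current_exception();   // deliver via join() later
    } catch (...) {
        std::terminate();                    // same behaviour as std::thread
    }
}
```

The three branches mirror the three cases Oliver lists: unwinding passes through, interruption is captured, and anything else terminates, just as an escaped exception from a std::thread function would.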
Which exception is thrown when the error conditions resource_deadlock_would_occur and invalid_argument are signaled?
I use BOOST_ASSERT instead of exceptions
Why are priority and thread_affinity not part of the fiber attributes?
you refer to the class attributes passed to fiber's ctor? this class is a vehicle for passing special parameters (for instance the stack size) to boost::coroutine - if you use the segmented-stack feature you usually don't need it. priority() and thread_affinity() are member-functions of boost::fibers::fiber to make modifying those parameters for an instance more explicit
The interface lets me think that the affinity can be changed by the owner of the thread and the thread itself. Is this by design?
thread_affinity() expresses whether the fiber is bound to the thread - this is required for fiber-stealing, e.g. a fiber which is bound to its running thread will not be selected as a candidate for fiber-stealing.
Please don't document the get and set functions for thread_affinity together as a single entry.
OK
The safe_bool idiom should be replaced by an explicit operator bool.
hmm - I thought using 'operator bool' is dangerous (I recall some discussion of this issue by Scott Meyers). do you have other information?
Why is the scheduling algorithm global? Could it be thread specific?
It is thread specific (using boost::thread_specific_ptr)
BTW, is there an example showing the thread_specific_ptr trick mentioned in the documentation?
which trick? maybe you are referring to
algorithm * scheduler::instance() {
    if ( ! instance_.get() ) {
        default_algo_.reset( new round_robin() );
        instance_.reset( default_algo_.get() );
    }
    return instance_.get();
}
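For illustration, the same lazily-created per-thread instance can be sketched with C++11 `thread_local` in place of boost::thread_specific_ptr. The names here are stand-ins for the Boost.Fiber internals, not the library's actual code.

```cpp
#include <memory>

// Minimal stand-ins for the types in the snippet above.
struct algorithm { virtual ~algorithm() = default; };
struct round_robin : algorithm {};

// One scheduler instance per thread, created on first use and destroyed
// automatically at thread exit (the ownership question raised in the docs
// note about thread_specific_ptr goes away with thread_local ownership).
algorithm* scheduler_instance() {
    thread_local std::unique_ptr<algorithm> instance;
    if (!instance)
        instance.reset(new round_robin());
    return instance.get();
}
```

Each thread that calls `scheduler_instance()` gets its own lazily-built default `round_robin`, which is the behaviour the quoted snippet implements with `thread_specific_ptr` members.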
Why are the time-related functions limited to a specific clock?
it is a typedef to steady_clock (if available) or system_clock - one of the main reasons is that otherwise the schedulers would have to be templates.
The interface of fiber_group, based on the old and deprecated thread_group, is not based on move semantics. Have you taken a look at the proposal N3711 <http://www.open-std.org/jtc1/sc22/wg21/docs/papers/2013/n3711.pdf> Task Groups As a Lower Level C++ Library Solution To Fork-Join Parallelism?
no - but I've no problem removing it from the library
Maybe it is worth adapting it to fibers. Boost.Thread has deprecated the use of the nested type scoped_lock as it introduces unnecessary dependencies. Do you think it is worth maintaining it?
oh - I wasn't aware of this issue - I've no preference for scoped_lock (which is a typedef to unique_lock, AFAIK)
I made some adaptations to boost::barrier that could also make sense for fibers.
OK - what are those adaptations?
I don't know if a single class could be defined that takes care of both contexts for high-level classes such as the barrier?
a problem is raised by the mutex implementations - a thread's mutexes block the thread, while fiber mutexes only suspend the current fiber and keep the thread running (so other fibers are able to run instead). I was thinking of a combination of sync primitives for threads and fibers too, but it is not that easy to implement (with a clean interface)
Boost.Thread will soon deliver two synchronized queues, bounded and unbounded, based on N3533 <http://www.open-std.org/jtc1/sc22/wg21/docs/papers/2013/n3533.html> C++ Concurrent Queues
OK
Have you tried to follow the same interface?
I did look at the proposal for C++ Concurrent Queues - but I didn't adopt the complete interface. For instance: Element queue::value_pop(); -> queue_op_status queue::pop( Element &);

Le 09/01/14 08:42, Oliver Kowalke a écrit :
2014/1/8 Vicente J. Botet Escriba <vicente.botet@wanadoo.fr>
Hi Oliver,
hello Vicente
The interface must at least follow the interface of the standard thread library, and if there are some limitations, they must be explicitly documented. Any difference with respect to Boost.Thread must also be documented and the rationale explained.
OK - I'll add a rationale section
std::thread is not copyable by design, that is, there is only one owner. Why is boost::fibers::fiber copyable?
boost::fibers::fiber should be movable only - it is derived from boost::noncopyable and uses BOOST_MOVABLE_BUT_NOT_COPYABLE
Sorry, I can't find now where I read that fiber is copyable. Maybe I was tired :(
Why are the exceptions thrown by the function given to a fiber consumed by the framework instead of terminating the program as std::thread does?
the trampoline-function used for the context does the following:
- in a try-catch-block, execute the fiber code (the fiber-function given by the user)
- catch exception forced_unwind from boost.coroutine -> release the fiber and continue unwinding the stack
- catch fiber_interrupted -> store it inside a boost::exception_ptr (might be re-thrown in fiber::join())
- catch all other exceptions and call std::terminate()
I thought this would lead to behaviour equivalent to std::thread
Then you should be more explicit in this paragraph: "Exceptions thrown by the function or callable object passed to the fiber constructor are consumed by the framework. If you need to know which exception was thrown, use future<> and packaged_task<>."
Which exception is thrown when the error conditions resource_deadlock_would_occur and invalid_argument are signaled?
I use BOOST_ASSERT instead of exceptions
Where is this documented?
Why are priority and thread_affinity not part of the fiber attributes?
you refer to the class attributes passed to fiber's ctor? this class is a vehicle for passing special parameters (for instance the stack size) to boost::coroutine - if you use the segmented-stack feature you usually don't need it. priority() and thread_affinity() are member-functions of boost::fibers::fiber to make modifying those parameters for an instance more explicit
What do you think about adding them to the class attributes also?
The interface lets me think that the affinity can be changed by the owner of the thread and the thread itself. Is this by design?
thread_affinity() expresses whether the fiber is bound to the thread - this is required for fiber-stealing, e.g. a fiber which is bound to its running thread will not be selected as a candidate for fiber-stealing.
This doesn't answer my question. Why does thread_affinity need to be changeable both by the thread itself and by the fiber owner?
Please don't document the get and set functions for thread_affinity together as a single entry.
OK
The safe_bool idiom should be replaced by an explicit operator bool.
hmm - I thought using 'operator bool' is dangerous (I recall some discussion of this issue by Scott Meyers). do you have other information?
Could you explain why it is dangerous and how it is more dangerous than using the safe bool idiom?
Why is the scheduling algorithm global? Could it be thread specific?
It is thread specific (using boost::thread_specific_ptr)
How does the user select the thread to which the scheduling algorithm is applied? The current thread? If yes, what about adding the function in a this_thread namespace?
boost::fibers::this_thread::set_scheduling_algorithm( & mfs);
BTW, is there an example showing the thread_specific_ptr trick mentioned in the documentation?
which trick? maybe you are referring to
"Note: set_scheduling_algorithm() does /not/ take ownership of the passed algorithm*: Boost.Fiber does not claim responsibility for the lifespan of the referenced scheduler object. The caller must eventually destroy the passed scheduler, just as it must allocate it in the first place. (Storing the pointer in a boost::thread_specific_ptr is one way to ensure that the instance is destroyed on thread termination.)"
algorithm * scheduler::instance() {
    if ( ! instance_.get() ) {
        default_algo_.reset( new round_robin() );
        instance_.reset( default_algo_.get() );
    }
    return instance_.get();
}
Why are the time-related functions limited to a specific clock?
it is a typedef to steady_clock (if available) or system_clock - one of the main reasons is that otherwise the schedulers would have to be templates.
Maybe the schedulers need to take care of a single time_point, but the other classes should provide a time-related interface using any clock.
The interface of fiber_group is based on the old and deprecated thread_group, not on move semantics. Have you taken a look at the proposal N3711 <http://www.open-std.org/jtc1/sc22/wg21/docs/papers/2013/n3711.pdf> Task Groups As a Lower Level C++ Library Solution To Fork-Join Parallelism?
no - but I've no problem removing it from the library
Maybe it is worth adapting it to fibers. ** Boost.Thread has deprecated the use of the nested type scoped_lock as it introduces unnecessary dependencies. Do you think it is worth maintaining it?
oh - I wasn't aware of this issue - I've no preference for scoped_lock (which is a typedef to unique_lock, AFAIK)
I made some adaptations to boost::barrier that could also make sense for fibers.
OK - what are those adaptations?
See http://www.boost.org/doc/libs/1_55_0/doc/html/thread/synchronization.html#th... and http://www.open-std.org/jtc1/sc22/wg21/docs/papers/2013/n3817.html#barrier_o...
I don't know whether a single class could be defined that takes care of both contexts for high-level classes such as the barrier.
a problem is raised by the mutex implementations - thread mutexes block the thread, while fiber mutexes only suspend the current fiber and keep the thread running (so other fibers are able to run instead)
I was thinking of a combination of sync primitives for threads and fibers too, but it is not that easy to implement (with a clean interface)
Ok. Glad to see that you have tried it.
Boost.Thread will soon deliver two synchronized queues (bounded and unbounded) based on N3533 <http://www.open-std.org/jtc1/sc22/wg21/docs/papers/2013/n3533.html> C++ Concurrent Queues
OK
Have you tried to follow the same interface?
I did look at the C++ Concurrent Queues proposal - but I didn't adopt the complete interface. For instance: Element queue::value_pop(); -> queue_op_status queue::pop( Element &);
Could you explain the rationale? Element queue::value_pop(); can be used with non-default-constructible types, while queue_op_status queue::pop(Element &); cannot. Best, Vicente

2014/1/9 Vicente J. Botet Escriba <vicente.botet@wanadoo.fr>
Then you should be more explicit in this paragraph: "Exceptions thrown by the function or callable object passed to the |fiber| <http://olk.github.io/libs/fiber/doc/html/fiber/fiber_mgmt/fiber.html#class_fiber> constructor are consumed by the framework. If you need to know which exception was thrown, use |future<>| <http://olk.github.io/libs/fiber/doc/html/fiber/synchronization/futures/future.html#class_future> and |packaged_task<>| <http://olk.github.io/libs/fiber/doc/html/fiber/synchronization/futures/packaged_task.html#class_packaged_task>."
OK
Which exception is thrown when the error conditions resource_deadlock_would_occur and invalid_argument are signaled?
I use BOOST_ASSERT instead of exceptions
Where is this documented?
it's not documented - I'll add some notes
What do you think about adding them to the class attributes also?
it would be possible - I haven't thought about this variant
This doesn't answer my question. Why does thread_affinity need to be changeable both by the thread itself and by the fiber owner?
- the fiber owner might decide at some point that a specific fiber is safe to be migrated to another thread - it is the fiber itself (code running in the fiber) which can modify its thread-affinity -> the user (code) can decide when a fiber is safe to be selected for migrating between threads
Could you explain why it is dangerous and how it is more dangerous than using the safe bool idiom?
struct X { operator bool() { return true; } };
struct Y { operator bool() { return true; } };
X x; Y y;
if ( x == y ) // does compile
How does the user select the thread to which the scheduling algorithm is applied? The current thread? If yes, what about adding the function in a this_thread namespace?
boost::fibers::this_thread::set_scheduling_algorithm( & mfs);
maybe - I would not add too many nested namespaces
"Note:
|set_scheduling_algorithm()| does /not/ take ownership of the passed |algorithm*|: *Boost.Fiber* does not claim responsibility for the lifespan of the referenced |scheduler| object. The caller must eventually destroy the passed |scheduler|, just as it must allocate it in the first place. (Storing the pointer in a |boost::thread_specific_ptr| is one way to ensure that the instance is destroyed on thread termination.)"
there is no special trick - it is the code below which installs the default scheduler if the user does not call set_scheduling_algorithm().
algorithm * scheduler::instance() {
    if ( ! instance_.get() ) {
        default_algo_.reset( new round_robin() );
        instance_.reset( default_algo_.get() );
    }
    return instance_.get();
}
Maybe the schedulers need to take care of a single time_point, but the other classes should provide a time-related interface using any clock.
OK - boost.chrono is your domain. Can functions accepting a time_duration/time_point (from boost.chrono) always be mapped/applied to steady_clock/system_clock?
I made some adaptations to boost::barrier that could also make sense for fibers.
OK - what are those adaptations?
See
http://www.boost.org/doc/libs/1_55_0/doc/html/thread/synchronization.html#thread.synchronization.barriers and http://www.open-std.org/jtc1/sc22/wg21/docs/papers/2013/n3817.html#barrier_operations
OK - that's new to me, but I don't know the use case for the completion function etc. - do you have some hints?
Ok. Glad to see that you have tried it.
maybe in another library the combination of those two kinds of sync primitives will succeed
Could you explain the rationale?
Element queue::value_pop();
can be used with non-default-constructible types
while
queue_op_status queue::pop(Element &);
cannot.
yes - you are right. I was more focused on symmetry of the interface (returning queue_op_status)

On Thu, Jan 9, 2014 at 2:28 PM, Oliver Kowalke <oliver.kowalke@gmail.com> wrote:
2014/1/9 Vicente J. Botet Escriba <vicente.botet@wanadoo.fr>
Could you explain the rationale? Element queue::value_pop(); can be used with non-default-constructible types, while queue_op_status queue::pop(Element &); cannot.
yes - you are right. I was more focused on symmetry of the interface (returning queue_op_status)
There's the question of what should happen to a caller blocked in value_pop() when the producer calls close(). Returning queue_op_status seems safer.
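The trade-off Nat raises can be sketched with a toy single-threaded queue; the names buffered_queue and queue_op_status follow the discussion, not the library's actual API, and the blocking behaviour of the real implementation is omitted.

```cpp
#include <deque>
#include <utility>

enum class queue_op_status { success, closed };

template <typename T>
class buffered_queue {
    std::deque<T> items_;
    bool closed_ = false;
public:
    void push(T v) { items_.push_back(std::move(v)); }
    void close() { closed_ = true; }

    // status-returning pop: a consumer that finds the queue drained and
    // closed gets queue_op_status::closed instead of an exception. The
    // price is that T must be assignable and the caller must already
    // hold a T to receive into - which is exactly Vicente's objection.
    queue_op_status pop(T & out) {
        if (items_.empty()) return queue_op_status::closed; // real code would block unless closed
        out = std::move(items_.front());
        items_.pop_front();
        return queue_op_status::success;
    }
};
```

With Element value_pop() there is no status channel, so a close() while a consumer is blocked can only be reported by throwing - which is indeed what the N3533 proposal specifies.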

On 09/01/14 20:52, Nat Goodspeed wrote:
On Thu, Jan 9, 2014 at 2:28 PM, Oliver Kowalke <oliver.kowalke@gmail.com> wrote:
2014/1/9 Vicente J. Botet Escriba <vicente.botet@wanadoo.fr>
Could you explain the rationale? Element queue::value_pop(); can be used with non-default-constructible types, while queue_op_status queue::pop(Element &); cannot.
yes - you are right. I was more focused on symmetry of the interface (returning queue_op_status)
There's the question of what should happen to a caller blocked in value_pop() when the producer calls close(). Returning queue_op_status seems safer.
The proposal throws an exception. Vicente

On 09/01/14 20:28, Oliver Kowalke wrote:
2014/1/9 Vicente J. Botet Escriba <vicente.botet@wanadoo.fr>
Then you should be more explicit in this paragraph: "Exceptions thrown by the function or callable object passed to the |fiber| <http://olk.github.io/libs/fiber/doc/html/fiber/fiber_mgmt/fiber.html#class_fiber> constructor are consumed by the framework. If you need to know which exception was thrown, use |future<>| <http://olk.github.io/libs/fiber/doc/html/fiber/synchronization/futures/future.html#class_future> and |packaged_task<>| <http://olk.github.io/libs/fiber/doc/html/fiber/synchronization/futures/packaged_task.html#class_packaged_task>."
OK
Which exception is thrown when the error conditions resource_deadlock_would_occur and invalid_argument are signaled?
I use BOOST_ASSERT instead of exceptions
Where is this documented?
it's not documented - I'll add some notes
What do you think about adding them to the class attributes also?
it would be possible - I haven't thought about this variant
This doesn't answer my question. Why does thread_affinity need to be changeable both by the thread itself and by the fiber owner?
- the fiber owner might decide at some point that a specific fiber is safe to be migrated to another thread - it is the fiber itself (code running in the fiber) which can modify its thread-affinity
-> the user (code) can decide when a fiber is safe to be selected for migrating between threads
Could you explain why it is dangerous and how it is more dangerous than using the safe bool idiom?
struct X { operator bool() { return true; } };
struct Y { operator bool() { return true; } };
X x; Y y;
if ( x == y ) // does compile
How does the user select the thread to which the scheduling algorithm is applied? The current thread? If yes, what about adding the function in a this_thread namespace?
boost::fibers::this_thread::set_scheduling_algorithm( & mfs);
maybe - I would not add too many nested namespaces
It has the advantage of being clear. As you can see, I was confused about what the function was applying to.
I don't see the problem with explicit conversion:
struct X { explicit operator bool() { return true; } };
struct Y { explicit operator bool() { return true; } };
"Note:
|set_scheduling_algorithm()| does /not/ take ownership of the passed |algorithm*|: *Boost.Fiber* does not claim responsibility for the lifespan of the referenced |scheduler| object. The caller must eventually destroy the passed |scheduler|, just as it must allocate it in the first place. (Storing the pointer in a |boost::thread_specific_ptr| is one way to ensure that the instance is destroyed on thread termination.)"
there is no special trick - it is the code below which installs the default scheduler if the user does not call set_scheduling_algorithm().
algorithm * scheduler::instance() {
    if ( ! instance_.get() ) {
        default_algo_.reset( new round_robin() );
        instance_.reset( default_algo_.get() );
    }
    return instance_.get();
}
Is there an example showing this on the repository? Can the scheduler be shared between threads?
Maybe the schedulers need to take care of a single time_point, but the other classes should provide a time-related interface using any clock.
OK - boost.chrono is your domain. Can functions accepting a time_duration/time_point (from boost.chrono) always be mapped/applied to steady_clock/system_clock?
No mapping can be done between clocks, and there is no need to do this mapping. This is how the standard chrono library was designed. Please take a look at the following implementation:

template <class Clock, class Duration>
bool try_lock_until(const chrono::time_point<Clock, Duration>& t)
{
    using namespace chrono;
    system_clock::time_point s_now = system_clock::now();
    typename Clock::time_point c_now = Clock::now();
    return try_lock_until(s_now + ceil<nanoseconds>(t - c_now));
}

template <class Duration>
bool try_lock_until(const chrono::time_point<chrono::system_clock, Duration>& t)
{
    using namespace chrono;
    typedef time_point<system_clock, nanoseconds> nano_sys_tmpt;
    return try_lock_until(nano_sys_tmpt(ceil<nanoseconds>(t.time_since_epoch())));
}

Note that only the try_lock_until(const chrono::time_point<chrono::system_clock, nanoseconds>&) function needs to be virtual.
I made some adaptations to boost::barrier that could also make sense for fibers.
OK - what are those adaptations?
See
http://www.boost.org/doc/libs/1_55_0/doc/html/thread/synchronization.html#thread.synchronization.barriers and http://www.open-std.org/jtc1/sc22/wg21/docs/papers/2013/n3817.html#barrier_operations
OK - that's new to me, but I don't know the use case for the completion function etc. - do you have some hints?
From the pointed proposal:
"A Note on Completion Functions and Templates
The proposed barrier takes an optional completion function, which may either return void or size_t. A barrier may thus do one of three things after all threads have called |count_down_and_wait()|:
* Reset itself automatically (if given no completion function).
* Invoke the completion function and then reset itself automatically (if given a function returning void).
* Invoke the completion function and use the return value to reset itself (if given a function returning size_t)."
As you can see, this parameter mostly concerns how to reset the barrier counter.
Ok. Glad to see that you have tried it.
maybe in another library the combination of those two kinds of sync primitives will succeed
Could you explain the rationale?
Element queue::value_pop();
can be used with non-default-constructible types
while
queue_op_status queue::pop(Element &);
cannot.
yes - you are right. I was more focused on symmetry of the interface (returning queue_op_status)
Great. Vicente

On Thu, Jan 9, 2014 at 5:10 PM, Vicente J. Botet Escriba <vicente.botet@wanadoo.fr> wrote:
On 09/01/14 20:28, Oliver Kowalke wrote:
2014/1/9 Vicente J. Botet Escriba <vicente.botet@wanadoo.fr>
I don't see the problem with explicit conversion
struct X { explicit operator bool() {} };
struct Y { explicit operator bool() {} };
Isn't that new in C++11? I'm pretty sure Oliver is trying hard to retain C++03 compatibility.

On 09/01/14 23:25, Nat Goodspeed wrote:
On Thu, Jan 9, 2014 at 5:10 PM, Vicente J. Botet Escriba <vicente.botet@wanadoo.fr> wrote:
On 09/01/14 20:28, Oliver Kowalke wrote:
2014/1/9 Vicente J. Botet Escriba <vicente.botet@wanadoo.fr> I don't see the problem with explicit conversion
struct X { explicit operator bool() {} };
struct Y { explicit operator bool() {} }; Isn't that new in C++11? I'm pretty sure Oliver is trying hard to retain C++03 compatibility.
Then include both and let the user choose whether he wants an application portable to C++11 compilers only or to C++98 as well. Vicente

2014/1/9 Vicente J. Botet Escriba <vicente.botet@wanadoo.fr>
struct X { operator bool() { return true; } };
struct Y { operator bool() { return true; } };
X x; Y y;
if ( x == y ) // does compile
I don't see the problem with explicit conversion
you can compare X and Y
Is there an example showing this on the repository?
no, it is an implementation inside the framework, e.g. class scheduler, which holds the user-defined or the default implementation of the scheduler-impl (the default is round_robin)
Can the scheduler be shared between threads?
no - not the schedulers (== classes derived from the interface algorithm, e.g. round_robin and round_robin_ws), which might be shareable between threads, but I doubt that it would make sense - a scheduler derived from algorithm schedules the fibers running in the current thread
Maybe the schedulers need to take care of a single time_point, but the other classes should provide a time-related interface using any clock.
OK - boost.chrono is your domain. Can functions accepting a time_duration/time_point (from boost.chrono) always be mapped/applied to steady_clock/system_clock?
No mapping can be done between clocks, and there is no need to do this mapping. This is how the standard chrono library was designed. Please take a look at the following implementation:
template <class Clock, class Duration>
bool try_lock_until(const chrono::time_point<Clock, Duration>& t)
{
    using namespace chrono;
    system_clock::time_point s_now = system_clock::now();
    typename Clock::time_point c_now = Clock::now();
    return try_lock_until(s_now + ceil<nanoseconds>(t - c_now));
}

template <class Duration>
bool try_lock_until(const chrono::time_point<chrono::system_clock, Duration>& t)
{
    using namespace chrono;
    typedef time_point<system_clock, nanoseconds> nano_sys_tmpt;
    return try_lock_until(nano_sys_tmpt(ceil<nanoseconds>(t.time_since_epoch())));
}
Note that only the try_lock_until(const chrono::time_point<chrono::system_clock, nanoseconds>&) function needs to be virtual.
it seems to me at first look that it would be complicated in the case of boost.fiber because:
- the scheduler (derived from algorithm, e.g. round_robin) internally uses a specific clock type (steady_clock)
- member functions of sync classes like condition_variable::wait_for() take a time_duration, and condition_variable::wait_until() a time_point
Can time_duration + time_point be of an arbitrary clock type from boost.chrono? Can I simply add a time_duration of a clock other than steady_clock to steady_clock::now()?
I made some adaptations to boost::barrier that could also make sense for fibers.
OK - what are those adaptations?
See
http://www.boost.org/doc/libs/1_55_0/doc/html/thread/synchronization.html#thread.synchronization.barriers and http://www.open-std.org/jtc1/sc22/wg21/docs/papers/2013/n3817.html#barrier_operations
OK - that's new to me, but I don't know the use case for the completion function etc. - do you have some hints?
From the pointed proposal "
A Note on Completion Functions and Templates
The proposed barrier takes an optional completion function, which may either return void or size_t. A barrier may thus do one of three things after all threads have called |count_down_and_wait()|:
* Reset itself automatically (if given no completion function).
* Invoke the completion function and then reset itself automatically (if given a function returning void).
* Invoke the completion function and use the return value to reset itself (if given a function returning size_t)."
As you can see, this parameter mostly concerns how to reset the barrier counter.
I read this too, but do you have example code demonstrating a use case? Sorry, I've no idea which use cases would require a completion function.

2014/1/9 Vicente J. Botet Escriba <vicente.botet@wanadoo.fr>
struct X { operator bool() { return true; } };
struct Y { operator bool() { return true; } };
X x; Y y;
if ( x == y ) // does compile
I don't see the problem with explicit conversion
you can compare X and Y
You are surely right, but I don't see how. Could you show it?
Is there an example showing this on the repository?
no, it is an implementation inside the framework, e.g. class scheduler, which holds the user-defined or the default implementation of the scheduler-impl (the default is round_robin)
Could you post here an example at the user level?
Can the scheduler be shared between threads?
no - not the schedulers (== classes derived from the interface algorithm, e.g. round_robin and round_robin_ws), which might be shareable between threads, but I doubt that it would make sense - a scheduler derived from algorithm schedules the fibers running in the current thread
Why does the user need to own the scheduler?
Maybe the schedulers need to take care of a single time_point, but the other classes should provide a time-related interface using any clock.
OK - boost.chrono is your domain. Can functions accepting a time_duration/time_point (from boost.chrono) always be mapped/applied to steady_clock/system_clock?
No mapping can be done between clocks, and there is no need to do this mapping. This is how the standard chrono library was designed. Please take a look at the following implementation:
template <class Clock, class Duration>
bool try_lock_until(const chrono::time_point<Clock, Duration>& t)
{
    using namespace chrono;
    system_clock::time_point s_now = system_clock::now();
    typename Clock::time_point c_now = Clock::now();
    return try_lock_until(s_now + ceil<nanoseconds>(t - c_now));
}

template <class Duration>
bool try_lock_until(const chrono::time_point<chrono::system_clock, Duration>& t)
{
    using namespace chrono;
    typedef time_point<system_clock, nanoseconds> nano_sys_tmpt;
    return try_lock_until(nano_sys_tmpt(ceil<nanoseconds>(t.time_since_epoch())));
}
Note that only the try_lock_until(const chrono::time_point<chrono::system_clock, nanoseconds>&) function needs to be virtual.
On 09/01/14 23:29, Oliver Kowalke wrote:
it seems to me at first look that it would be complicated in the case of boost.fiber because:
- the scheduler (derived from algorithm, e.g. round_robin) internally uses a specific clock type (steady_clock)
I gave you an example using system_clock. The opposite is of course possible.
- member functions of sync classes like condition_variable::wait_for() take a time_duration, and condition_variable::wait_until() a time_point
And? Take a look at the thread implementation of e.g. libc++, which is for me the best one.
Can time_duration + time_point be of an arbitrary clock type from boost.chrono?
No.
Can I simply add a time_duration of a clock other than steady_clock to steady_clock::now()?
There is no time_duration type - just duration. A duration can be added to any system_clock::time_point, steady_clock::time_point, or whatever time_point<Clock, Duration>.
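Vicente's point can be checked with std::chrono (the same design as boost.chrono): a duration carries no clock, so the very same duration value can be added to time_points of different clocks. Fixed time_points are used here instead of now() so the arithmetic is deterministic.

```cpp
#include <chrono>

using namespace std::chrono;

// fixed time_points so the arithmetic below is deterministic
steady_clock::time_point steady_tp { seconds(10) };
system_clock::time_point system_tp { seconds(10) };

// one clock-less duration, added to time_points of two different clocks
milliseconds d(500);
steady_clock::time_point steady_later = steady_tp + d;
system_clock::time_point system_later = system_tp + d;
```

What is forbidden is mixing the time_points themselves: steady_tp - system_tp does not compile, because a time_point is tagged with its clock while a duration is not.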
I made some adaptations to boost::barrier that could also make sense for fibers.
OK - what are those adaptations? See http://www.boost.org/doc/libs/1_55_0/doc/html/thread/synchronization.html#thread.synchronization.barriers and http://www.open-std.org/jtc1/sc22/wg21/docs/papers/2013/n3817.html#barrier_operations
OK - that's new to me, but I don't know the use case for the completion function etc. - do you have some hints?
From the pointed proposal "
A Note on Completion Functions and Templates
The proposed barrier takes an optional completion function, which may either return void or size_t. A barrier may thus do one of three things after all threads have called |count_down_and_wait()|:
* Reset itself automatically (if given no completion function).
* Invoke the completion function and then reset itself automatically (if given a function returning void).
* Invoke the completion function and use the return value to reset itself (if given a function returning size_t)."
As you can see, this parameter mostly concerns how to reset the barrier counter.
I read this too, but do you have example code demonstrating a use case? Sorry, I've no idea which use cases would require a completion function.
The completion function can be used, for example, to reset the barrier to whatever new value you want. Vicente
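A minimal sketch of the mechanism (not the N3817 wording, and not the library's code): a barrier whose optional completion function is run by the last arriver and whose return value becomes the next round's count.

```cpp
#include <condition_variable>
#include <cstddef>
#include <functional>
#include <mutex>

class barrier {
    std::mutex mtx_;
    std::condition_variable cv_;
    std::size_t initial_, count_;
    std::size_t generation_ = 0;
    std::function<std::size_t()> completion_; // returns the next round's count

public:
    explicit barrier(std::size_t n, std::function<std::size_t()> fn = nullptr)
        : initial_(n), count_(n), completion_(std::move(fn)) {}

    void count_down_and_wait() {
        std::unique_lock<std::mutex> lk(mtx_);
        std::size_t gen = generation_;
        if (--count_ == 0) {
            // last arriver: run the completion function and use its
            // return value (if any) to reset the counter for the next round
            if (completion_) initial_ = completion_();
            count_ = initial_;
            ++generation_;
            cv_.notify_all();
        } else {
            cv_.wait(lk, [&] { return gen != generation_; });
        }
    }
};
```

The use case is exactly the one Vicente names: e.g. a fork-join loop where the number of participants shrinks each round, so the completion function returns the new (smaller) count.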

2014/1/9 Vicente J. Botet Escriba <vicente.botet@wanadoo.fr>
you can compare X and Y
You are surely right, but I don't see how. Could you show it?
implicit conversion to bool through operator bool()
Is there an example showing this on the repository?
no, it is an implementation inside the framework, e.g. class scheduler, which holds the user-defined or the default implementation of the scheduler-impl (the default is round_robin)
Could you post here an example at the user level?
sorry - I don't understand your question. boost.fiber already contains two implementations of fiber-schedulers (derived from algorithm) - class round_robin and class round_robin_ws. An instance of class round_robin will be installed by the framework if the user does not apply his own scheduler (via set_scheduling_algorithm()). How the fiber-schedulers are managed inside the framework is out of scope for the user (because it is an implementation detail) - the user does not have to deal with it.
Why does the user need to own the scheduler?
in contrast to threads - where the operating system owns the scheduler - there is no instance other than the user responsible for owning/managing the fiber-scheduler (this includes installing the default fiber-scheduler). In detail: you can describe a fiber as a thin wrapper around a coroutine - the fiber contains some additional data structures like an internal state (READY, WAITING, TERMINATED etc.) and a list of joining fibers (waiting on that fiber to terminate). A fiber-scheduler (e.g. a class derived from algorithm) is required to manage the fibers running in one thread - therefore the scheduler instance is stored in a thread_specific_ptr (this is an implementation detail), hence the fiber-scheduler instance is global for this thread, but in other threads you have other fiber-schedulers running. The fiber-scheduler itself has two queues (this is true for round_robin and round_robin_ws from the lib) - a queue for suspended fibers waiting on some event (to be signaled by a sync primitive, or sleeping for a certain time -> internal state is WAITING) and a queue holding ready-to-run coroutines (internal state is READY). The scheduler dequeues a fiber from the ready-queue and resumes it. If the user is not satisfied with the features provided by round_robin, he can implement his own fiber-scheduler (providing priority ordering, for instance) by deriving from algorithm and calling set_scheduling_algorithm().
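The two-queue structure Oliver describes can be sketched as below. All names here (toy_round_robin, fiber_ctx, awakened, pick_next) are illustrative stand-ins, not the library's actual algorithm interface; the coroutine switching itself is omitted.

```cpp
#include <deque>

enum class fiber_state { READY, WAITING, TERMINATED };

struct fiber_ctx {
    int id;
    fiber_state state;
};

// two queues, as described: fibers parked on a sync primitive or a timer
// sit in waiting_; runnable fibers sit in ready_ and are resumed FIFO
struct toy_round_robin {
    std::deque<fiber_ctx*> waiting_;
    std::deque<fiber_ctx*> ready_;

    // a sync primitive signals the fiber: it moves WAITING -> READY
    void awakened(fiber_ctx * f) {
        f->state = fiber_state::READY;
        ready_.push_back(f);
    }

    // the scheduler dequeues the next READY fiber to resume it
    fiber_ctx * pick_next() {
        if (ready_.empty()) return nullptr;
        fiber_ctx * f = ready_.front();
        ready_.pop_front();
        return f;
    }
};
```

A priority-ordering scheduler would differ only in pick_next() (e.g. a priority queue instead of FIFO), which is exactly the extension point Oliver describes.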
There is no time_duration type - just duration.
yes - I see you know what I mean ;)
duration can be added to any system_clock::time_point, steady_clock::time_point, or whatever time_point<Clock, Duration>
OK - but the interface of the lib already accepts arbitrary duration types from boost.chrono.

On 10/01/14 08:19, Oliver Kowalke wrote:
2014/1/9 Vicente J. Botet Escriba <vicente.botet@wanadoo.fr>
you can compare X and Y
You are surely right, but I don't see how? Could you show it?
implicit conversion to bool through operator bool()
Is there an example showing this on the repository?
no its is an implementation in side the framework, e.g. class scheduler which holds the user-defined or the default implementation of the scheduler-impl (default is round_robin)
Could you post here an example at the user level?
sorry - I don't understand your question.
I will come back to this point later.
boost.fiber already contains two implementations of fiber-schedulers (derived from algorithm) - class round_robin and class round_robin_ws.
an instance of class round_robin will be installed by the framework if the user does not apply its own scheduler (via set_scheduling_algorithm()).
how the fiber-schedulers are managed inside the framework is out of scope for the user (because it is an implementation detail) - the user does not have to deal with it.
I understand now why I was lost by the algorithm function description. The fact that you showed the algorithm interface made me think that this is an extension point of the library. You must remove this and just say that there are two schedulers.
Why do the user needs to own the scheduler?
in contrast to threads - where the operating system owns the scheduler - there is no instance other than the user responsible for owning/managing the fiber-scheduler (this includes installing the default fiber-scheduler)
in detail: you can describe a fiber as a thin wrapper around a coroutine - the fiber contains some additional data structures like an internal state (READY, WAITING, TERMINATED etc.) and a list of joining fibers (waiting on that fiber to terminate).
A fiber-scheduler (e.g. a class derived from algorithm) is required to manage the fibers running in one thread - therefore the scheduler instance is stored in a thread_specific_ptr (this is an implementation detail), hence the fiber-scheduler instance is global for this thread, but in other threads you have other fiber-schedulers running. The fiber-scheduler itself has two queues (this is true for round_robin and round_robin_ws from the lib) - a queue for suspended fibers waiting on some event (to be signaled by a sync primitive, or sleeping for a certain time -> internal state is WAITING) and a queue holding ready-to-run coroutines (internal state is READY). The scheduler dequeues a fiber from the ready-queue and resumes it.
If the user is not satisfied with the features provided by round_robin, he can implement his own fiber-scheduler (providing priority ordering, for instance) by deriving from algorithm and calling set_scheduling_algorithm().
Hrr. So it is an extension point, but for advanced users that know how to do it by looking at the code. You mustn't document the interface of the algorithm class even in this case.
There is no time_duration time. Just duration.
yes - I see you know what I mean ;)
duration can be added to any system_clock::time_point, steady_clock::time_point, or whatever time_point<Clock, Duration>
OK - but the interface of the lib already accepts arbitrary duration types from boost.chrono.
Don't forget that my point was related to time_points. Vicente

2014/1/10 Vicente J. Botet Escriba <vicente.botet@wanadoo.fr>
I understand now why I was lost by the algorithm function description. The fact that you showed the algorithm interface made me think that this is an extension point of the library. You must remove this and just say that there are two schedulers.
OK
Hrr. So it is an extension point, but for advanced users that know how to do it by looking at the code. You mustn't document the interface of the algorithm class even in this case.
I should remove the description of algorithm from the docs?
Don't forget that my point was related to time_points.
in the case of time_points it is a little bit complicated. algorithm, the fiber-schedulers and the sync primitives use steady_clock::time_point. I don't see how I could make this flexible so that it would work with all kinds of clocks from boost.chrono. The only possibility would be to make the member functions and the classes (for instance condition_variable) templates (with the clock type as a template parameter), but this would make the complete code templated and uncomfortable. I thought that using one clock (steady_clock being preferable) is OK.
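One way to keep the internals on steady_clock without templating the whole library, following the libc++ pattern Vicente pointed at, is a thin conversion shim at the API boundary. This is a sketch under that assumption (the name to_steady is hypothetical, not the library's API):

```cpp
#include <chrono>

using namespace std::chrono;

// only this shim is templated on the caller's clock; everything behind it
// (scheduler, sync primitives) keeps working with steady_clock::time_point
template <class Clock, class Duration>
steady_clock::time_point to_steady(const time_point<Clock, Duration> & t) {
    // measure both clocks "now" and carry the remaining delta over,
    // rounding up so the deadline is never shortened
    return steady_clock::now() + ceil<steady_clock::duration>(t - Clock::now());
}
```

A wait_until(tp) overload for arbitrary clocks would just forward wait_until(to_steady(tp)) to the non-template steady_clock implementation, so only the public entry points are templated.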

On 10/01/14 08:52, Oliver Kowalke wrote:
2014/1/10 Vicente J. Botet Escriba <vicente.botet@wanadoo.fr>
Don't forget that my point was related to time_points.
in the case of time_points it is a little bit complicated. algorithm, the fiber-schedulers and the sync primitives use steady_clock::time_point. I don't see how I could make this flexible so that it would work with all kinds of clocks from boost.chrono. The only possibility would be to make the member functions and the classes (for instance condition_variable) templates (with the clock type as a template parameter), but this would make the complete code templated and uncomfortable.
maybe it is not simple, but it is possible.
I thought that using one clock (steady_clock would be preferable) is OK.
Well, it is your library, so you decide. Best, Vicente

On Fri, Jan 10, 2014 at 2:19 AM, Oliver Kowalke <oliver.kowalke@gmail.com> wrote:
2014/1/9 Vicente J. Botet Escriba <vicente.botet@wanadoo.fr>
you can compare X and Y
You are surely right, but I don't see how? Could you show it?
implicit conversion to bool through operator bool()
Vicente, the bad case is when class X and class Y both provide operator bool. A coder inadvertently writes: if (myXinstance == myYinstance) ... (only, of course, with names from the problem domain rather than names that emphasize their respective classes). This should produce a compile error; you don't want X and Y to be comparable; the comparison is meaningless. Yet the compiler accepts it, converting each of myXinstance and myYinstance to bool and then comparing those bool values. The code runs. The product ships. Then some customer gets badly whacked...
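Nat's accident compiles exactly as described with a plain (implicit) operator bool; marking the operator explicit (the C++11 fix, which the safe-bool idiom emulates in C++03) turns the meaningless comparison into a compile error while keeping if-tests working. A small self-contained sketch:

```cpp
struct X { operator bool() const { return true; } };               // implicit
struct Y { operator bool() const { return false; } };              // implicit
struct SafeX { explicit operator bool() const { return true; } };  // C++11 fix

// With implicit conversion, (x == y) compiles: both sides convert to
// bool and the bools are compared - almost certainly not what was meant.
// With SafeX, `if (sx)` still works (explicit conversion is allowed in
// boolean contexts), but `sx == sx` would fail to compile:
// no implicit conversion to bool is available for operator==.
```

BOOST_EXPLICIT_OPERATOR_BOOL, mentioned later in the thread, expands to the explicit operator on C++11 compilers and to a safe-bool emulation on C++03 ones.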

On Jan 11, 2014, at 11:01 AM, Nat Goodspeed <nat@lindenlab.com> wrote:
On Fri, Jan 10, 2014 at 2:19 AM, Oliver Kowalke <oliver.kowalke@gmail.com> wrote:
2014/1/9 Vicente J. Botet Escriba <vicente.botet@wanadoo.fr>
you can compare X and Y
You are surely right, but I don't see how? Could you show it?
implicit conversion to bool through operator bool()
Vicente, the bad case is when class X and class Y provide operator bool. A coder inadvertently writes:
if (myXinstance == myYinstance) ...
(only of course with names in the problem domain rather than names that emphasize their respective classes). This should produce a compile error; you don't want X and Y to be comparable; the comparison is meaningless. Yet the compiler accepts it, converting each of myXinstance and myYinstance to bool and then comparing those bool values. The code runs. The product ships. Then some customer gets badly whacked...
That isn't an issue when the conversion operator is explicit, which Vicente suggested.

On Jan 11, 2014, at 11:25 AM, Oliver Kowalke <oliver.kowalke@gmail.com> wrote:
2014/1/11 Rob Stewart <robertstewart@comcast.net>
That isn't an issue when the conversion operator is explicit, which Vicente suggested.
many other boost libraries use the safe_bool idiom - I believe it's not bad to follow this 'common' coding practice
Those libraries were written when there was no better choice, so they are not good examples for a new library. ___ Rob (Sent from my portable computation engine)

2014/1/11 Oliver Kowalke <oliver.kowalke@gmail.com>
2014/1/11 Rob Stewart <robertstewart@comcast.net>
That isn't an issue when the conversion operator is explicit, which Vicente suggested.
many other boost libraries use the safe_bool idiom - I believe it's not bad to follow this 'common' coding practice
I'll try to stop the 'explicit bool operator problem' discussion by this link: http://www.boost.org/doc/libs/1_55_0/libs/utility/doc/html/explicit_operator... Just use the BOOST_EXPLICIT_OPERATOR_BOOL macro and let the library maintainer cope with explicit-bool-operator/safe_bool problem :) -- Best regards, Antony Polukhin

2014/1/13 Antony Polukhin <antoshkka@gmail.com>
2014/1/11 Oliver Kowalke <oliver.kowalke@gmail.com>
2014/1/11 Rob Stewart <robertstewart@comcast.net>
That isn't an issue when the conversion operator is explicit, which Vicente suggested.
many other boost libraries use the safe_bool idiom - I believe it's not bad to follow this 'common' coding practice
I'll try to stop the 'explicit bool operator problem' discussion by this link:
http://www.boost.org/doc/libs/1_55_0/libs/utility/doc/html/explicit_operator...
Just use the BOOST_EXPLICIT_OPERATOR_BOOL macro and let the library maintainer cope with explicit-bool-operator/safe_bool problem :)
that's fine - thank you, I'll change the code accordingly.

On Sat, Jan 11, 2014 at 11:18 AM, Rob Stewart <robertstewart@comcast.net> wrote:
On Jan 11, 2014, at 11:01 AM, Nat Goodspeed <nat@lindenlab.com> wrote:
the bad case is when class X and class Y provide operator bool. A coder inadvertently writes:
if (myXinstance == myYinstance) ...
That isn't an issue when the conversion operator is explicit, which Vicente suggested.
http://permalink.gmane.org/gmane.comp.lib.boost.devel/248167

On Jan 11, 2014, at 11:33 AM, Nat Goodspeed <nat@lindenlab.com> wrote:
On Sat, Jan 11, 2014 at 11:18 AM, Rob Stewart <robertstewart@comcast.net> wrote:
On Jan 11, 2014, at 11:01 AM, Nat Goodspeed <nat@lindenlab.com> wrote:
the bad case is when class X and class Y provide operator bool. A coder inadvertently writes:
if (myXinstance == myYinstance) ...
That isn't an issue when the conversion operator is explicit, which Vicente suggested.
http://permalink.gmane.org/gmane.comp.lib.boost.devel/248167
I understand that Oliver was trying for C++03 compatibility, but Vicente suggested an explicit conversion operator, and Oliver apparently didn't understand that. Furthermore, I think Vicente suggested conditional compilation to use the C++11 feature when available. ___ Rob (Sent from my portable computation engine)

Le 08/01/14 23:48, Vicente J. Botet Escriba a écrit :
Le 06/01/14 14:07, Nat Goodspeed a écrit :
Hi all,
The review of Boost.Fiber by Oliver Kowalke begins today, Monday January 6th, and closes Wednesday January 15th.
- What is your evaluation of the design?
More questions/remarks: The algorithm class should be called scheduler. Either its definition is hidden from the user, or you don't use the detail namespace and document all the needed types. Please replace _ws by work_stealing. Can the user define a specific scheduler that provides the work stealing? Could you show an example of a specific algorithm? How portable is the priority if it is specific to the scheduler? I'm sure I'm missing the role of the scheduler. What is the purpose of the virtual functions? I have the impression that it implements a lot of things, none of them related to scheduling. It seems to me that it is the class calling the scheduler that actually schedules. What am I missing? I'll continue once I understand what the scheduler algorithm's purpose is. Best, Vicente

2014/1/9 Vicente J. Botet Escriba <vicente.botet@wanadoo.fr>
The algorithm class should be called scheduler.
or manager?
Either his definition is hidden to the user or you don't use detail and document all the needed types.
usually the user doesn't have to care, but it allows the user to implement his own scheduler
Please replace _ws by work_stealing.
OK
Can the user define a specific scheduler that provide the work stealing?
yes - round_robin_ws already does, and that class might serve as a blueprint
Could you show an example of a specific algorithm?
class round_robin in the library
How portable is the priority if it is specific of the scheduler?
sorry - I don't get it. Inside a thread, the semantics of a priority are the same for all running fibers (because only one scheduler deals with the priorities). If you refer to migrating a fiber between threads (e.g. between fiber-schedulers), the interpretation of priority depends on the semantics of priority in the fiber-scheduler the fiber was migrated to.
I'm sure I'm missing the role of the scheduler. What is the purpose of the virtual functions?
deriving/overriding in a custom scheduler
I have the impression that it implements a lot of things, none of them related to scheduling. It seems to me that it is the class calling the scheduler that actually schedules. What am I missing? I'll continue once I understand what the scheduler algorithm's purpose is.
please look at my previous email - it contains a rough explanation of schedulers

On 10/01/2014 20:26, Quoth Oliver Kowalke:
2014/1/9 Vicente J. Botet Escriba <vicente.botet@wanadoo.fr>
The algorithm class should be called scheduler.
or manager?
"Manager" is almost as bad as "algorithm" -- it's too generic and vague. Whenever you've described what "algorithm" actually does you've called it a scheduler, so isn't that the better name?

Please always state in your review whether you think the library should be accepted as a Boost library!
The library should be accepted, but some modifications must be applied (see below).
Additionally please consider giving feedback on the following general topics:
- What is your evaluation of the design?
The part that mimics the Boost.Thread interface is good. Missing functions and minor updates can easily be added even after the review. The scheduler does not look good. Class "algorithm" looks overcomplicated: too many methods, classes from the detail namespace are used in function signatures, and some methods appear to duplicate each other (yield, wait). I'd recommend hiding "algorithm" in the detail namespace for the nearest releases, refactoring it, and only after that showing it to the user. Maybe an additional mini-review will be required for algorithm.
- What is your evaluation of the implementation?
I'm slightly confused by the fact that fiber_base uses many dynamic memory allocations (it contains std::vector, std::map) and virtual functions. There must be some way to optimize that away, otherwise fibers may be much slower than threads. Heap allocations also exist in the mutex class. I dislike the `waiting_.push_back( n);` inside the spinlocked sections. Some intrusive container must be used instead of std::deque. There is copy-pasted code in mutex.cpp. `try_lock` and `lock` member functions have very different implementations; that's suspicious. Back to the fiber_base: there are too many atomics and locks in it. As I understand, the default use-case does not include thread migrations, so there is no need to put all the synchronization inside fiber_base. round_robin_ws and the mutexes work with multithreading; ideologically that is the more correct place for synchronization. Let fiber_base be thread-unsafe, and let schedulers and mutexes take care of serializing access to fiber_base. Otherwise single-threaded users will lose performance.
- What is your evaluation of the documentation?
Please add "Examples" section. Paul Bristow already mentioned a QuickBook's ability to import source files using [@../../example/my_example.cpp] syntax.
- What is your evaluation of the potential usefulness of the library?
I have a few complicated libraries that will benefit from fibers. However short real-life usecase examples will be good to get the idea of fibers to new library users.
- Did you try to use the library? With what compiler? Did you have any problems?
There were no problems with GCC-4.7 and Clang. Many warnings were raised during compilation, but most of the warnings belong to the DateTime library.
- How much effort did you put into your evaluation? A glance? A quick reading? In-depth study?
I've tried a few examples and made a bunch of experiments with asio. Spent a few hours looking through the source code.
- Are you knowledgeable about the problem domain?
Not a pro with fibers, but have good experience with threads. -- Best regards, Antony Polukhin

2014/1/10 Antony Polukhin <antoshkka@gmail.com>
Scheduler does not look good. Class "algorithm" looks overcomplicated: too many methods, classes from detail namespace are used in function signatures, looks like some methods duplicate each other (yield, wait).
which one should be removed and why? algorithm::wait() and algorithm::yield() have different effects on the fiber. algorithm::wait() sets the internal state of a fiber to WAITING, suspends it, and stores it in an internal queue (waiting-queue); the fiber must be signaled to be ready in order to be moved from the waiting-queue to the ready-queue so it can be resumed later. In contrast, algorithm::yield() suspends the fiber, leaves the internal state READY, and appends it at the end of the ready-queue, so it does not have to be signaled, etc.
I'm slightly confused by the fact that fiber_base uses many dynamic memory allocations (it contains std::vector, std::map) and virtual functions. There must be some way to optimize that away, otherwise fibers may be much slower than threads.
hmm - Boost.Thread uses STL containers too (for instance thread_data_base). The fiber must hold some data by itself, for instance a list of fibers waiting on it to terminate (join); otherwise fiber migration between threads would be hard to implement, if not impossible.
Heap allocations also exist in the mutex class. I dislike the `waiting_.push_back( n);` inside the spinlocked sections. Some intrusive container must be used instead of std::deque.
OK - the kind of container can be optimized, but the mutex must know which fibers are waiting to lock it; otherwise fiber migration between threads would be hard to implement, if not impossible.
There is a copy-pasted code in mutex.cpp. `try_lock` and `lock` member functions have very different implementations, that's suspicious.
why is it suspicious? mutex::try_lock() will never suspend the fiber if the mutex is already locked, while mutex::lock() will.
Back to the fiber_base: there are too many atomics and locks in it. As I understand, the default use-case does not include thread migrations, so there is no need to put all the synchronization inside fiber_base. round_robin_ws and the mutexes work with multithreading; ideologically that is the more correct place for synchronization. Let fiber_base be thread-unsafe, and let schedulers and mutexes take care of serializing access to fiber_base. Otherwise single-threaded users will lose performance.
the only way to do this would be that every fiber_base holds a pointer to its fiber-scheduler (algorithm *). In the case of fibers joining the current fiber (e.g. stored in the internal list of fiber_base), the fiber has to signal the scheduler of each joining fiber:
BOOST_FOREACH( fiber_base::ptr_t f, joining_fibers) {
    f->set_ready(); // fiber_base::set_ready() { scheduler->set_ready( this); }
}
this pattern must be applied to mutex and condition_variable too. If a fiber is migrated, its pointer to the scheduler must be swapped.

2014/1/10 Oliver Kowalke <oliver.kowalke@gmail.com>
2014/1/10 Antony Polukhin <antoshkka@gmail.com>
Scheduler does not look good. Class "algorithm" looks overcomplicated: too many methods, classes from detail namespace are used in function signatures, looks like some methods duplicate each other (yield, wait).
which one should be removed and why?
Do not remove schedulers. Just remove "algorithm" from docs and hide it in namespace detail.
algorithm::wait() and algorithm::yield() have different effects on the fiber
algorithm::wait() sets the internal state of a fiber to WAITING, suspends it, and stores it in an internal queue (waiting-queue); the fiber must be signaled to be ready in order to be moved from the waiting-queue to the ready-queue so it can be resumed later. In contrast, algorithm::yield() suspends the fiber, leaves the internal state READY, and appends it at the end of the ready-queue, so it does not have to be signaled, etc.
Oh, now I see.
I'm slightly confused by the fact that fiber_base uses many dynamic memory allocations (it contains std::vector, std::map) and virtual functions. There must be some way to optimize that away, otherwise fibers may be much slower than threads.
hmm - Boost.Thread uses STL containers too (for instance thread_data_base). The fiber must hold some data by itself, for instance a list of fibers waiting on it to terminate (join); otherwise fiber migration between threads would be hard to implement, if not impossible.
My concern is that fiber migration between threads is a rare case. So first of all, fibers must be optimized for single-threaded usage. This means that all the thread-migration mechanics must be put in the scheduler, leaving the fiber light and thread-unsafe. What if the round_robin scheduler were a thread-local variable (just like now, but without any thread sync), while for cases when fiber migration is required a global round_robin_ws scheduler were created for all the threads? That way we'd get high performance in single-threaded applications and hide all the thread migration and sync inside the thread-safe round_robin_ws scheduler.
Back to the fiber_base: there are too many atomics and locks in it. As I understand, the default use-case does not include thread migrations, so there is no need to put all the synchronization inside fiber_base. round_robin_ws and the mutexes work with multithreading; ideologically that is the more correct place for synchronization. Let fiber_base be thread-unsafe, and let schedulers and mutexes take care of serializing access to fiber_base. Otherwise single-threaded users will lose performance.
the only way to do this would be that every fiber_base holds a pointer to its fiber-scheduler (algorithm *). In the case of fibers joining the current fiber (e.g. stored in the internal list of fiber_base), the fiber has to signal the scheduler of each joining fiber:
BOOST_FOREACH( fiber_base::ptr_t f, joining_fibers) {
    f->set_ready(); // fiber_base::set_ready() { scheduler->set_ready( this); }
}
this pattern must be applied to mutex and condition_variable too. If a fiber is migrated, its pointer to the scheduler must be swapped.
Is that easy to implement? Does it scale well with the proposal of a thread-unsafe round_robin and a thread-safe round_robin_ws? -- Best regards, Antony Polukhin

-----Original Message----- From: Boost [mailto:boost-bounces@lists.boost.org] On Behalf Of Antony Polukhin Sent: Friday, January 10, 2014 10:02 AM To: boost@lists.boost.org List Subject: Re: [boost] Boost.Fiber review January 6-15
2014/1/10 Oliver Kowalke <oliver.kowalke@gmail.com>
2014/1/10 Antony Polukhin <antoshkka@gmail.com>
Scheduler does not look good. Class "algorithm" looks overcomplicated: too many methods, classes from detail namespace are used in function signatures, looks like some methods duplicate each other (yield, wait).
which one should be removed and why?
Do not remove schedulers. Just remove "algorithm" from docs and hide it in namespace detail.
And document the schedulers in a separate 'Implementation' section. Paul --- Paul A. Bristow, Prizet Farmhouse, Kendal LA8 8AB UK +44 1539 561830 07714330204 pbristow@hetp.u-net.com

On Jan 10, 2014, at 5:20 AM, Oliver Kowalke <oliver.kowalke@gmail.com> wrote:
2014/1/10 Paul A. Bristow <pbristow@hetp.u-net.com>
And document the schedulers in a separate 'Implementation' section.
OK
May I suggest "library extension" or "customization" instead? I think it's important to allow user-supplied schedulers - e.g. the complaint about the containers used by the default scheduler implementation. Such users could, if desired, supply a scheduler with containers more to their liking. (I must admit I was a bit floored by the remark about "slower than threads," since that completely disregards the cost of preemptive kernel context switching.)

2014/1/10 Antony Polukhin <antoshkka@gmail.com>
Do not remove schedulers. Just remove "algorithm" from docs and hide it in namespace detail.
OK
My concern is that fiber migration between threads is a rare case.
yes/maybe - I've introduced this in order to support work-stealing in thread-pools
So first of all, fibers must be optimized for single-threaded usage. This means that all the thread-migration mechanics must be put in the scheduler, leaving the fiber light and thread-unsafe.
that's already the reality. Suppose you want to migrate a fiber from thread A to thread B using round_robin_ws as the fiber-scheduler (each thread runs an instance of this class). Thread B accesses the fiber-scheduler of thread A by calling round_robin_ws::migrate_from(); this function selects a fiber (ready to run) from the ready-queue and adds it to its own internal ready-queue. The access to the ready-queue must be thread-safe in round_robin_ws. In contrast, class round_robin does not support fiber-stealing and thus does not protect its internal ready-queue against concurrent access from different threads.
What if the round_robin scheduler were a thread-local variable (just like now, but without any thread sync), while for cases when fiber migration is required a global round_robin_ws scheduler were created for all the threads? That way we'd get high performance in single-threaded applications and hide all the thread migration and sync inside the thread-safe round_robin_ws scheduler.
see above - the current implementation already separates a 'thread-safe' scheduler from an unsafe one. mutex and condition_variable must be thread-safe because they could be executed concurrently in different threads. The only class which could be optimized would be fiber_base itself; e.g. the access to the internal state must be thread-safe (currently done via an atomic). Of course this atomic is not required if the fiber runs in a fiber-scheduler which does not support fiber-migration. Therefore, as you already mentioned, it would be an optimization to move the management of the fiber's internal state into the scheduler itself (for instance an internal class schedulable). The scheduler determines whether accessing the internal state is thread-safe or not.
Is that easy to implement?
at the first look, yes
Does it scale well with proposal of thread unsafe round_robin and thread safe round_robin_ws?
it doesn't matter if you update the state in the fiber_base or in the scheduler - you have the same number of function calls. The benefit is that you don't update atomics in the thread-unsafe round_robin class.

On Fri, Jan 10, 2014 at 1:50 AM, Antony Polukhin <antoshkka@gmail.com> wrote:
I'm slightly confused by the fact that fiber_base uses many dynamic memory allocations (it contains std::vector, std::map) and virtual functions. There must be some way to optimize that away, otherwise fibers may be much slower than threads.
Heap allocations also exist in the mutex class. I dislike the `waiting_.push_back( n);` inside the spinlocked sections. Some intrusive container must be used instead of std::deque.
That's a good point. Since a fiber can only be in one waiting list at a time, an intrusive list node can be placed right into the fiber_base object. Alternatively, a node object can be allocated right on the stack; it won't get destructed, as the fiber suspends itself right after putting itself into a wait queue (that's the way the Linux kernel does it). The only thing is, I'm not sure if Boost.Intrusive supports such usage.

2014/1/10 Eugene Yakubovich <eyakubovich@gmail.com>
That's a good point. Since a fiber can only be in one waiting list at a time, an intrusive list node can be placed right into the fiber_base object. Alternatively, a node object can be allocated right on the stack. It won't get destructed as the fiber suspends itself right after putting itself into a wait queue (that's the way Linux kernel does it). The only thing is I'm not sure if boost::intrusive supports such usage.
I hadn't noticed Boost.Intrusive until now - I think it should be possible to add the required intrusive interface to fiber_base (it already has support for intrusive_ptr). I'll try to implement this.

On Fri, Jan 10, 2014 at 2:50 AM, Antony Polukhin <antoshkka@gmail.com> wrote:
Back to the fiber_base: there are too many atomics and locks in it. As I understand, the default use-case does not include thread migrations, so there is no need to put all the synchronization inside fiber_base. round_robin_ws and the mutexes work with multithreading; ideologically that is the more correct place for synchronization. Let fiber_base be thread-unsafe, and let schedulers and mutexes take care of serializing access to fiber_base. Otherwise single-threaded users will lose performance.
Niall Douglas said:
I definitely need the ability to signal a fibre future belonging to thread A from some arbitrary thread B.
In other words, the case in which you migrate fibers from one thread to another is not the only case in which fibers must be thread-safe. If you have even one additional thread (A and B), and some fiber AF on thread A makes a request of thread B that might later cause fiber AF to change state, all fiber state changes must be thread-safe. An application running many fibers on exactly one thread (the process's original thread) is an interesting case, and one worth optimizing. But I'm not sure how the library could safely know that. If you depend on the coder to set a library parameter, some other coder will later come along and add a new thread without even knowing about the fiber library initialization. If there were some way for the library to count the threads in the process, a new thread might be launched just after the fiber library has made its decision. Either way, the complaint would be: "This library is too buggy to use!" Better to be thread-safe by default.

On 11/01/2014 17:19, Nat Goodspeed wrote:
In other words, the case in which you migrate fibers from one thread to another is not the only case in which fibers must be thread-safe. If you have even one additional thread (A and B), and some fiber AF on thread A makes a request of thread B that might later cause fiber AF to change state, all fiber state changes must be thread-safe.
I would think it almost certain that a real-world program will have more threads. Given that a blocking system call is poisonous to the throughput of all the fibres on a thread (especially if a work-stealing scheduler isn't running), I would expect most programs to delegate blocking calls to a 'real' thread pool and use a future to deliver the result. You need at least one thread-safe way to deliver data to a fibre, and I suspect that in practice both 'enqueue message' and 'complete future' are needed for convenience. If you want just one thing, then the equivalent of a Haskell MVar is probably the most effective.

2014/1/11 james<james@mansionfamily.plus.com>
Given that a blocking system call is poisonous to throughput of all the fibres on a thread
I would try to prevent blocking system calls (at least set the NONBLOCK option) and use Boost.Asio instead (if possible).
On 11/01/2014 18:51, Oliver Kowalke wrote:
I would try to prevent blocking system calls (at least set the NONBLOCK option) and use boost.asio instead (if possible).
That's not in any realistic way feasible. It might do for home-grown code that works on sockets directly, but you will be out of luck with almost any third-party database library or anything else that wraps up communication or persistence in any way. It also impacts code that uses libraries that have data structures protected by locks that might be held for extended periods sometimes. Most of these will be more important to a project than fibres, or indeed boost.

2014/1/12 james <james@mansionfamily.plus.com>
That's not in any realistic way feasible. It might do for home-grown code that works on sockets directly, but you will be out of luck with almost any third party database library or anything else that wraps up communication or persistence in any way. It also impacts code that uses libraries that have data structures protected by locks that might be held for extended periods sometimes. Most of these will be more important to a project than fibres, or indeed boost.
the fiber lib is not a one-size-fits-all tool

Le 06/01/14 14:07, Nat Goodspeed a écrit :
Hi all,
The review of Boost.Fiber by Oliver Kowalke begins today, Monday January 6th, and closes Wednesday January 15th.
More questions about stealing. What would be the advantages of using work-stealing at the fiber level instead of at the task level? I wonder if the steal and migrate functions shouldn't be an internal detail of the library, and whether the library should provide a fiber_pool. I'm wondering also if the algorithm shouldn't be replaced by an enum. What do you think? Vicente

Le 11/01/14 18:12, Vicente J. Botet Escriba a écrit :
Le 06/01/14 14:07, Nat Goodspeed a écrit :
Hi all,
The review of Boost.Fiber by Oliver Kowalke begins today, Monday January 6th, and closes Wednesday January 15th.
More question about stealing.
What would be the advantages of using work-stealing at the fiber level instead of using it at the task level?
I wonder if the steal and migrate functions shouldn't be an internal detail of the library, and whether the library should provide a fiber_pool. I'm wondering also if the algorithm shouldn't be replaced by an enum.
What do you think?
BTW what is the state of a fiber after a steal_from() operation? Vicente

2014/1/11 Vicente J. Botet Escriba <vicente.botet@wanadoo.fr>
BTW what is the state of a fiber after a steal_from() operation?
because this is a feature of a certain fiber-scheduler, it depends on its implementation - at least it is never a fiber which is currently running (state RUNNING). In the context of scheduler round_robin_ws, only fibers in the ready-queue can be stolen.

2014/1/11 Vicente J. Botet Escriba <vicente.botet@wanadoo.fr>
What would be the advantages of using work-stealing at the fiber level instead of using it at the task level?
it is very simple because you migrate a 'first-class' object, i.e. the fiber already is like a continuation.
I wonder if the steal and migrate functions shouldn't be an internal detail of the library, and whether the library should provide a fiber_pool.
fiber-stealing is not required in all cases and it has to be provided by the fiber-scheduler hence it has to be part of the scheduler.
I'm wondering also if the algorithm shouldn't be replaced by an enum.
sorry - I don't get it.

Le 11/01/14 19:45, Oliver Kowalke a écrit :
2014/1/11 Vicente J. Botet Escriba <vicente.botet@wanadoo.fr>
What would be the advantages of using work-stealing at the fiber level instead of using it at the task level?
it is very simple because you migrate a 'first-class' object, i.e. the fiber already is like a continuation.
yes, but what are the advantages? Does it perform better? Is it easier to write them?
I wonder if the steal and migrate functions shouldn't be an internal detail of the library, and whether the library should provide a fiber_pool.
fiber-stealing is not required in all cases and it has to be provided by the fiber-scheduler hence it has to be part of the scheduler.
What is the cost of a scheduler supporting stealing with respect to one that doesn't support it? The performance measures should show this too.
I'm wondering also if the algorithm shouldn't be replaced by an enum.
sorry - I don't get it.
I mean that if the algorithm interface is not used by the user, it is enough to have an enum to distinguish between the several possible scheduler algorithms.

On Saturday, January 11, 2014 21:48:23 Vicente J. Botet Escriba wrote:
Le 11/01/14 19:45, Oliver Kowalke a écrit :
2014/1/11 Vicente J. Botet Escriba <vicente.botet@wanadoo.fr>
What would be the advantages of using work-stealing at the fiber level instead of using it at the task level?
it is very simple because you migrate a 'first-class' object, i.e. the fiber already is like a continuation.
yes, but what are the advantages? Does it perform better? Is it easier to write them?
I wonder if the steal and migrate functions shouldn't be an internal detail of the library, and whether the library should provide a fiber_pool.
fiber-stealing is not required in all cases and it has to be provided by the fiber-scheduler hence it has to be part of the scheduler.
What is the cost of a scheduler supporting stealing with respect to one that doesn't support it? The performance measures should show this too.
This question cannot be easily answered, as it is purely application-specific. Non-work-stealing schedulers incur less overhead to schedule the different tasks, while work-stealing schedulers are able to mitigate certain load imbalances, i.e. different execution times of the running tasks.
I'm wondering also if the algorithm shouldn't be replaced by an enum.
sorry - I don't get it.
I mean that if the algorithm interface is not used by the user, it is enough to have an enum to distinguish between several possible scheduler algorithms.

2014/1/11 Vicente J. Botet Escriba <vicente.botet@wanadoo.fr>
Le 11/01/14 19:45, Oliver Kowalke a écrit :
2014/1/11 Vicente J. Botet Escriba <vicente.botet@wanadoo.fr>
What would be the advantages of using work-stealing at the fiber level
instead of using it at the task level?
it is very simple because you migrate a 'first-class' object, i.e. the fiber already is like a continuation.
yes, but what are the advantages? Does it perform better? Is it easier to write them?
because creating dependent tasks would not block your thread-pool. Suppose you have a thread-pool of M threads and you create (without fiber support) many tasks N. Some of the tasks create other tasks, executed in the pool, and wait on the results. If you have enough tasks (N >> M), all the worker-threads of the pool will be blocked. Something like:
void tsk() {
    ...
    for ( int i = 0; i < X; ++i) {
        ...
        packaged_task<> p( some_other_tsk);
        future<> f = p.get_future();
        spawn( p);
        f.get(); // blocks worker-thread
        ...
    }
    ...
}
With fibers the code above (using packaged_task<> and future<> from Boost.Fiber) does not block the worker-thread.

Le 12/01/14 19:39, Oliver Kowalke a écrit :
2014/1/11 Vicente J. Botet Escriba <vicente.botet@wanadoo.fr>
Le 11/01/14 19:45, Oliver Kowalke a écrit :
2014/1/11 Vicente J. Botet Escriba <vicente.botet@wanadoo.fr>
What would be the advantages of using work-stealing at the fiber level
instead of using it at the task level?
it is very simple because you migrate a 'first-class' object, i.e. the fiber already is like a continuation.
yes, but what are the advantages? Does it perform better? Is it easier to write them?
because creating dependent tasks would not block your thread-pool. Suppose you have a thread-pool of M threads and you create (without fiber support) many tasks N. Some of the tasks create other tasks, executed in the pool, and wait on the results. If you have enough tasks (N >> M), all the worker-threads of the pool will be blocked.
something like:
void tsk() { ... for( int i = 0; i<X;++i) { ... packaged_task<> p(some_other_tsk); future<> f = p.get_future(); spawn( p); f.get(); // blocks worker-thread ... } ... }
With fibers the code above (using packaged_task<> and future<> from boost.fiber) does not block the worker-thread.
This is the kind of information, motivation and examples the user needs in the documentation ;-) Maybe you should add how it could be done without fibers so that the thread doesn't block (you remember, we added it together in your thread_pool library more than 3 years ago). Vicente

Le 06/01/14 14:07, Nat Goodspeed a écrit :
Hi all,
The review of Boost.Fiber by Oliver Kowalke begins today, Monday January 6th, and closes Wednesday January 15th.
Why does Boost.Fiber need to use Boost.Coroutine instead of using Boost.Context directly? It seems to me that the implementation would be more efficient if it used Boost.Context directly, as a fiber is not a coroutine, isn't it? Vicente

2014/1/11 Vicente J. Botet Escriba <vicente.botet@wanadoo.fr>
Why does Boost.Fiber need to use Boost.Coroutine instead of using Boost.Context directly?
re-using code (which is already tested, etc.)
It seems to me that the implementation would be more efficient if it used Boost.Context directly, as a fiber is not a coroutine, isn't it?
not really, because you would have to re-implement all the machinery Boost.Coroutine already provides

Le 11/01/14 19:48, Oliver Kowalke a écrit :
2014/1/11 Vicente J. Botet Escriba <vicente.botet@wanadoo.fr>
Why does Boost.Fiber need to use Boost.Coroutine instead of using Boost.Context directly?
re-using code (which is already tested, etc.)
Didn't the previous version of Boost.Fiber use Boost.Context directly?
It seems to me that the implementation would be more efficient if it used Boost.Context directly, as a fiber is not a coroutine, isn't it?
not really, because you would have to re-implement all the machinery Boost.Coroutine already provides
I have not looked directly at the current implementation, so I cannot argue this point, but IIRC Boost.Context was born as the minimal interface that allowed building coroutines, fibers, ... on top of it. Anyway, this is an implementation detail and only performance figures could guide the decision. Best, Vicente

On Sat, Jan 11, 2014 at 12:33 PM, Vicente J. Botet Escriba <vicente.botet@wanadoo.fr> wrote:
Why does Boost.Fiber need to use Boost.Coroutine instead of using Boost.Context directly? It seems to me that the implementation would be more efficient if it used Boost.Context directly, as a fiber is not a coroutine, isn't it?
Correct, a fiber is not a coroutine. Oliver is also bringing a proposal to the ISO C++ concurrency study group to introduce coroutines in the standard. Interestingly, he is not bringing a context-library proposal: the lowest-level standard API he is proposing is the coroutine API. But is the coroutine API low-level enough, and general enough, to serve as a foundation for higher-level abstractions such as fibers? You might regard the present fiber implementation as a proof-of-concept. Oliver asserts that using the Coroutine API rather than directly engaging the Context API has only trivial effect on performance.

Le 11/01/14 19:50, Nat Goodspeed a écrit :
On Sat, Jan 11, 2014 at 12:33 PM, Vicente J. Botet Escriba <vicente.botet@wanadoo.fr> wrote:
Why does Boost.Fiber need to use Boost.Coroutine instead of using Boost.Context directly? It seems to me that the implementation would be more efficient if it used Boost.Context directly, as a fiber is not a coroutine, isn't it? Correct, a fiber is not a coroutine.
Oliver is also bringing a proposal to the ISO C++ concurrency study group to introduce coroutines in the standard. Interestingly, he is not bringing a context-library proposal: the lowest-level standard API he is proposing is the coroutine API. But is the coroutine API low-level enough, and general enough, to serve as a foundation for higher-level abstractions such as fibers? You might regard the present fiber implementation as a proof-of-concept.
Oliver asserts that using the Coroutine API rather than directly engaging the Context API has only trivial effect on performance.
I tend not to take performance assertions on faith ("sur parole"). The previous version of Boost.Fiber, IIRC, used Boost.Context. Maybe it is worth comparing their performance :) Vicente

Le 06/01/14 14:07, Nat Goodspeed a écrit :
Hi all,
The review of Boost.Fiber by Oliver Kowalke begins today, Monday January 6th, and closes Wednesday January 15th.
----------------------------------------------------- Boost.Thread interruption feature adds some overhead to all the synchronization functions that are interruption_points. It is too late for Boost.Thread, but what do you think about having a simple fiber class and an interruptible::fiber class?
Vicente

On Sat, Jan 11, 2014 at 12:50 PM, Vicente J. Botet Escriba <vicente.botet@wanadoo.fr> wrote:
Boost.Thread interruption feature adds some overhead to all the synchronization functions that are interruption_points. It is too late for Boost.Thread, but what do you think about having a simple fiber class and an interruptible::fiber class?
That's particularly interesting in light of recent remarks about the cost of thread-safe fiber state management. Going out on a limb...

Could we identify an underlying interruption-support operation and tease it out into a policy class? Maybe "policy" is the wrong word here: given the number of Fiber classes that would engage it, adding a template parameter to each of them -- and requiring them all to be identical for a given process -- feels like the wrong API. As with the scheduling algorithm, what about replacing a library-default object? Is the interruption-support overhead sufficiently large as to dwarf the pointer indirection that could bypass it?

Similarly for fiber state management thread safety: could we identify a small set of low-level internal synchronization operations and consolidate them into a policy class? Maybe in that case it actually could be a template parameter to either the scheduler or the fiber class itself. I'd still be interested in the possibility of a runtime decision; but given a policy template parameter, I assume it would be straightforward to provide a particular policy class that delegates to runtime.

Then again, too many parameters/options might make the library confusing to use. Just a thought.

Le 11/01/14 19:40, Nat Goodspeed a écrit :
On Sat, Jan 11, 2014 at 12:50 PM, Vicente J. Botet Escriba <vicente.botet@wanadoo.fr> wrote:
Boost.Thread interruption feature adds some overhead to all the synchronization functions that are interruption_points. It is too late for Boost.Thread, but what do you think about having a simple fiber class and an interruptible::fiber class? That's particularly interesting in light of recent remarks about the cost of thread-safe fiber state management.
Going out on a limb...
Could we identify an underlying interruption-support operation and tease it out into a policy class? Maybe "policy" is the wrong word here: given the number of Fiber classes that would engage it, adding a template parameter to each of them -- and requiring them all to be identical for a given process -- feels like the wrong API. As with the scheduling algorithm, what about replacing a library-default object? Is the interruption-support overhead sufficiently large as to dwarf a pointer indirection that could bypass it?
The major cost is not in the fiber class but in the condition_variable::wait operation. Next follows the implementation found in CCIA (C++ Concurrency in Action):

void interruptible_wait(std::condition_variable& cv,
                        std::unique_lock<std::mutex>& lk)
{
    interruption_point();
    this_thread_interrupt_flag.set_condition_variable(cv);
    cv.wait(lk);
    this_thread_interrupt_flag.clear_condition_variable();
    interruption_point();
}

If a template parameter should be used, I would vote for a boolean. Whether the class fiber is parameterized or we have two classes, fiber and interruptible_fiber, could be discussed.
Similarly for fiber state management thread safety:
Could we identify a small set of low-level internal synchronization operations and consolidate them into a policy class? Maybe in that case it actually could be a template parameter to either the scheduler or the fiber class itself. I'd still be interested in the possibility of a runtime decision; but given a policy template parameter, I assume it would be straightforward to provide a particular policy class that delegates to runtime. It is clear that a fiber that can communicate only with fibers on the same thread would avoid any thread-synchronization problems and perform better. I'm sure there are applications that fall into this more restrictive design.
Here again we can have a template parameter to state whether the fiber is intra-thread or inter-thread.
Then again, too many parameters/options might make the library confusing to use. Just a thought.
Right. But for the time being we have just identified two parameters, which is not too many. Vicente

2014/1/11 Vicente J. Botet Escriba <vicente.botet@wanadoo.fr>
Boost.Thread interruption feature adds some overhead to all the synchronization functions that are interruption_points. It is too late for Boost.Thread, but what do you think about having a simple fiber class and an interruptible::fiber class?
boost.fiber already supports interruption (boost::fibers::fiber::interrupt()) - in contrast to boost.thread, it is simply a flag in an atomic variable.

Le 11/01/14 19:40, Oliver Kowalke a écrit :
2014/1/11 Vicente J. Botet Escriba <vicente.botet@wanadoo.fr>
Boost.Thread interruption feature adds some overhead to all the synchronization functions that are interruption_points. It is too late for Boost.Thread, but what do you think about having a simple fiber class and an interruptible::fiber class?
boost.fiber already supports interruption (boost::fibers::fiber::interrupt()) - in contrast to boost.thread, it is simply a flag in an atomic variable.
Do you mean that the cost of managing interruptible fibers is null? Don't forget that you need to manage the waiting condition variables so that you can interrupt the fiber, which has some cost independently of whether you use an atomic or a mutex. The same argument you are using to support several schedulers should apply here, as it is evident to me that the interruption management has a performance cost. Best, Vicente

Le 11/01/14 21:52, Vicente J. Botet Escriba a écrit :
Le 11/01/14 19:40, Oliver Kowalke a écrit :
2014/1/11 Vicente J. Botet Escriba <vicente.botet@wanadoo.fr>
Boost.Thread interruption feature adds some overhead to all the synchronization functions that are interruption_points. It is too late for Boost.Thread, but what do you think about having a simple fiber class and an interruptible::fiber class?
boost.fiber already supports interruption (boost::fibers::fiber::interrupt()) - in contrast to boost.thread, it is simply a flag in an atomic variable.
Do you mean that the cost of managing interruptible fibers is null? Don't forget that you need to manage the waiting condition variables so that you can interrupt the fiber, which has some cost independently of whether you use an atomic or a mutex.
Ah, I forgot. You need to access the fiber-specific storage of the fiber for each condition_variable::wait operation, which is far from being a free operation.
The same argument you are using to support several schedulers should apply here, as it is evident to me that the interruption management has a performance cost.
Best, Vicente

On Mon, Jan 6, 2014 at 8:07 AM, Nat Goodspeed <nat@lindenlab.com> wrote:
The review of Boost.Fiber by Oliver Kowalke begins today, Monday January 6th, and closes Wednesday January 15th.
---------------------------------------------------
Please always state in your review whether you think the library should be accepted as a Boost library!
I'm heartened by the level of interest I've seen so far. I hope those of you who have been participating in the Fiber library discussions will submit actual reviews by Wednesday. (Thank you, Niall, I acknowledge your review.) I'd like to make one more request. I've seen some questions and concerns raised. To help me properly collate results, let me quote from [1]: "If you identify problems along the way, please note if they are minor, serious, or showstoppers." I'm sorry, I should have stated that in my initial review announcement. It seems only fair to help Oliver prioritize. Carry on! [1] http://www.boost.org/community/reviews.html#Comments

Le 06/01/14 14:07, Nat Goodspeed a écrit :
Hi all,
The review of Boost.Fiber by Oliver Kowalke begins today, Monday January 6th, and closes Wednesday January 15th.
----------------------------------------------------- Hi, here it is my review.
Please always state in your review whether you think the library should be accepted as a Boost library! IMO the library needs some serious modifications before being included into Boost. So, for the time being, my vote is no, the library is not yet ready. I'm sure that Oliver can take care of most of the major points and that the library would be accepted after taking these points into account.
I would like to have more time to review the implementation, but as there are already serious concerns with respect to the design and documentation, I will do it if there is a new review.
Additionally please consider giving feedback on the following general topics:
- What is your evaluation of the design? The design is sound but has some details that must be fixed before acceptance:
* (Showstopper) The interface must at least follow the interface of the standard thread library (C++11), and if there are some limitations, they must be explicitly documented.
* (Serious) Any difference with respect to Boost.Thread must also be documented and the rationale explained.
* (Minor) priority and thread_affinity should be part of the fiber attributes, as well as the stack and allocator. This will help keep the interface similar to the Boost.Thread one.
* (Minor) The void thread_affinity( bool req) noexcept; could be named set_thread_affinity to follow the standard way.
* (Minor) I suggest hiding the algorithm functionality and introducing it once you have a fiber_pool proposal that uses work stealing. If you insist on providing it, I suggest:
  * (Minor) set_scheduling_algorithm should return the old scheduling algorithm: algorithm * set_scheduling_algorithm( algorithm *); and the function should be in the this_thread namespace.
* The migration interface:
  * (Serious) The steal_from and migrate_to functions could be grouped into an exchange function that would do the steal and the migration at once. This allows intrusive implementations that could avoid delete/new. If the algorithm doesn't support fiber migration, an exception could be thrown.
  * (Minor) An alternative could be to use an enum for the algorithm class and let the library deal with the scheduler internally.
* (Minor) The safe-bool idiom should be replaced by an explicit operator bool on C++11 compilers.
* (Showstopper) The time-related functions should not be limited to a specific clock.
* (Minor) fiber_group should be removed, as the design it is based on predates the new C++11 move-semantics feature. Maybe it could be replaced by one based on http://www.open-std.org/JTC1/SC22/WG21/docs/papers/2013/n3711.pdf
* (Serious) Element queue::value_pop(); must be added to the queues, and the interface should follow http://www.open-std.org/JTC1/SC22/WG21/docs/papers/2013/n3533.html
* (Minor) Barrier could include the completion function, as Boost.Thread and http://www.open-std.org/JTC1/SC22/WG21/docs/papers/2013/n3817.html do.
* (Serious) Interruptible and non-interruptible fibers must be separated.
* (Serious) Intra-thread fibers should be provided (fibers that synchronize only with fibers on the same thread).
* (Serious) future<>::then() and all the associated features must be added. http://www.open-std.org/JTC1/SC22/WG21/docs/papers/2013/n3784.pdf
- What is your evaluation of the implementation? I have taken a quick look at it. I would need much more time to make a reasonable evaluation, which I don't have :(
As others, I would like to see performance tests on the following variation points:
* Single/multi-threaded fibers
* Interruptible/non-interruptible fibers
* Stealing/no-stealing schedulers

- What is your evaluation of the documentation? (Serious) The documentation is minimal and should be improved.
* (Minor) The link to Boost.Coroutine fails, as does the link to Boost.Thread.
* (Serious) It must be clarified that exceptions thrown by a fiber function are caught by the fiber library and the program is terminated.
* (Minor) Some examples showing how the affinity can be changed by the owner of the thread and by the thread itself would help.
* (Minor) Please don't document the thread_affinity get/set functions together.
* (Serious) The documentation must clarify the scope of the scheduler (thread-specific).
* (Serious) The documentation must clarify how portable the priority is if it is specific to the scheduler.
* (Serious) You mustn't document the interface of the algorithm class that the user cannot use.
* (Showstopper) async() documentation must be added and be compatible with the C++ standard.
* (Serious) A section on how to install and how to run the tests and examples would help users who want to try the library a lot. It is not clear that the user must install fiber inside a Boost repository. It is not clear in the documentation that the library is not header-only and that the user needs to build it and link with it.
* (Serious) The boost/fiber/asio files are not documented. Are these a detail of the implementation?

- What is your evaluation of the potential usefulness of the library? Very useful, if the promised performance (C10k problem) is there. A performance benchmark must be added.
- Did you try to use the library? With what compiler? Did you have any problems?
(Serious) Yes, and a section on how to install the library and how to run the tests would have helped me. On MacOS with the following compilers in C++98 and C++11 modes: darwin-4.7.1, clang-3.1, clang-3.2, darwin-4.7.2, darwin-4.8.0, darwin-4.8.1. I've run the tests.
* auto_ptr should be replaced.
* I've got this compile error:

clang-darwin.compile.c++ ../../../bin.v2/libs/fiber/test/test_futures_mt.test/clang-darwin-3.1xl/debug/link-static/threading-multi/test_futures_mt.o
In file included from test_futures_mt.cpp:13:
In file included from ../../../boost/fiber/all.hpp:17:
../../../boost/fiber/bounded_queue.hpp:570:29: error: void function 'push' should not return a value [-Wreturn-type]
    if ( is_closed_() ) return queue_op_status::closed;
                        ^      ~~~~~~~~~~~~~~~~~~~~~~~
../../../boost/fiber/bounded_queue.hpp:580:9: error: void function 'push' should not return a value [-Wreturn-type]
        return queue_op_status::success;
        ^      ~~~~~~~~~~~~~~~~~~~~~~~~
2 errors generated.

* The tests test_then and test_wait_for don't compile.
- How much effort did you put into your evaluation? A glance? A quick reading? In-depth study?
In-depth study of the documentation.
- Are you knowledgeable about the problem domain?
Yes. Best, Vicente

On 06/01/2014 10:07 a.m., Nat Goodspeed wrote:
Hi all,
Please always state in your review whether you think the library should be accepted as a Boost library!
My vote is to REJECT the library in its current state.
Additionally please consider giving feedback on the following general topics:
- What is your evaluation of the design?
The design is that of the C++11 thread API, so I'll only focus on the divergence points:
- The lack of a variadic constructor for `fiber` and variadic arguments for `async` makes it difficult to use the library correctly (even in the presence of C++14 lambdas). The semantics of those calls is that of a deferred call, which is difficult to achieve otherwise (note that `bind` doesn't help here).
- The interface can only accept a specific clock's `time_point`. Correct use of Boost.Chrono is needed; the implementation can deal with a specific clock type internally.
- There is no support for deferred futures, which are incredibly useful for lazy evaluation.
- There are a number of minor issues with the interface (return types, parameters). These are easily fixable by looking at the standard.
- The safe-bool operator is a pointless divergence which only helps save a few keystrokes. I'm ok with it staying, but please prioritize the weak and missing points of the library first.

The overall impression is that the library leaves the boilerplate to users (bundling a deferred call, converting to a specific clock). Also, it's crucial to get the semantics of the `fiber` constructors right, which means being careful about certain details (like making sure they work with movable-only types), but those constructors are not there yet.
- What is your evaluation of the implementation?
I've only glanced at the implementation, and I have concerns about the quality of the code. For instance, the following pattern appears frequently and it's unsettling:

#ifndef BOOST_NO_RVALUE_REFERENCES
promise( promise && other)
/* bunch of code... */
#else
promise( BOOST_RV_REF( promise) other)
/* same code as above... */
#endif

For completeness, the correct use of Boost.Move is:

promise( BOOST_RV_REF( promise) other)
/* code, just once... BOOST_RV_REF(promise) will be promise&& if there is
   rvalue-refs support */

Unfortunately I do not have enough time now to look into it in more detail, but incorrect use of Boost.Chrono and Boost.Move is not a promising start.
- What is your evaluation of the documentation?
The reference documentation looks OK. Some points are missing, but I've already raised those and Oliver agreed to take care of them.
- What is your evaluation of the potential usefulness of the library?
The potential usefulness of this library is huge. It goes beyond simply being a utility library for Boost.Asio. Fibers are a great replacement for threads on two key points: the ability to create thousands (or millions) of them, and performance (lighter than a thread). It was hinted that performance is not a goal of this library, as it is merely intended to provide a way to synchronize/coordinate coroutines with Boost.Asio. If that's the case, I'd suggest moving the library to the `asio` namespace and leaving the `fiber` namespace open for a fiber library that targets a wider audience.
- Did you try to use the library? With what compiler? Did you have any problems?
I did not.
- How much effort did you put into your evaluation? A glance? A quick reading? In-depth study?
I've looked at the documentation, glanced over the implementation, and followed the debate on the mailing list.
- Are you knowledgeable about the problem domain?
I work in and with a fiber-based C++11 thread API implementation. Regards, -- Agustín K-ballo Bergé.- http://talesofcpp.fusionfenix.com

On Wed, Jan 15, 2014 at 8:56 PM, Agustín K-ballo Bergé <kaballo86@hotmail.com> wrote:
- The lack of a variadic constructor for `fiber` and variadic arguments for `async` makes it difficult to use the library correctly (even in the presence of C++14 lambdas). The semantic of those calls is that of a deferred call, which is difficult to achieve otherwise (note that `bind` doesn't help here).
The review report (which I am writing now!) does not depend on my understanding this point, but on rereading your mail I realized I do not yet understand it. In my (obviously incomplete) mental model, a hypothetical fiber constructor:

fiber f(some_callable, 3.14, "a string", 17);

would be completely equivalent to:

fiber f(bind(some_callable, 3.14, "a string", 17));

What am I missing? (If this is already well-explained elsewhere, I would appreciate a pointer as much as your own explanation.) Oliver may well understand your point already. But if you asked me to implement a variadic fiber constructor (and async() function), I would immediately forward to bind() inside each. Would that be a sufficient implementation? If not, why not?

A broader question to those who requested a variadic fiber constructor (and async()): is it sufficient to provide that support only when the compiler supports variadic templates, or are you asking for the whole ugly C++03 workaround as well?

On 21/01/2014 11:39 p.m., Nat Goodspeed wrote:
On Wed, Jan 15, 2014 at 8:56 PM, Agustín K-ballo Bergé <kaballo86@hotmail.com> wrote:
- The lack of a variadic constructor for `fiber` and variadic arguments for `async` makes it difficult to use the library correctly (even in the presence of C++14 lambdas). The semantic of those calls is that of a deferred call, which is difficult to achieve otherwise (note that `bind` doesn't help here).
The review report (which I am writing now!) does not depend on my understanding this point, but on rereading your mail I realized I do not yet understand it. In my (obviously incomplete) mental model, a hypothetical fiber constructor:
fiber f(some_callable, 3.14, "a string", 17);
would be completely equivalent to:
fiber f(bind(some_callable, 3.14, "a string", 17));
What am I missing? (If this is already well-explained elsewhere, I would appreciate a pointer as much as your own explanation.)
You are missing the support for movable-only types, and the unnecessary copies otherwise. And since you ask for a link, here is one to my own blog: http://talesofcpp.fusionfenix.com/post-14/true-story-moving-past-bind
Oliver may well understand your point already. But if you asked me to implement a variadic fiber constructor (and async() function), I would immediately forward to bind() inside each. Would that be a sufficient implementation? If not, why not?
Here is another link on the subject: http://stackoverflow.com/a/21066704/927034
A broader question to those who requested a variadic fiber constructor (and async()): is it sufficient to provide that support only when the compiler supports variadic templates, or are you asking for the whole ugly C++03 workaround as well?
C++03 support would be nice. Personally, I would be ok with only variadic support, as long as it works on the latest versions of the major compilers (MSVC is the only one where this presents a challenge). Regards, -- Agustín K-ballo Bergé.- http://talesofcpp.fusionfenix.com

Le 22/01/14 03:39, Nat Goodspeed a écrit :
On Wed, Jan 15, 2014 at 8:56 PM, Agustín K-ballo Bergé <kaballo86@hotmail.com> wrote:
- The lack of a variadic constructor for `fiber` and variadic arguments for `async` makes it difficult to use the library correctly (even in the presence of C++14 lambdas). The semantic of those calls is that of a deferred call, which is difficult to achieve otherwise (note that `bind` doesn't help here). The review report (which I am writing now!) does not depend on my understanding this point, but on rereading your mail I realized I do not yet understand it. In my (obviously incomplete) mental model, a hypothetical fiber constructor:
fiber f(some_callable, 3.14, "a string", 17);
would be completely equivalent to:
fiber f(bind(some_callable, 3.14, "a string", 17));
What am I missing? (If this is already well-explained elsewhere, I would appreciate a pointer as much as your own explanation.) boost::bind is not movable :-( (or is it in C++11?)
Oliver may well understand your point already. But if you asked me to implement a variadic fiber constructor (and async() function), I would immediately forward to bind() inside each. Would that be a sufficient implementation? If not, why not?
A broader question to those who requested a variadic fiber constructor (and async()): is it sufficient to provide that support only when the compiler supports variadic templates, or are you asking for the whole ugly C++03 workaround as well? From my side, these variadic versions must be provided for C++11 compilers at least. Any attempt to make the C++98 interface close to the C++11 one would be welcome.
Best, Vicente

On Wed, Jan 22, 2014 at 7:30 AM, Vicente J. Botet Escriba <vicente.botet@wanadoo.fr> wrote:
Le 22/01/14 03:39, Nat Goodspeed a écrit :
In my (obviously incomplete) mental model, a hypothetical fiber constructor:
fiber f(some_callable, 3.14, "a string", 17);
would be completely equivalent to:
fiber f(bind(some_callable, 3.14, "a string", 17));
What am I missing? (If this is already well-explained elsewhere, I would appreciate a pointer as much as your own explanation.)
boost::bind is not movable :( or is it in C++11?)
I haven't yet read Agustín's links -- but I will, thank you! Since movable-only callables are clearly an important use case, I would hope that if (let's say) std::bind is not yet itself movable, that's a transient situation which will soon be remedied.
if you asked me to implement a variadic fiber constructor (and async() function), I would immediately forward to bind() inside each. Would that be a sufficient implementation? If not, why not?
Again, the material to which Agustín points me may answer this question. But if using bind() is not an option even internally -- must Oliver (along with the author of every library that supports user callables!) reimplement bind() by hand, with support for movable types?
A broader question to those who requested a variadic fiber constructor (and async()): is it sufficient to provide that support only when the compiler supports variadic templates, or are you asking for the whole ugly C++03 workaround as well?
From my side, these variadic versions must be provided for C++11 compilers at least. Any attempt to make the C++98 interface close to the C++11 one would be welcome.
Agustín, too, says "C++03 support would be nice." Having myself implemented the preprocessor iteration for a C++03 API accepting callable-with-up-to-N-args, I know it's a bit of a pain. Moreover (a key point) anyone invoking the Fiber API who is constrained to C++03 features will not have movable-only types. In that scenario, fiber(bind(callable, arg1, arg2)) *is* completely equivalent to fiber(callable, arg1, arg2). So when discussing the requested variadic fiber constructor, I will tease apart the C++11 and C++03 cases. I feel pretty comfortable stating that the former is much more important than the latter.

Le 22/01/14 17:07, Nat Goodspeed a écrit :
On Wed, Jan 22, 2014 at 7:30 AM, Vicente J. Botet Escriba <vicente.botet@wanadoo.fr> wrote:
Le 22/01/14 03:39, Nat Goodspeed a écrit :
In my (obviously incomplete) mental model, a hypothetical fiber constructor:
fiber f(some_callable, 3.14, "a string", 17);
would be completely equivalent to:
fiber f(bind(some_callable, 3.14, "a string", 17));
What am I missing? (If this is already well-explained elsewhere, I would appreciate a pointer as much as your own explanation.) boost::bind is not movable :( or is it in C++11?) I haven't yet read Agustín's links -- but I will, thank you!
Since movable-only callables are clearly an important use case, I would hope that if (let's say) std::bind is not yet itself movable, that's a transient situation which will soon be remedied. I think that std::bind is movable.
if you asked me to implement a variadic fiber constructor (and async() function), I would immediately forward to bind() inside each. Would that be a sufficient implementation? If not, why not? Again, the material to which Agustín points me may answer this question. But if using bind() is not an option even internally -- must Oliver (along with the author of every library that supports user callables!) reimplement bind() by hand, with support for movable types?
A broader question to those who requested a variadic fiber constructor (and async()): is it sufficient to provide that support only when the compiler supports variadic templates, or are you asking for the whole ugly C++03 workaround as well? From my side this variadic versions must be provided for C++11 compilers at least. Any attempt to make the C++98 interface close to the C++11 would be welcome. Agustín, too, says "C++03 support would be nice." Me too. It would be nice, but not mandatory. In C++03, it is impossible to implement perfect forwarding.
Having myself implemented the preprocessor iteration for a C++03 API accepting callable-with-up-to-N-args, I know it's a bit of a pain. Moreover (a key point) anyone invoking the Fiber API who is constrained to C++03 features will not have movable-only types. In that scenario, fiber(bind(callable, arg1, arg2)) *is* completely equivalent to fiber(callable, arg1, arg2). I disagree. We have Boost.Move.
So when discussing the requested variadic fiber constructor, I will tease apart the C++11 and C++03 cases. I feel pretty comfortable stating that the former is much more important than the latter. Agreed, but at least the C++03 version must support movable types.
Best, Vicente

On 22/01/2014 03:54 p.m., Vicente J. Botet Escriba wrote:
Le 22/01/14 17:07, Nat Goodspeed a écrit :
On Wed, Jan 22, 2014 at 7:30 AM, Vicente J. Botet Escriba <vicente.botet@wanadoo.fr> wrote:
Le 22/01/14 03:39, Nat Goodspeed a écrit :
In my (obviously incomplete) mental model, a hypothetical fiber constructor:
fiber f(some_callable, 3.14, "a string", 17);
would be completely equivalent to:
fiber f(bind(some_callable, 3.14, "a string", 17));
What am I missing? (If this is already well-explained elsewhere, I would appreciate a pointer as much as your own explanation.) boost::bind is not movable :( or is it in C++11?) I haven't yet read Agustín's links -- but I will, thank you!
Since movable-only callables are clearly an important use case, I would hope that if (let's say) std::bind is not yet itself movable, that's a transient situation which will soon be remedied. I think that std::bind is movable.
Yes, the forwarding call wrapper returned from `std::bind` is movable. That doesn't change the fact that: - Using `bind` in the implementation provides different semantics than those required by the standard. - Using `bind` either in the implementation or in the user code means that arguments are not forwarded, thus movable-only types cannot be used with a target callable taking them by value, and using copyable types results in unnecessary copies. Regards, -- Agustín K-ballo Bergé.- http://talesofcpp.fusionfenix.com

2014/1/22 Vicente J. Botet Escriba <vicente.botet@wanadoo.fr>
So when discussing the requested variadic fiber constructor, I will
tease apart the C++11 and C++03 cases. I feel pretty comfortable stating that the former is much more important than the latter.
Agreed, but at least the C++03 version must support movable types.
does boost::thread support moveable types? if I look at the source code (C++03 equivalent to variadic arguments):

template <class F, class A1, class A2>
thread(F f, A1 a1, A2 a2):
    thread_info(make_thread_info(boost::bind(boost::type<void>(), f, a1, a2)))
{}

The arguments are captured by boost::bind() which (as far as I know) does not support moveable types as arguments (in the example a1 and a2) - or am I missing something?

Le 22/01/14 20:16, Oliver Kowalke a écrit :
2014/1/22 Vicente J. Botet Escriba <vicente.botet@wanadoo.fr>
So when discussing the requested variadic fiber constructor, I will
tease apart the C++11 and C++03 cases. I feel pretty comfortable stating that the former is much more important than the latter.
Agreed, but at least the C++03 version must support movable types.
does boost::thread support moveable types? if I look at the source code (C++03 equivalent to variadic arguments):
template <class F, class A1, class A2>
thread(F f, A1 a1, A2 a2):
    thread_info(make_thread_info(boost::bind(boost::type<void>(), f, a1, a2)))
{}
The arguments are captured by boost::bind() which (as far as I know) does not support moveable types as arguments (in the example a1 and a2) - or do I miss something?
Right. Boost.Thread provides only a variadic version for copyables and doesn't support movable types in C++03 for the variadic version, but provides a constructor from a callable, movable or copyable:

template <class F>
explicit thread(F f,
    typename disable_if_c<
        boost::thread_detail::is_convertible<F&, BOOST_THREAD_RV_REF(F)>::value,
        dummy*
    >::type = 0);

template <class F>
explicit thread(BOOST_THREAD_RV_REF(F) f,
    typename disable_if<is_same<typename decay<F>::type, thread>, dummy*>::type = 0);

As I said, implementing perfect forwarding is not possible in C++03 (see the Boost.Move documentation). Best, Vicente

2014/1/22 Vicente J. Botet Escriba <vicente.botet@wanadoo.fr>
Right. Boost.Thread provides only a variadic version for copyables and doesn't support movable types in C++03 for the variadic version, but provides a constructor from a callable movable or copyable.
template <class F>
explicit thread(F f,
    typename disable_if_c<
        boost::thread_detail::is_convertible<F&, BOOST_THREAD_RV_REF(F)>::value,
        dummy*
    >::type = 0);

template <class F>
explicit thread(BOOST_THREAD_RV_REF(F) f,
    typename disable_if<is_same<typename decay<F>::type, thread>, dummy*>::type = 0);
As I said, implementing perfect forwarding is not possible in C++03 (See Boost.Move documentation).
OK - you did refer to the thread-function in your previous comments. I was wondering how it would be possible to support moveable-only arguments for the ctor (especially with a mix of copyable and moveable-only arguments).

Le 22/01/14 20:53, Oliver Kowalke a écrit :
2014/1/22 Vicente J. Botet Escriba <vicente.botet@wanadoo.fr>
Right. Boost.Thread provides only a variadic version for copyables and doesn't support movable types in C++03 for the variadic version, but provides a constructor from a callable movable or copyable.
template <class F>
explicit thread(F f,
    typename disable_if_c<
        boost::thread_detail::is_convertible<F&, BOOST_THREAD_RV_REF(F)>::value,
        dummy*
    >::type = 0);

template <class F>
explicit thread(BOOST_THREAD_RV_REF(F) f,
    typename disable_if<is_same<typename decay<F>::type, thread>, dummy*>::type = 0);
As I said, implementing perfect forwarding is not possible in C++03 (See Boost.Move documentation).
OK - you did refer to the thread-function in your previous comments. I was wondering how it would be possible to support moveable-only arguments for the ctor (especially with a mix of copyable and moveable-only arguments).
See http://www.boost.org/doc/libs/1_55_0/doc/html/move/construct_forwarding.html Vicente

On Wed, Jan 22, 2014 at 1:54 PM, Vicente J. Botet Escriba <vicente.botet@wanadoo.fr> wrote:
Le 22/01/14 17:07, Nat Goodspeed a écrit :
Moreover (a key point) anyone invoking the Fiber API who is constrained to C++03 features will not have movable-only types. In that scenario, fiber(bind(callable, arg1, arg2)) *is* completely equivalent to fiber(callable, arg1, arg2).
I disagree. We have Boost/Move.
Hmm, okay.
So when discussing the requested variadic fiber constructor, I will tease apart the C++11 and C++03 cases. I feel pretty comfortable stating that the former is much more important than the latter.
Agreed, but at least the C++03 version must support movable types.
On seeing your response above, my impulse is to think you mean: "at least with a C++11 compiler, the fiber constructor must support movable types." Please clarify? If in fact you mean C++03, may we interpret that as: "if the Fiber library provides a C++03-compatible variadic fiber constructor, that constructor must support movable types" ?

Le 23/01/14 01:34, Nat Goodspeed a écrit :
On Wed, Jan 22, 2014 at 1:54 PM, Vicente J. Botet Escriba <vicente.botet@wanadoo.fr> wrote:
Le 22/01/14 17:07, Nat Goodspeed a écrit :
Moreover (a key point) anyone invoking the Fiber API who is constrained to C++03 features will not have movable-only types. In that scenario, fiber(bind(callable, arg1, arg2)) *is* completely equivalent to fiber(callable, arg1, arg2). I disagree. We have Boost/Move. Hmm, okay.
So when discussing the requested variadic fiber constructor, I will tease apart the C++11 and C++03 cases. I feel pretty comfortable stating that the former is much more important than the latter. Agreed, but at least the C++03 version must support movable types. On seeing your response above, my impulse is to think you mean: "at least with a C++11 compiler, the fiber constructor must support movable types." Please clarify?
If in fact you mean C++03, may we interpret that as: "if the Fiber library provides a C++03-compatible variadic fiber constructor, that constructor must support movable types" ?
I meant these constructors:

template <class F> fiber(BOOST_RV_REF(F) f);
template <class F> fiber(F& f);

But this one is also possible:

template <class F, class A1> fiber(BOOST_FWD_REF(F) f, BOOST_FWD_REF(A1) p1);

Vicente

SUMMARY

The review of the proposed Boost.Fiber library ended on January 15, 2014. The verdict is: not in its present form.

The lively discussions during the course of the review indicate considerable interest in this library, however, and every submitted review can be read as: perhaps, if certain changes are made. This produced a long list of suggestions, which constitutes the bulk of this report.

On behalf of the Boost community, I would like to commend Oliver Kowalke for the work he has put into the Fiber library so far. I encourage him to continue to evolve this library and to bring it back for a mini-review. Rather than considering the subject library as a whole, a Boost mini-review zooms in on particular issues. That seems an appropriate tool to use for further evaluation of the Fiber library -- always subject, of course, to the approval of the Review Wizards.

I received seven formal votes, abbreviated here:

Niall Douglas: YES, IF
Eugene Yakubovich: YES, AND
Antony Polukhin: YES, IF
Vicente J. Botet Escriba: NO, UNTIL
Agustín K-ballo Bergé: NO, IN CURRENT STATE
Hartmut Kaiser: NO, IN CURRENT FORM
Thomas Heller: NO, IN CURRENT FORM

I thank each of these reviewers, and indeed everyone who investigated the Fiber library and participated in the discussions. If you feel that I have misrepresented your position, or have omitted or garbled an important point, please respond: the archived mail thread should accurately reflect the will of the community, even if this message does not.
From the long list of suggestions, a few key themes emerged.
PERFORMANCE

Many respondents requested performance tests (specifics below). There were a number of suggestions about possible performance pitfalls in the current implementation, such as use of STL containers (with consequent heap allocations) and locks for thread safety. Several people suggested implementing performance tests *before* starting any such optimizations, which seems like sensible advice.

Niall Douglas pointed out that picking some fixed number of fibers is less interesting than showing how resource consumption rises with the number of fibers:

"I think it isn't unreasonable for a library to enter Boost if it has good performance *scaling* to load (e.g. O(log N)), even if performance in the absolute or comparative-to-near-alternatives sense is not great.

"Absolute performance can always be incrementally improved later, whereas poor performance scaling to load usually means the design is wrong and you're going to need a whole new library with new API.

"This is why I really wanted to see performance scaling graphs. If they show O(N log N) or worse, then the design is deeply flawed and the library must not enter Boost. Until we have such a graph, we can't know as there is no substitute for empirical testing."

Given the truth of that last observation, I have recast certain suggested optimizations as requests for measured performance cost. Performance requests include:

- empty function (create, schedule, execute, delete one fiber)
- same with one yield
- overhead of using a future object
- tests as described in [1]; such tests should allow comparing with TBB, qthreads, openmp, HPX
- construct and join a single fiber vs. construct and join a single thread (empty function)
- construct and join several fibers vs. construct and join several threads (empty function)
- construct and detach a single fiber vs. construct and detach a single thread (empty function)
- cost of STL containers (std::vector, std::map, std::deque)
- cost of fiber interruption support (same as next bullet?)
- cost of thread safety: intra-thread fibers vs. inter-thread fibers
- cost of round_robin_ws vs. round_robin scheduler
- cost of building on Coroutine vs. building on Context

Once Fiber performance tests are available, I trust the community will assist Oliver in running them on systems otherwise unavailable to him.

There was some disagreement on whether it is essential for the Fiber library to attain a certain level of performance before it should be accepted. Respondents fall into two broad camps. Some see fibers as lightweight threads (without the overhead of kernel context switching). They assert that since the API is that of std::thread, the only reason to accept a separate library would be astounding performance. They note that on some hardware it is already plausible to run hundreds of thousands of concurrent std::threads. Others see fibers as a tool for organizing code based around asynchronous operations (rather than chains of callbacks). They assert that for such purposes, eliminating the kernel from context switching is sufficient performance guarantee. I myself maintain that cooperative context switching provides important functionality you cannot reasonably get from std::thread. The Fiber library attempts to address both use cases. (It was suggested that if its primary target were code organization, perhaps it should be renamed something other than "Fiber.")

In any case, the community clearly wants to see Fiber performance data. Some requested CPU cycles to eliminate clock speed differences, also memory consumption. ARMv7 data was requested for "extra bonus points."

DOCUMENTATION

Many respondents requested additional documentation, or clarifications. Requests include:

- Rationale page explaining what's there, what's not and why.
Explain the distinction between the Coroutine library and the Fiber library. If certain Fiber functionality is intended to support yet another library (rather than being complete in itself), call out what would need to be added.
- A section on how to install and run the tests and examples. The need to embed in a Boost tree is implied but not stated. Mention the need to build the library and link with it.
- Explain synchronization between fibers on different threads. Must the code take more care with this than with synchronizing fibers on the same thread?
- Clarify that an exception raised by a fiber function calls std::terminate(), as with std::thread, rather than being consumed.
- More clearly explain migrating a fiber from one scheduler to another.
- Document async() in a way compatible with the C++ standard.
- Clarify the thread-local effect of set_scheduling_algorithm(). There was a request to put this function in a this_thread nested namespace to further clarify.
- Move algorithm class documentation to an "Extension" or "Customization" section. Clarify that it's not part of the baseline library functionality, but a customization point.
- Document fiber::id.
- Better document promise/future for void and R& (per C++ standard).
- Document thread safety of each support class (or method, if it varies by method).
- Document complexity guarantees per API.
- Document exception safety per API.
- Document supported architectures (perhaps link to the Coroutine library's list); state minimum compiler versions.
- Document the get/set overloads of thread_affinity() and priority() separately. Perhaps rename the setters to set_thread_affinity() and set_priority().
- Explain how portable fiber priority is, if it's specific to a scheduler. What does priority mean when you migrate a fiber from one thread (one scheduler instance) to another?
- Document the library's ASIO support. Link to Coroutine's ASIO yield functionality; ensure that ASIO yield is adequately explained. In particular, distinguish it from this_fiber::yield.
- Better explain (and/or comment) the publish-subscribe example, also other existing examples.

In addition to the documentation requests above, there were requests for additional examples:

- Simple example of an ASIO callback implementation vs. the same logic using Fiber's ASIO support, a la [4].
- Example of a fiber pool.
- Example of an arbitrary thread B filling a future on which a fiber in thread A is waiting.
- Example of an arbitrary thread B posting to an asio::io_service running fibers in thread A.
- Either defend fibers::condition_variable from spurious wakeups in existing examples, OR document the stronger condition_variable guarantee.
- Example of M:N threading with ASIO. That might involve either one io_service per CPU, with fiber migration; or a single io_service with run() calls from each CPU, grouping fibers for each CPU into strands.
- Example of one thread with many fibers making service requests on a pool of worker threads performing blocking calls.
- Example of using thread_specific_ptr to manage the lifespan of a user-specified scheduler.
- Example of the owner of a fiber changing the fiber's thread affinity vs. the fiber itself. When would you use each tactic?
- Load example programs into an Examples appendix so that Google searches can turn up library documentation.

LIBRARY API

- Four people overtly approved the close parallel with std::thread and its support classes.
- Allocating a default scheduler object, rather than specifying a default template param, was praised.
- Three people called out the set difference between Boost.Thread features and Fiber (e.g. future::get_exception_ptr()). One wants these implemented immediately; another says they can be added later; the third simply requests that they be documented, with rationale.
- Two people frowned on introducing operator bool methods not found in std::thread or Boost.Thread.
- C++11 support was mentioned, notably Boost macros such as BOOST_RV_REF and BOOST_EXPLICIT_OPERATOR_BOOL. Also mentioned were: C++11 idioms; C++11 std::thread patterns; move construction; initializer lists; rvalue this overloads; deleting operators.
- The fiber constructor and async() should accept a move-only callable.
- At least for a C++11 compiler, the fiber constructor and async() should accept variadic parameters. These should support move-only types, like Boost.Thread. C++03 support for variadic parameters would be nice, but is less important.
- Every API involving a time point or duration should accept arbitrary clock types, immediately converting to a canonical duration type for internal use.
- Queues should support value_pop() returning the item by value. This supports an item type without a default constructor.
- The nested scoped_lock typedef has been deprecated in the thread library. Remove it in the Fiber library.
- Align the return type of shared_future::get() with the standard. In general, ensure that parameter types and return types are aligned with the standard.
- A couple of people were bothered by the use of types in the detail namespace as parameters or return values in the algorithm API. (I note, however, that extending e.g. Boost.Range can involve touching its detail namespace. A customization point for a library may be a bit of a gray area.)
- There was a suggestion to rename algorithm to scheduler. In that case, presumably set_scheduling_algorithm() could be renamed set_scheduler().
- There was a request to rename round_robin_ws to round_robin_work_stealing.
- A couple of people consider the algorithm API too monolithic, pointing to redundancies in the round_robin, round_robin_ws and asio round_robin implementations. They suggested teasing out distinct classes, so that (for instance) a user-coded scheduler might be able to override a single method to respect fiber priority. In fact Eugene Yakubovich offered to experiment with refactoring the algorithm class this way.
- There was a request for set_scheduling_algorithm() to return the previous pointer. (It might be useful for the requester to explain the anticipated use case. An earlier iteration of set_scheduling_algorithm() did return the previous pointer; Oliver intentionally changed that behavior.)
- fiber_group got one thumbs-up and two thumbs-down. Options: retain; improve to use move support rather than fiber*; discard. There is an opportunity to improve on thread_group; naturally there is risk in diverging from thread_group.
- Request deferred futures for lazy evaluation.
- There was a suggestion to introduce a global object to coordinate thread-specific fiber schedulers, in the hope that the global object could perform all relevant locking and the thread-specific fiber schedulers could themselves be thread-unsafe.
- There was a request to unify steal_from() and migrate_to() into a single method. I infer that this is predicated on the previous suggestion.
- Request future::then() et al, per [5]. (Someone please clarify the present status of N3784?)
- Request enriched barrier support per [6] and [7]. (Someone please clarify the present status of N3817?)
- There are two fiber properties specific to particular schedulers: thread_affinity (used only by round_robin_ws) and priority (as yet unused by any scheduler). What if a user-coded scheduler requires a fiber property that does not yet exist? Is there a general approach that could subsume the present support for thread_affinity and priority, in fiber and this_fiber? Could the initial values for such properties be passed as part of the fiber constructor's attributes parameter?
- One use case was surfaced that may engage the previous bullet: the desire to associate a given fiber with any of a group of threads, such as the set of threads local to a NUMA domain or physical CPU.
IMPLEMENTATION

- Replace std::auto_ptr with boost::scoped_ptr. The former produces deprecation warnings on GCC.
- Reduce redundancy between try_lock() and lock().
- boost::fibers::asio::detail::yield_handler::operator()() calls algorithm::spawn() before algorithm::run(). Does this allow the scheduler to choose the next fiber to run, e.g. a user-coded scheduler that respects fiber priority?
- Add memory transaction support to spinlock a la [8].
- Intel TSX lock avoidance would be nice.

LINKS

[1] https://github.com/STEllAR-GROUP/hpx/tree/master/tests/performance
[2] http://stellar.cct.lsu.edu/pubs/isc2012.pdf
[3] http://stellar.cct.lsu.edu/pubs/scala13.pdf
[4] https://ci.nedprod.com/job/Boost.AFIO%20Build%20Documentation/Boost.AFIO_Doc...
[5] http://www.open-std.org/JTC1/SC22/WG21/docs/papers/2013/n3784.pdf
[6] http://www.boost.org/doc/libs/1_55_0/doc/html/thread/synchronization.html#th...
[7] http://www.open-std.org/jtc1/sc22/wg21/docs/papers/2013/n3817.html#barrier_o...
[8] https://github.com/BoostGSoC/boost.afio/blob/master/boost/afio/detail/

Nat Goodspeed
Boost.Fiber Review Manager

On 23 Jan 2014 at 10:05, Nat Goodspeed wrote:
The review of the proposed Boost.Fiber library ended on January 15, 2014. The verdict is: not in its present form.
Firstly thanks for such a detailed report Nat. It must have taken you an age to collate and write.
I received seven formal votes, abbreviated here:
I am very surprised there were only seven of us who voted. It seemed like there were a lot more.
"This is why I really wanted to see performance scaling graphs. If they show O(N log N) or worse, then the design is deeply flawed and the library must not enter Boost. Until we have such a graph, we can't know as there is no substitute for empirical testing."
One point of mine not mentioned in the report is that I would really like to see Boost Fibers graphed against Windows Fibers on that performance scaling graph. We know Windows Fibers are probably close to optimal performance given all the tuning done to them thanks to SQL Server competitions, therefore if we see the same scaling trend with Fiber as Windows Fibers (even if a tenth of the absolute performance) then we know the design is probably right and doesn't contain any "gotchas". Niall -- Currently unemployed and looking for work in Ireland. Work Portfolio: http://careers.stackoverflow.com/nialldouglas/

2014/1/23 Niall Douglas <s_sourceforge@nedprod.com>
One point of mine not mentioned in the report is that I would really like to see Boost Fibers graphed against Windows Fibers on that performance scaling graph.
it would be more correct to compare boost.context's fcontext with Windows Fibers than boost.fiber. AFAIK Windows Fibers do not have a scheduler - if you want WinFibers scheduled you would have to write your own (as boost.fiber does)

On 23 Jan 2014 at 17:31, Oliver Kowalke wrote:
One point of mine not mentioned in the report is that I would really like to see Boost Fibers graphed against Windows Fibers on that performance scaling graph.
it would be more correct to compare boost.context's fcontext with Windows Fibers than boost.fiber. AFAIK Windows Fibers do not have a scheduler - if you want WinFibers scheduled you would have to write your own (as boost.fiber does)
I would like to see Boost Fibers, fcontext and Windows Fibers. Then one can see if Boost Fibers introduces any unusual "kinks" in the scaling. Niall -- Currently unemployed and looking for work in Ireland. Work Portfolio: http://careers.stackoverflow.com/nialldouglas/

2014/1/23 Niall Douglas <s_sourceforge@nedprod.com>
I would like to see Boost Fibers, fcontext and Windows Fibers. Then one can see if Boost Fibers introduces any unusual "kinks" in the scaling.
make_fcontext() == CreateFiber()
jump_fcontext() == SwitchToFiber()

You compare apples and oranges - boost.fiber uses make_fcontext()/jump_fcontext() at its lower level, so it would always be 'slower' than the base-level functions. Such a comparison would belong to boost.context - that lib already contains code and documentation for it.

On Thu, Jan 23, 2014 at 10:05:04AM -0500, Nat Goodspeed wrote:
SUMMARY
The review of the proposed Boost.Fiber library ended on January 15, 2014. The verdict is: not in its present form.
Please consider making a post starting a new thread with a distinct subject so that it'll be easier to find this report. It's currently nested very deep in a rich and sprawling discussion which makes it rather hard to locate or even find in the first place. Thanks, -- Lars Viklund | zao@acc.umu.se

On Thu, Jan 23, 2014 at 11:27 AM, Lars Viklund <zao@acc.umu.se> wrote:
Please consider making a post starting a new thread with a distinct subject so that it'll be easier to find this report.
It's currently nested very deep in a rich and sprawling discussion which makes it rather hard to locate or even find in the first place.
So noted. Are you asking me to repost the same content on the three mailing lists? It might be a long enough mail to produce a bit of eye-rolling if I do. I expect that from this point on, an interested party would find it with a search engine.

On 23/01/2014 04:23 p.m., Nat Goodspeed wrote:
On Thu, Jan 23, 2014 at 11:27 AM, Lars Viklund <zao@acc.umu.se> wrote:
Please consider making a post starting a new thread with a distinct subject so that it'll be easier to find this report.
It's currently nested very deep in a rich and sprawling discussion which makes it rather hard to locate or even find in the first place.
So noted. Are you asking me to repost the same content on the three mailing lists? It might be a long enough mail to produce a bit of eye-rolling if I do.
I expect that from this point on, an interested party would find it with a search engine.
While you repost it to the three mailing lists please use a subject like "Boost.Fiber review results" or similar as customary, so that it can be easily found with a search engine. Regards, -- Agustín K-ballo Bergé.- http://talesofcpp.fusionfenix.com

On Thu, Jan 23, 2014 at 2:35 PM, Agustín K-ballo Bergé <kaballo86@hotmail.com> wrote: On 23/01/2014 04:23 p.m., Nat Goodspeed wrote:
On Thu, Jan 23, 2014 at 11:27 AM, Lars Viklund <zao@acc.umu.se> wrote:
Please consider making a post starting a new thread with a distinct
subject so that it'll be easier to find this report.
So noted. Are you asking me to repost the same content on the three mailing lists? It might be a long enough mail to produce a bit of eye-rolling if I do.
I expect that from this point on, an interested party would find it with a search engine.
While you repost it to the three mailing lists please use a subject like "Boost.Fiber review results" or similar as customary, so that it can be easily found with a search engine.
That sounds like a second request for me to repost, and as yet no one has asked me not to. Would it be reasonable to post a new message with the requested subject line, whose body is a link such as this? https://groups.google.com/forum/#!msg/boostusers/oOYfJ1yf_Sg/DwljFDR6gWoJ

On Thu, Jan 23, 2014 at 03:54:19PM -0500, Nat Goodspeed wrote:
On Thu, Jan 23, 2014 at 2:35 PM, Agustín K-ballo Bergé <kaballo86@hotmail.com> wrote:
On 23/01/2014 04:23 p.m., Nat Goodspeed wrote:
On Thu, Jan 23, 2014 at 11:27 AM, Lars Viklund <zao@acc.umu.se> wrote:
Please consider making a post starting a new thread with a distinct
subject so that it'll be easier to find this report.
So noted. Are you asking me to repost the same content on the three mailing lists? It might be a long enough mail to produce a bit of eye-rolling if I do.
I expect that from this point on, an interested party would find it with a search engine.
While you repost it to the three mailing lists please use a subject like "Boost.Fiber review results" or similar as customary, so that it can be easily found with a search engine.
That sounds like a second request for me to repost, and as yet no one has asked me not to.
Would it be reasonable to post a new message with the requested subject line, whose body is a link such as this? https://groups.google.com/forum/#!msg/boostusers/oOYfJ1yf_Sg/DwljFDR6gWoJ
If you're going to link externally, it would probably be a better choice to use one of the gateways (gmane or mailman) listed on the Boost Lists page instead of the comparatively ephemeral and, for some, inaccessible Google Groups. As for reposting over and over again, the existing top-level post is probably reasonably fine for most people. My strong objection was against the invisibility of a post at a deep nesting level in a wide thread. Judging by the links to previous results on the Boost Review Status page http://www.boost.org/community/review_schedule.html there seems to be a mixture of top-level posts and more hidden posts to review threads, but none as deep as this one as far as I can see in my sampling of the reports. It'd be nice if whoever does the review of the next Boost library ends up doing it "right" from the beginning :) -- Lars Viklund | zao@acc.umu.se

On Thu, Jan 23, 2014 at 6:09 PM, Lars Viklund <zao@acc.umu.se> wrote:
As for reposting over and over again, the existing top-level post is probably reasonably fine for most people.
It'd be nice if whoever does the review of the next Boost library ends up doing it "right" from the beginning :)
Consider me educated. I will take this as advice for future reviews, rather than a call to repost this one.

The review of Boost.Fiber by Oliver Kowalke begins today, Monday January 6th, and closes Wednesday January 15th.
For the sake of completeness, I vote to REJECT the library for acceptance into Boost in its current form. For my reasoning please see the lengthy discussion in the email thread '[Fibers] Performance'. Regards Hartmut --------------- http://boost-spirit.com http://stellar.cct.lsu.edu
-----------------------------------------------------
About the library:
Boost.Fiber provides a framework for micro-/userland-threads (fibers) scheduled cooperatively. The API contains classes and functions to manage and synchronize fibers similar to Boost.Thread. Each fiber has its own stack.
A fiber can save the current execution state, including all registers and CPU flags, the instruction pointer, and the stack pointer and later restore this state. The idea is to have multiple execution paths running on a single thread using a sort of cooperative scheduling (versus threads, which are preemptively scheduled). The running fiber decides explicitly when it should yield to allow another fiber to run (context switching). Boost.Fiber internally uses coroutines from Boost.Coroutine; the classes in this library manage, schedule and, when needed, synchronize those coroutines. A context switch between threads usually costs thousands of CPU cycles on x86, compared to a fiber switch with a few hundred cycles. A fiber can only run on a single thread at any point in time.
docs: http://olk.github.io/libs/fiber/doc/html/ git: https://github.com/olk/boost-fiber src: http://ok73.ok.funpic.de/boost.fiber.zip
The documentation has been moved to another site; see the link above. If you have already downloaded the source, please refresh it; Oliver has added some new material.
---------------------------------------------------
Please always state in your review whether you think the library should be accepted as a Boost library!
Additionally please consider giving feedback on the following general topics:
- What is your evaluation of the design?
- What is your evaluation of the implementation?
- What is your evaluation of the documentation?
- What is your evaluation of the potential usefulness of the library?
- Did you try to use the library? With what compiler? Did you have any problems?
- How much effort did you put into your evaluation? A glance? A quick reading? In-depth study?
- Are you knowledgeable about the problem domain?
Nat Goodspeed Boost.Fiber Review Manager ________________________________

On 01/16/2014 03:22 AM, Hartmut Kaiser wrote:
The review of Boost.Fiber by Oliver Kowalke begins today, Monday January 6th, and closes Wednesday January 15th.
For the sake of completeness, I vote to REJECT the library for acceptance into Boost in its current form. For my reasoning please see the lengthy discussion in the email thread '[Fibers] Performance'.
I am supporting this vote to REJECT the library in its current form for the very same reasons. I hope it's not too late for the vote to count. Regards, Thomas

On Thu, Jan 16, 2014 at 9:12 AM, Thomas Heller <thom.heller@gmail.com> wrote:
I am supporting this vote to REJECT the library in its current form for the very same reasons. I hope it's not too late for the vote to count.
It is not too late. I'm grateful to all those who invested time and energy in researching and discussing the library, and contributing their opinions. I ask for a few days' indulgence to collate responses. In my opinion the most valuable part of what the community wants to convey to the library author is specific points of improvement, and I want to be thorough about that.

On Thu, Jan 16, 2014 at 9:12 AM, Thomas Heller <thom.heller@gmail.com> wrote:
I am supporting this vote to REJECT the library in its current form for the very same reasons. I hope it's not too late for the vote to count.
It is not too late. I'm grateful to all those who invested time and energy in researching and discussing the library, and contributing their opinions.
I ask for a few days' indulgence to collate responses. In my opinion the most valuable part of what the community wants to convey to the library author is specific points of improvement, and I want to be thorough about that.
Frankly, I find it to be disturbing to see that the review manager appears to have come into this review with the predetermined decision to accept the library. But this is purely my impression, others might see it differently. Regards Hartmut --------------- http://boost-spirit.com http://stellar.cct.lsu.edu

On Thu, Jan 16, 2014 at 9:29 AM, Hartmut Kaiser <hartmut.kaiser@gmail.com> wrote:
Frankly, I find it to be disturbing to see that the review manager appears to have come into this review with the predetermined decision to accept the library. But this is purely my impression, others might see it differently.
:-) I will again ask your indulgence to defer an opinion of my function as review manager until I have posted my review report. If the community then feels that I have misrepresented its collective voice, that would be a good time to say so.

I admit that I am "wearing two hats." As a Boost user I would like to see Boost adopt something that fits this ecological niche. Like you, we have code that we would love to replace with an official Boost library. As review manager, I will make a sincere attempt to collate and summarize the responses of those who have invested time and energy in this review.

Isn't it often true that someone willing to serve as review manager for a Boost review has at least some interest in the subject library? Would it have improved matters if I had withheld my own opinion from the discussion? Remaining silent would not have made me more objective; it would merely have concealed my bias. Openly stating my personal bias, in effect, gives me additional incentive to be careful and thorough in presenting the review results. Again, though, please withhold judgment until I have done so.

On Jan 16, 2014, at 10:15 AM, Nat Goodspeed <nat@lindenlab.com> wrote:
On Thu, Jan 16, 2014 at 9:29 AM, Hartmut Kaiser <hartmut.kaiser@gmail.com> wrote:
Frankly, I find it to be disturbing to see that the review manager appears to have come into this review with the predetermined decision to accept the library. But this is purely my impression, others might see it differently.
:-)
I will again ask your indulgence to defer an opinion of my function as a review manager until I have posted my review report. If the community then feels that I have misrepresented their collective voice, that would be a good time to say so.
Indeed.
I admit that I am "wearing two hats." As a Boost user I would like to see Boost adopt something that fits this ecological niche. Like you, we have code that I would love to replace with an official Boost library.
As review manager, I will make a sincere attempt to collate and summarize the responses of those who have invested time and energy in this review.
Isn't it often true that someone willing to serve as review manager for a Boost review has at least some interest in the subject library?
Absolutely. A review manager should have done due diligence to be sure the library is ready for inclusion, based upon the review manager's own, admittedly biased, opinion. Add domain knowledge and interest, and it's little wonder a review manager would seem biased in favor of the library. Obviously, a library may also enter review in a state the review manager doesn't like, which would make a decision against the library more likely than otherwise. In the end, we look for an objective decision, with the review manager's domain knowledge factored into weighing reviewer input. ___ Rob (Sent from my portable computation engine)
participants (16)

- Agustín K-ballo Bergé
- Antony Polukhin
- David Sankel
- Eugene Yakubovich
- Evgeny Panasyuk
- Gavin Lambert
- Hartmut Kaiser
- james
- Lars Viklund
- Nat Goodspeed
- Niall Douglas
- Oliver Kowalke
- Paul A. Bristow
- Rob Stewart
- Thomas Heller
- Vicente J. Botet Escriba