2014/1/10 Antony Polukhin
The scheduler does not look good. Class "algorithm" looks overcomplicated: too many methods, classes from the detail namespace are used in function signatures, and it looks like some methods duplicate each other (yield, wait).
Which one should be removed and why? algorithm::wait() and algorithm::yield() have different effects on the fiber. algorithm::wait() sets the internal state of the fiber to WAITING, suspends it, and stores it in an internal queue (the waiting-queue); the fiber must be signalled as ready in order to be moved from the waiting-queue to the ready-queue so it can be resumed later. In contrast, algorithm::yield() suspends the fiber, leaves its internal state READY, and appends it at the end of the ready-queue, so it does not have to be signalled.
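To illustrate the difference, here is a minimal sketch of the two operations. This is not the actual Boost.Fiber code: the member names (ready_queue_, waiting_queue_), the state enum, and the scheduler type are invented for this example, and the actual context switch is elided.

```cpp
#include <algorithm>
#include <deque>
#include <memory>

enum state_t { READY, WAITING };

struct fiber_base_sketch {
    typedef std::shared_ptr< fiber_base_sketch > ptr_t;
    state_t state = READY;
};

struct scheduler_sketch {
    std::deque< fiber_base_sketch::ptr_t >  ready_queue_, waiting_queue_;

    // wait(): park the fiber; it cannot run again until something signals it
    void wait( fiber_base_sketch::ptr_t f) {
        f->state = WAITING;
        waiting_queue_.push_back( f);
        // ... suspend and switch to the next fiber in the ready-queue ...
    }

    // yield(): the fiber stays runnable, it is simply re-appended to the ready-queue
    void yield( fiber_base_sketch::ptr_t f) {
        f->state = READY;
        ready_queue_.push_back( f);
        // ... suspend and switch to the next fiber in the ready-queue ...
    }

    // set_ready(): signal a waiting fiber, moving it from the waiting- to the ready-queue
    void set_ready( fiber_base_sketch::ptr_t f) {
        waiting_queue_.erase(
            std::remove( waiting_queue_.begin(), waiting_queue_.end(), f),
            waiting_queue_.end() );
        f->state = READY;
        ready_queue_.push_back( f);
    }
};
```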
I'm slightly confused by the fact that fiber_base uses many dynamic memory allocations (it contains std::vector, std::map) and virtual functions. There must be some way to optimize that away, otherwise fibers may be much slower than threads.
Hmm - Boost.Thread uses STL containers too (for instance in thread_data_base). The fiber must hold some data by itself, for instance a list of fibers waiting on it to terminate (join); otherwise migrating fibers between threads would be hard to implement, if not impossible.
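For illustration only (the member names are invented, this is not the actual fiber_base): the kind of per-fiber data meant here is e.g. the list of fibers blocked in join(), which may belong to schedulers running in other threads, so the fiber has to carry it itself.

```cpp
#include <memory>
#include <vector>

struct fiber_base_sketch {
    typedef std::shared_ptr< fiber_base_sketch > ptr_t;

    // fibers currently blocked in join() on this fiber; they may live in other
    // threads' schedulers, so the terminating fiber must keep track of them itself
    std::vector< ptr_t >  joining_fibers_;

    void add_joiner( ptr_t f) { joining_fibers_.push_back( f); }
};
```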
Heap allocations also exist in the mutex class. I dislike the `waiting_.push_back( n);` inside the spinlocked sections. Some intrusive container must be used instead of std::deque.
OK - the kind of container can be optimized, but the mutex must know which fibers are waiting on it to lock it; otherwise migrating fibers between threads would be hard to implement, if not impossible.
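One possible direction, sketched with boost::intrusive (the type names here are invented; this is not a patch against the actual mutex): the fiber's context carries the list hook itself, so enqueueing a waiter inside the spinlocked section is only a couple of pointer updates and performs no heap allocation, while the mutex still knows exactly which fibers are waiting on it.

```cpp
#include <boost/intrusive/list.hpp>

namespace bi = boost::intrusive;

// the per-fiber context embeds its own list hook
struct fiber_context_sketch : public bi::list_base_hook<> {
    // ... stack, state, scheduler pointer, etc. ...
};

struct mutex_sketch {
    bi::list< fiber_context_sketch >  waiting_;   // intrusive: push_back allocates nothing

    void park( fiber_context_sketch & f) {
        // called inside the spinlocked section instead of waiting_.push_back( n)
        waiting_.push_back( f);
        // ... suspend the current fiber ...
    }

    void wake_one() {
        if ( ! waiting_.empty() ) {
            fiber_context_sketch & f = waiting_.front();
            waiting_.pop_front();
            // ... set_ready( f) so its scheduler can resume it ...
            (void) f;
        }
    }
};
```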
There is copy-pasted code in mutex.cpp. The `try_lock` and `lock` member functions have very different implementations; that's suspicious.
Why is it suspicious? mutex::try_lock() will never suspend the fiber if the mutex is already locked, while mutex::lock() will.
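A minimal sketch of that structural difference (invented names, suspension reduced to a placeholder; this is not the real mutex.cpp): try_lock() is a single state test that reports failure immediately, while lock() has to record the calling fiber in the waiting list and suspend it until unlock() signals it.

```cpp
#include <deque>

// placeholder: a real implementation would switch to another ready fiber here
inline void suspend_current() {}

struct mutex_sketch {
    enum state_t { UNLOCKED, LOCKED };
    state_t               state_ = UNLOCKED;
    std::deque< void * >  waiting_;              // fibers blocked in lock()

    bool try_lock() {
        // ... spinlock-protected ...
        if ( LOCKED == state_) return false;     // never suspends: fail at once
        state_ = LOCKED;
        return true;
    }

    void lock( void * current_fiber) {
        while ( ! try_lock() ) {
            // ... spinlock-protected ...
            waiting_.push_back( current_fiber);  // the mutex must remember the waiter
            suspend_current();                   // resumed by unlock() via set_ready()
        }
    }

    void unlock() {
        // ... spinlock-protected ...
        state_ = UNLOCKED;
        if ( ! waiting_.empty() ) {
            void * f = waiting_.front();
            waiting_.pop_front();
            (void) f;  // ... set_ready( f): its scheduler re-queues it ...
        }
    }
};
```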
Back to fiber_base. There are too many atomics and locks in it. As I understand it, the default use-case does not include thread migration, so there is no need to put all the synchronization inside fiber_base. round_robin_ws and the mutexes work with multithreading; ideologically that is the more correct place for synchronization. Let fiber_base be thread-unsafe, and let the schedulers and mutexes take care of serializing access to fiber_base. Otherwise single-threaded users will lose performance.
The only way to do this would be for every fiber_base to hold a pointer to its fiber-scheduler (algorithm *). In the case of fibers joining the current fiber (e.g. stored in the internal list of fiber_base), the terminating fiber has to signal the scheduler of each joining fiber, e.g. `BOOST_FOREACH( fiber_base::ptr_t f, joining_fibers) { f->set_ready(); }` where `fiber_base::set_ready()` simply does `scheduler->set_ready( this);`. This pattern must be applied to mutex and condition_variable too. If a fiber is migrated, its pointer to the scheduler must be swapped.
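Spelled out a little more fully, the delegation pattern described above might look like this. It is a minimal sketch under the assumptions stated in this mail; class and member names such as algorithm_sketch and joining_fibers_ are invented and are not the library's actual API.

```cpp
#include <memory>
#include <vector>

struct fiber_base_sketch;

// the scheduler owns all synchronization; fiber_base itself stays thread-unsafe
struct algorithm_sketch {
    virtual void set_ready( fiber_base_sketch * f) = 0;
    virtual ~algorithm_sketch() {}
};

struct fiber_base_sketch {
    typedef std::shared_ptr< fiber_base_sketch > ptr_t;

    algorithm_sketch *    scheduler_ = nullptr;  // swapped if the fiber migrates
    std::vector< ptr_t >  joining_fibers_;       // fibers blocked in join() on this one

    // delegation: forward to whatever scheduler currently owns this fiber
    void set_ready() { scheduler_->set_ready( this); }

    // on termination: wake every joining fiber through its own scheduler
    void release() {
        for ( ptr_t & f : joining_fibers_)
            f->set_ready();
        joining_fibers_.clear();
    }
};
```

mutex and condition_variable would wake their waiters through the same fiber_base::set_ready() indirection, so only the scheduler itself needs internal locking.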