From: x-code
Date: 05.10.20 19:04
Difference between the various Boost multitasking methods?
Boost has a ton of different ways to do multitasking. I'm not talking about multithreading; I'm talking about running different pieces of work on the same thread using coroutines/fibers/callbacks. While the documentation for each is individually pretty good, it does little to explain how they differ from each other, or why I should choose one over another. So I'm going to give my understanding of each one I found. I'm trying to fill in the gaps in the documentation here, so I've had to draw conclusions that are not explicitly stated. Please tell me where I'm wrong.
boost::context::fiber — A fiber that allows you only to yield control directly to another fiber by calling resume(). There is no scheduler. IMO use of the term fiber is incorrect here, this is really a coroutine.
boost::context::continuation — I can't figure out how the use case for this differs from boost::context::fiber. The class definitions are almost identical. The example shows it doing exactly the same thing, with only minor variations in semantics.
boost::context::execution_context — This seems to be yet another coroutine implementation, providing the same functionality as the previous two, with perhaps easier-to-use syntax. Is this used under the hood by them?
boost::coroutine2 — A wrapper for boost::context's continuation/callcc.
boost::coroutine — Another coroutine wrapper, but it wraps an internal C-like API of boost::context, so it is deprecated.
boost::fibers::fiber — These fibers use a scheduler and you cannot yield directly to another fiber. Instead you call this_fiber::yield() which then passes control to the fiber manager, which uses a scheduler implementation to decide what fiber to run next.
boost::asio::execution_context — A base class to wrap i/o services. No relation to boost::context::execution_context.
boost::asio::io_context — A specialized version of execution_context that also supports suspending the main thread waiting on a callback from the OS I/O. However, this is probably the most complete multitasking library available, even when not using asynchronous i/o. Includes a scheduler so you can just post functions or lambdas and have them be scheduled to run as fibers (they are called strands here) on one of the executing threads.
boost::asio::io_context::strand — A fiber by another name. Runs when scheduled by the io_context.
boost::asio::io_context::spawn — Creates a stackful boost::coroutine that runs in the context of a strand.
boost::asio::coroutine — A stackless coroutine. Supports some very nice syntactic sugar (a simplified yield statement).
boost::asio::co_spawn — Another stackless coroutine mechanism, built on C++20 coroutines (formerly the Coroutines TS).
Conclusions
boost::asio provides all of the tasking support I would ever need — fibers (strands), asynchronous callbacks, and coroutines (both stackless and stackful). It even has a built-in thread pool implementation that you can post lambdas/functors to. This seems like it would be my go-to library, even if I didn't have any asynchronous I/O to do.
boost::context seems pointless, providing 3 implementations of coroutines, and eventually being wrapped into boost::coroutine2. I would guess that if you just wanted to do a stackful coroutine, you would just use boost::coroutine2?
boost::fibers seems to provide the same functionality as boost::asio's strands. The only advantage I could see for this library is that it lets you create a custom scheduler algorithm. Maybe this functionality is also available with boost::asio? There is also a page on interop between boost::fibers and boost::asio but I don't see the point.
context, coroutine2, and fiber were all written by the same person (Oliver Kowalke). asio was written by someone else (Christopher Kohlhoff). So maybe this is a case of two competing implementations. The difference is that Chris at least put everything into the same namespace/library, while Oliver seems to have created multiple libraries/namespaces competing against himself.
edit: I'm especially interested in the performance impact of the various equivalent methods... if nobody knows, then I guess I'll have to benchmark them myself.