[boost.process] 0.6 Redesign

Hi,

I am currently working on a new version (0.6) of boost.process which mainly provides a new interface. Most of the underlying features have been derived from the older boost.process version. In doing all this, I tried to hide basically all platform specifics, so that no #ifdefs are required as in 0.5. It is somewhat inspired by python and std::thread. I really hope this can help make boost.process an official library.

At the current state, I have all the functionality in the library that I want, but it's not polished at all. It also, of course, needs more tests and documentation. But I think it is sufficient to get the basic idea.

You can check it out here: https://github.com/klemens-morgenstern/boost-process/tree/develop
And the little documentation it has is found here: http://klemens-morgenstern.github.io/process/
Additionally, here are the development notes: https://github.com/klemens-morgenstern/boost-process/issues/2

At the current state, the tests pass on linux as well as windows (gcc-5 & MSVC-14). Requirements are C++, boost.fusion, boost.asio, boost.iostreams, boost.filesystem and boost.system.

I really could use some feedback and hope you're interested.

Sincerely,
Klemens

Here's some sample code:

child c = execute("program", "param1", std_err > "error.log");

And to show off a little: you can modify the environment, read the output into a future, write the input via a pipe, and redirect stderr to null.

std::future<std::string> fut;
process::pipe p;
asio::io_service io_service;
auto c = execute("other-prog",
                 std_out > fut,
                 std_in < p,
                 std_err > null,
                 env["PATH"] += "/tmp",
                 env["BOOST_VERSION"] = "1.61",
                 io_service);
iostreams::stream<iostreams::file_descriptor_sink> str(p.sink());

The child class binds the subprocess analogously to a std::thread, i.e. it will wait for the process to finish. It can be detached, terminated and waited for.
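The std::thread-like child semantics described above (wait by default, explicit detach and terminate) can be sketched with raw posix calls. This is a hypothetical minimal class for illustration only, not boost.process's actual implementation:

```cpp
#include <sys/types.h>
#include <sys/wait.h>
#include <unistd.h>
#include <csignal>
#include <stdexcept>

// Hypothetical RAII child-process handle mirroring the std::thread-like
// semantics described above: waits on destruction unless detached.
class child {
    pid_t pid_ = -1;
public:
    explicit child(const char* path) {
        pid_ = fork();
        if (pid_ == 0) {                   // child branch: replace the image
            execl(path, path, (char*)nullptr);
            _exit(127);                    // exec failed
        }
        if (pid_ < 0) throw std::runtime_error("fork failed");
    }
    int wait() {                           // block until the process exits
        int status = 0;
        waitpid(pid_, &status, 0);
        pid_ = -1;
        return WIFEXITED(status) ? WEXITSTATUS(status) : -1;
    }
    void terminate() { if (pid_ > 0) kill(pid_, SIGKILL); } // "kill -9"
    void detach()    { pid_ = -1; }        // give up ownership, like std::thread
    ~child()         { if (pid_ > 0) wait(); } // join-like default behaviour
};
```

Usage then reads much like the sample above: construct, optionally detach or terminate, otherwise the destructor waits.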

Hi, On 18 April 2016 at 17:35, Klemens Morgenstern <klemens.morgenstern@gmx.net> wrote:
Hi,
I am currently working on a new version (0.6) of boost.process which mainly provides a new interface. Most of the underlying features have been derived from the older boost.process version. In doing all this, I tried to hide basically all platform specifics, so that no #ifdefs are required as in 0.5. It is somewhat inspired by python and std::thread.
I really hope this can help to make boost.process an official library.
Thanks for doing this effort, I believe that it is much needed indeed.
At the current state, I have all the functionality in the library that I want, but it's not polished at all. It also, of course, needs more tests and documentation. But I think it is sufficient to get the basic idea.
You can check it out here: https://github.com/klemens-morgenstern/boost-process/tree/develop And the little documentation it has is found here: http://klemens-morgenstern.github.io/process/
Additionally here are the development notes: https://github.com/klemens-morgenstern/boost-process/issues/2
At the current state, the tests pass on linux as well as windows (gcc-5 & MSVC-14). Requirements are C++, boost.fusion, boost.asio, boost.iostreams, boost.filesystem and boost.system.
I started reading the documentation, but before getting into details, could you clarify if Boost.Asio is required even if you don't use communication with the child processes? We have a few tricky cases related to child process management on Windows, in particular when trying to end a child process "cleanly" when it is a console program. I'll have to check if you managed to fix the issues we are seeing, in which case this solution would be better than the hackish one we have.
I really could use some feedback and hope you're interested.
Sincerely,
Klemens
Here's some sample code:
child c = execute("program", "param1", std_err > "error.log");
And to show off a little: you can modify the environment, read the output into a future, write the input via a pipe, and redirect stderr to null.

std::future<std::string> fut;
process::pipe p;
asio::io_service io_service;
auto c = execute("other-prog",
                 std_out > fut,
                 std_in < p,
                 std_err > null,
                 env["PATH"] += "/tmp",
                 env["BOOST_VERSION"] = "1.61",
                 io_service);
iostreams::stream<iostreams::file_descriptor_sink> str(p.sink());

The child class binds the subprocess analogously to a std::thread, i.e. it will wait for the process to finish. It can be detached, terminated and waited for.
_______________________________________________ Unsubscribe & other changes: http://lists.boost.org/mailman/listinfo.cgi/boost

Am 18.04.2016 um 19:37 schrieb Klaim - Joël Lamotte:
At the current state, I have all the functionality in the library that I want, but it's not polished at all. It also, of course, needs more tests and documentation. But I think it is sufficient to get the basic idea.
You can check it out here: https://github.com/klemens-morgenstern/boost-process/tree/develop And the little documentation it has is found here: http://klemens-morgenstern.github.io/process/
Additionally here are the development notes: https://github.com/klemens-morgenstern/boost-process/issues/2
At the current state, the tests pass on linux as well as windows (gcc-5 & MSVC-14). Requirements are C++, boost.fusion, boost.asio, boost.iostreams, boost.filesystem and boost.system.
I started reading the documentation, but before getting into details, could you clarify if Boost.Asio is required even if you don't use communication with the child processes? We have a few tricky cases related to child process management on Windows, in particular when trying to end a child process "cleanly" when it is a console program. I'll have to check if you managed to fix the issues we are seeing, in which case this solution would be better than the hackish one we have.
Well no, boost.asio is not used when you don't pass it to execute. But it is included, which could be changed to forward-declarations. The implementation of the async wait is to just wait for the handle, so that's quite simple. That is basically what you find in test/exit_code.cpp : async_wait.

On 18 April 2016 at 19:44, Klemens Morgenstern <klemens.morgenstern@gmx.net> wrote:
Am 18.04.2016 um 19:37 schrieb Klaim - Joël Lamotte:
At the current state, I have all functionality in the library that I want,
but it's not polished at all. It also, of course, needs more tests and documentation. But I think it is sufficient to get the basic idea.
You can check it out here: https://github.com/klemens-morgenstern/boost-process/tree/develop And the little documentation it has is found here: http://klemens-morgenstern.github.io/process/
Additionally here are the development notes: https://github.com/klemens-morgenstern/boost-process/issues/2
At the current state, the tests pass on linux as well as windows (gcc-5 & MSVC-14). Requirements are C++, boost.fusion, boost.asio, boost.iostreams, boost.filesystem and boost.system.
I started reading the documentation, but before getting into details, could you clarify if Boost.Asio is required even if you don't use communication with the child processes? We have a few tricky cases related to child process management on Windows, in particular when trying to end a child process "cleanly" when it is a console program. I'll have to check if you managed to fix the issues we are seeing, in which case this solution would be better than the hackish one we have.
Well no, boost.asio is not used when you don't pass it to execute. But it is included, which could be changed to forward-declarations.
The implementation of the async wait is to just wait for the handle, so that's quite simple. That is basically what you find in test/exit_code.cpp : async_wait.
Nice! I suppose that not including asio by default would help with compilation time, if it's possible. Anyway, for the termination problem: https://github.com/klemens-morgenstern/boost-process/blob/develop/include/bo... There you use "boost::detail::winapi::TerminateProcess(p.process_handle(), EXIT_FAILURE)" Is it a terminate message or a "kill -9" kind of message on windows? Joël Lamotte

On 18 April 2016 at 20:52, Klaim - Joël Lamotte <mjklaim@gmail.com> wrote:
Anyway, for the termination problem: https://github.com/klemens-morgenstern/boost-process/blob/develop/include/bo... There you use "boost::detail::winapi::TerminateProcess(p.process_handle(), EXIT_FAILURE)" Is it a terminate message or a "kill -9" kind of message on windows?
Joël Lamotte
I think I'll try to make a short version of the case we found problematic and provide it to you, it might help. Joël Lamotte

Am 18.04.2016 um 20:53 schrieb Klaim - Joël Lamotte:
On 18 April 2016 at 20:52, Klaim - Joël Lamotte <mjklaim@gmail.com> wrote:
Anyway, for the termination problem: https://github.com/klemens-morgenstern/boost-process/blob/develop/include/bo... There you use "boost::detail::winapi::TerminateProcess(p.process_handle(), EXIT_FAILURE)" Is it a terminate message or a "kill -9" kind of message on windows?
Joël Lamotte
I think I'll try to make a short version of the case we found problematic and provide it to you, it might help.
Joël Lamotte
Terminate is only invoked if the user calls terminate explicitly. From my understanding it's an unconditional exit, so it would be equivalent to kill -9. We have kill(pid, SIGKILL) on the posix side, so this makes sense. I don't know if one could implement it any other way without getting too system-specific.
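On the posix side the kill -9 equivalence is easy to demonstrate: SIGKILL cannot be caught, and waitpid reports death by signal rather than a normal exit. A minimal posix-only sketch (for illustration, not boost.process code):

```cpp
#include <sys/types.h>
#include <sys/wait.h>
#include <unistd.h>
#include <csignal>

// Shows that kill(pid, SIGKILL) ends the child unconditionally: the
// child never gets a chance to run cleanup, and the parent observes
// death-by-signal. TerminateProcess on Windows is comparable in that
// the target also gets no opportunity to clean up.
bool dies_by_sigkill() {
    pid_t pid = fork();
    if (pid == 0) {                // child: block forever until killed
        for (;;) pause();
    }
    kill(pid, SIGKILL);
    int status = 0;
    waitpid(pid, &status, 0);
    return WIFSIGNALED(status) && WTERMSIG(status) == SIGKILL;
}
```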

On 18 Apr 2016 at 17:35, Klemens Morgenstern wrote:
At the current state, the tests pass on linux as well as windows (gcc-5 & MSVC-14). Requirements are C++, boost.fusion, boost.asio, boost.iostreams, boost.filesystem and boost.system.
I really could use some feedback and hope you're interested.
Firstly, well done for working on this. Last contract I got annoyed with Boost.Process and ended up replacing it with a hacked shoe-in based on ASIO. More recently, I needed a child process spawner for my ACCU presentation I'm giving this week and ended up again reinventing Boost.Process, only this edition is a "bare metal" reinvention based on Outcomes [1]. So I'm all for this in principle. However I'm not sure if I'm for your specific formulation. Here are my top issues:

1. You should rip out all usage of Boost.Iostreams. It's been without a maintainer for years now, and has a fundamentally broken design for async i/o. Nobody should be encouraged to use it in new code.

2. You should completely eliminate all synchronous i/o, as that is also fundamentally broken for speaking to child processes. Everything needs to be async 100% of the time; it's the only sane design choice [2]. You can absolutely present publicly a device which appears to quack and waddle like a synchronous pipe for compatibility purposes; indeed AFIO v2 presents asynchronous i/o devices as synchronous ones if you use the synchronous APIs, even though underneath the i/o service's run() loop gets pumped during i/o blocks. But underneath it needs to be 100% async, and therefore probably ASIO.

3. Instead of inventing your own i/o objects, I think you need to provide: (a) async and sync objects extending ASIO's base objects with child_stdin, child_stdout and child_stderr, or whatever your preferred naming; (b) std::istream and std::ostream wrappers. These are not hard; ASIO helps you a lot with these once you have the native ASIO objects.

4. Child processes are not like threads and should not be represented as a first-order object. They should instead be an opaque object, represented by an abstract base class publicly and managed by a RAII managing class from which the opaque object can be detached, assigned, transferred etc.

5. Replace all the on_exit() machinery with future continuations, i.e. launching a process *always* returns a future. If someone wants to hook code onto when the process exits, a future continuation is the right tool. Similarly for fetching return codes, or detaching oneself from the child. Python's new subprocess.run() returns exactly the struct your future also needs to return.

6. Looking through your source code, I see references to boost::fusion and lots of other stuff. Great, but most people wanting a process management library don't want to drag in a copy of Boost to get one. It's easier to just roll your own. So drop the Boost dependency.

7. Looking through your source code, I am struck by how much functionality is done elsewhere by other libraries, especially ASIO. I think less is more for Boost.Process. I always personally greatly preferred the pre-peer-review Boost.Process, even with its warts, over the post-peer-review one, which had become too "flowery" and "ornate" if that makes sense. The latter became unintuitive to program against; I kept having to look up the documentation, and that annoys me. This stuff should be blindingly obvious to use. It should "just work".

I conclude my mini-review by suggesting "less is more" for Boost.Process. 99% of users want the absolute *minimum* featureset. Look at Python 3.5's new subprocess module; that is a very good API design and featureset to follow. It's intuitive, it gets the job done quickly, but it exposes enough depth if you really need it to write a really custom solution. I'd *strongly* recommend you copy that API design for Boost.Process and dispense with the current API design entirely. The absolute clincher in Python's subprocess is that you can never, ever race nor deadlock stdout and stderr. That makes an underlying async i/o implementation unavoidable. I'd personally suggest you save yourself a ton of hassle and use ASIO's pipe/unix socket support facilities; it's becoming the Networking TS anyway.

Hope this is helpful.

Niall

[1]: https://github.com/ned14/boost.afio/blob/master/include/boost/afio/v2/detail/child_process.hpp
[2]: I refer to the stdout/stderr deadlock problem, which is the biggest reason anyone reaches for a process management library instead of just using the syscalls directly. The internals of the child i/o need to be 100% async to prevent deadlocking. You can absolutely present publicly a device which appears to quack and waddle like a synchronous pipe for compatibility purposes; indeed AFIO v2 presents asynchronous i/o devices as synchronous ones if you use the synchronous APIs, even though underneath the i/o service's run() loop gets pumped during i/o blocks.

-- ned Productions Limited Consulting http://www.nedproductions.biz/ http://ie.linkedin.com/in/nialldouglas/

Thanks for the feedback; here are my thoughts on the raised points.
However I'm not sure if I'm for your specific formulation. Here are my top issues:
1. You should rip out all usage of Boost.Iostreams. It's been without maintainer for years now, and has a fundamentally broken design for async i/o. Nobody should be encouraged to use it in new code.
I do actually agree with that - boost.iostreams has an obsolete design. But using its file_descriptors and streams makes things so much easier for now; replacing them would mean a lot of reimplementation. So as long as boost.iostreams is not marked as obsolete, I don't think it's necessary to do so.
2. You should completely eliminate all synchronous i/o as that is also fundamentally broken for speaking to child processes. Everything needs to be async 100% of the time, it's the only sane design choice [2]. You can absolutely present publicly a device which appears to quack and waddle like a synchronous pipe for compatibility purposes, indeed AFIO v2 presents asynchronous i/o devices as synchronous ones if you use the synchronous APIs even though underneath the i/o service's run() loop gets pumped during i/o blocks. But underneath it needs to be 100% async, and therefore probably ASIO.
Actually I did not look at boost.afio, which might be a good idea. Currently you only have the async_pipe representation, which wraps either an ordinary posix pipe or a named pipe on windows. Now I do not think that I need to make everything async, because I can think of enough scenarios where this is complete overkill. For example, I might just want to pipe from one process to another:

pipe p;
auto c1 = execute("e1", std_out > p);
auto c2 = execute("e2", std_in < p);

There I don't need any async behaviour, but I do need pipes. Or I have a program where I need a strong correlation between input and output - why would I do that asynchronously?

pipe p_in, p_out;
auto c1 = execute("c++filt", std_out > p_out, std_in < p_in);

So I could either require the pipe class to always hold a reference to an io_service or not implement read/write functions at all. That doesn't really make sense to me, especially why less would be more here.
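For readers unfamiliar with what such a synchronous pipeline does underneath, here is a sketch of the first example using raw posix calls. The programs "e1" and "e2" are stood in for by echo and cat, and a second pipe captures the final output back into the parent; this is an illustration, not boost.process code:

```cpp
#include <sys/types.h>
#include <sys/wait.h>
#include <unistd.h>
#include <string>

// Synchronous pipeline "echo hello | cat", with cat's stdout captured
// back into the parent process. No async machinery is involved.
std::string run_pipeline() {
    int mid[2], out[2];
    pipe(mid); pipe(out);
    if (fork() == 0) {                       // "e1": echo hello
        dup2(mid[1], 1);
        close(mid[0]); close(mid[1]); close(out[0]); close(out[1]);
        execlp("echo", "echo", "hello", (char*)nullptr);
        _exit(127);
    }
    if (fork() == 0) {                       // "e2": cat, stdin from the pipe
        dup2(mid[0], 0); dup2(out[1], 1);
        close(mid[0]); close(mid[1]); close(out[0]); close(out[1]);
        execlp("cat", "cat", (char*)nullptr);
        _exit(127);
    }
    close(mid[0]); close(mid[1]); close(out[1]); // parent keeps only out[0]
    std::string result;
    char buf[256];
    ssize_t n;
    while ((n = read(out[0], buf, sizeof buf)) > 0)
        result.append(buf, n);
    close(out[0]);
    while (wait(nullptr) > 0) {}             // reap both children
    return result;
}
```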
3. Instead of inventing your own i/o objects, I think you need to provide:
(a) Async and sync objects extending ASIO's base objects with child_stdin, child_stdout and child_stderr - or whatever your preferred naming.
(b) std::istream and std::ostream wrappers. These are not hard, ASIO helps you a lot with these once you have the native ASIO objects.
I don't get what you want to tell me here, sorry. Thing is: I need a custom object, because the current asio objects (windows::stream_handle/posix::stream_descriptor) are bidirectional on one handle, but I need an object that writes on one handle and reads on the other.
4. Child processes are not like threads and should not be represented as a first order object. They should instead be an opaque object represented by an abstract base class publicly and managed by a RAII managing class from whom the opaque object can be detached, assigned, transferred etc.
If I understand you correctly, this is what I originally planned. But I decided against that, because it is much easier the current way and you can store all the needed information in a simple child. And actually, you can detach the child.
5. Replace all the on_exit() machinery with future continuations i.e. launching a process *always* returns a future. If someone wants to hook code onto when the process exits, a future continuation is the right tool. Similarly for fetching return codes, or detaching oneself from the child. Python's new subprocess.run() returns exactly the struct your future also needs to return.
Well again: that would require the usage of asio all the time; I don't think that's very sensible. Currently you can just write a simple process call and wait for it, and it's as simple as it gets: execute("something"); Also, I need an on_exit callback to cancel the read operations on async pipes. I thought about doing this automatically, but then again one could use one pipe with several subprocesses. BUT: allowing a std::future<int> to be set on exit seems like a neat feature; I should add that.
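The "std::future<int> set on exit" idea can in fact be sketched without asio at all, using a helper thread that waits on the pid and fulfils a promise with the exit code. A hypothetical posix-only illustration (async_exit_code is a made-up name, not the library's API):

```cpp
#include <sys/types.h>
#include <sys/wait.h>
#include <unistd.h>
#include <future>
#include <thread>

// A detached helper thread blocks in waitpid and sets the promise when
// the child exits, so callers get a plain std::future<int> exit code.
std::future<int> async_exit_code(pid_t pid) {
    std::promise<int> prom;
    auto fut = prom.get_future();
    std::thread([pid, p = std::move(prom)]() mutable {
        int status = 0;
        waitpid(pid, &status, 0);
        p.set_value(WIFEXITED(status) ? WEXITSTATUS(status) : -1);
    }).detach();
    return fut;
}
```

This trades a thread per child for asio-free code; an io_service-based wait avoids the extra thread, which is presumably why the library ties the feature to asio.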
6. Looking through your source code, I see references to boost::fusion and lots of other stuff. Great, but most people wanting a Process management library don't want to drag in a copy of Boost to get one. It's easier to just roll your own. So drop the Boost dependency.
Ok now I am confused: you want me to make everything asio, so that you'll always need boost.asio, but you want to eliminate the dependency on boost.fusion? Why? That's definitely not going to happen, since I do a lot of meta-programming there, this would become too much work for a library which clearly needs other boost libraries. The only way I see that happening is, when I propose a similar library for the C++ Standard, and that won't happen very soon...
7. Looking through your source code, I am struck about how much functionality is done elsewhere by other libraries, especially ASIO. I think less is more for Boost.Process, I always personally greatly preferred the pre-peer-review Boost.Process even with its warts over the post-peer-review one which had become too "flowery" and "ornate" if that makes sense. The latter became unintuitive to program against, I kept having to look up the documentation and that annoys me. This stuff should be blindingly obvious to use. It should "just work".
Uhm, so is it a bad thing to use boost.asio that much? I don't really get your point here.
I conclude my mini-review by suggesting "less is more" for Boost.Process. 99% of users want the absolute *minimum* featureset. Look at Python 3.5's new subprocess module, that is a very good API design and featureset to follow. It's intuitive, it gets the job done quickly, but it exposes enough depth if you really need it to write a really custom solution. I'd *strongly* recommend you copy that API design for Boost.Python and dispense with the current API design entirely. The absolute clincher in Python's subprocess is you can never, ever race nor deadlock stdout and stderr. That makes an underlying async i/o implementation unavoidable. I'd personally suggest save yourself a ton of hassle and use ASIO's pipe/unix socket support facilities, it's becoming the Networking TS anyway.
Well, I think we have different philosophies on what the library should do. If you want a pure async implementation of boost.process, one could maybe build it atop boost.process and call it apio or something. The reasons I integrated the asio stuff are the following:

- on windows you need named pipes for async stuff (which should be distinguished by the pipe library)
- notification of on_exit must be integrated, so it is launched immediately after executing the subprocess
- io_service must get notified on fork

If that weren't the case, I would prefer boost.process to be much simpler and have no async functionality whatsoever. Though theoretically the whole pipe functionality could be moved into another library, which is then just used by boost.process.
Hope this is helpful.
Sure it was, thank you. I hope my reaction doesn't seem too stubborn.
Niall
[1]: https://github.com/ned14/boost.afio/blob/master/include/boost/afio/v2/ detail/child_process.hpp
[2]: I refer to the stdout/stderr deadlock problem which is the biggest reason anyone reaches for a process management library instead of just using the syscalls directy. The internals of the child i/o needs to be 100% async to prevent deadlocking. You can absolutely present publicly a device which appears to quack and waddle like a synchronous pipe for compatibility purposes, indeed AFIO v2 presents asynchronous i/o devices as synchronous ones if you use the synchronous APIs even though underneath the i/o service's run() loop gets pumped during i/o blocks.
I really don't think this is the case: boost.process should first of all be a wrapper around the syscalls, so I can use it on different platforms without any #ifdefs. The async stuff comes second, as a set of features extending it. At least that's the reason I need a process library, which is why I'm working on this.

On 19 April 2016 at 01:06, Klemens Morgenstern <klemens.morgenstern@gmx.net> wrote:
5. Replace all the on_exit() machinery with future continuations i.e.
launching a process *always* returns a future. If someone wants to hook code onto when the process exits, a future continuation is the right tool. Similarly for fetching return codes, or detaching oneself from the child. Python's new subprocess.run() returns exactly the struct your future also needs to return.
Well again: that would require the usage of asio all the time, I don't think that's very sensible.
I agree. Not everybody uses Boost.ASIO, and even if it will be in the standard, that does not mean everybody will use that specific implementation. Also, if such a library needs only the task-scheduling features of ASIO, I would suggest using an Executor-based interface instead.
[2]: I refer to the stdout/stderr deadlock problem which is the
biggest reason anyone reaches for a process management library instead of just using the syscalls directly. The internals of the child i/o need to be 100% async to prevent deadlocking. You can absolutely present publicly a device which appears to quack and waddle like a synchronous pipe for compatibility purposes; indeed AFIO v2 presents asynchronous i/o devices as synchronous ones if you use the synchronous APIs, even though underneath the i/o service's run() loop gets pumped during i/o blocks.
I really don't think this is the case: boost.process should first of all be a wrapper around the syscalls, so I can use it on different platforms without any #ifdefs.
Exactly.
The async stuff comes second, as a set of features extending it. At least that's the reason I need a process library, which is why I'm working on this.
Same here, and I believe the majority of people looking for such a library just want to spawn processes in cross-platform contexts (at least from my memory of previous boost.process reviews). (BTW, you may need a way to tell if a platform cannot launch another process? Not sure if a static_if or some kind of no-op would be useful.)

Same here, and I believe the majority of people looking for such a library just want to spawn processes in cross-platform contexts (at least from memory of previous boost.process reviews).
(BTW you may need to have a way to tell if a platform cannot launch another process? Not sure if a static_if or some kind of no-op would be useful)
I detect the platform via the preprocessor, i.e. boost/system/api_config.hpp. If you include boost.process on a platform with neither a posix nor a windows API, you'll get an error saying you have an unsupported system API. I'd consider this the right behaviour, since I cannot start any process. Do you have any scenario in mind where I might want a no-op?

On 19 April 2016 at 10:53, Klemens Morgenstern <klemens.morgenstern@gmx.net> wrote:
Same here, and I believe the majority of people looking for such a library
just want to spawn processes in cross-platform contexts. (at least from memory of previous boost.process versions reviews)
(BTW you may need to have a way to tell if a platform cannot launch another process? Not sure if a static_if or some kind of no-op would be useful)
I detect the platform via the preprocessor, i.e. boost/system/api_config.hpp. If you include boost.process on a platform with neither a posix nor a windows API, you'll get an error saying you have an unsupported system API.
I'd consider this the right behaviour, since I cannot start any process. Do you have any scenario in mind, where I might want to have a no-op?
No, like you I prefer the compile-time error in my use cases, but I don't know the most usual cases. I was considering the case where you have a function using such a library and want it to do nothing when it can't do anything on the platform, but still without ifdefs. I guess just having a different cpp file for non-conforming platforms is enough anyway, so a compile-time error seems ok.

On 2016-04-19 12:23, Klaim - Joël Lamotte wrote:
On 19 April 2016 at 10:53, Klemens Morgenstern <klemens.morgenstern@gmx.net> wrote:
Same here, and I believe the majority of people looking for such a library
just want to spawn processes in cross-platform contexts. (at least from memory of previous boost.process versions reviews)
(BTW you may need to have a way to tell if a platform cannot launch another process? Not sure if a static_if or some kind of no-op would be useful)
I detect the platform via the preprocessor, i.e. boost/system/api_config.hpp. If you include boost.process on a platform with neither a posix nor a windows API, you'll get an error saying you have an unsupported system API.
I'd consider this the right behaviour, since I cannot start any process. Do you have any scenario in mind, where I might want to have a no-op?
No, like you I prefer the compile-time error in my use cases, but I don't know the most usual cases. I was considering the case where you have a function using such a library and want it to do nothing when it can't do anything on the platform, but still without ifdefs. I guess just having a different cpp file for non-conforming platforms is enough anyway, so a compile-time error seems ok.
Can Boost.Process functionality be emulated with std::/boost::thread+std::system()? At least partially? Maybe that would be a suitable fallback?

Can Boost.Process functionality be emulated with std::/boost::thread+std::system()? At least partially? Maybe that would be a suitable fallback?
That would mean you are on a platform that is neither posix nor windows but has std::system. That would simply not be supported by boost.process, so I don't know if a fallback would make sense at all. But if we take a subset of boost.process to run on a third platform, it would need more functionality than this, i.e. at least redirecting I/O and wait/terminate. So I'd really need a use case.
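For what little the C++ standard itself offers here: std::system(nullptr) at least reports whether a command processor exists at all, which is the only portable capability check available. A minimal sketch of that idea:

```cpp
#include <cstdlib>

// Per the C standard, std::system(nullptr) returns nonzero when a
// command processor is available. This is about as far as a portable
// std::system-based fallback could go in detecting process support.
bool has_command_processor() {
    return std::system(nullptr) != 0;
}
```

Even where this returns true, std::system gives no pid, no I/O redirection and no terminate, which is the point being made above.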

On 19 Apr 2016 at 1:06, Klemens Morgenstern wrote:
2. You should completely eliminate all synchronous i/o as that is also fundamentally broken for speaking to child processes. Everything needs to be async 100% of the time, it's the only sane design choice [2]. You can absolutely present publicly a device which appears to quack and waddle like a synchronous pipe for compatibility purposes, indeed AFIO v2 presents asynchronous i/o devices as synchronous ones if you use the synchronous APIs even though underneath the i/o service's run() loop gets pumped during i/o blocks. But underneath it needs to be 100% async, and therefore probably ASIO.
Actually I did not look at boost.afio, which might be a good idea.
v2 is a *very* different design to v1. The peer review here last summer was heard. v2's async facilities are quite minimal, mainly because v2 also has no knowledge of threads nor memory management, so no shared_ptr. Indeed v2 doesn't even throw exceptions! It also uses gsl::span<T> throughout, including for scatter-gather buffers, so it's a bit more C++ 1z-y in look and feel.

v2 has a native_handle_type for the OS handle type; it's a very thin and dumb value type. An afio::handle manages a native_handle_type, basically closing it on destruction. There are increasing refinements of afio::handle into afio::io_handle, file_handle, async_file_handle and so on. An io_handle works as expected with a pipe or socket. If the native_handle_type is tagged as async/non-blocking, the synchronous read() and write() virtual functions will hang around/pump the i/o dispatch queue until the synchronous i/o completes, thereby quacking and waddling like a synchronous i/o operation. In this it's not dissimilar to ASIO, but AFIO is far lighter weight than ASIO. We don't bother with IOCP on Windows, for example; it's too heavyweight. v2 aims for hundreds of opcodes overhead over the system calls, no more.
Currently you only have the async_pipe representation, which wraps either an ordinary posix pipe or a named pipe on windows. Now I do not think that I need to make everything async, because I can think of enough scenarios where this is complete overkill. For example, I might just want to pipe from one process to another:
I meant internal implementation needs to be async. It's unavoidable unless your design is to be fundamentally broken.
If I understand you correctly, this is what I originally planned. But I decided against that, because it is much easier the current way and you can store all the needed information in a simple child. And actually, you can detach the child.
In this you enforce a design choice and consequence onto your users. For example you'll force your users to use smart pointers and memory allocation to stop the default behaviour you imposed on them. The way I suggested does not have these problems.
Well again: that would require the usage of asio all the time, I don't think that's very sensible.
As I mentioned, the biggest reason to choose a child process management library rather than throwing one together of your own is that someone has solved the stdout/stderr deadlock problem for me. If you don't solve that problem for me, it's faster and easier to bodge together my own process library because your library delivers nothing of value to me.
Ok, now I am confused: you want me to make everything asio, so that you'll always need boost.asio, but you want to eliminate the dependency on boost.fusion? Why? That's definitely not going to happen, since I do a lot of metaprogramming there; this would become too much work for a library which clearly needs other boost libraries. The only way I see that happening is when I propose a similar library for the C++ standard, and that won't happen very soon...
There is zero need for metaprogramming in a child process library. If you think you need it, you have the wrong design. ASIO is not like Fusion. ASIO is entering the ISO C++ standard. That makes it one day not-a-dependency. I *personally* think you should use ASIO rather than duplicating work. But you don't have to, feel free to rip bits out of AFIO v2 into your own solution if you like. Mine is certainly much easier to borrow from.
The async stuff is second, and is a set of features, extending it. At least that's the reason I need a process library, which is the reason I'm working on that.
I don't think you understand the stdout/stderr deadlock problem. Look into Python's subprocess.communicate(). Python tried to use non-async i/o in its popen() for years. It never worked right, because it can't. Async i/o is unavoidable if child i/o is to be reliable [1]. It's non-negotiable.

Niall

[1]: Strictly speaking, background threads to pump stdout and stderr also work just fine. I've used that a few times in a pinch, as it's quicker to deploy working code than doing proper async i/o.

-- ned Productions Limited Consulting http://www.nedproductions.biz/ http://ie.linkedin.com/in/nialldouglas/

Currently you only have the async_pipe representation, which wraps either an ordinary posix pipe or a named pipe on windows. Now I do not think that I need to make everything async, because I can think of enough scenarios where this is complete overkill. For example, I might just want to pipe from one process to another:
I meant internal implementation needs to be async. It's unavoidable unless your design is to be fundamentally broken.
Well, it's as broken as the system. That's good enough for me.
If I understand what you say here correctly, this is what I originally planned. But I decided against that, because it is much easier the current way and you can store all needed information in a simple child. And actually you can detach the child.
In this you enforce a design choice and consequence onto your users. For example you'll force your users to use smart pointers and memory allocation to stop the default behaviour you imposed on them. The way I suggested does not have these problems.
What? Everything there is movable; you don't need any smart pointer. As a matter of fact, I originally designed the return type of execute to be a template depending on the parameters, so that you'd have a stdin/stdout/stderr member depending on whether or not you pipe it somewhere. But I decided against that because that (besides being overcomplicated) WOULD have required smart pointers.
Well again: that would require the usage of asio all the time, I don't think that's very sensible.
As I mentioned, the biggest reason to choose a child process management library rather than throwing one together of your own is that someone has solved the stdout/stderr deadlock problem for me. If you don't solve that problem for me, it's faster and easier to bodge together my own process library because your library delivers nothing of value to me.
Well, you have the facilities for that in the current boost.process design. And if that ain't enough, it can be built atop. Feel free to build boost.apio :).
Ok, now I am confused: you want me to make everything asio, so that you'll always need boost.asio, but you want to eliminate the dependency on boost.fusion? Why? That's definitely not going to happen, since I do a lot of metaprogramming there; this would become too much work for a library which clearly needs other boost libraries. The only way I see that happening is when I propose a similar library for the C++ standard, and that won't happen very soon...
There is zero need for metaprogramming in a child process library. If you think you need it, you have the wrong design.
ASIO is not like Fusion. ASIO is entering the ISO C++ standard. That makes it one day not-a-dependency.
Have you even looked at the design? Because it absolutely requires metaprogramming, and it is the only elegant choice. On Windows you have a function overblown with parameters, making it unreadable. On Posix you have to call several functions to get a process up. And my design has a variadic function, which allows you to pass only the arguments you need.
I *personally* think you should use ASIO rather than duplicating work. But you don't have to, feel free to rip bits out of AFIO v2 into your own solution if you like. Mine is certainly much easier to borrow from.
What? I do that!? The async_pipe holds two boost::asio::windows::stream_handle or boost::asio::posix::stream_descriptor objects. They are just packed differently, because it's a pipe, not a stream.
The async stuff is second, and is a set of features, extending it. At least that's the reason I need a process library, which is the reason I'm working on that.
I don't think you understand the stdout/stderr deadlock problem. Look into Python's subprocess.communicate().
Yeah right: because I don't think we should do everything async, I don't understand the problem. It's not like I implemented that deadlock enough times myself. Trying to solve a possible deadlock by making everything async and multithreaded is like using GC to avoid leaks. I want a C++ and not a java or python library, and that means exposing as much to the developer as possible and keeping the overhead optional. So would you just design the library like the python equivalent? How would you view this library? http://templated-thoughts.blogspot.de/2016/03/sub-processing-with-modern-c.h...

On 19 Apr 2016 at 13:34, Klemens Morgenstern wrote:
Because it absolutely requires metaprogramming and it is the only elegant choice. On Windows you have a function overblown with parameters, making it unreadable. On Posix you have to call several functions to get a process up. And my design has a variadic function, which allows you to pass only the arguments you need.
A freeform constructor with tagged parameters is absolutely right for when the parameters can be extended by third-party code. Here, though, the parameter types are completely known, so the appropriate design for when there are too many parameters and/or you'd like to name them is to pass a tagged tuple, aka a struct.

The choice to use metaprogramming should always confer an absolute design win over all other design alternatives, given the increased compile times, memory usage, brittleness and other factors. Metaprogramming is very much not free of cost. It should be used as sparingly as possible in publicly facing library APIs. In particular, I find Process' use of a freeform constructor gratuitous given the much simpler close substitutes, like passing a struct.
So would you just design the library like the python equivalent?
How would you view this library? http://templated-thoughts.blogspot.de/2016/03/sub-processing-with-modern-c.h...
Very favourably indeed. I will say I don't think much of his internal implementation, unfortunately; a lot of problems. But the public API design is the right direction: very intuitive and straightforward to use. No messing around. Niall -- ned Productions Limited Consulting http://www.nedproductions.biz/ http://ie.linkedin.com/in/nialldouglas/

Am 19.04.2016 um 20:08 schrieb Niall Douglas:
On 19 Apr 2016 at 13:34, Klemens Morgenstern wrote:
Because it absolutely requires metaprogramming and it is the only elegant choice. On Windows you have a function overblown with parameters, making it unreadable. On Posix you have to call several functions to get a process up. And my design has a variadic function, which allows you to pass only the arguments you need.
A freeform constructor with tagged parameters is absolutely right for when the parameters can be extended by third party code. Here though parameter types are completely known, so the appropriate design for when there are too many parameters and/or you'd like to name them is to pass a tagged tuple aka a struct.
The choice to use metaprogramming should always confer an absolute design win over all other design alternatives given the increased compile times, memory usage, brittleness and other factors. Metaprogramming is very much not free of cost. It should be used as sparingly as possible in publicly facing library APIs.
In particular I find Process' use of a freeform constructor gratuitous given the much simpler close substitutes like passing a struct.
Well, it doesn't seem like we'll agree any time soon, but that's imho the beauty of open source. If you have a design for a process library, I would be very much interested in it.

Btw: boost.process 0.5 only allowed initializers to be passed to the execute function, i.e. you had to write

execute(set_exe("gcc"), set_args({"--version"}));

I intentionally changed that, so one could write

execute("gcc", "--version");

because I find this much more intuitive (i.e. a design win). This allows more parameters to be passed, like std::error_code or boost::asio::io_service. And unlike your proposed "make everything async" solution, you only pay for it if you use it. I could add an exe_args initializer, so you can get rid of the initializer building altogether.

I personally consider the overhead reasonable, at least in comparison to forcing too much structure onto the user. And considering the number of possible parameters you can pass, I'd think a struct would be rather incomprehensible.

On Tue, 19 Apr 2016 20:51:52 +0200 Klemens Morgenstern <klemens.morgenstern@gmx.net> wrote:
Am 19.04.2016 um 20:08 schrieb Niall Douglas:
On 19 Apr 2016 at 13:34, Klemens Morgenstern wrote:
[...]
A freeform constructor with tagged parameters is absolutely right for when the parameters can be extended by third party code. Here though parameter types are completely known, so the appropriate design for when there are too many parameters and/or you'd like to name them is to pass a tagged tuple aka a struct.
The choice to use metaprogramming should always confer an absolute design win over all other design alternatives given the increased compile times, memory usage, brittleness and other factors. Metaprogramming is very much not free of cost. It should be used as sparingly as possible in publicly facing library APIs.
In particular I find Process' use of a freeform constructor gratuitous given the much simpler close substitutes like passing a struct.
Well, it doesn't seem like we'll agree any time soon, but that's imho the beauty of open-source. If you have any design for a process library, I would be very much interested in it.
Btw: boost.process 0.5 only allowed initializers to be passed to the execute function, i.e. you had to write
execute(set_exe("gcc"), set_args({"--version"});
I intentionally changed that, so one could write
execute("gcc", "--version");
because I find this much more intuitive (i.e. design win). This allows more parameter to be passed, like std::error_code or boost::asio::io_service. And unlike your proposed "make everything async" solution, you only pay for it, if you use it. I could add an exe_args initializer, so you can get rid of the initializer building altogether.
`execute` is a function that takes `T&&...`, but certain types are going to fail internally. For example, the `execute` signature allows for zero arguments, which does not make sense. And what advantage is there to making the first argument templated instead of a `const char*`? Is there an operating system that takes anything other than a null-terminated C-style string for the command parameter? It is easy to provide forwarding overloads for other types such as boost::filesystem.

And it would be nice if the command-line arguments to the process were not intermingled with the options. What if instead of `execute` you had a function `command` that returned a callable with overloads for `operator|`, `operator<`, and `operator>`:

(command("echo", "-n", "some text") > "foo.txt")(options...)
(command("wget", url) | command("sha256sum") > ostream)()

which I _think_ has the desired precedence (relational before `|`). In fact, why not include `||` and `&&` - is this too crazy? The immediate invocation of each operand is irrelevant to this use case. Also, the returned object could _always_ be a nullary callable, but with named parameters for modifying options.

I would consider converting static functions into constexpr functors if they are templated - or at least the `execute` function. This would make the functions in the library easier to use with std::bind and boost::fit, which might be useful.

Lee

`execute` is a function that takes `T&&...`, but certain types are going to fail internally. For example, the `execute` signature allows for zero arguments which does not make sense. And what advantage is there to making the first argument templated instead of a `const char*`? Is there an operating system that takes an argument other than a null-terminated c-style string for the command parameter? It is easy to provide forwarding overloads for other types such as boost::filesystem.
Good point, especially since the failure when passing zero arguments will only come up at run-time. That's indeed bad, but could easily be solved with an enable_if or static_assert. It is actually not a requirement to put the command first in the execute function, which is why it is not overloaded but just part of the T&&...; this is basically inherited from 0.5, and I see no reason why the command should be first.
And it would be nice if the command-line arguments to the process were not intermingled with the options. What if instead of `execute` you had a function `command` that returned a callable with overloads for `operator|`, `operator<`, and `operator>`:
(command("echo", "-n", "some text") > "foo.txt")(options...) (command("wget", url) | command("sha256sum") > ostream))()
I really like this idea, and thought about features like that, but there's always the downside that it's too limited. But maybe this can be added: since execute is a template, you could build this with the execute function. The actual problem I'd have here is that I don't get a child handle back for each call, which bothers me. This child binds the process by default, which means it waits for the exit on the destructor call. That can be changed by calling detach, but it's hard to see how it would be managed in a configurable way in your example. I think that should be considered a library one implements with, i.e. atop boost.process. So basically this constructs a functor:

std::future<std::string> fut;
auto exec = command("wget", url) | command("sha256sum") > fut;
//and now we invoke the boost.process impl:
exec();

Separating that (though not necessarily into two libraries) has a lot of advantages: boost.process can implement all the low-level stuff in a portable way, and the other library (let's call it boost.neat_process) can then run completely wild with expression templates without needing to be concerned about the syscalls etc.
I would consider converting static functions into constexpr functors if they are templated - or at least the `execute` function. This would make the functions in the library easier to use with std::bind and boost::fit which might be useful.
Makes sense, I use this pattern for the initializers already anyway.

Because it absolutely requires metaprogramming and it is the only elegant choice. On Windows you have a function overblown with parameters, making it unreadable. On Posix you have to call several functions to get a process up. And my design has a variadic function, which allows you to pass only the arguments you need.
A freeform constructor with tagged parameters is absolutely right for when the parameters can be extended by third party code. Here though parameter types are completely known, so the appropriate design for when there are too many parameters and/or you'd like to name them is to pass a tagged tuple aka a struct.
I beg to disagree. Passing a struct to a function (and I assume passing a struct by constant reference) is not the best of all choices. It is cumbersome, inflexible, and not extensible. Why do you try to get away from the API Boost.Process V0.5 provides in the first place? Do you have a rationale for this? I found that API to be very nice, concise, easily extensible (even by the user), and to be trivial for trivial use cases. All of which are signs of a good API design.
The choice to use metaprogramming should always confer an absolute design win over all other design alternatives given the increased compile times, memory usage, brittleness and other factors. Metaprogramming is very much not free of cost. It should be used as sparingly as possible in publicly facing library APIs.
In particular I find Process' use of a freeform constructor gratuitous given the much simpler close substitutes like passing a struct.
Sorry to ask, but what is a 'freeform constructor'? Regards Hartmut --------------- http://boost-spirit.com http://stellar.cct.lsu.edu

Why do you try to get away from the API Boost.Process V0.5 provides in the first place? Do you have a rationale for this? I found that API to be very nice, concise, easily extensible (even by the user), and to be trivial for trivial use cases. All of which are signs of a good API design.
I still do not really get what Niall would like to have, but it seems to be completely different from what boost.process 0.5 or my 0.6 version are. Concerning the interface changes, I did what I would consider simplifications for the user, i.e. you do not have to name every initializer explicitly and they are treated more as properties, i.e. you can write the following:

//0.5
std::vector<std::string> args = {"--version"};
execute(set_exe("gcc.exe"), set_args(args));

//0.6
execute(exe="gcc.exe", args={"--version"});
execute("gcc.exe", "--version"); //exe-args style
execute("gcc.exe --version");    //cmd-style

I consider that an improvement of the current interface, not a major change. And that is as trivial as it gets. It is still extensible, and more easily so, since on_success, on_error and on_setup are now consistent across posix and windows. But that's not entirely finished as of yet.

Hi,

I consider that an improvement of the current interface, not a major change. And that is as trivial as it gets.
It would be great if that interface, arguably nice, could boil down to a more template-less and simpler interface. Programs that make heavy use of external process launching usually need to set up how the processes are launched in different stages. You might want to allow an iterative construction (and reuse) of the process launch options. For example:

struct process_options
{
    boost::optional<boost::filesystem::path> working_directory;
    boost::optional<std::map<std::string, std::string>> env;
    struct channel { ... };
    channel stdin;
    channel stdout;
    channel stderr;
};

This can be hidden through a nice interface, using some good defaults. But when you need to set up dynamically how a process should be launched, there is no other choice than aggregating the setup options until the launch point. The previous version of boost.process did not allow it[1], and forced me to come up with my own thing[2]. Also, do you plan to port back the type erasure PR[3] to your new version?

Cheers,

[1] https://github.com/BorisSchaeling/boost-process/issues/2
[2] https://github.com/hotgloupi/configure/blob/master/src/configure/Process.hpp
[3] https://github.com/BorisSchaeling/boost-process/pull/8

It would be great if that interface, arguably nice, could boil down to a more template-less and simpler interface. Programs that make heavy use of external process launching usually need to set up how the processes are launched in different stages. You might want to allow an iterative construction (and reuse) of the process launch options.
For example:
struct process_options
{
    boost::optional<boost::filesystem::path> working_directory;
    boost::optional<std::map<std::string, std::string>> env;
    struct channel { ... };
    channel stdin;
    channel stdout;
    channel stderr;
};
This can be hidden through a nice interface, using some good defaults.
I think the ways you can do it are too diverse to cover with defaults, and I actually don't know what good defaults would be. We already had the discussion here about making everything async, i.e. we would sacrifice a lot of flexibility. Of course you can implement the example above very easily with the process library:

struct my_process
{
    //btw: that's in, but not yet documented
    filesystem::path working_dir = this_process::pwd();
    process::environment env = this_process::environment();
    process::pipe in;
    process::pipe out;
    filesystem::path err;

    process::child launch(const std::string & cmd)
    {
        return process::execute(cmd,
            process::start_dir = working_dir,
            process::std_in  < in,
            process::std_out > out,
            process::std_err > err,
            env);
    }
};

BUT you can also use a functional style if you want to:

auto in_setting = process::std_in < null;
auto c = execute("thingy", in_setting);
But when you need to setup dynamically how a process should be launched, there is no other choice than aggregating the setup options until the launch point. The previous version of boost.process did not allow it[1], and forced me to come up with my own thing[2]. Also, do you plan to port back the type erasure PR[3] to your new version ?
Cheers,
[1] https://github.com/BorisSchaeling/boost-process/issues/2 [2] https://github.com/hotgloupi/configure/blob/master/src/configure/Process.hpp [3] https://github.com/BorisSchaeling/boost-process/pull/8
No and no. So first of all, I don't really get it: you have a few settings which will be aggregated, like environment settings or args. Now those can already be joined before the call of execute, i.e.

execute("thingy",
    args += "para1", args += "para2",
    env["PATH"] += "/path1", env["PATH"] += "/path2");

//can be written as
environment env = this_process::environment();
env["PATH"] += "/path1";
env["PATH"] += "/path2";
vector<string> args = {"para1", "para2"};
execute("thingy", args, env);

The problem with [2] and [3] is that I now have the initializer sequence as a template parameter of the executor. I did this so initializers can access the sequence, which was necessary for the async stuff (i.e. so they can access the io_service). This renders any virtualization impossible, because the handler functions now have to be templated.

Also: though not yet implemented, I'd like to have a few compile-time checks, e.g. that you don't redirect a pipe twice etc. That would be completely impossible with an initializer sequence built at runtime. Now: since we have a finite number of initializers, it would be possible to implement some polymorphic sequence with boost.variant, but I still fail to see why that must be determined at runtime. Keep in mind: you would not determine the value of some initializer but which initializers you set. Do you have an example where it has to be done at runtime? That may help me understand the actual problem here. I really don't like to use runtime polymorphism when it can be done at compile-time.

Hi,

Of course you can implement the example above very easily with the process library:
Thanks, that's almost what I need, but it is not practical for stream initializations:

enum stream { std_in, std_out, std_err, dev_null };

struct options
{
    // stderr redirection
    boost::variant<boost::filesystem::path, boost::process::pipe, stream> err;
    ...
};

How can I spawn a process so that it will redirect stderr alternatively to a path, to a pipe, to stdout, or to /dev/null? If I'm right, I will need to write explicitly 4 different calls to process::execute(). And it gets worse if we want the same flexibility for stdin and stdout...

All in all, I'm not suggesting that you add support for boost::variant, but instead suggesting that boost.process could have one low-level generic way to spawn a process. As mentioned earlier, the Python subprocess library is great; its subprocess.Popen[1] constructor provides a way to do so.

BUT you can also use a functional style if you want to:
auto in_setting = process::std_in < null; auto c = execute("thingy", in_setting);
I'm not suggesting that you should remove the nice API, more that it could be optional.
you have a few settings which will be aggregated, like environment settings or args.
Yes, sorry, I didn't spot that for the env and args. However, the argument remains for the streams initialization.

The problem with [2] and [3] is that I now have the initializer sequence as a template parameter of the executor. I did this so initializers can access the sequence, which was necessary for the async stuff (i.e. so they can access the io_service).
AFAIK, this was also the case in the 0.5 version
This renders any virtualization impossible, because the handler functions now have to be templated.
Yes.
Also: though not yet implemented, I'd like to have a few compile-time checks, e.g. that you don't redirect a pipe twice etc. That would be completely impossible with an initializer sequence built at runtime.
I believe that those checks are incredibly cheap compared to a fork or a CreateProcess(), why not also do some runtime checks ?
Now: since we have a finite number of initializers, it would be possible to implement some polymorphic sequence with boost.variant, but I still fail to see why that must be determined at runtime. Keep in mind: you would not determine the value of some initializer but which initializers you set. Do you have an example where it has to be done at runtime? That may help me understand the actual problem here. I really don't like to use runtime polymorphism when it can be done at compile-time.
I completely agree that compile-time checks are nice, but having a lower-level, not fully typesafe API cannot hurt. An example where it is necessary to do the initialization at runtime could be a simple launcher that can exercise every combination of options that boost.process allows, like:

Usage: boost-process-launcher [OPTIONS] -- COMMAND [ARG]...

That you could use like this:

$ echo test | boost-process-launcher --stdout=STDERR --stderr=./somefile.txt --E ENVVAR=1 --no-inherit-env -- grep test

Cheers,

[1] https://docs.python.org/3.6/library/subprocess.html#popen-constructor

Am 20.04.2016 um 17:50 schrieb Raphaël Londeix:
Hi,
Of course you can implement the example above very easily with the process library:
Thanks, that's almost what I need, but it is not practical for stream initializations:
enum stream { std_in, std_out, std_err, dev_null };

struct options
{
    // stderr redirection
    boost::variant<boost::filesystem::path, boost::process::pipe, stream> err;
    ...
};
How can I spawn a process, so that it will redirect stderr alternatively to a path, to a pipe, to stdout, or to /dev/null ? If I'm right, I will need to write explicitly 4 different calls to process::execute(). And it gets worse if we want to have the same flexibility for the stdin and stdout ...
I don't think so. You could build a class as follows (I think, didn't test this) - it looks not as nice as it could be, but should do the job:

using stream = variant<
    decltype(process::null),
    decltype(process::close),
    filesystem::path,
    process::pipe,
    stream>; //< the last one would not work...

struct stream_vis : boost::static_visitor<child>
{
    std::string cmd;
    stream_vis(const std::string & cmd) : cmd(cmd) {}

    template<typename T, typename U, typename V>
    child operator()(const T & in, const U & out, const V & err) const
    {
        return execute(cmd, std_in < in, std_out > out, std_err > err);
    }
};

child my_execute(const std::string & cmd,
                 const stream & in, const stream & out, const stream & err)
{
    stream_vis sv(cmd);
    return apply_visitor(sv, in, out, err);
}

I think this would be the way to go, though there could be some helper class for that.
All in all, I'm not suggesting that you add support for boost::variant, but instead suggesting that boost.process could have one low-level generic way to spawn a process. As mentioned earlier, the Python subprocess library is great, its subprocess.Popen[1] constructor provides a way to do so.
I need the initializers to be different classes for the I/O. Or at least I'd strongly prefer it, because the async stuff stores a few more things which would just be annoying for other types.
BUT you can also use a functional style if you want to:
auto in_setting = process::std_in < null; auto c = execute("thingy", in_setting);
I'm not suggesting that you should remove the nice API, more that it could be optional.
you have a few settings which will be aggregated, like environment settings or args.
Yes, sorry, I didn't spot that for the env and args. However the argument remains for the streams initialization.
No problem, the documentation is not that detailed yet.
The problem with [2] and [3] is that I now have the initializer sequence as a template parameter of the executor. I did this so initializers can access the sequence, which was necessary for the async stuff (i.e. so they can access the io_service).
AFAIK, this was also the case in the 0.5 version
Nope; in 0.5, execute was not a template, only the operator(). But I have some cross-dependency in the sequence now, due to the async stuff.
This renders any virtualization impossible, because the handler functions now have to be templated.
Yes.
Also: though not yet implemented, I'd like to have a few compile-time checks, e.g. that you don't redirect a pipe twice etc. That would be completely impossible with an initializer sequence built at runtime.
I believe that those checks are incredibly cheap compared to a fork or a CreateProcess(), why not also do some runtime checks ?
It's not a time concern, but I like a function with invalid arguments to be checked at compile-time. That's the ordinary behaviour if you call a function with an invalid argument list. I.e. wrong at compile time -> error at compile time.
Now: since we have a finite number of initializers, it would be possible to implement some polymorphic sequence with boost.variant, but I still fail to see why that must be determined at runtime. Keep in mind: you would not determine the value of some initializer but which initializers you set. Do you have an example where it has to be done at runtime? That may help me understand the actual problem here. I really don't like to use runtime polymorphism when it can be done at compile-time.
I completely agree that compile-time checks are nice, but having a lower-level, not fully typesafe API cannot hurt.
That is actually given by the iostreams::file_descriptor - this thing just wraps around a stream-handle, so I think that would be the way to go. You can use them for everything, but you currently have to use file_descriptor_sink or file_descriptor_source.
An example where it is necessary to do the initialization at runtime could be a simple launcher that can exercise every combination of options that boost.process allow, like:
Usage: boost-process-launcher [OPTIONS] -- COMMAND [ARG]...
That you could use like that
$ echo test | boost-process-launcher --stdout=STDERR --stderr=./somefile.txt --E ENVVAR=1 --no-inherit-env -- grep test
Ok, this I would actually implement via variants.
Cheers,
[1] https://docs.python.org/3.6/library/subprocess.html#popen-constructor
participants (7)
-
Andrey Semashev
-
Hartmut Kaiser
-
Klaim - Joël Lamotte
-
Klemens Morgenstern
-
Lee Clagett
-
Niall Douglas
-
Raphaël Londeix