On 20/02/2021 22:15, Andrzej Krzemienski via Boost wrote:
> Now, I choose to use the DI library (this is where I have trouble
> understanding: why would I want to do that?). I get the following
> result:
>
> ```
> int main()
> {
>     State s = di::make_injector().create<State>(0, 0, 2, 2, 1, 1);
> }
> ```
>
> And now I am back in the situation where I am passing six ints and can
> easily confuse which int represents what.
>
> I am pretty sure I am unfairly mischaracterizing the library here. But
> this is what I get from the motivation and tutorial sections, and the
> explanations I have seen so far. You cannot see this behavior in the
> tutorial example that uses class `app`, because most of the types in
> there are default-constructed or constructed from values that were
> themselves (perhaps recursively) default-constructed. So it looks to me
> that in order to appreciate this library, I have to make most of my
> types default-constructible.
>
> What am I missing?
I think the other answers covered most of your points, so I'll add only
two of my own:
1. It is hard for C++ devs to see the wood for the trees, because C++
uses dependency injection all over the place: in its standard library,
in its standard idioms, and in commonplace practice. The classic example
is Allocators:

    std::vector<int, MyAllocator<int>>

std::vector delegates the responsibility of allocating, constructing,
destroying and deallocating arrays of int to the user supplied service
MyAllocator. This is literally Dependency Injection, and it is so
commonplace in C++ as a design pattern that we don't even call it that.
Purists from other languages will point out that std::vector knows the
*concrete type* of the delegated service. But beyond that it has no idea
what the implementation is, only that it promises the side effects
guaranteed by the Allocator Concept.
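To make this concrete, here is a minimal sketch of the concrete-type
form, assuming a hypothetical MyAllocator that simply forwards to
malloc/free (the name and implementation are purely illustrative):

```
#include <cstddef>
#include <cstdlib>
#include <new>
#include <vector>

// A hypothetical minimal allocator: std::vector delegates storage
// management to this user supplied service, but knows its concrete type.
template <class T>
struct MyAllocator
{
    using value_type = T;

    MyAllocator() = default;
    template <class U> MyAllocator(const MyAllocator<U> &) noexcept {}

    T *allocate(std::size_t n)
    {
        if(auto *p = static_cast<T *>(std::malloc(n * sizeof(T))))
            return p;
        throw std::bad_alloc{};
    }
    void deallocate(T *p, std::size_t) noexcept { std::free(p); }
};

template <class T, class U>
bool operator==(const MyAllocator<T> &, const MyAllocator<U> &) noexcept { return true; }
template <class T, class U>
bool operator!=(const MyAllocator<T> &, const MyAllocator<U> &) noexcept { return false; }

int main()
{
    std::vector<int, MyAllocator<int>> v;  // the dependency is injected here
    v.push_back(42);
}
```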
If you remove knowledge of the _concrete_ type and replace it with an
_abstract_ type, you get what in C++ we would call an interface class -
basically a bunch of pure virtual functions. This corresponds to
std::pmr::vector<int>, whereby the concrete implementation type for
allocating, constructing, destroying and deallocating arrays of int is
no longer known to the vector, only that there is an abstract API which
promises the side effects guaranteed by the Allocator Concept.
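As a sketch of the abstract form, using C++17's std::pmr (the
LoggingResource name and behaviour are my own invention, purely
illustrative):

```
#include <cstdio>
#include <memory_resource>
#include <vector>

// A hypothetical memory resource: a bunch of virtual functions overridden
// behind the abstract std::pmr::memory_resource API.
class LoggingResource : public std::pmr::memory_resource
{
    void *do_allocate(std::size_t bytes, std::size_t align) override
    {
        std::printf("allocate %zu bytes\n", bytes);
        return std::pmr::new_delete_resource()->allocate(bytes, align);
    }
    void do_deallocate(void *p, std::size_t bytes, std::size_t align) override
    {
        std::pmr::new_delete_resource()->deallocate(p, bytes, align);
    }
    bool do_is_equal(const std::pmr::memory_resource &other) const noexcept override
    {
        return this == &other;
    }
};

int main()
{
    LoggingResource mr;
    std::pmr::vector<int> v{&mr};  // the vector sees only the abstract API
    v.push_back(42);
}
```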
Now, imagine that your program has some memory problem, and it only ever
uses std::vector<int>. Thanks to Dependency Injection, you get two
degrees of freedom:

a) If you chose the concrete type MyAllocator, through a recompile you
can inject a mock MyAllocator for testing and debugging.

b) If you chose the abstract type MyPmrAllocator, you don't need to
recompile your code: you simply swap the MyPmrAllocator instance that
you construct at program start, and which is injected into all your
classes, for a mock MyPmrAllocator for testing and debugging.
Option a) is tractable in codebases < 100M lines of code. Option b)
becomes worth it in codebases > 100M lines of code. Note that as my
codebase grows, I can proactively take the decision to move from degree
of freedom a) to b) without breaking all my source code, i.e. I can
choose for my runtime to be slower in exchange for radically reduced
recompile times.
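As a short sketch of degree of freedom b) in action, reusing the
hypothetical LoggingResource above: the consuming function is compiled
once and never changes; only the instance injected at startup does.

```
#include <memory_resource>
#include <vector>

// Compiled once; knows nothing about which resource it will be given.
void run(std::pmr::memory_resource *mr)
{
    std::pmr::vector<int> v{mr};
    for(int i = 0; i < 1000; ++i)
        v.push_back(i);
}

int main()
{
    run(std::pmr::new_delete_resource());  // production
    // run(&my_logging_resource);          // testing/debug: run() is not recompiled
}
```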
2. I just gave a specific example of the value of the two typical forms
of Dependency Injection in C++, and I'm going to assume it's
uncontroversial (I actually think it's an exemplar of all that's wrong
with Allocators, but that is off topic here).
Something peculiar about how we typically do Dependency Injection in C++
is that it's always *specific* and not *generalised*. If we have a
problem, e.g. delegation of memory allocation, we design a _specific_
dependency-injected solution. What we don't do in C++ is design a
_generalised_ dependency injection solution which is universal (unlike,
say, in Java).
The advantage of a universal DI which exists everywhere is very much
like the reason that choosing Outcome is better than rolling your own
result<T> type. Yes, anybody can roll their own result<T> type; indeed,
most people probably do. But when library A has resultA<T>, library B
has resultB<T>, and library C has resultC<T>, how is a codebase
dependent on all three libraries supposed to interoperate between them
easily?
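To sketch the problem with hypothetical resultA/resultB types (names and
layouts invented purely for illustration): every boundary between the
libraries needs hand-written glue like this, and with three libraries
there are six such conversion directions to maintain.

```
#include <string>

namespace libA { template <class T> struct resultA { T value; int error_code; bool ok; }; }
namespace libB { template <class T> struct resultB { T value; std::string error_msg; bool ok; }; }

// Hand-written glue for just one of the six conversion directions. Note
// how the numeric error code is flattened into text: information fidelity
// is lost in translation.
template <class T>
libB::resultB<T> to_b(const libA::resultA<T> &r)
{
    return {r.value, r.ok ? std::string{} : "error " + std::to_string(r.error_code), r.ok};
}
```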
Most of Outcome's complexity stems from being friendly to third party
resultX<T> types. I myself have deployed Outcome in foreign codebases,
each using its own Result type, and Outcome can (usually) capture all of
those seamlessly without loss of original information fidelity. Thus
Outcome becomes "the one ring to rule them all", which is its exact
value proposition, and I would suppose it is why Outcome was accepted
into Boost.
What I would like to see of any Boost.DependencyInjection is the exact
same "one ring to rule them all" property, in that it should be easy to
integrate *any* bespoke third party Dependency Injection mechanism or
framework into Boost.DependencyInjection, such that one can *seamlessly*
compose library A, library B and library C in a single application, and
it all "just works".
I'll be frank in saying that I don't believe the current proposed
Boost.DI does this. Unless I and most other people here can be convinced
otherwise, my personal current expectation is that the proposed Boost.DI
will be rejected, but hopefully with ample feedback on what to do for a
Boost.DI v2, assuming Kris has the stamina and will.
In my personal opinion, if Boost.DI _can_ do things like seamlessly
compose arbitrary Allocators, both concrete and abstract, and arbitrary
other bespoke Dependency Injection designs from across the C++ standard
library and the Boost libraries, then its tutorial ought to describe
that integration, just as the end of the Outcome tutorial shows three
separate library dependencies, each with its own independent error
handling strategy, being seamlessly integrated into one application with
Outcome doing all the donkey work between those dependencies.
I think that if the tutorial demonstrated that seamless composition in
action, it would be compelling.
Niall