
At Fri, 4 Jun 2010 13:46:53 -0600, Bartlett, Roscoe A wrote:
I am fine with the language allowing for undefined behavior. Clearly, if you want the highest performance, you have to turn off array-bounds checking, for instance, which allows for undefined behavior. What I am not okay with is people writing programs that expose that undefined behavior, especially w.r.t. usage of memory. Every computational scientist has had the experience of writing code that appeared to work just fine on their main development platform, but when they took it over to another machine for a "production" run on a large (expensive) MPP on 1000 processors, it segfaulted after having run for 45 minutes and lost everything. This happens all the time with current CSE software written in C, C++, and even Fortran. This is bad on many levels.
Absolutely. People need better tools for detecting memory usage errors. I would rather rely on tools like Purify than build this sort of thing into a library dependency, but others' mileage may vary.
Are you suggesting that people should write programs that rely on undefined behavior
Of course not!
or are you just saying that you don't think it should be eliminated from the language?
Neither from the language nor from libraries (in the same sense that the language has undefined behavior, i.e. by making certain usages illegal).
Also, as described in Section 5.9.2, separating weak_ptr from shared_ptr is not ideal from a design perspective since it reduces the generality and reusability of software (see the arguments for this in the document).
I see the arguments. They boil down to, “there are complex cases where weakness needs to be determined at runtime rather than compile time.” IMO any design that uses reference counting but can't statically establish a hierarchy of ownership is broken. Even if the program works today, it will, eventually, acquire a cycle that consists entirely of non-weak pointers, and leak. Such designs *shouldn't* be convenient to write, so it's a problem with Teuchos that it encourages them. The fact that shared_ptr and weak_ptr are different types is a feature, not a bug. Furthermore, if you really-really-really-really need to do that with Boost, it's easy enough to build a type that acts as you wish (e.g. boost::variant<shared_ptr<T>, weak_ptr<T> >), as in the sketch below.
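A minimal sketch of such a type, using C++03-era Boost; the name runtime_ptr and its interface are illustrative, not anything Boost ships:

    #include <boost/variant.hpp>
    #include <boost/shared_ptr.hpp>
    #include <boost/weak_ptr.hpp>

    // Illustrative only: a handle whose strength is chosen at runtime.
    template <class T>
    class runtime_ptr
    {
        boost::variant<boost::shared_ptr<T>, boost::weak_ptr<T> > p_;
    public:
        runtime_ptr(boost::shared_ptr<T> const& p, bool weak)
        {
            if (weak) p_ = boost::weak_ptr<T>(p);
            else      p_ = p;
        }

        // Yields a strong pointer; empty if a weak target has expired.
        boost::shared_ptr<T> lock() const
        {
            if (boost::shared_ptr<T> const* s =
                    boost::get<boost::shared_ptr<T> >(&p_))
                return *s;
            return boost::get<boost::weak_ptr<T> >(p_).lock();
        }
    };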
[Bartlett, Roscoe A]
I have concrete use cases where it is not obvious whether an RCP should be strong or weak w.r.t. a single class.
I don't know what “with respect to” means in this context. But the fact that you have come up with an example or two doesn't make this a good idea.
One use case is where a Thyra::VectorBase object points to its Thyra::VectorSpaceBase object, but where the scalar product of the Thyra::VectorSpaceBase object can be defined to be a diagonal positive-definite matrix whose diagonal is another Thyra::VectorBase object. This creates a circular dependency between the diagonal VectorBase and the VectorSpaceBase objects that can only be resolved by making the RCP pointing to the VSB object weak.
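The shape of that cycle, sketched with assumed member names (the actual Thyra templates are more involved):

    #include <Teuchos_RCP.hpp>

    template <class Scalar> class VectorBase;

    template <class Scalar>
    class VectorSpaceBase
    {
        // The diagonal of the scalar-product matrix is itself a vector,
        // so the space holds a strong RCP to a VectorBase ...
        Teuchos::RCP<const VectorBase<Scalar> > scalarProdDiag_;
    };

    template <class Scalar>
    class VectorBase
    {
        // ... while every vector holds an RCP back to its space. If both
        // links are strong, the diagonal vector and its space form a
        // cycle; marking this one weak at runtime breaks it.
        Teuchos::RCP<const VectorSpaceBase<Scalar> > space_;
    };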
That's not clear enough to guess at the structure you're describing.
This is an unusual use case
And thus doesn't need to be convenient.
that the developers of the VectorBase and VectorSpaceBase subclasses should not have to worry about, or change their design to handle. There are other examples too that I can present from other Trilinos packages.
I realize that it must seem arrogant of me to say so without seeing the actual code, but IMO all these examples are very likely indicative of a design flaw.
This point is debatable, but in having to choose between shared_ptr and weak_ptr you are injecting notions of memory management into the design of a class, notions that are in many cases orthogonal to the purpose of the class.
Ownership, lifetime, whole/part relationships, and invariants are all very important elements of a class design, and the choice of shared-vs-weak is a reflection of those properties.
This technically violates the Single Responsibility Principle (SRP).
I guess I just disagree about that.
By using RCP consistently, you don't violate SRP or the Open-Closed Principle (OCP). External collaborators make the final decision about strong or weak ownership.
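Concretely, the pattern looks something like this sketch (the classes are made up; create_weak() is, if I recall Teuchos' interface correctly, how a weak RCP is made from a strong one, so treat the exact spelling as an assumption):

    #include <Teuchos_RCP.hpp>
    using Teuchos::RCP;
    using Teuchos::rcp;

    struct Observer { /* ... */ };

    class Subject
    {
        RCP<Observer> obs_; // no strong-vs-weak decision baked in here
    public:
        void setObserver(RCP<Observer> const& o) { obs_ = o; }
    };

    int main()
    {
        RCP<Observer> o = rcp(new Observer);
        Subject s1, s2;
        s1.setObserver(o);               // caller chooses strong ownership
        s2.setObserver(o.create_weak()); // caller chooses a weak reference
        return 0;
    }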
We can agree to disagree, but I can show other examples where the ability of an RCP to be weak or strong (decided at runtime) solved a circular-reference problem without damaging the cohesion of the classes involved, or even requiring a single line of code to be changed in those classes.
Again, that doesn't make it a good idea. If you make all your data members public, you can solve all sorts of problems by monkeypatching without changing a single line of the enclosing classes. The fact that shared_ptr/weak_ptr are different types makes it possible to statically ensure that you don't have cycles, just like private members make it possible to statically ensure that class invariants are not violated by clients.
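The static guarantee in question fits in a few lines (my illustration; Node is hypothetical):

    #include <boost/shared_ptr.hpp>
    #include <boost/weak_ptr.hpp>
    #include <vector>

    struct Node
    {
        // Ownership flows one way by construction: a strong cycle cannot
        // be written without changing a declaration, which the type
        // system (and any reviewer) will see.
        std::vector<boost::shared_ptr<Node> > children; // owning, downward
        boost::weak_ptr<Node> parent;                   // non-owning, upward
    };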
Are you instead arguing for a static and stack-based approach to programming?
No, I'm arguing for a value-based approach. If you're strict about it, you even turn references into values using type erasure… although doing so is unfortunately often so tedious that I wouldn't insist on it.
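For illustration, a bare-bones type-erased value wrapper (the names are mine; written with C++11's unique_ptr for brevity):

    #include <iostream>
    #include <memory>
    #include <string>

    // A value type that erases any streamable T: copying it copies the
    // underlying value, so clients hold values, not references.
    class printable
    {
        struct concept_t
        {
            virtual ~concept_t() {}
            virtual concept_t* clone() const = 0;
            virtual void print(std::ostream&) const = 0;
        };
        template <class T>
        struct model_t : concept_t
        {
            T v;
            explicit model_t(T const& x) : v(x) {}
            concept_t* clone() const { return new model_t(v); }
            void print(std::ostream& os) const { os << v; }
        };
        std::unique_ptr<concept_t> self_;
    public:
        template <class T>
        printable(T const& x) : self_(new model_t<T>(x)) {}
        printable(printable const& o) : self_(o.self_->clone()) {}
        printable& operator=(printable const& o)
        { self_.reset(o.self_->clone()); return *this; }
        friend std::ostream& operator<<(std::ostream& os, printable const& p)
        { p.self_->print(os); return os; }
    };

    int main()
    {
        printable p(42);
        p = printable(std::string("now a string")); // same value type
        std::cout << p << "\n";
        return 0;
    }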
Value types, by definition, involve deep copy, which is a problem with large objects. If a type uses shallow-copy semantics to avoid the deep copy, then it is really no different from using shared_ptr or RCP.
The whole notion of deep-vs-shallow copy is a fallacy. If you nail down what it means to make a “copy”---which will force you to nail down what constitutes an object's value---you'll see that.

--
Dave Abrahams
BoostPro Computing
http://www.boostpro.com