
On Mon, Nov 5, 2012 at 3:46 PM, Paul Mensonides <pmenso57@comcast.net> wrote:
For an app it's easy to depend on another lib. But for a lib, depending on another lib that might not be easily available / installable can be problematic.
In some ways, Windows deployment is easier because you can distribute in-directory DLLs for many libraries that don't require their own installation programs, and largely avoid DLL hell. In many ways, the Linux model is better because it has better facilities for reuse, but dealing with C++ ABI issues and version availability issues can also be a nightmare. Granted, you can do the same thing as on Windows by using rpath if you really want to, but then you throw away the sharing of memory and, usually less importantly, of disk space (just as you do on Windows with in-directory DLLs).
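(As a rough sketch of the rpath approach mentioned above -- the library and program names are made up for illustration -- linking with an rpath of $ORIGIN makes the dynamic loader search the executable's own directory, which gives roughly the Windows-style in-directory layout:

    # link against a libfoo.so that will sit next to the executable (hypothetical names)
    g++ main.o -o myapp -L. -lfoo -Wl,-rpath,'$ORIGIN'

    # check that libfoo.so now resolves to the copy in the application's directory
    ldd ./myapp

The single quotes keep the shell from expanding $ORIGIN; the loader substitutes the executable's directory at run time.)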
Actually, I meant build-time deployment. Getting includes and libs installed.
They could be better, but I don't think calling them fundamentally broken is fair.
Sorry, I didn't mean the tools themselves. I'm referring to the single points of update and/or vetting of the content that those tools work with (at least, via official repositories). They are fundamentally broken because all updates are essentially serialized through a single point. That just doesn't scale despite herculean effort, and most Linux distros are way behind the most current releases of most software because of that. The pressure for throughput at that point far outweighs the available throughput--the outcome is inevitable. Currently, deploying on Linux via any of the package management systems is a nightmare unless you only need old compilers and only rely on old versions of other libraries. Besides the boilerplate distro differences in how one specifies a package, you run smack into version availability issues (related to which versions have so far gone through the single point) and ABI issues.
I guess my definition of broken is different than yours. Yes, the model can be improved (greatly), but calling it broken? -- Olaf