Fw: Interlibrary version checking

"Robert Ramey" <ramey@rrsd.com> wrote in message news:<hsdfuq$prj$1@dough.gmane.org>...
Turns out that the idea of a single Boost library deployment has come up a couple of times here at BoostCon 2010. While preparing my presentation, I considered the problem of what would happen if one installed a library which depended on a newer version of a different library than the user already had installed. I actually woke up in the middle of the night imagining someone would ask about that during my talk. I puzzled about it for some time, and the day before I found a way to address it - in my own mind at least. It also turns out that there are tools that can manage this.
It's not that I don't trust tools, but I wear a belt AND suspenders. So I would like a method which guarantees that when I build one library, I don't accidentally include code from a prerequisite library which is a version so old that things won't work. I would also like to know that I'm not accidentally running with a DLL built with an old version.
Basically, each library includes a file "manifest.hpp". This gets included whenever any header is included (at most once, due to include guards). It contains a static assert that checks that the prerequisite libraries are at least as recent as the version that a dependent library (or user application) requires. There is also a manifest.cpp file included in the library which checks any prerequisite DLLs. This would be an exceedingly small overhead to avoid what could be an incredibly large headache.
I have only verified that the code compiles, so at this point it's just an idea. But it seems that some kind of check like this is unavoidable if one wants to deploy libraries as needed rather than as a monolithic distribution.
FYI - no one asked me about this problem at my presentation. Add one more confirmation of Murphy's law.
Robert Ramey

At Mon, 18 Oct 2010 09:05:15 -0800, Robert Ramey wrote:
"Robert Ramey" <ramey@rrsd.com> wrote in message news:<hsdfuq$prj$1@dough.gmane.org>...
Turns out that the idea of a single Boost library deployment has come up a couple of times here at BoostCon 2010. While preparing my presentation, I considered the problem of what would happen if one installed a library which depended on a newer version of a different library than the user already had installed. I actually woke up in the middle of the night imagining someone would ask about that during my talk. I puzzled about it for some time, and the day before I found a way to address it - in my own mind at least. It also turns out that there are tools that can manage this.
It's not that I don't trust tools, but I wear a belt AND suspenders. So I would like a method which guarantees that when I build one library, I don't accidentally include code from a prerequisite library which is a version so old that things won't work. I would also like to know that I'm not accidentally running with a DLL built with an old version.
Basically, each library includes a file "manifest.hpp". This gets included whenever any header is included (at most once, due to include guards). It contains a static assert that checks that the prerequisite libraries are at least as recent as the version that a dependent library (or user application) requires. There is also a manifest.cpp file included in the library which checks any prerequisite DLLs. This would be an exceedingly small overhead to avoid what could be an incredibly large headache. I have only verified that the code compiles, so at this point it's just an idea.
But it seems that some kind of check like this is unavoidable if one wants to deploy libraries as needed rather than as a monolithic distribution.
Seems like you need something like this somewhere. Whether belt and suspenders are needed or not is open to debate, I suppose. If you think your source code will be used outside the environment of any given tool, it's probably a good idea to have these internal checks.
One issue, of course, is that some library dependencies aren't Boost libraries, and they have (or don't) their own way of indicating their version.
Your scheme seems a lot more complicated than industry standard practice, which is to use one or two long integer constants as macros (cf. __GCC_VERSION__ and friends). That's also useful for #ifdefing, whereas mpl::int_<>s are not.
#ifndef BOOST_ITERATOR_MANIFEST_HPP
#define BOOST_ITERATOR_MANIFEST_HPP

// MS compatible compilers support #pragma once
#if defined(_MSC_VER) && (_MSC_VER >= 1020)
# pragma once
#endif

/////////1/////////2/////////3/////////4/////////5/////////6/////////7/////////8
// manifest.hpp: Verify that all dependent libraries are at the correct
// versions

// (C) Copyright 2010 Robert Ramey - http://www.rrsd.com .
// Use, modification and distribution is subject to the Boost Software
// License, Version 1.0. (See accompanying file LICENSE_1_0.txt or copy at
// http://www.boost.org/LICENSE_1_0.txt)

#include <boost/mpl/int.hpp>

namespace boost {
namespace iterator {

class manifest {
public:
    // specify the current version numbers for this library
    typedef boost::mpl::int_<2> interface_version;
    typedef boost::mpl::int_<3> implementation_version;
};

} // iterator
} // boost

#endif // BOOST_ITERATOR_MANIFEST_HPP
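For illustration only, a minimal sketch of how a dependent library's own manifest header might consume the class above at compile time; the dependent library name (serialization) and the required version number are hypothetical, not taken from the proposal:

// Sketch (hypothetical): compile-time check in a library that depends on boost.iterator.
#include <boost/static_assert.hpp>
#include <boost/iterator/manifest.hpp>

namespace boost { namespace serialization {

// this library was written against iterator interface version 2 or later
BOOST_STATIC_ASSERT(boost::iterator::manifest::interface_version::value >= 2);

} } // namespace boost::serialization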
-- Dave Abrahams BoostPro Computing http://www.boostpro.com

David Abrahams wrote:
At Mon, 18 Oct 2010 09:05:15 -0800, Robert Ramey wrote:
Seems like you need something like this somewhere. Whether belt and suspenders are needed or not is open to debate, I suppose. If you think your source code will be used outside the environment of any given tool, it's probably a good idea to have these internal checks.
One issue, of course, is that some library dependencies aren't Boost libraries, and they have (or don't) their own way of indicating their version.
Of course, and those will always have to be addressed in an ad hoc manner.
Your scheme seems a lot more complicated than industry standard practice, which is to use one or two long integer constants as macros (cf. __GCC_VERSION__ and friends).
Since I wrote that, I have had occasion to investigate the versioning scheme suggested for Linux shared libraries. It seems to me that this proposal is remarkably similar to that used for these libraries: see http://tldp.org/HOWTO/Program-Library-HOWTO/shared-libraries.html among many others. My proposal is only an idea and I'm not prepared to mount a serious defense of it. But it seems that something along these lines is going to be necessary.
That's also useful for #ifdefing, whereas mpl::int_<>s are not.
Well, if you want to enhance my proposal to add constants and have the mpl_<... or static assert use those constants, that would be fine with me.
Robert Ramey

At Mon, 18 Oct 2010 10:29:14 -0800, Robert Ramey wrote:
Your scheme seems a lot more complicated than industry standard practice, which is to use one or two long integer constants as macros (cf. __GCC_VERSION__ and friends).
Since I wrote that, I have had occasion to investigate the versioning scheme suggested for Linux shared libraries. It seems to me that this proposal is remarkably similar to that used for these libraries:
see http://tldp.org/HOWTO/Program-Library-HOWTO/shared-libraries.html
Really? What parts do you see as similar? It jumps out at me that your proposal is full of source code, and that page doesn't seem to have any recommendations for source code (unless I overlooked them).
among many others. My proposal is only an idea and I'm not prepared to mount a serious defense of it. But it seems that something along these lines is going to be necessary.
Yes, something along these lines. I'm just wondering if you are reinventing tank treads when we already have a perfectly good wheel.
That's also useful for #ifdefing, whereas mpl::int_<>s are not.
Well, if you want to enhance my proposal to add constants and have the mpl_<... or static assert use those constants, that would be fine with me.
Why do you want the mpl_<...> thing in the first place? -- Dave Abrahams BoostPro Computing http://www.boostpro.com

David Abrahams wrote:
At Mon, 18 Oct 2010 10:29:14 -0800, Robert Ramey wrote:
Your scheme seems a lot more complicated than industry standard practice, which is to use one or two long integer constants as macros (cf. __GCC_VERSION__ and friends).
Since I wrote that, I have had occasion to investigate the versioning scheme suggested for Linux shared libraries. It seems to me that this proposal is remarkably similar to that used for these libraries:
see http://tldp.org/HOWTO/Program-Library-HOWTO/shared-libraries.html
Really? What parts do you see as similar? It jumps out at me that your proposal is full of source code, and that page doesn't seem to have any recommendations for source code (unless I overlooked them).
I was referring to the scheme itself - I don't remember where I saw it, but maybe I cited the wrong page. ....so files have major and minor releases. I think that the major one refers to an API change while the minor refers to an implementation change and/or enhancement. I don't remember where I saw it, but I believe it was something like this: http://www.gnu.org/software/libtool/manual/libtool.html#Updating-version-inf... where there are several levels of "versions" so that I can specify what level of dependency my code has on other packages.
among many others. My proposal is only an idea and I'm not prepared to mount a serious defense of it. But it seems that something along these lines is going to be necessary.
Yes, something along these lines. I'm just wondering if you are reinventing tank treads when we already have a perfectly good wheel.
I hope so, but I'm not seeing it. The ....so.m.n.o scheme is fine as far as it goes - but it doesn't say anything about how it is to be enforced at compile/link/runtime. As far as I can tell, dependencies of header-only libraries aren't even considered.
That's also useful for #ifdefing, whereas mpl::int_<>s are not.
Well, if you want to enhance my proposal to add constants and have the mpl_<... or static assert use those constants, that would be fine with me.
Why do you want the mpl_<...> thing in the first place?
So that, immediately when attempting to compile, things would crap out and not depend on this or that instantiation. I wouldn't have to scatter #ifdef/... all over the place. As soon as I include a header that's too old, it fails in exactly the right place. If the code isn't recompiled, the construction of the static runtime variable should force the system to fail immediately as well, rather than just running until it fails. Again - all without having to scatter if/else ... all over the place.
Robert Ramey
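As an aside, here is a minimal sketch of the kind of runtime check described in the last paragraph: a static object in a hypothetical manifest.cpp whose constructor fails as soon as the program starts if the prerequisite DLL is too old. The exported version function and all names are assumptions made for illustration; the thread does not show manifest.cpp:

// Sketch (hypothetical): runtime version check in a dependent library's manifest.cpp.
// The prerequisite DLL exports its version; a static object compares it at load
// time with the version this library's headers expect, so a mismatch fails
// immediately rather than at first use.
#include <cstdlib>
#include <iostream>

// assumed to be exported by the prerequisite DLL
extern "C" int boost_iterator_interface_version();

namespace {

struct version_check {
    version_check() {
        // this library requires interface version 2 or later
        if (boost_iterator_interface_version() < 2) {
            std::cerr << "boost.iterator DLL is too old for this library\n";
            std::abort();
        }
    }
};

// constructed during static initialization, before main()
const version_check checker;

} // anonymous namespace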

At Mon, 18 Oct 2010 13:45:08 -0800, Robert Ramey wrote:
David Abrahams wrote:
At Mon, 18 Oct 2010 10:29:14 -0800, Robert Ramey wrote:
Your scheme seems a lot more complicated than industry standard practice, which is to use one or two long integer constants as macros (cf. __GCC_VERSION__ and friends).
Since I wrote that, I have had occasion to investigate the versioning scheme suggested for Linux shared libraries. It seems to me that this proposal is remarkably similar to that used for these libraries:
see http://tldp.org/HOWTO/Program-Library-HOWTO/shared-libraries.html
Really? What parts do you see as similar? It jumps out at me that your proposal is full of source code, and that page doesn't seem to have any recommendations for source code (unless I overlooked them).
I was referring to the scheme itself - I don't remember where I saw it, but maybe I cited the wrong page. ....so files have major and minor releases. I think that the major one refers to an API change while the minor refers to an implementation change and/or enhancement.
Yes, that's "industry standard" practice.
I don't remember where I saw it, but I believe it was something like this http://www.gnu.org/software/libtool/manual/libtool.html#Updating-version-inf... where there are several levels of "versions" so that I can specify what level of dependency my code has on other packages.
Sure, I thought everybody was aware of that sort of scheme, and what you were mostly proposing had to do with how it was expressed in code.
among many others. My proposal is only an idea and I'm not prepared to mount a serious defense of it. But it seems that something along these lines is going to be necessary.
Yes, something along these lines. I'm just wondering if you are reinventing tank treads when we already have a perfectly good wheel.
I hope so, but I'm not seeing it. The ....so.m.n.o scheme is fine as far as it goes - but it doesn't say anything about how it is to be enforced at compile/link/runtime. As far as I can tell, dependencies of header-only libraries aren't even considered.
So now we're back to expressing it in code.
That's also useful for #ifdefing, whereas mpl::int_<>s are not.
Well, if you want to enhance my proposal to add constants and have the mpl_<... or static assert use those constants, that would be fine with me.
Why do you want the mpl_<...> thing in the first place?
So that, immediately when attempting to compile, things would crap out and not depend on this or that instantiation.
#defines do that nicely too.
I wouldn't have to scatter #ifdef/... all over the place.
I don't know what you're talking about. Wouldn't you just do something like
STATIC_ASSERT(LIBFOO_VERSION >= 4001003); // libfoo must be 4.1.3 or later
? I don't see any #ifdefs here.
As soon as I include a header that's too old it fails in exactly the right place.
Doesn't the above accomplish it?
If the code isn't recompiled, the construction of the static runtime variable should force the system to fail immediately as well rather than just running until it fails. Again - all without having to scatter if/else ... all over the place.
I think maybe you're being a bit dramatic about this :-) The advantage I was citing for a preprocessor #define is that you *can* use #ifdefs, if and where you need them. It doesn't mean you have to. They do everything your mpl typedefs do, and more. -- Dave Abrahams BoostPro Computing http://www.boostpro.com
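For concreteness, the integer-constant convention being referred to might look roughly like this; LIBFOO_VERSION and its encoding are illustrative, not any particular library's macro:

// Sketch (illustrative names): a dependency publishes its version as one integer,
// here encoded as major * 1000000 + minor * 1000 + patch, so 4.1.3 becomes 4001003.
#include <boost/static_assert.hpp>

// in practice this would come from libfoo's own version header
#define LIBFOO_VERSION 4001003

// fails at compile time, in the including translation unit, if libfoo is too old
BOOST_STATIC_ASSERT(LIBFOO_VERSION >= 4001003); // libfoo must be 4.1.3 or later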

David Abrahams wrote:
Robert Ramey wrote:
David Abrahams wrote:
Robert Ramey wrote:
Your scheme seems a lot more complicated than industry standard practice, which is to use one or two long integer constants as macros (cf. __GCC_VERSION__ and friends).
[snip] I was referring to the scheme itself - I don't remember where I saw it, but maybe I cited the wrong page. ....so files have major and minor releases. I think that the major one refers to an API change while the minor refers to an implementation change and/or enhancement.
Yes, that's "industry standard" practice.
Exactly
That's also useful for #ifdefing, whereas mpl::int_<>s are not.
Well, if you want to enhance my proposal to add constants and have the mpl_<... or static assert use those constants, that would be fine with me.
Why do you want the mpl_<...> thing in the first place?
That was my question, too. It would even mean that MPL couldn't be versioned like everything else because it would be foundational to all other versioned libraries.
So that, immediately when attempting to compile, things would crap out and not depend on this or that instantiation.
#defines do that nicely too.
+1
I wouldn't have to scatther #ifdef/... all over the place.
I don't know what you're talking about. Wouldn't you just do something like
STATIC_ASSERT(LIBFOO_VERSION >= 4001003);
? I don't see any #ifdefs here.
+1
It can be useful not only to define a standardized way of encoding the values, but also to provide macros to extract the components:
STATIC_ASSERT(BOOST_MAJOR_VERSION(LIBFOO_VERSION) == 4);
STATIC_ASSERT(BOOST_MINOR_VERSION(LIBFOO_VERSION) >= 1);
versus:
STATIC_ASSERT(LIBFOO_VERSION >= 4001000 && LIBFOO_VERSION < 5000000);
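One way such extraction macros could be defined, assuming the major * 1000000 + minor * 1000 + patch encoding used earlier in the thread; the macro names here are illustrative only:

// Sketch (illustrative): component extraction from an integer version,
// where 4001003 encodes 4.1.3.
#include <boost/static_assert.hpp>

#define BOOST_MAJOR_VERSION(v) ((v) / 1000000)
#define BOOST_MINOR_VERSION(v) (((v) / 1000) % 1000)
#define BOOST_PATCH_VERSION(v) ((v) % 1000)

#define LIBFOO_VERSION 4001003 // hypothetical dependency version

BOOST_STATIC_ASSERT(BOOST_MAJOR_VERSION(LIBFOO_VERSION) == 4);
BOOST_STATIC_ASSERT(BOOST_MINOR_VERSION(LIBFOO_VERSION) >= 1);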
The advantage I was citing for a preprocessor #define is that you *can* use #ifdefs, if and where you need them. It doesn't mean you have to. They do everything your mpl typedefs do, and more.
+1
_____
Rob Stewart                           robert.stewart@sig.com
Software Engineer, Core Software      using std::disclaimer;
Susquehanna International Group, LLP  http://www.sig.com

Robert Ramey wrote:
among many others. My proposal is only an idea and I'm not prepared to mount a serious defense of it. But it seems that something along these lines is going to be necessary.
Yes, something along these lines. I'm just wondering if you are reinventing tank treads when we already have a perfectly good wheel.
I hope so, but I'm not seeing it. The ....so.m.n.o scheme is fine as far as it goes - but it doesn't say anything about how it is to be enforced at compile/link/runtime.
It's enforced, on Unix, by linking against the properly-named .so, and then, at runtime, the linker will naturally complain unless you have a library of a compatible version around. At present, each version of Boost is incompatible with every other version of itself, and the .so names represent this very well. I don't have experience with doing something similar on Windows, but I'm pretty sure that the manifests can be employed for the same effect -- and that's the right way.
- Volodya

Hi Robert,
I don't know if your proposal is the correct way to manage library dependencies at compile time and run time, but what I'm sure of is that we need to have the versions of the dependent libraries as preprocessor symbols, as we may need to make some adaptations depending on which version we compile against.
----- Original Message -----
From: "Robert Ramey" <ramey@rrsd.com>
To: <boost@lists.boost.org>
Sent: Monday, October 18, 2010 7:05 PM
Subject: [boost] Fw: Interlibrary version checking
"Robert Ramey" <ramey@rrsd.com> wrote in message news:<hsdfuq$prj$1@dough.gmane.org>...
It's not that I don't trust tools, but I wear a belt AND suspenders. So I would like a method which guarantees that when I build one library, I don't accidentally include code from a prerequisite library which is a version so old that things won't work. I would also like to know that I'm not accidentally running with a DLL built with an old version.
Usually this is managed by the packager of the platform, isn't it?
Basically, each library includes a file "manifest.hpp". This gets included whenever any header is included (at most once, due to include guards). It contains a static assert that checks that the prerequisite libraries are at least as recent as the version that a dependent library (or user application) requires. There is also a manifest.cpp file included in the library which checks any prerequisite DLLs. This would be an exceedingly small overhead to avoid what could be an incredibly large headache.
But it seems that some kind of check like this is unavoidable if one wants to deploy libraries as needed rather than as a monolithic distribution.
Robert Ramey
I see your proposal as an intrusive workaround for users that don't use the packager of the platform. The thing I don't like is that it forces the dependent libraries to use a homogeneous framework, which will usually be quite difficult to set up outside Boost. This could be done if the proposal is restricted to checking only the coherence between a Boost library and its dependents once we start to release the Boost libraries independently, but I would prefer that we try to see what we can do using the packager of the platform in the meantime.
Best,
Vicente
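To illustrate the kind of adaptation Vicente mentions, a preprocessor version symbol lets dependent code compile against either release of a prerequisite; all names below are hypothetical:

// Sketch (hypothetical names): adapting to whichever version of a dependency is installed.
#include <libfoo/version.hpp> // assumed to define LIBFOO_VERSION

#if LIBFOO_VERSION >= 4001000
    // 4.1.0 and later provide the new interface
    #define MYLIB_HAVE_NEW_FOO_API 1
#else
    // older releases only provide the legacy interface
    #define MYLIB_HAVE_NEW_FOO_API 0
#endif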
participants (5)
- David Abrahams
- Robert Ramey
- Stewart, Robert
- vicente.botet
- Vladimir Prus