
On Tue, Jun 05, 2007 at 10:43:48PM +0200, Henrik Sundberg wrote:
> I'm new to svn. And I'm trying to build a good structure for a family of
> systems built on ~200 components with 1-20 subcomponents per component.
> Our current (pre-svn) method handles releases at the subcomponent level.
> Integration teams select the subcomponents to build a system from, build
> it, and release the resulting binaries/configurations.
>
> I'm trying to understand what the best svn structure would be. Boost and
> KDE seem to have the same problems as I do. I don't understand KDE well
> enough (I did read their tutorial) to know how it can be compiled without
> a clear (to me) structure of subreleases.
>
> I'm trying to understand, among other things, when to use externals. They
> seemed useful for making higher-order releases based on lower ones. But I
> got discouraged.
Thanks for your interest.... Since this (is/will become) part of a proposal for Boost, I guess this is on-topic. I can give you a tour/brain-dump of how we do it. Warning: this is an unedited rambling sprint through a lot of information.

The jargon we use is 'projects' and 'meta-projects'. Meta-projects are just collections of projects. I assume the mapping would be meta-project => component, project => subcomponent.

Our repository (recently outfitted with the slick new Trac browser) is here:

  http://code.icecube.wisc.edu/projects/icecube/browser

You'll see projects/ and meta-projects/. Under projects/ there are a couple hundred projects; many of them have fallen out of use, or are maintained by people who haven't yet submitted them for review (or never intend to). Each project directory (say, project icetray) looks like this:

  branches/
    somebranch/
    otherbranch/
  release/
    V01-00-00/
    V01-00-01/
  trunk/

And each project is organized in a certain fashion:

  http://code.icecube.wisc.edu/projects/icecube/browser/projects/icetray/trunk

The toplevel directory contains a makefile, some cruft, a public dir (headers), a src dir (sources and tests), and a resources dir (misc).

We have essentially three main distributions: offline-software (core stuff), icerec (core + some algorithms), and simulation (core + other algorithms):

      offline-software
         /        \
    icerec      simulation

One of the fundamental operations of our enterprise is to have the simulation group produce data that is used by the icerec group to test the performance of the components inside icerec.

Looking at a release of the offline-software meta-project:

  http://code.icecube.wisc.edu/projects/icecube/browser/meta-projects/offline-...

you can see that the metaproject contains a little bit of boilerplate, some cruft (that 'mutineeer' thing), and a list of externals, like ithon:

  http://code.icecube.wisc.edu/svn/projects/ithon/releases/B01-10-00

which means that when you check out the metaproject, directory ithon/ will be populated with what's on the other end of that URL.

On a regular basis, releases of offline-software come out (this metaproject ought to be called 'core' or something), and the simulation and reconstruction groups merge this released code into their metaprojects at their convenience. For instance, this simulation metaproject:

  http://code.icecube.wisc.edu/projects/icecube/browser/meta-projects/simulati...

contains the same externals as offline-software version V01-07-05, as well as the 'trunks' of a bunch of simulation-specific projects that may or may not depend on offline-software projects. Similarly for icerec:

  http://code.icecube.wisc.edu/projects/icecube/browser/meta-projects/icerec/t...

As simulation stabilizes, they tag up their various projects, and when all of their components are 'stable' URLs, they copy off a release:

  http://code.icecube.wisc.edu/projects/icecube/browser/meta-projects/simulati...

One nice thing about this is that it is easy to assemble other meta-projects. For instance, I put together a visualization tool that is dependent only on a small subset of offline-software (it only needs to be able to read data, then it makes pictures) and therefore only needs to be re-released when serialization methods change or there are new GUI features. To achieve this you simply copy off the offline-software metaproject and tweak the externals:

  http://code.icecube.wisc.edu/projects/icecube/browser/meta-projects/glshovel...

and people can check this out and build it without having to build the entire world.
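In svn terms, the externals mechanism above looks like this from a working copy. This is a sketch: the ithon line is the real URL quoted earlier, but the icetray line, the checkout target, and the listing are invented for illustration.

  # the externals property is a list of "subdir URL" pairs,
  # set on the metaproject's top-level directory
  $ svn propget svn:externals .
  ithon    http://code.icecube.wisc.edu/svn/projects/ithon/releases/B01-10-00
  icetray  http://code.icecube.wisc.edu/svn/projects/icetray/releases/V01-00-01

  # checking out the metaproject then pulls each external in turn,
  # so the listed subdirectories appear inside the working copy
  $ svn checkout <metaproject-url> offline-software
  $ ls offline-software
  icetray/  ithon/  ...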
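And the copy-off-and-tweak operation itself is only a couple of commands. Again a sketch; the bracketed URLs and the commit messages are placeholders, not real paths in our repository:

  # copy the whole metaproject server-side (cheap in svn)
  $ svn copy <offline-software-release-url> <new-metaproject-url> \
        -m "new metaproject: just the reader + gui bits"

  # check it out, then prune the externals list down to the
  # handful of projects the tool actually needs
  $ svn checkout <new-metaproject-url> glshovel
  $ cd glshovel
  $ svn propedit svn:externals .
  $ svn commit -m "trim externals"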
You'll notice that in the meta-projects directory:

  http://code.icecube.wisc.edu/projects/icecube/browser/meta-projects

there are lots of such compilations of code. I believe about two-thirds are in active use.

Another thing that happens very frequently in our world is that somebody has to do a specific analysis, which involves writing a lot of customized code and usually hacking everything to bits. Once you get started making graphs you do *not* want to have your code broken by somebody's release, and you do *not* want somebody to reject your changes pending code review. So they simply copy off a metaproject and make it their PhD. Here's one:

  http://code.icecube.wisc.edu/projects/icecube/browser/meta-projects/string-2...

You'll notice that:

* Meta-projects don't nest. When the simulation group merges in changes from a new release of offline-software, they have to change all of the URLs. This is dictated by the way svn:externals works. It seems less than ideal, but having the dependencies of each metaproject laid out flat can be an advantage.

* Branching multiple projects can be tedious. If you allow your individual projects to become coupled, you will need to branch many at once, and this you have to do by hand (svn copy, change external; svn copy, change external...). Nobody has taken the time to develop tools to automate this, but SVN has good python bindings; it would probably be just an afternoon's work (a rough sketch follows below).

Let me stop now. Is that clear? ;)

-t
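P.S. A rough sketch of that afternoon's work, done here with a plain shell loop over the svn command line rather than the python bindings; the project and branch names are invented:

  # branch each coupled project in one go
  $ REPO=http://code.icecube.wisc.edu/svn
  $ for p in icetray dataclasses ithon; do
      svn copy "$REPO/projects/$p/trunk" \
               "$REPO/projects/$p/branches/my-analysis" \
               -m "branch $p for my-analysis"
    done

  # then repoint the metaproject's externals at the new branches
  # (edit each line from .../releases/... to .../branches/my-analysis)
  $ svn propedit svn:externals .
  $ svn commit -m "metaproject now tracks my-analysis branches"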