Six-month cycles are great. Now let’s talk about meta-cycles: broader release cycles for major work. I’m very interested in a cross-community conversation about this, so will sketch out some ideas and then encourage people from as many different free software communities as possible to comment here. I’ll summarise those comments in a follow-up post, which will no doubt be a lot wiser and more insightful than this one :-)

Background: building on the best practice of cadence

The practice of regular releases, and now time-based releases, is becoming widespread within the free software community. From the kernel to GNOME and KDE to X, and in distributions like Ubuntu and Fedora, the idea of a regular, predictable cycle is now better understood and widely embraced. Many folks smarter than me have articulated the benefits of such a cadence: energising the whole community, REALLY releasing early and often, shaking out good and bad code, rapid course correction.

There has been some experimentation with different cycles. I’m involved in projects that have 1 month, 3 month and 6 month cycles, for different reasons. They all work well.

…but addressing the needs of the longer term

But there are also weaknesses to the six-month cycle:

  • It’s hard to communicate to your users that you have made some definitive, significant change.
  • It’s hard to know what to support for how long; you obviously can’t support every release indefinitely.

I think there is growing insight into this, on both sides of the original “cadence” debate.

A tale of two philosophies, perhaps with a unifying theory

A few years back, at aKademy in Glasgow, I was in the middle of a great discussion about six month cycles. I was a passionate advocate of the six month cycle, and interested in the arguments against it. The strongest one was the challenge of making “big bold moves”.

“You just can’t do some things in six months” was the common refrain. “You need to be able to take a longer view, and you need a plan for the big change.” There was a lot of criticism of GNOME for having “stagnated” due to the inability to make tough choices inside a six month cycle (and with perpetual backward compatibility guarantees). Such discussions often become ideological, with folks on one side saying “you can evolve anything incrementally” and others saying “you need to make a clean break”.

At the time of course, KDE was gearing up for KDE 4.0, a significant and bold move indeed. And GNOME was quite happily making its regular releases. When the KDE release arrived, it was beautiful, but it had real issues. Somewhat predictably, the regular-release crowd said “see, we told you, BIG releases don’t work”. But since then KDE has knuckled down with regular, well managed, incremental improvements, and KDE is looking fantastic. Suddenly, the big bold move comes into focus, and the benefits become clear. Well done KDE :-)

On the other side of the fence, GNOME is now more aware of the limitations of indefinite regular releases. I’m very excited by the zest and spirit with which the “user experience MATTERS” campaign is being taken up in GNOME; there’s a real desire to deliver breakthrough changes. This kicked off at the excellent GNOME usability summit last year, which I enjoyed and in which quite a few of the Canonical usability and design folks participated, and the fruits of that are shaping up in things like the new Activities shell.

But it’s become clear that a change like this represents a definitive break with the past, and might take more than a single six month release to achieve. And, most important of all, it is an opportunity to make other, significant, distinctive changes. A break with the past. A big bold move. And so there’s been a series of conversations about how to “do a 3.0” – in effect, how to break with the tradition of incremental change in order to make this vision possible.

It strikes me that both projects are converging on a common set of ideas:

  • Rapid, predictable releases are super for keeping energy high and code evolving cleanly and efficiently; they keep people out of a deathmarch scenario, they tighten things up, and they allow for a shakeout of good and bad ideas in a coordinated, managed fashion.
  • Big releases are energising too. They are motivational, they make people feel like it’s possible to change anything, they release a lot of creative energy and generate a lot of healthy discussion. But they can be a bit messy, things can break on the way, and that’s a healthy thing.

Anecdotally, there are other interesting stories that feed into this.

Recently, the Python community decided that Python 3.0 will have a shorter cycle than the usual Python release. The 3.0 release is serving to shake out the ideas and code for 3.x, but it won’t be heavily adopted itself, so it doesn’t really make sense to put a lot of effort into maintaining it – get it out there, have a short cycle, and then invest in quality for the next cycle, because 3.x will be much more heavily used than 3.0. This reminds me a lot of KDE 4.0.

So, I’m interested in gathering opinions, challenges, ideas, commitments, hypotheses etc about the idea of meta-cycles and how we could organise ourselves to make the most of them. I suspect that we can define a best practice which includes regular releases for continuous improvement on a predictable schedule, and ALSO defines a good practice for how MAJOR releases fit into that cadence, in a well structured and manageable fashion. I think we can draw on the experiences of both GNOME and KDE, and other projects, to shape that thinking.

This is important for distributions, too

The major distributions tend to have big releases, as well as more frequent releases. RHEL has Fedora, Ubuntu makes LTS releases, Debian takes cadence to its logical continuous integration extreme with Sid and Testing :-).

When we did Ubuntu 6.06 LTS we said we’d do another LTS in “2 to 3 years”. When we did 8.04 LTS we said that the benefits of predictability for LTS releases are such that it would be good to say in advance when the next LTS would be. I said I would like that to be 10.04 LTS – a major cycle of 2 years – unless the opportunity came up to coordinate major releases with one or two other major distributions: Debian, Suse or Red Hat.

I’ve spoken with folks at Novell, and it doesn’t look like there’s an opportunity to coordinate for the moment. In conversations with Steve McIntyre, the current Debian Project Leader, we’ve identified an interesting opportunity to collaborate. Debian is aiming for an 18 month cycle, which would put their next release around October 2010 – the same time as the Ubuntu 10.10 release. Potentially, then, we could defer the Ubuntu LTS till 10.10, coordinating and collaborating with the Debian project for a release with very similar choices of core infrastructure. That would make sharing patches a lot easier, a benefit both ways. Since there will be a lot of folks from Ubuntu at Debconf, and hopefully a number of Debian developers at UDS in Barcelona in May, we will have good opportunities to examine the idea in detail. If there is goodwill, excitement and broad commitment to such an idea from Debian, I would be willing to promote the idea of deferring the LTS from 10.04 to 10.10.

Questions and options

So, what would the “best practices” of a meta-cycle be? What sorts of things should be considered in planning for these meta-cycles? What problems do they cause, and how are those best addressed? How do short term (3 month, 6 month) cycles fit into a broader meta-cycle? Asking these questions across multiple communities will help test the ideas and generate better ones.

What’s a good name for such a meta-cycle? “Meta-cycle” seems… very meta.

Is it true that the “first release of the major cycle” (KDE 4.0, Python 3.0) is best done as a short cycle that does not get long term attention? Are there counter-examples, or better examples, of this?

Which release in the major cycle is best for long term support? Is it the last of the releases before major new changes begin (Python 2.6? GNOME 2.28?) or is it the result of a couple of quick iterations on the X.0 release (KDE 4.2? GNOME 3.2?) Does it matter? I do believe that it’s worthwhile for upstreams to support an occasional release for a longer time than usual, because that’s what large organisations want.

Is a cycle of a whole number of years beneficial? For example, is 2.5 years a good idea? Personally, I think not. Conferences and holidays tend to happen at the same time of the year every year, and it’s much, much easier to think in terms of whole-year cycles. But in informal conversations about this, some people have said 18 months, and others have said 30 months (2.5 years) might suit them. I think they’re craaaazy – what do you think?

If it’s 2 years or 3 years, which is better for you? Hardware guys tend to say “2 years!” to get the benefit of new hardware, sooner. Software guys say “3 years!” so that they have less change to deal with. Personally, I am in the 2 years camp, but I think it’s more important to be aligned with the pulse of the community, and if GNOME / KDE / Kernel wanted 3 years, I’d be happy to go with it.

How do the meta-cycles of different projects come together? Does it make sense to have low-level, hardware-related things on a different cycle to high-level, user visible things? Or does it make more sense to have a rhythm of life that’s shared from the top to the bottom of the stack?

Would it make more sense to stagger long term releases based on how they depend on one another, like GCC then X then OpenOffice? Or would it make more sense to have them all follow the same meta-cycle, so that we get big breakage across the stack at times, and big stability across the stack at others?

Are any projects out there already doing this?

Is there any established theory or practice for this?

A cross-community conversation

If you’ve read this far, thank you! Please do comment, and if you are interested then please do take up these questions in the communities that you care about, and bring the results of those discussions back here as comments. I’m pretty sure that we can take the art of software to a whole new level if we take advantage of the fact that we are NOT proprietary, and this is one of the key ways we can do it.

I had the opportunity to present at the Linux Symposium on Friday, and talked further about my hope that we can improve the coordination and cadence of the entire free software stack. I tried to present both the obvious benefits and the controversies the idea has thrown up.

Afterwards, a number of people came up to talk about it further, with generally positive feedback.

Christopher Curtis, for example, emailed to say that the idea of economic clustering in the motor car industry goes far further than the location of car dealerships. He writes:

Firstly, every car maker releases their new models at about the same time. Each car maker has similar products – economy, sedan, light truck. They copy each other prolifically. Eventually, they all adopt a certain baseline – seatbelts, bumpers, airbags, anti-lock brakes. Yet they compete fiercely (OnStar from GM; Microsoft Sync from Ford) and people remain brand loyal. This isn’t going to change in the Linux world. Even better, relations like Debian->Ubuntu match car maker relations like Toyota->Lexus.

I agree with him wholeheartedly. Linux distributions and car manufacturers are very similar: we’re selling products that reach the same basic audience (there are niche specialists in real-time or embedded or regional markets) with a similar range (desktop, workstation, server, mobile), and we use many of the same components just as the motor industry uses common suppliers. That commonality and coordination benefits the motor industry, and yet individual brands and products retain their identity.

Let’s do a small thought experiment. Can you name, for the last major enterprise release of your favourite distribution, the specific major versions of kernel, gcc, X, GNOME, KDE, OpenOffice.org or Mozilla that were shipped? And can you say whether those major versions were the same as, or different from, those in the enterprise releases of Ubuntu, SLES, Debian, or RHEL which shipped at roughly the same time? I’m willing to bet that any particular customer would say that they can’t remember either which versions were involved, or how those stacked up against the competition, and don’t care either. So looking backwards, differences in versions weren’t a customer-differentiating item.

We can do the same thought experiment looking forwards. WHAT IF you knew that the next long-term supported releases of Ubuntu, Debian, Red Hat and Novell Linux would all have the same major versions of kernel, GCC, X, GNOME, KDE, OO.o and Mozilla? Would that make a major difference for you? I’m willing to bet not – from a customer view, folks who prefer X will still prefer X, and a person who prefers Red Hat will stick with Red Hat. But from a developer view, would that make it easier to collaborate? Dramatically so.

Another member of the audience came up to talk about the fashion industry. That’s also converged on a highly coordinated model – fabrics and technologies “release” first, then designers introduce their work simultaneously at fashion shows around the world. “Spring 2009” sees new collections from all the major houses, many re-using similar ideas or components. That hasn’t hurt their industry; rather, it helps to build awareness amongst the potential audience.

The ultimate laboratory, nature, has also adopted release coordination. Anil Somayaji, who was in the audience for the keynote, subsequently emailed this:

Basically, trees of a given species will synchronize their seed releases in time and in amount, potentially to overwhelm predators and to coordinate with environmental conditions. In effect, synchronized seed releases is a strategy for competitors to work together to ensure that they all have the best chance of succeeding. In a similar fashion, if free software were to “release its seeds” in a synchronized fashion (with similar types of software or distributions having coordinated schedules, but software in different niches having different schedules), it might maximize the chances of all of their survival and prosperity.

There’s no doubt in my mind that the stronger the “pulse” we are able to create, by coordinating the freezes and releases of major pieces of the free software stack, the stronger our impact on the global software market will be, and the better for all companies – from MySQL to Alfresco, from Zimbra to OBM, from Red Hat to Ubuntu.

Discussing free software synchronicity

Thursday, May 15th, 2008

There’s been a flurry of discussion around the idea of synchronicity in free software projects. I’d like to write up a more comprehensive view, but I’m in Prague prepping for FOSSCamp and the Ubuntu Developer Summit (can’t wait to see everyone again!) so I’ll just contribute a few thoughts and responses to some of the commentary I’ve seen so far.

Robert Knight summarized the arguments I made during a keynote at aKademy last year. I’m really delighted by the recent announcement that the main GNOME and KDE annual developer conferences (GUADEC and aKademy) will be held at the same time, and in the same place, in 2009. This is an important step towards even better collaboration. Initiatives like FreeDesktop.org have helped tremendously in recent years, and a shared conference venue will accelerate that process of bringing the best ideas to the front across both projects. Getting all of the passionate and committed developers from both of these projects into the same real-space will pay dividends for both.

Aaron Seigo of KDE Plasma has taken a strong position against synchronized release cycles, and his three recent posts on the subject make interesting reading.

Aaron raises concerns about features being “punted” out of a release in order to stick to the release cycle. It’s absolutely true that discipline about “what gets in” is essential in order to maintain a commitment on the release front. It’s unfortunate that features don’t always happen on the schedule we hope they might. But it’s worth thinking a little bit about the importance of a specific feature versus the whole release. When a release happens on time, it builds confidence in the project, and injects a round of fresh testing, publicity, enthusiasm and of course bug reports. Code that is new gets a real kicking, and improves as a result. Free software projects are not like proprietary projects – they don’t have to ship new releases in order to get the money from new licenses and upgrades.  We can choose to slip a particular feature in order to get a new round of testing and feedback on all the code which did make it.

Some developers are passionate about specific features, others are passionate about the project as a whole. There are two specific technologies – or rather methodologies – that have hugely helped to separate those two concerns and empower both groups: a VCS with first-class branching, and test-driven development (TDD).

We have found that the developers who are really focused on a specific feature tend to work on that feature in a branch (or collaborative set of branches), improving it “until it is done” regardless of the project release cycle. They then land the feature as a whole, usually after some review. This of course depends on having a VCS that supports branching and merging very well. You need to be able to merge from trunk continuously, so that your feature branch is always mergeable *back* to trunk. And you need to be able to merge between a number of developers all working on the same features. Of course, my oft-stated preference in VCS is Bazaar, because the developers have thought very carefully about how to support collaborative teams across platforms and projects and different workflows, but any VCS, even a centralised one, that supports good branches will do.

A comprehensive test suite, on the other hand, lets you be more open to big landings on trunk, because you know that the tests protect the functionality that people had *before* the landing. A test suite is like a force-field, protecting the integrity of code that was known to behave in a particular way yesterday, in the face of constant change. Most of the projects I’m funding now have adopted a tests-before-landings approach, where landings on trunk are handled by a robot which refuses to commit the landing unless all tests pass. You can’t argue with the robot! The beauty of this is that your trunk is “always releasable”. That’s not *entirely* true – you always want to do a little more QA before you push bits out the door – but you have the wonderful assurance that the test suite is always passing. Always.
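The gatekeeping logic itself is tiny. Here is a minimal sketch in Python – the in-memory “trunk”, the toy tests and the function name are all invented for illustration; a real robot runs the project’s actual test suite against a real merge before committing:

```python
# A toy "landing robot": merge a branch into trunk only if every test
# passes on the merged result. Trunk is modelled as a list of changes.

def try_to_land(trunk, branch, test_suite):
    """Return the new trunk if all tests pass on the merge, else the old trunk."""
    candidate = trunk + branch                      # the tree we would get
    if all(test(candidate) for test in test_suite):
        return candidate                            # tests pass: landing accepted
    return trunk                                    # any failure: robot refuses

# Two toy "tests" that inspect the candidate tree.
tests = [
    lambda tree: "broken change" not in tree,       # regression check
    lambda tree: len(tree) > 0,                     # sanity check
]

trunk = ["initial code"]
trunk = try_to_land(trunk, ["good feature"], tests)   # accepted
trunk = try_to_land(trunk, ["broken change"], tests)  # rejected, trunk unchanged
print(trunk)  # ['initial code', 'good feature']
```

The invariant is the point: after every landing attempt, trunk has passed the full suite, which is exactly the “always releasable” property described above.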

So, branch-friendly VCS’s and test-driven development make all the difference.  Work on your feature till it’s done, then land it on the trunk during the open window. For folks who care about the release, the freeze window can be much narrower if you have great tests.

There’s a lot of discussion about the exact length of cycle that is “optimal”, with some commentary about the windows of development, freeze, QA and so on.  I think that’s a bit of a red herring, when you factor in good branching, because feature development absolutely does not stop when the trunk is frozen in preparation for a release. Those who prefer to keep committing to their branches do so, they scratch the itch that matters most to them.

I do think that cycle lengths matter, though. Aaron speculates that a 4-month cycle might be good for a web site. I agree, and we’ve converged on a 4-month planning cycle for Launchpad after a few variations on the theme. The key difference for me with a web site is that there is only one deployment point for the code in question, so you don’t have to worry as much about update and cross-version compatibility. The Launchpad team has a very cool system, where they roll out fresh code from trunk every day to a set of app servers (called “edge.launchpad.net”), and the beta testers of LP use those servers by default. Once a month, they roll out a fresh drop from tip to all the app servers, which is also when they rev the database and can introduce substantial new features. It’s tight, but it does give the project a lot of rhythm. And we plan in “sets of 4 months” – at least, we will for the next cycle. The last planning cycle was 9 months, which was just way too long.

I think the cycles-within-cycles idea is neat. Aaron talks about how 6 months is too long for quick releases, and too short to avoid having to bump features from one cycle to the next. I’ve already said that a willingness to bump a feature that is not ready is a strength and not a weakness. It would be interesting to see whether Aaron’s concerns would be addressed if the Plasma team adopted a shorter “internal” cycle, of 2 or 3 months, nested within the 6 month “external” cycle.

For large projects, the fact that a year comes around every, well, year, turns out to be quite significant. You really want a cycle that divides neatly into a year, because a lot of external events are going to happen on that basis. And you want some cohesion between the parts. We used to run the Canonical sprints on a 4-month cycle (3 times a year) and the Ubuntu releases on a six month cycle (twice a year) and it was excessively complex. As soon as we all knew each other well enough not to need to meet up every 4 months, we aligned the two and it’s been much smoother ever since.
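The arithmetic behind that anecdote is simple: two cycles only come back into phase every lcm of their lengths, so 4-month sprints and 6-month releases coincide just once a year. A toy calculation (the numbers are from the anecdote; nothing else here is real Canonical tooling):

```python
from math import lcm

# Sprints every 4 months vs releases every 6 months, over two years.
sprint_months = set(range(0, 24, 4))     # months 0, 4, 8, 12, 16, 20
release_months = set(range(0, 24, 6))    # months 0, 6, 12, 18

print(lcm(4, 6))                                 # the rhythms realign only yearly
print(sorted(sprint_months & release_months))    # in phase just twice in 24 months
```

Aligning both on 6 months makes every release fall on a sprint boundary, which is the smoothness described above.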

Some folks feel that distributions aren’t an important factor in choosing an upstream release cycle. And to a certain extent that’s true. There will always be a “next” release of whatever distribution you care about, and hopefully an upstream release that misses “this” release will make it into the next one. But I think that misses the benefit of getting your work to a wider audience as fast as possible. There’s a great project management methodology called “lean”, which we’ve been working with. It says that any time the product of your work sits waiting for someone else to do something is “waste”: you could have done that work later, and done something else first that generated results sooner. This is based on the amazing results seen in real-world production lines, like cars and electronics.

So while it’s certainly true that you could put out a release that misses the “wave” of distribution releases but catches the next wave in six months’ time, you’re missing out on all the bug reports and patches and other opportunities for learning and improvement that would have come if you’d been on the first wave. There’s nothing morally wrong with that, and there may be other things that are more important, for sure, but it’s worth considering nonetheless.

Some folks have said that my interest in this is “for Canonical”, or “just for Ubuntu”. And that’s really not true. I think it’s a much more productive approach for the whole free software ecosystem, and will help us compete with the proprietary world. That’s good for everyone. And it’s not just Ubuntu that does regular 6-month releases, Fedora has adopted the same cycle, which is great because it improves the opportunities to collaborate across both distributions – we’re more likely to have the same versions of key components at any given time.

Aaron says:

Let’s assume project A depends on B, and B releases at the same time as A. That means that A is either going to be one cycle behind B in using what B provides, or will have to track B’s bleeding edge for the latter part of their cycle allowing some usage. What you really want is a staggered approach where B releases right about when A starts to work on things.

This goes completely counter to the “everyone on the same month, every 6 months” doctrine Mark preaches, of course.

I have never suggested that *everyone* should release at the same time. In fact, at Ubuntu we have converged around the idea of releasing about one month *after* our biggest predictable upstream, which happens to be GNOME. And similarly, the fact that the kernel has their own relatively predictable cycle is very useful. We don’t release Ubuntu on the same day as a kernel release that we will ship, of course, but we are able to plan and communicate meaningfully with the folks at kernel.org as to which version makes sense for us to collaborate around.

Rather than try and release the entire stack all at the same time, it makes sense to me to offset the releases based on a rough sense of dependencies.

Just to be clear, I’m not asking the projects I’ll mention below to change anything, I’m painting a picture or a scenario for the purposes of the discussion. Each project should find their own pace and scratch their itch in whatever way makes them happiest. I think there are strong itch-scratching benefits to synchronicity, however, so I’ll sketch out a scenario.

Imagine we aimed to have three waves of releases, about a month apart.

In the first wave, we’d have the kernel, toolchain, languages and system libraries, and possibly components which are performance- and security-critical. Linux, GCC, Python, Java, Apache, Tomcat… these are items which likely need the most stabilisation and testing before they ship to the innocent, and they are also pieces which need to be relatively static so that other pieces can settle down themselves. I might also include things like Gtk in there.

In the second wave, we’d have applications, the desktop environments and other utilities. AbiWord and KOffice, Gnumeric and possibly even Firefox (though some would say Firefox is a kernel and window manager so… ;-)).

And in the third wave, we’d have the distributions – Ubuntu, Fedora, Gentoo, possibly Debian, OpenSolaris. The aim would be to encourage as much collaboration and discussion around component versions in the distributions, so that they can effectively exchange information and patches and bug reports.
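A staggered calendar like that is easy to sketch. This is purely illustrative – the project groupings are the ones from the scenario above, while the anchor date and the one-month spacing are assumptions, not a proposal anyone has agreed to:

```python
from datetime import date, timedelta

# Hypothetical three-wave release calendar, roughly a month apart.
WAVES = {
    1: ["Linux", "GCC", "Python", "Gtk"],        # kernel, toolchain, system libraries
    2: ["GNOME", "KDE", "AbiWord", "Firefox"],   # desktops and applications
    3: ["Ubuntu", "Fedora", "Debian"],           # distributions integrate the rest
}

def wave_dates(first_wave, spacing_days=30):
    """Give each wave a date, `spacing_days` after the previous one."""
    return {n: first_wave + timedelta(days=spacing_days * (n - 1)) for n in WAVES}

schedule = wave_dates(date(2009, 4, 1))          # illustrative anchor date only
for n, when in sorted(schedule.items()):
    print(f"Wave {n} ({when.isoformat()}): {', '.join(WAVES[n])}")
```

The useful property is that each distribution’s freeze naturally falls after the second wave has released, so the distributions integrate upstream code that is fresh but already settled.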

I’ll continue to feel strongly that there is value to projects in getting their code to a wider audience than those who will check it out of VCS-du-jour, keep it up to date and build it. And the distributions are the best way to get your code… distributed! So the fact that both Fedora and Ubuntu have converged on a rhythm bodes very well for upstreams who can take advantage of that to get wider testing, more often, earlier after their releases. I know every project will do what suits it, and I hope that projects will feel it suits them to get their code onto servers and desktops faster so that the bug fixes can come faster, too.

Stepping back from the six month view, it’s clear that there’s a slower rhythm of “enterprise”, “LTS” or “major” releases. These are the ones that people end up supporting for years and years. They are also the ones that hardware vendors want to write drivers for, more often than not. And a big problem for them is still “which version of X, kernel, libc, GCC etc should we support?” If the distributions can articulate, both to upstreams and to the rest of the ecosystem, some clear guidance in that regard, then I have every reason to believe people would respond to it appropriately. I’ve talked with kernel developers who have said they would LOVE to know which kernel version is going to turn into RHEL or an Ubuntu LTS release, and ideally, they would LOVE it if those were the same versions, because it would enable them to plan their own work accordingly. So let’s do it!

Finally, in the comments on Russell Coker’s thoughtful commentary there’s a suggestion that I really like – that it’s coordinated freeze dates, more than coordinated release dates, that would make all the difference. Different distributions take different views on how they integrate, test and deploy new code, and fixing the release dates would reduce the flexibility they have to position themselves differently. I think this is a great point. I’m primarily focused on creating a pulse in the free software community, and encouraging more collaboration. If an Ubuntu LTS release, and a Debian release, and a RHEL release, used the same major kernel version, GCC version and X version, we would be able to improve greatly ALL of their support for today’s hardware. They still wouldn’t ship on the same date, but they would all be better off than they would be going it alone. And the broader ecosystem would feel that an investment in code targeting those key versions would be justified much more easily.