Discussing free software synchronicity

Thursday, May 15th, 2008

There’s been a flurry of discussion around the idea of synchronicity in free software projects. I’d like to write up a more comprehensive view, but I’m in Prague prepping for FOSSCamp and the Ubuntu Developer Summit (can’t wait to see everyone again!) so I’ll just contribute a few thoughts and responses to some of the commentary I’ve seen so far.

Robert Knight summarized the arguments I made during a keynote at aKademy last year. I’m really delighted by the recent announcement that the main GNOME and KDE annual developer conferences (GUADEC and aKademy) will be held at the same time, and in the same place, in 2009. This is an important step towards even better collaboration. Initiatives like FreeDesktop.org have helped tremendously in recent years, and a shared conference venue will accelerate that process of bringing the best ideas to the front across both projects. Getting all of the passionate and committed developers from these two projects into the same physical space will pay dividends for both.

Aaron Seigo of KDE Plasma has taken a strong position against synchronized release cycles, and his three recent posts on the subject make interesting reading.

Aaron raises concerns about features being “punted” out of a release in order to stick to the release cycle. It’s absolutely true that discipline about “what gets in” is essential in order to maintain a commitment on the release front. It’s unfortunate that features don’t always happen on the schedule we hope they might. But it’s worth thinking a little bit about the importance of a specific feature versus the whole release. When a release happens on time, it builds confidence in the project, and injects a round of fresh testing, publicity, enthusiasm and of course bug reports. Code that is new gets a real kicking, and improves as a result. Free software projects are not like proprietary projects – they don’t have to ship new releases in order to get the money from new licenses and upgrades.  We can choose to slip a particular feature in order to get a new round of testing and feedback on all the code which did make it.

Some developers are passionate about specific features, others are passionate about the project as a whole. There are two specific technologies, or rather methodologies, that have hugely helped to separate those two and empower them both. They are very-good-branching VCS, and test-driven development (TDD).

We have found that the developers who are really focused on a specific feature tend to work on that feature in a branch (or collaborative set of branches), improving it “until it is done” regardless of the project release cycle. They then land the feature as a whole, usually after some review. This of course depends on having a VCS that supports branching and merging very well. You need to be able to merge from trunk continuously, so that your feature branch is always mergeable *back* to trunk. And you need to be able to merge between a number of developers all working on the same features. Of course, my oft-stated preference in VCS is Bazaar, because the developers have thought very carefully about how to support collaborative teams across platforms and projects and different workflows, but any VCS, even a centralised one, that supports good branches will do.

A comprehensive test suite, on the other hand, lets you be more open to big landings on trunk, because you know that the tests protect the functionality that people had *before* the landing. A test suite is like a force-field, protecting the integrity of code that was known to behave in a particular way yesterday, in the face of constant change. Most of the projects I’m funding now have adopted a tests-before-landings approach, where landings on trunk are handled by a robot that refuses to commit the landing unless all tests pass. You can’t argue with the robot! The beauty of this is that your trunk is “always releasable”. That’s not *entirely* true – you always want to do a little more QA before you push bits out the door – but you have the wonderful assurance that the test suite is always passing. Always.
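That landing policy is simple enough to sketch. The snippet below is a hypothetical illustration, not any project’s actual robot: it applies the merge, runs the whole suite, and refuses the landing unless every test passes, so trunk only ever moves forward through a green test run.

```python
def land(merge, run_tests, rollback):
    """A test-gated landing: apply the merge, run the full suite,
    and refuse the landing (roll back) unless every test passes."""
    merge()
    if run_tests():
        return True   # landing committed; trunk stays "always releasable"
    rollback()        # the robot says no; trunk is left untouched
    return False

# Toy usage: trunk is a list of landed changes, and the "suite"
# rejects any change tagged as broken.
trunk = ["base"]

def try_landing(change):
    return land(
        merge=lambda: trunk.append(change),
        run_tests=lambda: "broken" not in trunk[-1],
        rollback=lambda: trunk.pop(),
    )

assert try_landing("feature-a") is True
assert try_landing("broken-feature") is False
assert trunk == ["base", "feature-a"]   # the bad landing never stuck
```

The key design point is that the robot, not the developer, owns the commit to trunk; a failed suite leaves no trace of the attempted landing.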

So, branch-friendly VCSs and test-driven development make all the difference. Work on your feature till it’s done, then land it on the trunk during the open window. For folks who care about the release, the freeze window can be much narrower if you have great tests.

There’s a lot of discussion about the exact length of cycle that is “optimal”, with some commentary about the windows of development, freeze, QA and so on.  I think that’s a bit of a red herring, when you factor in good branching, because feature development absolutely does not stop when the trunk is frozen in preparation for a release. Those who prefer to keep committing to their branches do so, they scratch the itch that matters most to them.

I do think that cycle lengths matter, though. Aaron speculates that a 4-month cycle might be good for a web site. I agree, and we’ve converged on a 4-month planning cycle for Launchpad after a few variations on the theme. The key difference for me with a web site is that there is only one deployment point for the code in question, so you don’t have to worry as much about updates and cross-version compatibility. The Launchpad team has a very cool system, where they roll out fresh code from trunk every day to a set of app servers (called “edge.launchpad.net”), and the beta testers of LP use those servers by default. Once a month, they roll out a fresh drop from tip to all the app servers, which is also when they rev the database and can introduce substantial new features. It’s tight, but it does give the project a lot of rhythm. And we plan in “sets of 4 months” – at least, we will for the next cycle. The last planning cycle was 9 months, which was just way too long.

I think the cycles-within-cycles idea is neat. Aaron talks about how 6 months is too long for quick releases, and too short to avoid having to bump features from one cycle to the next. I’ve already said that a willingness to bump a feature that is not ready is a strength and not a weakness. It would be interesting to see whether Aaron’s concerns would be addressed if the Plasma team adopted a shorter “internal” cycle, like 2 or 3 months, that fit into a 6-month “external” cycle.

For large projects, the fact that a year comes around every, well, year, turns out to be quite significant. You really want a cycle that divides neatly into a year, because a lot of external events are going to happen on that basis. And you want some cohesion between the parts. We used to run the Canonical sprints on a 4-month cycle (3 times a year) and the Ubuntu releases on a six month cycle (twice a year) and it was excessively complex. As soon as we all knew each other well enough not to need to meet up every 4 months, we aligned the two and it’s been much smoother ever since.
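The arithmetic behind that complexity is easy to check: two repeating cycles only start together at their least common multiple, so a 4-month sprint cycle and a 6-month release cycle coincide just once a year, while two aligned 6-month cycles coincide at every release. A quick sketch:

```python
from math import gcd

def months_between_alignments(cycle_a: int, cycle_b: int) -> int:
    """Months until two repeating cycles start in the same month again
    (their least common multiple)."""
    return cycle_a * cycle_b // gcd(cycle_a, cycle_b)

# 4-month sprints vs 6-month releases: everything lines up only yearly.
assert months_between_alignments(4, 6) == 12
# Align both at 6 months and every sprint meets a release.
assert months_between_alignments(6, 6) == 6
```

This is also why cycles that divide a year neatly (3, 4, 6, 12 months) are attractive: they stay in phase with annual events instead of drifting against them.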

Some folks feel that distributions aren’t an important factor in choosing an upstream release cycle. And to a certain extent that’s true. There will always be a “next” release of whatever distribution you care about, and hopefully, an upstream release that misses “this” release will make it into the next one. But I think that misses the benefit of getting your work to a wider audience as fast as possible. There’s a great project management methodology called “lean”, which we’ve been working with. It says that any time the product of your work sits waiting for someone else to do something is “waste” – you could have done that work later, and done something else first that generated results sooner. This is based on the amazing results seen in real-world production lines, like those for cars and electronics.

So while it’s certainly true that you could put out a release that misses the “wave” of distribution releases but catches the next wave in six months’ time, you’re missing out on all the bug reports and patches and other opportunities for learning and improvement that would have come if you’d been on the first wave. There’s nothing morally wrong with that, and there may well be other things that are more important, but it’s worth considering nonetheless.

Some folks have said that my interest in this is “for Canonical”, or “just for Ubuntu”. And that’s really not true. I think it’s a much more productive approach for the whole free software ecosystem, and will help us compete with the proprietary world. That’s good for everyone. And it’s not just Ubuntu that does regular 6-month releases, Fedora has adopted the same cycle, which is great because it improves the opportunities to collaborate across both distributions – we’re more likely to have the same versions of key components at any given time.

Aaron says:

Let’s assume project A depends on B, and B releases at the same time as A. That means that A is either going to be one cycle behind B in using what B provides, or will have to track B’s bleeding edge for the latter part of their cycle allowing some usage. What you really want is a staggered approach where B releases right about when A starts to work on things.

This goes completely counter to the “everyone on the same month, every 6 months” doctrine Mark preaches, of course.

I have never suggested that *everyone* should release at the same time. In fact, at Ubuntu we have converged around the idea of releasing about one month *after* our biggest predictable upstream, which happens to be GNOME. And similarly, the fact that the kernel has their own relatively predictable cycle is very useful. We don’t release Ubuntu on the same day as a kernel release that we will ship, of course, but we are able to plan and communicate meaningfully with the folks at kernel.org as to which version makes sense for us to collaborate around.

Rather than try and release the entire stack all at the same time, it makes sense to me to offset the releases based on a rough sense of dependencies.

Just to be clear, I’m not asking the projects I’ll mention below to change anything, I’m painting a picture or a scenario for the purposes of the discussion. Each project should find their own pace and scratch their itch in whatever way makes them happiest. I think there are strong itch-scratching benefits to synchronicity, however, so I’ll sketch out a scenario.

Imagine we aimed to have three waves of releases, about a month apart.

In the first wave, we’d have the kernel, toolchain, languages and system libraries, and possibly components which are performance- and security-critical. Linux, GCC, Python, Java, Apache, Tomcat… these are items which likely need the most stabilisation and testing before they ship to the innocent, and they are also pieces which need to be relatively static so that other pieces can settle down themselves. I might also include things like Gtk in there.

In the second wave, we’d have applications, the desktop environments and other utilities. AbiWord and KOffice, Gnumeric and possibly even Firefox (though some would say Firefox is a kernel and window manager so… ;-)).

And in the third wave, we’d have the distributions – Ubuntu, Fedora, Gentoo, possibly Debian, OpenSolaris. The aim would be to encourage as much collaboration and discussion as possible around component versions in the distributions, so that they can effectively exchange information and patches and bug reports.
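As a toy model of the scenario above (the one-month offsets and the wave groupings are my own illustrative assumptions, not anything any project has agreed to), each wave would release one month before the wave that builds on it:

```python
# Hypothetical three-wave schedule: wave 3 (distributions) ships in a
# given month; waves 2 and 1 ship one and two months earlier.
WAVES = {
    1: "kernel, toolchain, languages, system libraries",
    2: "desktop environments, applications",
    3: "distributions",
}

def wave_release_month(distro_month: int, wave: int) -> int:
    """Calendar month (1-12) in which a wave would release, counting
    back one month per wave from the distributions' release month."""
    return (distro_month - (3 - wave) - 1) % 12 + 1

# If distributions ship in April and October (a 6-month rhythm):
assert [wave_release_month(4, w) for w in (1, 2, 3)] == [2, 3, 4]
assert [wave_release_month(10, w) for w in (1, 2, 3)] == [8, 9, 10]
```

The modulo arithmetic just wraps the schedule around the year end, so a January distribution release would put wave 1 in the previous November.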

I’ll continue to feel strongly that there is value to projects in getting their code to a wider audience than those who will check it out of VCS-du-jour, keep it up to date and build it. And the distributions are the best way to get your code… distributed! So the fact that both Fedora and Ubuntu have converged on a rhythm bodes very well for upstreams who can take advantage of that to get wider testing, more often, earlier after their releases. I know every project will do what suits it, and I hope that projects will feel it suits them to get their code onto servers and desktops faster so that the bug fixes can come faster, too.

Stepping back from the six month view, it’s clear that there’s a slower rhythm of “enterprise”, “LTS” or “major” releases. These are the ones that people end up supporting for years and years. They are also the ones that hardware vendors want to write drivers for, more often than not. And a big problem for them is still “which version of X, kernel, libc, GCC etc should we support?” If the distributions can articulate, both to upstreams and to the rest of the ecosystem, some clear guidance in that regard, then I have every reason to believe people would respond to it appropriately. I’ve talked with kernel developers who have said they would LOVE to know which kernel version is going to turn into RHEL or an Ubuntu LTS release, and ideally, they would LOVE it if those were the same versions, because it would enable them to plan their own work accordingly. So let’s do it!

Finally, in the comments on Russell Coker’s thoughtful commentary there’s a suggestion that I really like – that it’s coordinated freeze dates more than coordinated release dates that would make all the difference. Different distributions do take different views on how they integrate, test and deploy new code, and fixing the release dates suggests a reduction in the flexibility that they would have to position themselves differently. I think this is a great point. I’m primarily focused on creating a pulse in the free software community, and encouraging more collaboration. If an Ubuntu LTS release, and a Debian release, and a RHEL release, used the same major kernel version, GCC version and X version, we would be able to improve greatly ALL of their support for today’s hardware. They still wouldn’t ship on the same date, but they would all be better off than they would be going it alone. And the broader ecosystem would feel that an investment in code targeting those key versions would be justified much more easily.

43 Responses to “Discussing free software synchronicity”

  1. Vadim P. Says:

    +1 for me, I was right! (http://www.markshuttleworth.com/archives/146#comment-300591)

  2. Tom Says:

    Nice post.
    Makes a lot of sense to me. ( But I already said similar things in the comments to your previous post *pats his back* )
    I would really love it if “the community” would give this a try. It would show that we all have something in common ( like a common goal or something. Kinda nice. ) .. just a try 🙂 ..

    The only negative thing i can think of is that same versions would offer better attack vectors for attackers .. but more eyes could probably counter balance that.

  3. Achim Says:

    I like the idea of synchronizing free software development, but I have one question.

    I am not sure about the meaning of this two sentences.

    `Stepping back from the six month view, it’s clear that there’s a slower rhythm of “enterprise”, “LTS” or “major” releases. These are the ones that people end up supporting for years and years. They are also the ones that hardware vendors want to write drivers for, more often than not.`

    For me it sounds like to help hardware vendors to create proprietary drivers, that only support “LTS” or “major” releases.
    I thought that hardware vendors should develop drivers within the kernel, so that every release gets the best hardware support, that we can get at this time.

    Maybe I have misunderstood this two sentences, so please enlighten me.

    best wishes

    Mark Shuttleworth says:

    Oops, I wasn’t clear. I was talking about free software drivers. Even with an open source approach, there’s some work involved in porting between versions of the Linux kernel, and I’ve seen vendors get very frustrated and ultimately do a bad job of supporting Linux because of it.

  4. lefty.crupps Says:

    Why does Ubuntu care about KDE anyway? Those users get the shaft on this distro. However, I do find the idea interesting, but I think asking Debian to be a part of it is unlikely to happen — they release too far apart, and Stable is generally older software for that reason. Debian Testing provides for rolling releases, which is why I moved to it from Kubuntu, but there is never a real “release” of Testing.

  5. limp lumb Says:

    Hi Mark,

    How do you feel about serving Linux 2.6.25 as a proposed update for Ubuntu 8.04? It could be cooking in -proposed until the three-month update respin… Fedora 9 is already using it, openSUSE 11 will be using it… Some of Red Hat’s and Novell’s long-term support systems might want to use it.

    The thing is, Linux 2.6.24 will no longer get stable updates except to cure serious security problems, and everybody else has moved to 2.6.25, so Ubuntu should sync to that release also. 2.6.25 could get a really big userbase and might get long-term stability releases.

    Now let’s see Ubuntu raise the bar and put your money where your mouth is.

  6. Achim Says:

    Thanks for your quick reply Mark.
    Now I understand what you meant.

    I really hope for us all that your dream will come true.


  7. Post-Reload Syndrome at hortont::blog Says:

    […] releases of lots of Linux software to a unified release schedule (a la Mark Shuttleworth’s latest blog post) would be a really positive thing. I can see it, but we also discussed whether this would lead to […]

  8. Joshua Rosen Says:

    Getting agreement on which components are going to be designated for long-term support would be especially helpful. This is particularly true of the kernel. If every Nth kernel were designated as a Checkpoint kernel, all new drivers would be back-ported to that kernel until the next Checkpoint kernel was released. If the stable distros standardized on using the Checkpoint kernels, it would fix their most serious problem, which is their inability to run on new hardware.

  9. Me Says:

    > And it’s not just Ubuntu that does regular 6-month releases, Fedora has adopted the same cycle, which is great because it improves the opportunities to collaborate across both distributions – we’re more likely to have the same versions of key components at any given time.

    F = Fedora 9
    U = Ubuntu 8-04

    F : gcc 4.3
    U : gcc 4.2.1

    F : SeLinux
    U : AppArmor

    F : Xorg server 1.5 (1.4.99)
    U : Xorg server 1.3

    F : Linux 2.6.25
    U : Linux 2.6.24

    F : glibc 2.8
    U : glibc 2.7

    F : NetworkManager 0.7
    U : NetworkManager 0.6

    F : KDE 4.0
    U : KDE 3.5.X

    F : rpm 🙂
    U : dpkg


    Should Fedora slow down?
    Are you switching to SELinux?

    > In the first wave, we’d have the kernel

    Fedora doesn’t want an “old” kernel, Fedora wants the latest.
    Fedora wants to work with upstream projects while they develop, not after they are done.

    If you want to sync Ubuntu with RHEL, begin by syncing Ubuntu with Fedora (RHEL is based on Fedora).

  10. Callum Says:

    I like the way you’re selling this. I really hope it comes off. Particularly for the long term releases, and to sync freeze dates rather than release dates. If all the major distros were shipping similar kernel / X / etc versions, I can see that making a really big impact for open source software in general. As the patches flow upstream, they’ll flow back down, and everyone will benefit from more solid software. Here’s hoping this becomes a reality. 🙂

  11. Me Says:

    > Finally, in the comments on Russell Coker’s thoughtful commentary there’s a suggestion that I really like

    I like his comment too, even if I do not agree with you.

    But you miss one big point :
    – “I believe that the Debian project should align it’s release cycles with Red Hat Enterprise Linux.”

    It’s not “RHEL should align its release date with Ubuntu 4-10 (or Debian)”. It’s not “RHEL should use the same components as Ubuntu 4-10 (or Debian), etc”.

    Russell Coker thinks Debian can take advantage of aligning with RHEL. Of course, Debian is free to do so.

    > If an Ubuntu LTS release, and a Debian release, and a RHEL release, used the same major kernel version, GCC version and X version

    Do what Russell Coker propose to Debian. But do not request Red Hat to align RHEL to Ubuntu.

  12. Colin Says:

    Synchronicity != Synchronization – check a dictionary!

  13. Vadim P. Says:

    @ Me and your stats:



    Interesting, eh?

  14. Cristiano Says:

    > Do what Russell Coker propose to Debian. But do not request Red Hat to align RHEL to Ubuntu.

    I think you’ve not read Mark’s other post about synchronicity:

    > There’s one thing that could convince me to change the date of the next Ubuntu LTS: the opportunity to collaborate with the other, large distributions on a coordinated major / minor release cycle. If two out of three of Red Hat (RHEL), Novell (SLES) and Debian are willing to agree in advance on a date to the nearest month, and thereby on a combination of kernel, compiler toolchain, GNOME/KDE, X and OpenOffice versions, and agree to a six-month and 2-3 year long term cycle, then I would happily realign Ubuntu’s short and long-term cycles around that.

  15. Ideas to Copy from Red Hat | etbe Says:

    […] Comments Wendy Jakobson on fair trade is the Linux wayMark Shuttleworth » Blog Archive » Discussing free software syncronicity on Release Dates for DebianDon Marti on Release Dates for Debianetbe on Release Dates for […]

  16. Boycott Novell » Links 16/05/2008: Fedora 9 Still in the Headlines, GTK+Qt Intersection Says:

    […] Discussing free software syncronicity […]

  17. Pascal Bleser Says:

    Sorry, I don’t agree. Most arguments are overly simplified, and imposing further burden on upstream is not a good idea.

    Think about why the quality of FOSS is usually so high, why we have so many contributors in so many projects: for some it’s money, for almost everyone else it’s *fun*. And fun is not incompatible with quality, stability, enterprise use. It’s quite the opposite actually: the more fun upstream developers and architects have, the better the software. (ok, arguably, there are many more aspects, but almost everyone underestimates the fun factor).

    And it must also be mentioned that “KDE” and “GNOME” aren’t just two big, consistent building blocks that are as easy to coordinate as they sound (they’re just 2 items, right?). The same problems exist at every depth level inside those projects. KDE and GNOME each consist of dozens of subprojects and components (e.g. glib2, gtk2, atk, pango, gnome-ui, bonobo, etc.). It’s already a huge task to sync development internally between subprojects, and that’s between components that depend directly upon each other.

    Even if you’d be willing to pay the price of having 50% less features and bugfixes for the sake of coordinated freeze and release cycles, the individual developers upstream most probably wouldn’t. That’s how it is, and no one is going to force them to anything if it isn’t beneficial to them. Coordinated release cycles are clearly beneficial to distributors, but I fail to see what big advantage it would bring to developers. The only way to influence it into that direction is having lots of upstream developers on your payroll. Which is only good for everyone up to a certain point (or percentage of distributor driven developers).

    Mark, you can’t be that far away from the reality of software development, and especially FOSS, how this whole thing works at the level of developers. Not the businesses, not the vendors, not the distributors, but at the level of upstream developers. We can’t push everything on upstream, everyone has to play his part (upstream for development, patches, proper distribution and version management; downstream for integration, testing, feedback to upstream; users for testing, contributing, support).

    OTOH I’m all for working closer together across packagers and distributors to have common tooling, standards, cross-pollination, single contact points for upstream, etc… Lots of ideas, lots of things to do together. That would already reduce efforts considerably. So don’t push it on upstream, push it on and open up to your peers (other distros). We all still have huge potential in that direction.
    (mind you, I’m doing a lot more downstream than upstream, not that you’d think I have an agenda with upstream here :))

    My take on Sean Michael Kerner’s take on your post: http://dev-loki.blogspot.com/2008/05/re-ubuntus-pipe-dream-true-free.html

    Mark Shuttleworth says:
    Pascal, thanks for your comments. I hope I understand the “scratch your own itch” motivational forces at work in free software, that’s what brought me to Linux and FLOSS in the first place, and that’s what’s kept me here, and that’s why I think we have the most powerful recipe for software innovation (compared to stodgy corporate development). In my post, I described how new technology allows us to separate out the “feature development itch scratching” from the management of the trunk, and release, branches. I totally agree with you that reducing people’s motivation to contribute and participate would be a terrible thing for free software. What I hope will happen is that more projects will attract people who love integration and release management, separately from all the people they already attract for feature development. And we’ll see projects (even very large projects) simultaneously get better at feature development AND release management. GNOME did it, KDE is in the process of doing it, OpenOffice does it…. our definition of a “well-run” free software project is increasingly professional – it makes easy to use, well documented software, and delivers it on schedule. That’s a long way from the early days of itch-scratchin’-works-for-me-release-it-someday-maybe development which was, if not the norm, certainly the stereotype!


  18. Paul Kishimoto Says:

    To the commenter from May 16th, 2008 at 1:15 am:
    I don’t see what point your list makes. Mark identified “improved opportunities” and says that having matching versions is “more likely”… but the reason we’re having this discussion is that the opportunities aren’t capitalized on, and likelihood hasn’t yet been replaced by certainty.

    >Fedora wants to work with upstream project
    This is to suggest that Ubuntu does NOT want to work with upstream? Why put so much effort into Launchpad, then? This is a silly allegation.

    To Mark:
    Lean manufacturing makes for a more tightly coupled economy. There are efficiencies gained, true; but because there are few warehoused reserves, as soon as “upstream” fails (e.g. auto parts suppliers strike) the effect is felt immediately, severely and uncontrollably (e.g. cars already on the production line cannot be finished, shipped or sold).

    I like the syncronicity idea, but this analogy should be considered. What happens if a scheduled upstream fails to meet a freeze date… and two distributions have different ideas about how to treat this (i.e. keep the old version vs. use unreleased code from the new but late version)? An agreement beforehand would help ensure that such an event is not disruptive.

  19. Nathan DBB Says:

    F = Fedora 9 (May 2008)
    U = Ubuntu 8-04 (April 2008)
    R = Red Hat Enterprise Linux (5.1, Nov 2007)

    F : gcc 4.3
    U : gcc 4.2.1
    R : gcc 4.1.1

    F : Xorg server 1.5 (1.4.99)
    U : Xorg server 1.3
    R : Xorg server 1.1.1

    F : Linux 2.6.25
    U : Linux 2.6.24
    R : Linux 2.6.18

    F : glibc 2.8
    U : glibc 2.7
    R : glibc 2.5

    F : KDE 4.0
    U : KDE 3.5.9
    R : KDE 3.5.4

    The newest distros use the newest packages!

  20. Me Says:

    Paul Kishimoto Says (May 15th, 2008 at 9:45 pm)
    > This is to suggest that Ubuntu does NOT want work with upstream? Why put so much effort into Launchpad, then? This is a silly allegation.

    Launchpad?
    The not-yet, perhaps in the near future, who knows, open-source project?

    *Today* Fedora can use Linux 2.6.25 and Xorg 1.4.99 because Fedora doesn’t care if the NVIDIA driver does not work with Fedora 9.
    It’s not just about release date, it’s about objectives.
    Ubuntu and Fedora (and RHEL/SLES) don’t share the same objectives.

    Now, suppose Ubuntu synced with Fedora. Then Ubuntu would need to wait for NVIDIA drivers, the Flash plugin, etc. Ubuntu would always be released after Fedora.
    Is that what you want?
    Is that what Mark wants?

  21. Me Says:

    Nathan DBB Says (May 16th, 2008 at 2:47 pm) :
    > F = Fedora 9 (May 2008)
    > U = Ubuntu 8-04 (April 2008)

    You are lucky.
    Fedora 9 had been planned for release in April 2008.

    Same planned release date, different components.

  22. Phil Says:

    Not to be too pedantic, but the advantages you list for having a comprehensive test suite apply whether you use Test-Driven development or not, as long as you have unit tests. I love TDD, but even projects that don’t want to embrace full TDD can still get that extra level of confidence that comes with having a comprehensive test suite.

  23. A. Peon Says:

    Only skimmed, but:

    Technically, once hardware support is encoded as FOSS in a FOSS project, it shouldn’t break until someone else does something — at which point that ‘someone else’ shares responsibility for cleaning up the mess (and the hard part, with any hardware support, is getting device quirks and specifics documented in the first place). A majority of vendors are slowly stepping up to do the right thing here, so pandering to a lingering minority [rhymes with ‘Not kidding ya’] refusing to adapt doesn’t seem like a good or necessary strategy. In fact, it tells them that they don’t have to adapt and can laugh all the way to the bank while their competitors invest resources and ‘give away secrets’ to get good, reliable, maintainable, and ideologically-compliant support. [Hardware that is not documented is broken hardware.]

    Software vendors bearing blobs, on the other hand, aren’t selling a physical product with a claim that it will work; they’re selling blobs or code with licenses and people are ‘free’ to either support their efforts with money or license-consent or choose different software… and of course we all want to run our favorite legacy binaries forever, simply because it’s convenient. The thing with software is that, unlike driver blobs that *need* to link with one particular kernel or ABI (in practice, we know developers of Linux-the-kernel are not inclined to make the kernel ABI more stable to support hardware companies’ efforts to avoid documenting their hardware), software just needs its particular ABI or environment *available* — it’s quite possible to have multiple versions of libraries installed, and plenty of systems even maintain ABI compatibility layers or emulation systems where necessary. [An OS exists to run software, so let’s make it good at running whatever software we want to run.]

    Speaking of rolling releases, how does Ubuntu’s model compare to the BSDs? Culturally, since BSDs tend to huddle around a centralized CVS, someone [someone crazy enough] is generally testing the *entire* N+1 tree as it stands at any one time, through each incremental change until it’s deemed releasable. Development in the GNU world seems to revolve around many different people trying to merge their individual repositories or packages at the last minute, with much less holistic testing (or coordinated ‘someone just broke N+1 for half the planet, let’s drop everything and solve this properly’ reaction to test results) … but I could have a skewed view for not participating directly? [My mind is on detection and reaction to basic kernel bugs/regressions here, so this is certainly no criticism of apt or binary packages, which of course work amazingly well.]

  24. wariola Says:

    I think Fedora 9’s objective was the greatest, latest, bleeding edge, while Hardy Heron was more about catering for a bulletproof LTS distro. That explains why it uses older, stable versions of components.

  25. arthurson Says:

    hi mark,

    Today I received 3 CDs from Ubuntu, and I posted a photo on a forum to share my happiness with others. ( http://forum.hkepc.com/viewthread.php?tid=984333&extra=page%3D1 )

    Yet people living in modern societies like Hong Kong and Taiwan often think this decision is very selfish: it harms the global environment and indirectly hurts Linux development financially. For them, burning the CDs themselves is the only correct way to use Linux, and asking for CDs without contributing is frowned upon.

    I like Ubuntu; collecting the CDs of every release has become one of my hobbies. I installed it on my laptop, recommend it to my friends, and whenever I have problems I read, I ask, and I solve them. Yet I dare not share my CD collection on the internet. It is quite a funny and embarrassing story.

    It seems a lot of people think this way. Mark, what do you make of this phenomenon?


  26. Jonas Says:

    I would ask everyone to please ignore the comments by the poster using the name “Me” (and Mark, maybe delete them and ban his IP address). I recognize a troll when I see them, and this is one.

  27. Me Says:

    It’s fine to me (Me).
    It’s not my blog.

  28. Peter Says:

    Hi Mark

    I like this dream of synchronizing the distros, whether on release or on freeze. I totally agree with you about its benefits, about certain development methods, and about the "bumpable feature" way of thinking about development. The three-wave approach certainly has its merits too, though maybe it's a pipe dream, for now at least. Since there are so many views and minds to change, I think this may be a revolutionary rather than an evolutionary approach. If we have bumpable features, we should have bumpable ideas as well, don't you think?! Start with the distros, and once that's done it will be a benefit when you talk with upstream, since you will have something to show: that it is possible, helpful, and works. Secondly, upstream consists of many more parties than the biggest distros. Please don't only consider the biggest players; talk with all the distros. Also consider that getting Linspire, Mandriva, Gentoo and Knoppix on board may or may not make up for the loss of a big distro that doesn't support your dream.

    What's great about deadlines is that they help push developers, development, testing and QA. In a more competitive environment it will be even more of an open market, with cut-throat competition.

    I don't see the need for the point releases as much as you do.

  29. misc reader Says:

    > Fedora wants to work with upstream project, not after they are done.

    While it sounds a bit strange, I guess he has a point: there is also an advantage for the projects themselves in being picked up at different releases, since it means bug reports arrive all the time and not only for whatever the "corporate distributor version" ships. Plus, the time spent by a single distributor to ship a rock-solid combination of software is also a strong selling point. So why should e.g. Fedora or openSUSE give up their biggest advantage: the amount of time and resources spent pushing out a rounded product?

  30. Thomas Says:

    But where are the actual gains for end users and the developers in your suggestions?

    I haven’t found any in your blogs yet, just hand waving that everything will be better without any scenarios or examples where this would actually happen.

    If you can, I think you should provide that.

  31. Linux News from Linux Loop » Blog Archive » Optional OSS Synchronization Says:

    […] has come up quite frequently. Stories are being written about it. Shuttleworth (founder of Ubuntu) has stated his opinion on the topic. Seigo (from KDE) has also given his input. Basically, the topic is becoming a point of a lot of […]

  32. twisty Says:

    Don't concern yourself with unification between distros, especially with Microvell (Novell) (see boycottnovell.com for more). Focus on coming out ahead. Where you and Ubuntu lead, others should follow. If you unite the clans, Microsoft can kick them all the more easily, just as they bought out Corel Linux and have their hands in the puppets of Linspire, Xandros, and others.

  33. Harshad Joshi Says:

    Mr Shuttleworth,

    Ubuntu's lean and mean design might fascinate a few blokes, but in reality, Ubuntu is just another 'Windows Vista' in disguise. However, Vista/XP is more attractive than Ubuntu because:

    1. No out-of-the-box mp3/divx/wma support. What are people supposed to do with the oodles of MP3s and movies they have on their disks? And why do you assume they are going to download a plugin they have no clue about? It's not only a question of people having limited internet access; why would someone wipe off an OS capable of more media support, even if it might be pirated?

    2. Unlike .deb or .rpm, .exe files are freely available at almost every nook and corner... so who needs to understand that Synaptic is a better installer for .deb packages?

    3. No support for gamers. Without good games, it's unattractive to have a work-only OS.

    In spite of all these flaws, your support has made Ubuntu a distro worth watching, but to make it appeal to the common masses, you must provide some of the facilities they need.

    Best wishes to you for the future.

  34. JanC Says:

    @Mr. Harshad Joshi:

    1. Windows XP doesn't support MP3, DivX/Xvid, etc. without downloading codecs either… Actually, Windows XP requires you to find a codec for DivX yourself instead of downloading it automatically!

    2. True, there are more .exe & .msi installers than .deb installers for now, but this will improve over time. The main benefit of .deb comes when you want to _uninstall_ an application anyway… 😉

    3. There is support for gamers, although it’s true that for new games, especially commercial games, Windows is still a better option in almost all cases.

  35. Alex Shenoy Says:


    I understand your approach, and I agree. It is very important for projects like KDE and GNOME to focus on a release schedule that coincides with distributions. I do have a question, though. Does this work for third-party applications? For example, do you think this approach would work or be useful for a multimedia application?

    This question refers to smaller applications, especially applications that might not be candidates for inclusion by default in distributions. There are plenty of applications that solve a problem for very few individuals; MPD clients, for example. Is it worth it for these applications to adopt this philosophy?


  36. arthurson Says:

    hi Mark,

    Following up on my last post: more and more people complain that my decision to order free CDs without contributing anything is selfish and meaningless.

    I just feel hurt when I share the CDs I received with others and then get attacked by many Linux users.

  37. Mark Shuttleworth on Testing Driven Development(TDD) | it's unix, not eunuchs Says:

    […] Development(TDD) Mark Shuttleworth, father of the de facto linux desktop OS — Ubuntu, recently weighed in on a few different issues. He wanted to address some recent moves towards getting the Gnome and KDE crowd to talk nice to one […]

  38. mph Says:

    I am wondering about a unified production release for the back-to-school 2009 time frame.
    This would mean the production versions (Ubuntu LTS, RHEL 6/7, SUSE 12) shipping around Labor Day (September 7, 2009). I believe this would require test distributions around April-May 2009 (a Fedora, a regular Ubuntu, and an openSUSE), meaning the applications should be locked in March-April and the kernel coming out in January-March 2009.

    Is this a reasonable schedule to ask for?

  39. Prashant Says:

    Something like this would be really awesome! It would take collaboration to new levels. Can't wait for it to come true 😀

    Even if we just get RHEL aligned with Ubuntu LTS, it would rock 😀

  40. Vincenzo Dentamaro Says:

    Hi Mark, I sent you an email about the SFS Technology, but got no reply.
    I want to talk to you about my idea.
    What can I do to reach you?

  41. Discussion sur la synchronicité du Logiciel Libre Says:

    […] French translation of the article “Discussing free software synchronicity“. Author: Mark Shuttleworth – Translator: Bernard Opic — There has been a wave of […]

  42. Premio PFC con Sw Libre » Blog Archive » Sincronización dos esforzos: seleccionando o ciclo de release Says:

    […] in this regard, Mark Shuttleworth, founder of Ubuntu, recently published a post on synchronizing the entire free software stack in which he discussed Ubuntu's release strategy: […] at Ubuntu we have converged around […]

  43. Rennywenny's Weblog Says:

    Feature Based Versions…

    I have been developing software for almost three years. Though I have not produced any significant or noticeable piece of software, the most common thing I always face is my supervisor saying:

    We need to implement this feature.
    We need to omit this we ain’t gonna…