Archive for the 'free software' Category

Notifications, indicators and alerts

Monday, December 22nd, 2008

Let’s talk about notifications! As Ryan Lortie mentioned, there was a lot of discussion across the Ubuntu, Kubuntu, GNOME, KDE and Mozilla communities represented at UDS about the proposals Canonical’s user experience design and desktop experience engineering teams have made for Ubuntu 9.04.


See the mockup as a Flash movie.

There are some fairly bold (read: controversial) ideas that we’d like to explore, so the opportunity to discuss them with a broader cross-section of the community was fantastic. There were several rough edges and traps that I think we’ll avoid in the first round as a result – thanks to everyone who participated. Some of the things we work on in these teams are done directly with partners for their devices, so they don’t see this level of discussion before they ship, but it’s wonderful when we do get the opportunity to do so.

Some of these ideas are unproven and boil down to matters of opinion, but since our commitment to them is based on a desire to learn more, I think of them as constructive experiments. Experiments are just that – experiments. They may succeed and they may fail. We should judge them carefully, after we have data. We are putting new ideas into the free desktop without ego. We know those ideas could be better or worse than similar work being done in other communities, and we want to gather real user feedback to help find the best mix for everyone. The best ideas, and the best code, will ultimately form part of the digital free software commons and be shared by GNOME, KDE and every distribution. So, for those folks who were upset that we might ship something other than a GNOME or KDE default, I would ask for your patience and support – we want to contribute new ideas and new code, and that means having some delta which can be used as a basis for discussions about the future direction of upstream. In the past, we’ve had a few such deltas in Ubuntu. Some, like the current panel layout, were widely embraced. Others, like the infamous “Ubuntu spatial mode”, were not. C’est la vie, and we all benefit from the evolution.

Experiments are also not something we should do lightly. The Ubuntu desktop is something I take very personally; I feel personally responsible for the productivity and happiness of every Ubuntu user, so when we bring new ideas and code to the desktop I believe we should do everything we can to make sure of success first time round. We should not inflict bad ideas on our users just because we’re curious or arrogant or stubborn or proud. Despite being occasionally curious, arrogant, stubborn and proud :-)

So, what are we proposing?

First, we are focusing some attention on desktop notifications in this cycle, as part of a broader interest in the “space between applications”.

I think Canonical and Ubuntu can best help the cause of free software by focusing on the cracks between the major components of the desktop. In other words, while there are already great upstreams for individual applications in the free software desktop (Novell for Evolution, Sun for OpenOffice, Mozilla for Firefox, Red Hat for NetworkManager), we think there is a lot of productive and useful work to be done in the gaps between them. Notifications are things that many apps do, and if we can contribute new ideas there then we are helping improve the user experience of all of those applications. That’s a nice force multiplier – we’re hopefully doing work that makes the work of every other community even more valuable.

Nevertheless, expect bumps ahead. Ideas we are exploring may / will / do conflict with assumptions that are present today in various applications. We can address the relevant code in packages in main, but I’m more focused on addressing the potential social disruption that conflict can create, and that’s more a matter of conversation than code.

Notifications are interesting, subtle and complex. There are lots of different approaches on lots of different platforms. There are lots of different use cases. We’re trying to simplify and eliminate complexity, while still making it possible to meet the use cases we know about.

There has been good work in the freedesktop.org community on notifications, and even a spec that is *almost* at 1.0 in that community, with existing open source implementations. Our proposal is based on that specification, but it deprecates several capabilities and features in it. We will likely be compatible with the current APIs for sending notifications, but we will likely not display all the notifications that might be sent, if they require features that we deprecate. If this experiment goes well, we would hope to help move that FD.o specification to 1.0, with or without our amendments.
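To make that concrete, here’s a rough sketch – not our code, just an illustration – of how a client talks to the current notifications service using the dbus-python bindings. The bus name, object path and the Notify signature come from the FD.o spec; the application name and message text are invented. The actions array is the capability we’re proposing to deprecate, so here it is simply left empty:

import dbus

bus = dbus.SessionBus()
proxy = bus.get_object('org.freedesktop.Notifications',
                       '/org/freedesktop/Notifications')
notifications = dbus.Interface(proxy, 'org.freedesktop.Notifications')

# Notify(app_name, replaces_id, app_icon, summary, body,
#        actions, hints, expire_timeout) -> notification id
notification_id = notifications.Notify(
    'example-app',                            # app_name (illustrative)
    0,                                        # replaces_id: 0 means "new"
    '',                                       # app_icon: none
    'Wireless network found',                 # summary
    'Connected to the office access point',   # body
    dbus.Array([], signature='s'),            # actions: empty under this proposal
    dbus.Dictionary({}, signature='sv'),      # hints: none
    -1)                                       # expire_timeout: let the daemon decide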

The key proposals we are making are that:

  • There should be no actions on notifications.
  • Notifications should not be displayed synchronously, but may be queued. Our implementation of the notification display daemon will display only one notification at a time; others may do it differently (a rough sketch of the idea follows below).
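To illustrate the second bullet, here’s a purely conceptual Python sketch of “display one notification at a time”. This is not the display daemon’s actual code – the class and timings are made up – it just shows the shape of the queueing behaviour:

import collections
import threading
import time

class NotificationQueue(object):
    """Show notifications one at a time; senders never wait."""

    def __init__(self, display_seconds=3):
        self._queue = collections.deque()
        self._lock = threading.Lock()
        self._display_seconds = display_seconds

    def notify(self, summary, body):
        # Called when a notification arrives; it is queued, not shown at once.
        with self._lock:
            self._queue.append((summary, body))

    def run(self):
        # Display loop: pop one bubble, let it fade, then show the next.
        while True:
            with self._lock:
                item = self._queue.popleft() if self._queue else None
            if item is None:
                time.sleep(0.1)
                continue
            summary, body = item
            print('showing: %s - %s' % (summary, body))
            time.sleep(self._display_seconds)  # the bubble simply fades away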

That’s pretty much it. There are some subtleties and variations, but these are the key changes we are proposing, and which we will explore in a netbook device with a partner, as well as in the general Ubuntu 9.04 release, schedule gods being willing.

This work will show up as a new notification display agent, not as a fork or patch to the existing GNOME notification daemon. We don’t think the client API – libnotify – needs to be changed for this experiment, though we may not display notifications sent through that API that use capabilities we are suggesting be deprecated. We will try to ensure that packages in main are appropriately tuned, and hope MOTU will identify and update key packages in universe accordingly.
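For reference, this is roughly how an application raises a notification through libnotify today, using the pynotify Python bindings; the application name and messages are invented. The plain notification stays exactly as it is – it’s the add_action() style of use that our display agent would treat as deprecated:

import pynotify

pynotify.init('example-app')   # illustrative application name

# A plain notification: perfectly fine under the proposal.
n = pynotify.Notification('New mail', 'You have 3 unread messages',
                          'mail-unread')
n.show()

# A notification that relies on an action button: this is the capability
# we suggest deprecating, so our display agent may decline to show it.
m = pynotify.Notification('USB drive detected', 'Disk mounted at /media/disk')
m.add_action('open', 'Open folder', lambda notification, action: None)
m.show()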

Why a completely new notification display agent? We are designing it to be built with Qt on KDE, and Gtk on GNOME. The idea is to have as much code in common as we can, but still take advantage of the appropriate text display framework on Ubuntu and Kubuntu. We hope to deliver both simultaneously, and have discussed this with both Ubuntu and Kubuntu community members. At the moment, there is some disagreement about the status of the FD.o specification between GNOME and KDE, and we hope our efforts will help build a bridge there. In Ubuntu 9.04, we would likely continue to package and publish the existing notification daemon in addition, to offer both options for users that have a particular preference. In general, where we invest in experimental new work, we plan to continue to offer a standard GNOME or KDE component / package set in the archive so that people can enjoy that experience too.

The most controversial part of the proposal is the idea that notifications should not have actions associated with them. In other words, no buttons, sliders, links, or even a dismissal [x]. When a notification pops up, you won’t be able to click on it, you won’t be able to make it go away, you won’t be able to follow it to another window, or to a web page. Are you loving this freedom? Hmmm? Madness, on the face of it, but there is method in this madness.

Our hypothesis is that the existence of ANY action creates a weighty obligation to act, or to THINK ABOUT ACTING. That makes notifications turn from play into work. That makes them heavy responsibilities. That makes them an interruption, not a notification. And interruptions are a bag of hurt when you have things to do.

So, we have a three-pronged line of attack.

  1. We want to make notifications truly ephemeral. They are there, and then they are gone, and that’s life. If you are at your desktop when a notification comes by, you will sense it, and if you want you can LOOK at it, and it will be beautiful and clear and easy to parse. If you want to ignore it, you can safely do that and it will always go away without you having to dismiss it. If you miss it, that’s OK. Notifications are only for things which you can safely ignore or miss out on. If you went out for coffee and a notification flew by, you are no worse off. They don’t pile up like email, there is no journal of the ones you missed, you can’t scroll back and see them again, and therefore you are under no obligation to do so – they can’t become work while you are already busy with something else. They are gone like a mystery girl on the bus you didn’t get on, and they enrich your life in exactly the same way!
  2. We think there should be persistent panel indicators for things which you really need to know about, even if you missed the notification because you urgently wanted that coffee. So we are making a list of those things, and plan to implement them.
  3. Everything else should be dealt with by having a window call for attention, while staying in the background, unless it’s critical in which case that window could come to the foreground (a rough sketch follows below).
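As a rough illustration of prong 3 – assuming a PyGTK application, with the window and function names invented – asking for attention rather than raising an actionable notification looks something like this:

import gtk

def call_for_attention(window, critical=False):
    if critical:
        window.present()               # truly critical: bring it forward
    else:
        window.set_urgency_hint(True)  # otherwise just ask for attention politely

win = gtk.Window()
win.set_title('Example application')
win.connect('destroy', gtk.main_quit)
win.show_all()
call_for_attention(win, critical=False)
gtk.main()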

Since this is clearly the work of several releases, we may have glitches and inconsistencies along the way at interim checkpoints. I hope not, but it’s not unlikely, especially in the first iteration. Also, these ideas may turn out to be poor, and we should be ready to adjust our course based on feedback once we have an implementation in the wild.

We had a superb UXD and DEE (user experience design team, and desktop experience engineering team) sprint in San Francisco the week before UDS. Thanks to everyone who took part, especially those who came in from other teams. This notifications work may just be the tip of the iceberg, but it’s a very cool tip :-)

One or more of our early-access OEM partners (companies that we work with on new desktop features) will likely ship this feature as part of a netbook product during the 9.04 cycle. At that point, we would also drop the code into a PPA for testing with a wider set of applications. There are active discussions about updating the freedesktop.org specification based on this work. I think we should be cautious, and gather real user testing feedback and hard data, but if it goes well then we would propose simplifying the spec accordingly, and submit our notification display agent to FreeDesktop.org. Long term collaboration around the code would take place on Launchpad.

With Intrepid on track to hit the wires today I thought I’d blog a little on the process we followed in designing the new user switcher, presence manager and session management experience, and lessons learned along the way. Ted has been blogging about the work he did, and it’s been mentioned in a couple of different forums (briefly earning the memorable title “the new hotness”), but since it’s one of the first pieces of work to go through the user experience design process within Canonical I thought it would be interesting to write it up.

Here is a screenshot of the work itself in action:

New FUSA applet allows you to manage your presence setting, as well as switch to a guest or other user, and logout

In one of the first user experience sessions, we looked in more detail at the way people “stop working”. We thought it interesting to try and group those actions together in a way which would feel natural to users.

We have already done some work in Ubuntu around this – for a long time we have had a button in the top-right corner of the panel which brought up a system modal dialog that gave you the usual “end your session” options of logout, restart, shutdown, hibernate, suspend and switch user. That patch was always a bit controversial and had not been accepted upstream, so we looked at ways to solve the problem differently.

We decided to use the top-right location, because it’s one of the key places in the screen that’s quick and easy to get to (you can throw your mouse into a corner of the screen very easily and accurately) and because there was a strong precedent in the old Ubuntu logout button.

One key insight was that we wanted to make “switching user” less an exercise in guesswork and more direct – we wanted to let people switch directly to the specific user they were interested in rather than have an intermediate step where they login as that other user. So we started with the Fast User Switcher applet, or FUSA, as a base for the design. Another key idea that emerged was that we wanted to integrate the “presence setting” into the same menu, because “going offline” or “I’m busy” are similar state-of-mind-and-work decisions to “log me off the system” or “shut down”.

Menu order
We discussed at length the right order for the menu items. On the one hand, putting the “other users” at the top of the menu would mean that all the user names – yours and the ones you can switch to – would appear “in the same place” at the top of the menu. On the other, we strongly felt that things that would be used more casually and more easily should be at the top. In the end we settled on putting the presence management options at the top (Available, Away, Busy, Offline). Right next to those (in the same set) we put the “Lock screen” option, because it feels like a presence setting more than a session management setting – you are saying “Away” more than anything else.

Ted did a lot of work to make the presence menu elements work with both Pidgin and Empathy because there was some uncertainty as to which would be used by default in the release. Since it all uses dbus, it should be straightforward to make it work with KDE IM clients too.

Next come the user switching options – including the Guest Session, which is a cool new feature in Intrepid that has been widely blogged (check out the YouTube demo) and which uses AppArmor to enforce security.

And finally, the session termination options – log out, suspend, hibernate, restart and shutdown are at the bottom of the menu, because you’re only ever likely to use them once in a session, by definition!

Styling
The design of the menu is deliberately clean. We use very simple colours and shapes for the presence indicators, and replicate those colours and shapes in the actual GNOME panel so that you can see at a glance what your current presence setting is. Ted had to jump through some hoops, I think, to get the presence icons in the menu to line up with the current-presence-status indicator in the panel applet, but it worked out quite nicely. There’s some additional work to tighten up the layout which didn’t make it in time for the release but which might come in as a stable release update (SRU) or in Jaunty.

We decided not to put icons into the menu for each of the different statuses. Our design ethic is to aim for cleaner, less cluttered layouts with fewer icons and better choice of text. A couple of people have said that the menu looks “sparse” or “bare” but I think it sets the right direction and we’ll be continuing with this approach as we touch other parts of the system.

Upstream
This work was discussed at UDS in Prague with a number of members of the GNOME community. I was also very glad to see that there’s a lot of support for a tighter, simpler panel at the GNOME hackfest, an idea that we’ve championed. The FUSA applet itself is going through a bit of a transformation upstream as it’s been merged into the new GDM codebase and the old code – on which our work is based – is more or less EOL’d. But we’ll figure out how to update this work for Jaunty and hopefully it will be easier to get it upstreamed at that point.

In Jaunty, we’ll likely do some more work on the GNOME panel, building on the GNOME user experience discussions. There was a lot of discussion about locking down the panel more tightly, which we may pursue.

Integration into Ubuntu
We realised rather late in the Ubuntu cycle that we hadn’t thought much about packaging. The Ubuntu team had kindly offered to help package and integrate the applet but we definitely learned the value of getting the packaging done earlier rather than later. We had the applet in a PPA for testing between developers fairly early, but we underestimated the difference between that and actual integration into the release.

The Ubuntu team rallied to the cause and helped to smooth the upgrade process for new users, so that we can try to get everyone onto the same footing when they start out with Intrepid whether as a new install or an upgrade. There are some challenges there, because the panel is so customisable, and we had to think hard about how we could ensure there was a consistent experience for something as important as logging out or shutting down while at the same time trying not to stomp on the preferences of folks who have customised their panels. Similarly, we were concerned that people who run different versions of Ubuntu, or different distributions entirely, with the same home directory, would have problems if those other OSes didn’t have the same version of FUSA – we weren’t really able to address that satisfactorily.

We also realised (DOH!) that we hadn’t thought all the way through the process of integration, because we hadn’t figured out what to do with the old System menu options. It turned out that those were in a state of flux, with the Ubuntu folks having to choose between the current GNOME default which everyone said would change, the patches for the likely NEXT GNOME approach, and the old Ubuntu approach. Ted whipped up some patches to make the GNOME panel more dynamic with its menus, so that we could remove the System menu logout options when people have the same menu in the FUSA applet, but that landed too late for inclusion into Intrepid final.

All in all, I think it’s a neat piece of work and hope other distros find it useful too. It’s just a teaser of the work we plan to do around the desktop experience. I’m looking forward to seeing everyone at UDS Jaunty in Mountain View in December, when we can talk about the next round! Thanks and well done to Ted, Martin, Scott, Sebastien and everyone else who helped to make this a reality.

Well done to Team Ubuntu (thousands of people across Ubuntu, Debian and upstreams) who make the magic in 8.10 possible. Happy Release Day everyone!

GNOME usability hackfest

Saturday, October 25th, 2008

The GNOME user experience hackfest in Boston was a great way to spend the worst week in Wall St history!

Though there wasn’t a lot of hacking, there was a LOT of discussion, and we covered a lot of ground. There were at least 7 Canonical folks there, so it was a bit of a mini-sprint and a nice opportunity to meet the team at the same time. We had great participation from a number of organisations and free spirits; there’s a widespread desire to see GNOME stay at the forefront of usability.

Neil Patel of Canonical did a few mockups to try and capture the spirit of what was discussed, but I think the most interesting piece wasn’t really possible to capture in a screenshot because it’s abstract and conceptual – file and content management. There’s a revolution coming as we throw out the old “files and folders” metaphor and leap to something new, and it would be phenomenal if free software were leading the way.

I was struck by the number of different ways this meme cropped up. We had superb presentations of “real life support problems” from a large-scale user of desktop Linux, and a persistent theme was “where the hell did that file just go?” People save an attachment they receive in email, and an hour later have no idea where to find it. They import a picture into F-spot and then have no idea how to attach it to an email. They download a PDF from the web, then want to read it offline and can’t remember where they put it. Someone else pointed out that most people find it easier to find something on the Internet – through Google – than they do on their hard drives.

The Codethink guys also showed off some prototype experience work with Wizbit, which is a single-file version control system that draws on both Git and Bazaar for ideas about how you do efficient, transparent versioning of a file for online and offline editing.

We need to rearchitect the experience of “working with your content”, and we need to do it in a way that will work with the web and shared content as easily as it does locally.

My biggest concern on this front is that it be done in a way that every desktop environment can embrace. We need a consistent experience across GNOME, KDE, OpenOffice and Firefox so that content can flow from app to app in a seamless fashion and the user’s expectations can be met no matter which app or environment they happen to use. If someone sends a file to me over Empathy, and I want to open it in Amarok, then I shouldn’t have to work with two completely different mental models of content storage. Similarly, if I’ve downloaded something from the web with Firefox, and want to edit it in OpenOffice, I shouldn’t have to be super-aware or super-smart to be able to connect the apps to the content.

So, IMO this is work that should be championed in a forum like FreeDesktop.org, where it can rise above some of the existing rivalries of desktop Linux. There’s a good tradition of practical collaboration in that forum, and this is a great candidate for similar treatment.

At the end of the day, bling is less transformational than a fundamental shift in content management. Kudos to the folks who are driving this!

Update: thanks mjg59 for pointing out my thinko. The Collabora guys do great stuff, but Codethink does Wizbit.

I had the opportunity to present at the Linux Symposium on Friday, and talked further about my hope that we can improve the coordination and cadence of the entire free software stack. I tried to present both the obvious benefits and the controversies the idea has thrown up.

Afterwards, a number of people came up to talk about it further, with generally positive feedback.

Christopher Curtis, for example, emailed to say that the idea of economic clustering in the motor car industry goes far further than the location of car dealerships. He writes:

Firstly, every car maker releases their new models at about the same time. Each car maker has similar products – economy, sedan, light truck. They copy each other prolifically. Eventually, they all adopt a certain baseline – seatbelts, bumpers, airbags, anti-lock brakes. Yet they compete fiercely (OnStar from GM; Microsoft Sync from Ford) and people remain brand loyal. This isn’t going to change in the Linux world. Even better, relations like Debian->Ubuntu match car maker relations like Toyota->Lexus.

I agree with him wholeheartedly. Linux distributions and car manufacturers are very similar: we’re selling products that reach the same basic audience (there are niche specialists in real-time or embedded or regional markets) with a similar range (desktop, workstation, server, mobile), and we use many of the same components just as the motor industry uses common suppliers. That commonality and coordination benefits the motor industry, and yet individual brands and products retain their identity.

Let’s do a small thought experiment. Can you name, for the last major enterprise release of your favourite distribution, the specific major versions of kernel, gcc, X, GNOME, KDE, OpenOffice.org or Mozilla that were shipped? And can you say whether those major versions were the same or different to any of the enterprise releases of Ubuntu, SLES, Debian, or RHEL which shipped at roughly the same time? I’m willing to bet that any particular customer would say that they can’t remember either which versions were involved, or how those stacked up against the competition, and don’t care either. So looking backwards, differences in versions weren’t a customer-differentiating item.

We can do the same thought experiment looking forwards. WHAT IF you knew that the next long-term supported releases of Ubuntu, Debian, Red Hat and Novell Linux would all have the same major versions of kernel, GCC, X, GNOME, KDE, OO.o and Mozilla? Would that make a major difference for you? I’m willing to bet not – that from a customer view, folks who prefer X will still prefer X. A person who prefers Red Hat will stick with Red Hat. But from a developer view, would that make it easier to collaborate? Dramatically so.

Another member of the audience came up to talk about the fashion industry. That’s also converged on a highly coordinated model – fabrics and technologies “release” first, then designers introduce their work simultaneously at fashion shows around the world. “Spring 2009” sees new collections from all the major houses, many re-using similar ideas or components. That hasn’t hurt their industry; rather, it helps to build awareness amongst the potential audience.

The ultimate laboratory, nature, has also adopted release coordination. Anil Somayaji, who was in the audience for the keynote, subsequently emailed this:

Basically, trees of a given species will synchronize their seed releases in time and in amount, potentially to overwhelm predators and to coordinate with environmental conditions. In effect, synchronized seed releases is a strategy for competitors to work together to ensure that they all have the best chance of succeeding. In a similar fashion, if free software were to “release its seeds” in a synchronized fashion (with similar types of software or distributions having coordinated schedules, but software in different niches having different schedules), it might maximize the chances of all of their survival and prosperity.

There’s no doubt in my mind that the stronger the “pulse” we are able to create, by coordinating the freezes and releases of major pieces of the free software stack, the stronger our impact on the global software market will be, and the better for all companies – from MySQL to Alfresco, from Zimbra to OBM, from Red Hat to Ubuntu.

Netbooks pre-loaded with Ubuntu

Monday, June 9th, 2008

The Canonical OEM team has been approached by a number of OEMs who want to sell netbooks (small, low-cost laptops with an emphasis on the web) based on Ubuntu. Almost universally, they’ve asked for standard Ubuntu packages and updates, with an app launcher that’s more suited to new users and has the feeling of a “device” more than a PC.

There are some very cool launchers out there – AWN is a current favourite of mine – but people seem to prefer the more 2-dimensional tabbed approach, so the OEM team implemented a lightweight but still very classy launcher for this use case. The work received a detailed review in Ars Technica and has been covered in Free Software Magazine and elsewhere.

The aim was to do something very simple that could be tested easily, work with touch devices and made shippable very quickly. It also needed to be efficient on lower-power devices, and work well with Intel hardware, which seems to be the preferred platform for this generation of devices and allows us to slip a few nice effects in that would be hard without the right hardware support. Here’s a screenshot of a recent version:

The Ubuntu Netbook Remix launcher is laid out for new users

The new launcher is free software – so far, everything Canonical has funded, written and published for general public use on Ubuntu has been under the GPL. Currently we use GPLv3. You can grab the relevant packages from a public PPA – just add the following entry to your /etc/apt/sources.list:

deb http://ppa.launchpad.net/netbook-remix-team/ubuntu hardy main

The PPA contains a number of packages for the launcher, some GNOME panel applets, window manager tweaks and themes. These bits and pieces are small but improve the experience of Ubuntu running with the netbook launcher on screens with lower vertical resolution. There’s also some code in there specific to the Intel netbook hardware platforms – don’t install ume-config-netbook unless you are on the right hardware! This is all code produced by Canonical and published on Launchpad under free software licenses:

https://launchpad.net/netbook-remix
https://launchpad.net/netbook-remix-launcher

I’m particularly happy with the way it gives you more screen space for web browsing, which is probably the major use case on these form factors:

The screen layout is optimised for screens with fewer vertical pixels

There are still plenty of interesting corner cases – Ars calls out issues with the GIMP’s palettes, for example – so please do take the opportunity to test it with the apps you think you’d run on a small laptop (or as El Reg would say, laptot). And feel free to push up a branch or two and submit them for inclusion if you’re up to a bit of Clutter hackery!

For the rest, the netbook remix uses standard Ubuntu packages from the standard Ubuntu archive, with standard security updates. So it meets all of our usual commitments around security and compatibility. You can recreate the netbook remix just by installing 8.04, adding the PPA to your list of repositories, fetching the packages and configuring them appropriately for your system.

The netbook remix is not part of the “official Ubuntu editions”, it’s not like Kubuntu or Ubuntu or Ubuntu Server. It’s a separate remix published by the Canonical OEM team. It will probably get revved in October when Ubuntu 8.10 is released, but that’s up to the Canonical OEM team and their customers, and not the responsibility of the Ubuntu project team.

In working with manufacturers, the OEM team creates custom install images which are specific to hardware from those OEMs. They have the free software packages I’ve described, and they may also include third-party software selected by OEMs which Canonical cannot redistribute, so we can’t publish the custom installers that are produced under contract. Those images typically are hand-customised for a faster boot time, which means they will only work on the particular device for which they were intended, unlike standard Ubuntu which should auto-detect and configure itself for whatever hardware it is being booted on.

We specifically wanted to do this project as an Ubuntu Remix – based on standard Ubuntu 8.04 packages, with modified package selection and some additional code, but leaving the core platform packages unmodified. In terms of the trademark guidelines for an Ubuntu Remix, companies cannot call their platform Ubuntu if they have modified packages (especially the kernel and desktop packages), but they can if they are just re-arranging standard Ubuntu packages. Canonical is in a privileged position as the Ubuntu trademark owner – we can certify a custom kernel if we believe it has been done in an appropriate way that won’t conflict with standard Ubuntu maintenance processes, and if we can keep the custom kernel up to date to the same standard as the normal Ubuntu kernel. So these are certified Ubuntu devices from Canonical, even though they are more customised than the Remix guidelines would allow others to make them.

We’re also working with two companies that want more radical user interface innovation. Canonical is participating directly in the design and implementation of one of those UIs, and we’re integrating someone else’s UI on an Ubuntu base for the second project. I haven’t seen either of those UIs, for confidentiality reasons, but I’m told that the teams working on them think they have great ideas that will elevate, in different ways, the state of the art. All in all it will be exciting to see how the netbook era stimulates innovation in the Linux user experience, because there are a lot of companies wanting to build differentiated UIs on a standard Linux base. And directly or indirectly Canonical will help to bring that innovation to KDE and GNOME and hence to the wider Linux ecosystem.

Discussing free software synchronicity

Thursday, May 15th, 2008

There’s been a flurry of discussion around the idea of synchronicity in free software projects. I’d like to write up a more comprehensive view, but I’m in Prague prepping for FOSSCamp and the Ubuntu Developer Summit (can’t wait to see everyone again!) so I’ll just contribute a few thoughts and responses to some of the commentary I’ve seen so far.

Robert Knight summarized the arguments I made during a keynote at aKademy last year. I’m really delighted by the recent announcement that the main GNOME and KDE annual developer conferences (GUADEC and aKademy) will be held at the same time, and in the same place, in 2009. This is an important step towards even better collaboration. Initiatives like FreeDesktop.org have helped tremendously in recent years, and a shared conference venue will accelerate that process of bringing the best ideas to the front across both projects. Getting all of the passionate and committed developers from both of these into the same real-space will pay dividends for both projects.

Aaron Seigo of KDE Plasma has taken a strong position against synchronized release cycles, and his three recent posts on the subject make interesting reading.

Aaron raises concerns about features being “punted” out of a release in order to stick to the release cycle. It’s absolutely true that discipline about “what gets in” is essential in order to maintain a commitment on the release front. It’s unfortunate that features don’t always happen on the schedule we hope they might. But it’s worth thinking a little bit about the importance of a specific feature versus the whole release. When a release happens on time, it builds confidence in the project, and injects a round of fresh testing, publicity, enthusiasm and of course bug reports. Code that is new gets a real kicking, and improves as a result. Free software projects are not like proprietary projects – they don’t have to ship new releases in order to get the money from new licenses and upgrades.  We can choose to slip a particular feature in order to get a new round of testing and feedback on all the code which did make it.

Some developers are passionate about specific features, others are passionate about the project as a whole. There are two specific technologies, or rather methodologies, that have hugely helped to separate those two and empower them both. They are very-good-branching VCS, and test-driven development (TDD).

We have found that the developers who are really focused on a specific feature tend to work on that feature in a branch (or collaborative set of branches), improving it “until it is done” regardless of the project release cycle. They then land the feature as a whole, usually after some review. This of course depends on having a VCS that supports branching and merging very well. You need to be able to merge from trunk continuously, so that your feature branch is always mergeable *back* to trunk. And you need to be able to merge between a number of developers all working on the same features. Of course, my oft-stated preference in VCS is Bazaar, because the developers have thought very carefully about how to support collaborative teams across platforms and projects and different workflows, but any VCS, even a centralised one, that supports good branches will do.

A comprehensive test suite, on the other hand, lets you be more open to big landings on trunk, because you know that the tests protect the functionality that people had *before* the landing. A test suite is like a force-field, protecting the integrity of code that was known to behave in a particular way yesterday, in the face of constant change. Most of the projects I’m funding now have adopted a tests-before-landings approach, where landings on trunk are handled by a robot that refuses to commit the landing unless all tests pass. You can’t argue with the robot! The beauty of this is that your trunk is “always releasable”. That’s not *entirely* true – you always want to do a little more QA before you push bits out the door – but you have the wonderful assurance that the test suite is always passing. Always.
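To sketch the policy (this is illustrative code, not the actual merge robot – the test command and branch arguments are placeholders): merge the proposed branch into a trunk checkout, run the whole suite, and commit only if it is green.

import subprocess
import sys

TEST_COMMAND = ['./run-tests']   # placeholder: whatever runs your project's suite

def run(args, cwd):
    return subprocess.call(args, cwd=cwd)

def land(feature_branch_url, trunk_dir):
    # Merge the proposed branch into a checkout of trunk...
    if run(['bzr', 'merge', feature_branch_url], cwd=trunk_dir) != 0:
        sys.exit('merge failed: branch is not mergeable with trunk')
    # ...run the whole test suite against the merged tree...
    if run(TEST_COMMAND, cwd=trunk_dir) != 0:
        run(['bzr', 'revert'], cwd=trunk_dir)
        sys.exit('tests failed: landing refused, trunk stays releasable')
    # ...and only commit the landing if everything passed.
    run(['bzr', 'commit', '-m', 'Land ' + feature_branch_url], cwd=trunk_dir)

if __name__ == '__main__':
    land(sys.argv[1], sys.argv[2])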

So, branch-friendly VCSes and test-driven development make all the difference. Work on your feature till it’s done, then land it on the trunk during the open window. For folks who care about the release, the freeze window can be much narrower if you have great tests.

There’s a lot of discussion about the exact length of cycle that is “optimal”, with some commentary about the windows of development, freeze, QA and so on.  I think that’s a bit of a red herring, when you factor in good branching, because feature development absolutely does not stop when the trunk is frozen in preparation for a release. Those who prefer to keep committing to their branches do so, they scratch the itch that matters most to them.

I do think that cycle lengths matter, though. Aaron speculates that a 4-month cycle might be good for a web site. I agree, and we’ve converged on a 4-month planning cycle for Launchpad after a few variations on the theme. The key difference for me with a web site is that one has only one deployment point of the code in question, so you don’t have to worry as much about update and cross-version compatibility. The Launchpad team has a very cool system, where they roll out fresh code from trunk every day to a set of app servers (called “edge.launchpad.net”), and the beta testers of LP use those servers by default. Once a month, they roll out a fresh drop from tip to all the app servers, which is also when they rev the database and can introduce substantial new features. It’s tight, but it does give the project a lot of rhythm. And we plan in “sets of 4 months” – at least, we are for the next cycle. The last planning cycle was 9 months, which was just way too long.

I think the cycles-within-cycles idea is neat. Aaron talks about how 6 months is too long for quick releases, and too short to avoid having to bump features from one cycle to the next. I’ve already said that a willingness to bump a feature that is not ready is a strength and not a weakness. It would be interesting to see whether Aaron’s concerns would be addressed if the Plasma team adopted a shorter “internal” cycle, like 2 or 3 months, and fit that into a 6-month “external” cycle.

For large projects, the fact that a year comes around every, well, year, turns out to be quite significant. You really want a cycle that divides neatly into a year, because a lot of external events are going to happen on that basis. And you want some cohesion between the parts. We used to run the Canonical sprints on a 4-month cycle (3 times a year) and the Ubuntu releases on a six month cycle (twice a year) and it was excessively complex. As soon as we all knew each other well enough not to need to meet up every 4 months, we aligned the two and it’s been much smoother ever since.

Some folks feel that distributions aren’t an important factor in choosing an upstream release cycle. And to a certain extent that’s true. There will always be a “next” release of whatever distribution you care about, and hopefully, an upstream release that misses “this” release will make it into the next one. But I think that misses the benefit of getting your work to a wider audience as fast as possible. There’s a great project management methodology called “lean”, which we’ve been working with. It says that any time the product of your work sits waiting for someone else to do something is “waste”. You could have done that work later, and done something else before that generated results sooner. This is based on the amazing results seen in real-world production lines, like cars and electronics.

So while it’s certainly true that you could put out a release that misses the “wave” of distribution releases but catches the next wave in six months’ time, you’re missing out on all the bug reports and patches and other opportunities for learning and improvement that would have come if you’d been on the first wave. Nothing morally wrong with that, and there may be other things that are more important for sure, but it’s worth considering, nonetheless.

Some folks have said that my interest in this is “for Canonical”, or “just for Ubuntu”. And that’s really not true. I think it’s a much more productive approach for the whole free software ecosystem, and will help us compete with the proprietary world. That’s good for everyone. And it’s not just Ubuntu that does regular 6-month releases, Fedora has adopted the same cycle, which is great because it improves the opportunities to collaborate across both distributions – we’re more likely to have the same versions of key components at any given time.

Aaron says:

Let’s assume project A depends on B, and B releases at the same time as A. That means that A is either going to be one cycle behind B in using what B provides, or will have to track B’s bleeding edge for the latter part of their cycle allowing some usage. What you really want is a staggered approach where B releases right about when A starts to work on things.

This goes completely counter to the “everyone on the same month, every 6 months” doctrine Mark preaches, of course.

I have never suggested that *everyone* should release at the same time. In fact, at Ubuntu we have converged around the idea of releasing about one month *after* our biggest predictable upstream, which happens to be GNOME. And similarly, the fact that the kernel has their own relatively predictable cycle is very useful. We don’t release Ubuntu on the same day as a kernel release that we will ship, of course, but we are able to plan and communicate meaningfully with the folks at kernel.org as to which version makes sense for us to collaborate around.

Rather than try and release the entire stack all at the same time, it makes sense to me to offset the releases based on a rough sense of dependencies.

Just to be clear, I’m not asking the projects I’ll mention below to change anything – I’m painting a picture or a scenario for the purposes of the discussion. Each project should find their own pace and scratch their itch in whatever way makes them happiest. I think there are strong itch-scratching benefits to synchronicity, however, so I’ll sketch out a scenario.

Imagine we aimed to have three waves of releases, about a month apart.

In the first wave, we’d have the kernel, toolchain, languages and system libraries, and possibly components which are performance- and security-critical. Linux, GCC, Python, Java, Apache, Tomcat… these are items which likely need the most stabilisation and testing before they ship to the innocent, and they are also pieces which need to be relatively static so that other pieces can settle down themselves. I might also include things like Gtk in there.

In the second wave, we’d have applications, the desktop environments and other utilities. AbiWord and KOffice, Gnumeric and possibly even Firefox (though some would say Firefox is a kernel and window manager so… ;-)).

And in the third wave, we’d have the distributions – Ubuntu, Fedora, Gentoo, possibly Debian, OpenSolaris. The aim would be to encourage as much collaboration and discussion around component versions in the distributions, so that they can effectively exchange information and patches and bug reports.

I’ll continue to feel strongly that there is value to projects in getting their code to a wider audience than those who will check it out of VCS-du-jour, keep it up to date and build it. And the distributions are the best way to get your code… distributed! So the fact that both Fedora and Ubuntu have converged on a rhythm bodes very well for upstreams who can take advantage of that to get wider testing, more often, earlier after their releases. I know every project will do what suits it, and I hope that projects will feel it suits them to get their code onto servers and desktops faster so that the bug fixes can come faster, too.

Stepping back from the six-month view, it’s clear that there’s a slower rhythm of “enterprise”, “LTS” or “major” releases. These are the ones that people end up supporting for years and years. They are also the ones that hardware vendors want to write drivers for, more often than not. And a big problem for them is still “which version of X, kernel, libc, GCC etc should we support?” If the distributions can articulate, both to upstreams and to the rest of the ecosystem, some clear guidance in that regard then I have every reason to believe people would respond to it appropriately. I’ve talked with kernel developers who have said they would LOVE to know which kernel version is going to turn into RHEL or an Ubuntu LTS release, and ideally, they would LOVE it if those were the same versions, because it would enable them to plan their own work accordingly. So let’s do it!

Finally, in the comments on Russell Coker’s thoughtful commentary there’s a suggestion that I really like – that it’s coordinated freeze dates more than coordinated release dates that would make all the difference. Different distributions do take different views on how they integrate, test and deploy new code, and fixing the release dates suggests a reduction in the flexibility that they would have to position themselves differently. I think this is a great point. I’m primarily focused on creating a pulse in the free software community, and encouraging more collaboration. If an Ubuntu LTS release, and a Debian release, and a RHEL release, used the same major kernel version, GCC version and X version, we would be able to improve greatly ALL of their support for today’s hardware. They still wouldn’t ship on the same date, but they would all be better off than they would be going it alone. And the broader ecosystem would feel that an investment in code targeting those key versions would be justified much more easily.

Playing nicely with Windows

Wednesday, April 9th, 2008

Windows is a very important platform, and our justifiable pride in Linux and the GNU stack shouldn’t blind us to the importance of delivering software that is widely useful. I believe in bringing free software to people in a way that is exciting and empowering to them, and one of the key ways to do that is to show them amazing free software running on their familiar platform, whether that’s Windows or the MacOS.

Firefox, for example, is an inspiring free software success story, and I’m certain that a key driver of that success is their excellent support for the Windows environment. It’s a quick download and an easy install that Just Works, after which people can actually FEEL that free software delivers an innovative and powerful browsing experience that is plainly better than the proprietary alternatives. I’ve noticed that many of the best free software projects have a good Windows story. MySQL and PostgreSQL both do. Bazaar works well too. And users love it – users that may then be willing to take a step closer to living in the GNU world entirely.

So, I was absolutely delighted with the way Agostino Russo and Evan Dandrea steered the Windows-native installer for Ubuntu into 8.04 LTS. What I think is really classy about it is the way it uses the Windows Boot Manager sensibly to offer you the Ubuntu option. If I was a Windows user who was intrigued but nervous about Linux, this would be a really great way to get a taste of it, at low risk. Being able to install and uninstall a Linux OS as if it were a Windows app is a brilliant innovation. Kudos to Agostino and Evan, and of course also to the guys who pioneered this sort of thinking (it’s been done in a number of different ways). It looks crisp, clean and very professional:

Ubuntu being installed through Windows

I’m a little daunted at something as new as WUBI being the very first experience that people have of Linux, free software and Ubuntu, but initial reports are positive.  I did have a question from the media that started with “it didn’t work for me but…” which makes me a wee bit nervous.

So – yesterday I suggested folks hammer on the Heron for servers, today, here’s a call for folks who have a Windows machine and would like to see WUBI in action to test it out and let the developers know if there are any last-minute gotchas. Happy hunting!

Good architectural layering, and Bzr 1.1

Wednesday, January 9th, 2008

I completely failed to blog the release of Bzr 1.0 last year, but it was an excellent milestone and by all accounts, very well received. Congratulations to the Bazaar community on their momentum! I believe that the freeze for 1.1 is in place now so it’s great to see that they are going to continue to deliver regular releases.

I’ve observed a surge in the number of contributors to Bazaar recently, which has resulted in a lot of small but useful branches with bugfixes for various corner cases, operating systems and integrations with other tools. One of the most interesting projects that’s getting more attention is BzrEclipse, integrating Bzr into the Eclipse IDE in a natural fashion.

I think open source projects go through an initial phase where they work best with a tight group of core contributors who get the basics laid out to the point where the tool or application is usable by a wider audience. Then, they need to make the transition from being “closely held” to being open to drive-by contributions from folks who just want to fix a small bug or add a small feature. That’s quite a difficult transition, because the social skills required to run the project are quite different in those two modes. It’s not only about having good social skills, but also about having good processes that support the flow of new, small contributions from new, unproven contributors into the code-base.

It seems that one of the key “best practices” that has emerged is the idea of plug-in architectures, which allow new developers to contribute an extension, plug-in or add-on to the codebase without having to learn too much about the guts of the project, or participate in too many heavyweight processes. I would generalize that and say that good design, with clearly thought-through and pragmatic layers, allows new contributors to make useful contributions to the code-base quickly because it presents useful abstractions early on.

Firefox really benefited from their decision to support cross-platform add-ons. I’m delighted to hear that OpenOffice is headed in the same direction.

Bazaar is very nicely architected. Not only is there a well-defined plug-in system, but there’s also a very useful and pragmatic layered architecture which keeps the various bits of complexity contained for those who really need to know. I’ve observed how different teams of contributors, or individuals, have introduced whole new on-disk formats with new performance characteristics, completely orthogonally to the rest of the code. So if you are interested in the performance of status and diff, you can delve into working tree state code without having to worry about long-term revision storage or branch history mappings.
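As a small, hypothetical illustration of what that plug-in system and layering buy you – the command name and output format here are made up – a drive-by contributor can drop a single Python file into ~/.bazaar/plugins and add a new command that talks only to the working-tree layer:

from bzrlib.commands import Command, register_command
from bzrlib.workingtree import WorkingTree

class cmd_quick_status(Command):
    """Print a one-line summary of changes in the current working tree."""

    def run(self):
        # Only the working-tree layer is involved: no repository or
        # branch-history internals need to be understood or touched.
        tree = WorkingTree.open_containing('.')[0]
        delta = tree.changes_from(tree.basis_tree())
        self.outf.write('%d added, %d removed, %d modified\n'
                        % (len(delta.added), len(delta.removed),
                           len(delta.modified)))

register_command(cmd_quick_status)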

Layering can also cause problems, when the layers are designed too early and don’t reflect the pragmatic reality of the code. For example, witness the “exchange of views” between the ZFS folks and the Linux filesystem community, who have very different opinions on the importance and benefits of layering.

Anyhow, kudos to the Bazaar guys for the imminent 1.1, and for adopting an architecture that makes it easier for contributors to get going.

I’m absolutely thrilled to see this chart of untriaged bugs in Inkscape since the project moved to Launchpad:

Untriaged Inkscape bugs after move to LP

As you can see, the Inkscape community has been busy triaging and closing bugs, radically reducing the “new and unknown” bug count and giving the developers a tighter, more focused idea of where the important issues are that need to be addressed.

A lot of my personal interest in free software is motivated by the idea that we can be more efficient if we collaborate better. If we want free software to be the norm for personal computing software, then we have to show, among other things, that the open, free software approach taps into the global talent pool in a healthier, more dynamic way than the old proprietary approach to building software does. We don’t have money on our side, but we do have the power of collaboration.

I put a lot of personal effort into Launchpad because I love the idea that it can help lead the way to better collaboration across the whole ecosystem of free software development. I look for the practices which the best-run projects follow, and encourage the Launchpad guys to make it easy for everyone to do those things. These improvements and efficiencies help each project individually, but they also help every Linux distribution. This sort of picture gives me a sense of real accomplishment in that regard.

Bryce Harrington, who happens to work for Canonical and is a member of the Inkscape team, told me about this and blogged the experience. I’ve asked a few other Inkscape folks, and they seem genuinely thrilled at the result. I’m delighted. Thank you!

Is it possible to have training materials that are developed in partnership with the community, available under a CC license, AND make those same materials available through formal training providers? We’re trying to find out at Canonical with our Ubuntu Desktop Course.

Billy Cina @Canonical has been making steady progress towards the goal of having a full portfolio of training options available for commercial users of Ubuntu. Companies that want to ensure that their staff are rigorously trained, and individuals who want to present their Ubuntu credentials in a formal setting, need to have a certified and trusted framework for skills assurance.

Most of the work we are doing in this line is following the traditional model, where content is funded as a private investment, and the content is then licensed to authorized training providers who sell courses to their local markets. These courses are usually sold to companies that have adopted a platform or tool and want to ensure a consistent level of skills across the organization. Many companies are moving to Ubuntu for both desktop and server, so demand is hotting up for this capability. A system builder course and a system administrator course are now available from authorized training providers.

But we wanted also to try a different approach, that might be more accessible to the Ubuntu community and might also result in even higher quality materials. We think the key ingredients are:

  • Use of an open format (Docbook)
  • Content source available in a public Bazaar repository (here)
  • Licensing under open terms (CC-BY-NC-SA)
  • Working with the Ubuntu doc-team, who have a wealth of experience

The license is copyleft and non-commercial, so that it is usable by any person for their own education and edification with the requirement that commercial use will involve some contribution back to the core project.

It’s already a 400-page book which gives a great overview of the Ubuntu desktop experience – a very valuable resource for folks who are new to Linux and Ubuntu.

We are getting to the point where we can publish a “daily PDF” which will have the very latest version (“trunk”) compiled overnight. So anyone has free access to the very latest version, and of course anyone can bzr branch the content to make changes that suit them.

If you want to have a look at the latest content, try this:

Type:

bzr launchpad-login <your-lp-username>
bzr branch lp:ubuntu-desktop-course

The source is huge (712MB, lots of images in a large book), so grab a cup of tea, and when you get back you will have the latest version of the content, hot and well-brewed :-) This is a great set of materials if you are offering informal training. Corrections and additions would be most welcome – just push your branch up to Launchpad and request a merge of your changes.