Holistic UI is smarter UX

Tuesday, March 27th, 2012

In the open source community, we celebrate having pieces that “do one thing well”, with lots of orthogonal tools compounding to give great flexibility. But that same philosophy leads to shortcomings on the GUI / UX front, where we want all the pieces to be aware of each other in a deeper way.

For example, we consciously place notifications in the top right of the screen, avoiding space that is particularly precious (like new tab titles and search boxes). But the indicators are also in the top right, and their menus drop down into the same space a notification might occupy.

Since notifications are queued, no notification is guaranteed to be displayed instantly, so a smarter notification experience would stay out of the way while you were using indicator menus, or get out of the way when you invoke them. The design story of Ayatana, where we balance the need for focus with the need for awareness, suggests that we should suppress awareness-oriented things in favour of focus-oriented things. So when you’re interacting with an indicator menu, we shouldn’t pop up the notification. But since the notification system and the indicator menu system are separate parts, the UNIX philosophy sells us short in designing a smart, smooth experience, because it says each should do its own thing individually.

Going further, it’s silly that the sound menu’s next/previous track buttons pop up a notification, because the same menu shows the new track immediately anyway. The notification, which is purely for background awareness, distracts from your focus while conveying exactly the same information!

But it’s not just the system menus. Apps can play in that space too, and we could be better about shaping the relationship between them. For example, if I’m moving the mouse around in the area of a notification, we should be willing to defer it a few seconds to stay out of the way of my focus. When I stop moving the mouse, or stop typing in a window in that region, then it’s OK to pop up the notification.
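
Concretely, the heuristic could look something like the following. This is a minimal sketch in Python: display_bubble is a hypothetical callback that actually draws the bubble, and activity_in_corner would be wired to hypothetical input hooks; only the GLib timeout call is a real API.

```python
# A minimal sketch of the deferral heuristic described above.
import time
from gi.repository import GLib

DEFER_MS = 500        # how often to re-check the corner
QUIET_SECONDS = 2.0   # how long the corner must be idle before we show

class NotificationGate:
    def __init__(self, display_bubble):
        self.display_bubble = display_bubble  # hypothetical: draws the bubble
        self.last_activity = 0.0

    def activity_in_corner(self):
        # Called by (hypothetical) input hooks when the mouse moves or the
        # user types in the region a notification would occupy.
        self.last_activity = time.monotonic()

    def show(self, notification):
        # Notifications are queued anyway, so deferring by a few hundred
        # milliseconds is invisible. Keep re-checking until the corner is
        # quiet, then display.
        def try_show():
            if time.monotonic() - self.last_activity < QUIET_SECONDS:
                return True    # corner still busy: keep the timer alive
            self.display_bubble(notification)
            return False       # done: remove the timer
        GLib.timeout_add(DEFER_MS, try_show)
```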

It’s only by looking at the whole that we can design great experiences. And it’s only by building a community of both system and application developers who care about the whole that we can make those designs real. So, thank you to all of you who approach things this way. We’ve made huge progress, and hopefully there are some ideas here for low-hanging improvements too 🙂

Note to the impatient: this is a long post and it only gets to free software ecosystem dynamics towards the end. The short version is that we need to empower software companies to participate in the GNU/Linux ecosystem, and not fear them. Rather than undermining their power, we need to balance it through competition.

Church schools in apartheid South Africa needed to find creative ways to teach pupils about the wrongs of that system. They couldn’t actively foment revolt, but they could teach alternative approaches to governance. That’s how, as a kid in South Africa, I spent a lot of time studying the foundations of the United States, a system of governance defined by underdogs who wanted to defend not just against the abuses of the current power, but abuses of power in general.

My favourite insight in that regard comes from James Madison in the Federalist Papers, where he describes the need to understand and harness human nature as a force: to pit ambition against ambition, as it is often described. The relevant text is worth a read if you don’t have time for the whole letter:

But the great security against a gradual concentration of the several powers in the same department, consists in giving to those who administer each department the necessary constitutional means and personal motives to resist encroachments of the others. The provision for defense must in this, as in all other cases, be made commensurate to the danger of attack. Ambition must be made to counteract ambition. The interest of the man must be connected with the constitutional rights of the place. It may be a reflection on human nature, that such devices should be necessary to control the abuses of government. But what is government itself, but the greatest of all reflections on human nature? If men were angels, no government would be necessary. If angels were to govern men, neither external nor internal controls on government would be necessary. In framing a government which is to be administered by men over men, the great difficulty lies in this: you must first enable the government to control the governed; and in the next place oblige it to control itself. A dependence on the people is, no doubt, the primary control on the government; but experience has taught mankind the necessity of auxiliary precautions.

When we debate our goals, principles and practices in the FLOSS community, we devote a great deal of energy to “how things should be”, and to the fact that “men are not angels”. I think the approach of James Madison is highly relevant to those discussions.

The conservation of power

Just as energy, momentum, charge and other physical properties of a system are conserved, so in a sense is power. If your goal is to reduce the power of one agency in government, the most effective strategy is to strengthen the position of another. We know that absolute monarchies are bad: they represent unbalanced power.

Within a system, power will tend to consolidate. We have antitrust agencies specifically to monitor the consolidation of economic power and to do something about it. We set up independent branches of government to ensure that some kinds of power simply cannot be consolidated.

Undermining power in one section of an ecosystem inevitably strengthens the others.

Since we humans tend to think the grass is greener on the other side of the fence, and since power takes a little while to get properly abused, you can often see societies oscillate in the allocation of power. When things seem a little out of control, we give more power to the police and other securocrats. Then, when they become a little thuggish, we squeeze their power through regulation and oversight, and civil liberties gain in power, until the pendulum swings again.

The necessity of concentrated power

Any power can be abused. I had a very wise headmaster at that same school who used to say that the only power worth having was power that was worth abusing. This was not a call to the abuse of power, you understand, merely a reflection on the fact that power comes with the real responsibility of restraint.

So, if power can be abused, why do we tolerate it at all? Why not dissolve authority down to the individual? Because the absence of power leads to chaos, which ironically is an easy place to establish despotic authority. Power isn’t seized – it’s given. We give people power over us. And in a state of chaos, all it takes is a few people to gain some power and they have a big advantage over everyone else. That’s why early leaders in new ecosystems tend to become unbeatable very quickly.

Also, power clears the path for action. In a world with no power, little gets done at all. We are better off with large companies that have the power to organise themselves around a goal than trying to achieve the same goal with a collection of individuals; try making a Boeing from an equivalent group of artisans, and you’ll see what I mean. Artisans form guilds and companies to increase their reach and impact. Individual volunteers join professional institutions to get more effective: consider the impact of handing out food yourself, versus helping sustain a network of soup kitchens, even in the purely non-profit world. Having some clout on your side is nothing to sniff at, even if you have purely philanthropic goals.

Power and innovation

If you have all the power already, there’s no spur to innovate. So kingdoms stagnate, eventually.

But power makes space for good things, too. It’s the powerful (and rich) who fund the arts in most societies. Innovation needs breathing space; companies with economic power can incubate new ideas to the point where they become productive.

Too much competition can thus limit innovation: look how difficult it has been for the Windows-based PC manufacturers, who live in a brutally competitive world and have little margin, to innovate. They are trapped between a highly efficient parts supply ecosystem, which feeds them all the same stuff at the same price, and a consumer market that requires them all to provide PCs which run the same stuff the same way. As a result, they have little power, little margin, little innovation.

The trick is not to fear power itself, but instead, to shape, balance and channel it. You don’t want to aim for the absence of power, you want the Goldilocks effect of having “just enough”. And that was James Madison’s genius.

Verticals, competition and the balance of power

Of course, competition between rivals is the balance of power in business. We resent monopolies because they are either abusing their power, or stagnating.

In economics, we talk about “verticals” as the set of supply dependencies needed for a particular good. So, to make an aircraft, you need various things like engines and alloys, and those suppliers all feed the same pool of aircraft manufacturers.

In order to have a healthy ecosystem, you need a balance of power both between suppliers at the same level of the stack, and vertically, between the providers of parts and providers of the finished product. That’s because innovation needs both competition AND margin to stimulate and nurture it.

In the PC case, the low margins in the PC sector helped reinforce the Windows monopoly. Not only was there no competition for Microsoft, there was no ability for a supplier further down the chain to innovate around them. The only player in that ecosystem with the margin to innovate was Microsoft, and since it faced no competition, there was little stimulus to act on its own R&D, no matter how much it spent on it.

Power in the FLOSS ecosystem: upstreams and distributions

So, where do we stand in the free software and open source ecosystem?

The lines between upstreams and distributions aren’t perfectly clear, of course. Simplistic versions of that picture are often used to prove points, but in fact, all the distributions are also in some sense upstreams, and even derivative distributions end up being leaders of those they derive from in some pieces or markets. Nevertheless, I think it’s worth looking at the balance of power between upstream projects and distributions, as it is today and as it could be.

Also, I think it’s worth looking at related parties, companies and institutions which work a lot with FLOSS but have orthogonal interests.

If one uses margin, or profit, as an indicator of power, it’s clear that the distributions today are in a far stronger position than most individual projects or upstreams. The vast majority of software-related revenue in the FLOSS ecosystem goes to distributions.

Within that segment, Red Hat claims 80% market share of paid Linux, a number that is probably accurate. Novell, the de facto #2, is in the midst of some transition, but indicators are that it continues to weaken. Oracle’s entry into the RHEL market has had at best marginal impact on RHEL economics (the substantial price rises in RHEL 6 are a fairly clear signal of the degree to which Red Hat believes it faces real competition). The existence of “unpaid RHEL” in the form of CentOS, as well as OEL, essentially strengthens the position of RHEL itself. Ubuntu and Debian have large combined levels of adoption, but low revenue.

So clearly, there is work to do just to balance power in the distribution market. And it will take work – historically, platforms tend towards monopolies, and in the absence of a definitive countervailing force that establishes strength outside the RHEL gravity well, that’s what we’ll have. But that’s not the most interesting piece. What’s more interesting is the dynamic between distributions and upstreams.

Today, most upstreams are weak. They have little institutional strength. It’s generally difficult to negotiate and do business with an upstream. In many cases, that’s by design – the teams behind a project are simply not interested, or they are explicitly non-profit, as in the case of the FSF, which makes them good leaders of specific values, but difficult to engage with commercially.

As a result, those who need to do business with open source go to distributions, even in cases where they really want to be focused on a particular component. This greatly amplifies the power of the distributions: they essentially are the commercial vehicles for ALL of open source. The weakness of individual upstreams turns into greater strength for distributions.

You can imagine that distributions like it that way, and it would be surprising to see a distribution, or company that backs a distribution, arguing for stronger upstreams. But that’s exactly the position I take: FLOSS needs stronger upstreams, and as a consequence, weaker distributions.

Stronger upstreams will result in more innovation in FLOSS than stronger distributions. Essentially, like Microsoft, a distribution receives cash for the whole platform and allocates it to specific areas of R&D. That means the number of good ideas that receive funding in our ecosystem, today, is dependent on the insights of a very few companies. Just as Microsoft invested a lot in R&D and yet seemed to fall behind, upstream innovation will be strangled if it’s totally dependent on cash flow via distributions.

It’s not just innovation that suffers because we don’t have more power, or economic leverage, in the hands of upstreams. It’s also the myriad things beyond the code itself. When you have a company behind a project, it tends to take care of a lot more than just the code: QA, documentation, testing, promotion. It’s easy, as a developer, to undervalue those things, or to see them as competing for resources with the “real work” of code. But that competition is necessary, and those things make a great contribution to the dynamism of the final product.

Consider the upstream projects which have been very successful over the long term. Qt and MySQL, for example, both had companies behind them that maintained strong leverage over the product. That leverage was often unpopular, but the result was products available to all of us under a free license that continued to grow in stature, quality and capability despite the ups and downs of the broader market, and without being too dependent on the roving spotlight of “coolness”, which tends to move quickly from project to project.

There are of course successful upstream projects which do not have such companies. The best example is probably the Linux kernel itself. However, those projects fall into a rather unusual category: they are critical to some large number of companies that make money in non-software ways, and those companies are thus forced to engage with the project and contribute. In the case of the kernel, hardware companies directly and indirectly underwrite the vast majority of the boring but critical work that, in other projects, would be covered by the sponsoring institution. And despite that, there are many gaps in the kernel. You don’t have to dig very hard to find comments from key participants bemoaning the lack of testing and documentation. Nevertheless, it gets by quite well under the circumstances.

But most ecosystems will have very few projects that are at such a confluence. Most upstream projects are the work of a few people, the “coolness” spotlight shines on them briefly if at all. They need either long term generosity from core contributors, or an institution to house and care for them, if they want to go the distance. The former rarely works for more than a few years.

Projects which depend on indirect interests, such as those sponsored by hardware companies, have another problem. Their sponsoring institutions are generally not passionate about software. They don’t really need or want to produce GREAT software. And if you look at the projects which get a lot of such contributions, that becomes very obvious. Compare the quality of apps from companies which deeply care about software with the quality of those which come from hardware companies, and you’ll see what I mean.

We FLOSS folk like to tell ourselves that the Windows hegemony was purely a result of the manipulations of its sponsor, and that FLOSS as we do it today is capable of doing much more if only it had a fair chance. I don’t think, having watched the success of iOS and Android as new ecosystems, that we can justify that position any longer. I think we have to be willing to think hard about what we are willing to change if we want the chance of building an ecosystem as strong, but around GNU/Linux. Since that’s my goal, I’m thinking very hard about that, and creatively. I think it’s possible, but not without challenging some sacred cows and figuring out which values we want to preserve and which we can remould.

Power is worth having in your ecosystem, despite its occasional abuse

There’s no doubt that power created will be abused. That’s true of a lot of important rights and powers. For example, we know that free speech is often abused, but we nevertheless value it highly in many societies that are also big contributors to FLOSS. You probably know the expression, “I disagree with what you are saying entirely, but I will defend to the death your right to say it”.

Similarly, in our ecosystem, power will be abused. But it’s still worth helping institutions acquire it, even those we dislike or distrust, or those we compete with. At Canonical, we’ve directly and indirectly helped lots of institutions that you could describe that way – Oracle, Novell, Red Hat, Intel and many others. The kneejerk reaction is usually “no way”, but upon deeper thought, we figured that it is better to have an ecosystem of stronger players, considering the scale of the battle with the non-FLOSS world.

I often find people saying “I would help an institution if I thought I could trust it”. And I think that’s a red herring, because just as power will be abused, trust will be abused too. If you believe that this is a battle of ecosystems and platforms, you want to have as many powerful competitors in your ecosystem as possible, even though you probably cannot trust any of them in the very long term. It’s the competition between them that really creates long term stability, to come back to the thinking of James Madison. It’s pitting ambition against ambition, not finding angels, which makes that ecosystem a winner. If you care about databases, don’t try to weaken MySQL, because you need it strong. Rather, figure out how to strengthen Postgres alongside it.

How Canonical fits in

Canonical is in an interesting position with regard to all of this. As a distribution, we could stay silent on the issue, and reasonably expect to grow in power over time, on the same basis that Red Hat has. And there are many voices in Canonical that say exactly that: don’t rock the boat, essentially.

However, perhaps unlike other Linux distributions, Canonical very much wants to see end users running free software, and not just IT professionals. That raises the bar dramatically in terms of the quality of the individual pieces. It means that it’s not good enough for us to work in an ecosystem which produces prototype or rough cut products, which we then aggregate and polish at the distribution level. Unlike those who have gone before, we don’t want to be the sole guarantor of quality in our ecosystem, because that will not scale.

For that reason, looking at the longer term, it’s very important to me that we figure out how to give more power to upstreams, so that they in turn can invest in producing components or works which have the completeness and quality that end-users expect. I enjoy working with strong commercial institutions in the open source ecosystem – while they always represent some competitive tension, they also represent the opportunity to help our ecosystem scale and out-compete the proprietary world. So I’d like to find ways to strengthen the companies that have products under free software, and to encourage more companies with proprietary products to make them available under free licenses, even if that’s not the only way they publish them.

If you’ve read this far, you probably have a good idea where I’m going with this. But I have a few more steps before actually getting there. More soon.

Till then, I’m interested in how people think we can empower upstream projects to be stronger institutionally.

There are a couple of things that are obvious and yet don’t work. For example, lots of upstreams think they should form a non-profit institution to house their work. The track record of those is poor: they get set up, and they fail as soon as they have to file their annual paperwork, leaving folks like the SFLC to clean up the mess. Not cool. At the end of the day, such new institutions add paperwork without adding funding or other sources of energy. They don’t broaden out the project the way a company writing documentation and selling services usually does. On the other hand, non-profits like the FSF which have critical mass are very important, which is why on occasion we’ve been happy to contribute to them in various ways.

Also, I’m interested in how we can reshape our attitudes to power. Today, the tenor of discussion in most FLOSS debates is simplistic: we fear power, and we attempt to squash it always, celebrating the individual. But that misses the point that we are merely strengthening power elsewhere: in distributions, in other ecosystems. We need a richer language for describing the “Goldilocks” balance of power, and how we can move beyond FUD.

So, what do you think we could do to create more Mozillas, more MySQLs, more Qts and more OpenStacks?

I’ll summarise interesting comments and threads in the next post.

Competition is tough on the contestants, but it gets great results for everyone else. We like competitive markets, competitive technologies, competitive sports, because we feel the end result for consumers or the audience is as good as it possibly could be.

In larger organisations, you get an interesting effect. You get *internal* competition as well as external competition. And that’s healthy too. You get individuals competing for responsibility, and of course you want to make sure that the people who will make the wisest choices carry the responsibilities that demand wisdom, while those who have the most energy carry the responsibilities for which that’s most needed. You get teams competing with each other too – for resources, for attention, for support elsewhere, for moral authority, for publicity. And THAT’s the hardest kind of competition to manage, because it can be destructive to the organisation as a whole.

Even though it’s difficult to manage, internal competition is extremely important, and should not be avoided out of fear. The upside is that you get to keep the best ideas, because you allowed them to compete internally. If you try to avoid it, you crowd out new ideas, and end up having to catch up to them. Usually, what goes wrong is that one group gets control of the organisation’s thinking, and takes the view that any ideas which do not come from within that group are a threat, and should be stopped. That’s very dangerous – it’s how great civilisations crash; they fail to embrace new ideas which are not generated at the core.

In Ubuntu, we have a lot of internal competition. Ubuntu and Kubuntu and Xubuntu and Edubuntu and *buntu-at-large have to collaborate and also, to a certain extent, compete. We handle that very well, I think, though occasionally some muppet calls Kubuntu the blue-headed stepchild, etc. It’s absolutely clear to everyone, though, that we have a shared interest in delivering ALL these experiences together with as much shared vision and commonality as possible. I consider the competition between these teams healthy and constructive and worth maintaining, even though it requires some fancy footwork and causes occasional strains.

The challenge for Gnome leadership

The sound and fury writ large in blog comments this week is all about how competition is managed.

Gnome is, or should be, bigger than any of the contributing individuals or companies. Gnome leadership should be in a position to harness competition effectively for the good of the project. That does, however, require very firm leadership, and very gutsy decisions. And it requires vigilance against inward thinking. For example, I’ve seen the meme reiterated multiple times that “one should not expect Gnome to embrace ideas which were not generated and hosted purely within Gnome”. That’s chronic inward thinking. Think of all the amazing bits of goodness in the free software stack which were NOT invented in Gnome but are a part of it today. Think how much better it is when goodness is adopted across multiple desktop environments, and how much harder it is to achieve that when something is branded “K” or “G”.

When we articulated our vision for Unity, we were very clear that we wanted to deliver it under the umbrella of Gnome. We picked Gnome-friendly technologies by and large, and where we felt we needed to do something different, that decision was subject to substantial review. We described Unity as “a shell for Gnome” from the beginning, and we have been sincere in that view. We have worked successfully and happily with many, many Gnome projects to integrate Unity APIs into their codebases.

This is because we wanted to be sure that whatever competitive dynamics arose were *internal* to Gnome, and thus contributing to a better result overall in Gnome in the long term.

We’ve failed.

Much of the language, and much of the decision making I’ve observed within Gnome, is based on the idea that Unity is competition WITH Gnome, rather than WITHIN Gnome.

The key example of that is the rejection of Unity’s indicator APIs as external dependencies. That was the opportunity to say “let’s host this competition inside Gnome”. Even now, there’s a lack of clarity as to what was intended by that rejection, with some saying “it was just a reflection of the fact that the APIs were new and not used in any apps”. If that were the case, there would be no need for prior approval as an external dependency; the rejection was clearly an attempt to prevent Gnome applications from engaging around these APIs. It has substantially failed, as many apps have happily done the work to blend in beautifully in the Unity environment, but there has been a clear attempt to prevent that by those who feel that Unity is a threat to Gnome rather than an opportunity for it.

Dave Neary has, to his credit, started to ask “what’s really going on here?”

In his blog post, he quoted the rationale given for the rejection of Canonical’s indicator APIs, which I’ll re-quote here and analyse in this light:

it doesn’t integrate with gnome-shell

That’s it – right there. Remember, this was a proposal for the indicator APIs to be an *external* dependency for Gnome. That means Gnome apps can use those APIs *optionally*, when they are running on a platform where they are useful. It has NOTHING to do with the core Gnome vision. External APIs exist precisely BECAUSE it’s useful to encourage people to use Gnome apps on all sorts of platforms, including proprietary ones like Windows and MacOS and Solaris, and they should shine there too.
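
As an illustration of what “optional” means for an app, here is a minimal sketch of the pattern: use the indicator APIs where the platform provides them, and fall back to a classic status icon elsewhere. It assumes the GObject-introspected AppIndicator3 bindings are available; the app id, icon name and menu contents are illustrative, not taken from any particular application.

```python
# A sketch of optional use of the indicator APIs: blend in where the
# platform provides the indicator service, fall back elsewhere.
from gi.repository import Gtk

menu = Gtk.Menu()
quit_item = Gtk.MenuItem(label="Quit")
quit_item.connect("activate", lambda w: Gtk.main_quit())
menu.append(quit_item)
quit_item.show()

try:
    from gi.repository import AppIndicator3
    indicator = AppIndicator3.Indicator.new(
        "example-app", "application-default-icon",
        AppIndicator3.IndicatorCategory.APPLICATION_STATUS)
    indicator.set_status(AppIndicator3.IndicatorStatus.ACTIVE)
    indicator.set_menu(menu)   # the panel's indicator area renders this
except ImportError:
    # No indicator service on this platform: use a plain status icon.
    icon = Gtk.StatusIcon.new_from_icon_name("application-default-icon")
    icon.connect("popup-menu",
                 lambda i, button, t: menu.popup(None, None, None, None,
                                                 button, t))

Gtk.main()
```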

So the primary reason given for the rejection of these APIs is one that, as best we can tell, has never been used against an external dependency proposal before: “it’s different to Gnome”. At the heart of this statement is something deeper: “it’s competition with an idea someone in Gnome wants to pursue”.

What made this single statement heartbreaking for me to see was that it spoke clearly to the end of one of Gnome’s core values: code talks. Here we had APIs which were real, tested code, with patches to many Gnome apps available, implementing a spec that had been extensively discussed on FreeDesktop.org. This was real code. Yet it was blocked because someone – a Gnome Shell designer – wanted to explore other ideas, ideas which at the time were not working code at all. There’s been a lot of commentary on that decision. Most recently, Aaron Seigo pointed out that it was as much a rejection of cross-desktop standards as it was a rejection of Canonical’s code.

Now, I can tell you that I was pretty disgusted with this result.

We had described the work we wanted to do (cleaning up the panel, turning panel icons into menus) to the Gnome Shell designers at the 2008 UX hackfest. McCann denies knowledge today, but it was a clear decision on our part to talk about this work with him at the time; it was reported to me that the conversation had happened, and that we’d received the assurance that such work would be “a valued contribution to the shell”. Clearly, by the time it was delivered, McCann had decided that such assurances were not binding, and that his interest in an alternative panel story trumped both that assurance and the now-extant FreeDesktop.org discussions and spec.

But that’s not the focus of this blog. My focus here is on the management of healthy competition. And external dependencies are the perfect mechanism for that: they signal that there is a core strategy (in this case, whatever Jon McCann wants to do with the panel) and yet there are also other, valid approaches which Gnome apps can embrace. This decision failed to grab that opportunity with both hands. It said “we don’t want this competition WITHIN Gnome”. But the decision cannot remove the competitive force. What that means is that the balance shifted to competition WITH Gnome.

probably depends on GtkApplication, and would need integration in GTK+ itself

Clearly, both of these claims are flawed. The architecture of the indicator work was designed both for backward compatibility with the systray of the time, and for easy adoption. We have lots of apps using the APIs without either of these points being the case.

we wished there was some constructive discussion around it, pushed by the libappindicator developers; but it didn’t happen

We made the proposal; it was rejected. I can tell you that the people who worked on the proposal consider themselves Gnome people, and they feel they did what was required, and stopped when it was clear they were not going to be accepted. I’ve had people point to this bullet and say “you should have pushed harder”. But proposing an *external* dependency is not the same as trying to convince Shell to adopt something as the mainstream effort. It’s saying “hey, here’s a valid set of APIs apps might want to embrace, let’s let them do so”.

there’s nothing in GNOME needing it

This is a very interesting comment. It’s saying “no Gnome apps have used these APIs”. But the Gnome apps in question were looking to this very process for approval of their desire to use the APIs. You cannot have a process to pre-approve APIs, then decline to approve them because “nobody has used the APIs which are not yet approved”. You’re either saying “we just rubber-stamp stuff here, go ahead and use whatever you want”, or you’re being asinine.

It’s also saying that Unity is not “in GNOME”. Clearly, a lot of Unity work depends on the adoption of these APIs for a smooth and well-designed panel experience. So once again, we have a statement that Unity is “competition with Gnome” and not “competition within Gnome”.

And finally, it’s predicating this decision on the idea that being “in Gnome” is the sole criterion of goodness. There is a cross-desktop specification which defines the appindicator work clearly. The fact that KDE apps Just Work on Unity is thanks to the work done to make this a standard. Gnome devs participated in the process, but appeared not to stick with it. Many or most of the issues they raised were either addressed in the spec or in the implementations of it. They say now that they were not taken seriously, but a reading of the mailing list threads suggests otherwise.

It’s my view that cross-desktop standards are really important. We host both Kubuntu and Ubuntu under our banner, and without such collaboration, that would be virtually impossible. I want Banshee to work as well under Kubuntu as Amarok can under Ubuntu.

What can be done?

This is a critical juncture for the leadership of Gnome. I’ll state plainly that I feel the long tail of good-hearted contributors to Gnome and Gnome applications is being let down by a decision-making process that has let competitive dynamics diminish the scope of Gnome itself. Ideas that are not generated “at the core” have to fight incredibly and unnecessarily hard to get oxygen. Ask the Zeitgeist team. Federico is a hero, but getting room for ideas to be explored should not feel like a frontal assault on a machine-gun post.

This is no way to lead a project. This is a recipe for a project that loses great people to environments that are more open to different ways of seeing the world. Elementary. Unity.

Embracing those other ideas and allowing them to compete happily and healthily is the only way to keep the innovation they bring inside your brand. Otherwise, you’re doomed to watching them innovate and then having to “relayout” your own efforts to keep up, badmouthing them in the process.

We started this with a strong, clear statement: Unity is a shell for Gnome. Now Gnome leadership have to decide if they want the fruit of that competition to be an asset to Gnome, or not.

A blessing in disguise

Aaron’s blog post made me think that the right way forward might be to bolster and strengthen the forum for cross-desktop collaboration: FreeDesktop.org.

I have little optimism that the internal code dynamics of Gnome can be fixed – I have seen too many cases where a patch which implements something needed by Unity is dissed, then reimplemented differently, or simply left to rot, to believe that the maintainers in Gnome who have a competitive interest on one side or the other will provide a level playing field for this competition.

However, we have shown a good ability to collaborate around FD.o with KDE and other projects. Perhaps we could strengthen FreeDesktop.org and focus our efforts at collaboration around the definition of standards there. Gnome has failed to take that forum seriously, as evidenced by the frustrations expressed elsewhere. But perhaps if we had both Unity and KDE working well there, Gnome might take a different view. And that would be very good for the free software desktop.

Qt apps on Ubuntu

Tuesday, January 18th, 2011

As part of our planning for Natty+1, we’ll need to find some space on the CD for Qt libraries, and we will evaluate applications developed with Qt for inclusion on the CD and default install of Ubuntu.

Ease of use, and effective integration, are key values in our user experience. We care that the applications we choose are harmonious with one another and the system as a whole. Historically, that has meant that we’ve given very strong preference to applications written using Gtk, because a certain amount of harmony comes by default from the use of the same developer toolkit. That said, with OpenOffice and Firefox having been there from the start, Gtk is clearly not an absolute requirement. What I’m arguing now is that it’s the values which are important, and the toolkit is only a means to that end. We should evaluate apps on the basis of how well they meet the requirement, not prejudice them on the basis of technical choices made by the developer.

In evaluating an app for the Ubuntu default install, we should ask:

* is it free software?
* is it best-in-class?
* does it integrate with the system settings and preferences?
* does it integrate with other applications?
* is it accessible to people who cannot use a mouse or keyboard?
* does it look and feel consistent with the rest of the system?

Of course, the developer’s choice of Qt has no influence on the first two. Qt itself has been available under the GPL for a long time, and more recently became available under the LGPL. And there’s plenty of best-in-class software written with Qt; it’s a very capable toolkit.

System settings and prefs, however, have long been a cause of friction between Qt and Gtk. Integration with system settings and preferences is critical to the sense of an application “belonging” on the system. It affects the ability to manage that application using the same tools one uses to manage all the other applications, and the sorts of settings-and-preference experience that users can have with the app. This has traditionally been a problem with Qt / KDE applications on Ubuntu, because Gtk apps all use a centrally-manageable preferences store, and KDE apps do things differently.

To address this, Canonical is driving the development of dconf bindings for Qt, so that it is possible to write a Qt app that uses the same settings framework as everything else in Ubuntu. We’ve contracted with Ryan Lortie, who obviously knows dconf very well, and he’ll work with some folks at Canonical who have been using Qt for custom development work for customers. We’re confident the result will be natural for Qt developers, and a complete expression of dconf’s semantics and style.
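
To ground what “the same settings framework” means: dconf is the storage backend behind GSettings, so any app that speaks GSettings reads and writes the same centrally-manageable store, whatever its toolkit. Here is a minimal sketch from the Python side using the introspected Gio.Settings API against a standard GNOME schema; the Qt bindings described above would expose this same store to Qt code.

```python
# A minimal sketch of the shared settings store: every app that speaks
# GSettings - whatever its toolkit - sees the same dconf-backed data.
from gi.repository import Gio

settings = Gio.Settings.new("org.gnome.desktop.interface")

# Read the system-wide font preference, exactly as a Gtk app would.
print(settings.get_string("font-name"))

# React when the user changes it in the system preferences: every app
# sharing the store sees the same change at the same time.
settings.connect("changed::font-name",
                 lambda s, key: print("font is now", s.get_string(key)))
```

A Qt app talking dconf through bindings like these would respond to the same change notifications as every Gtk app on the system, and be manageable with the same tools, which is exactly the sense of “belonging” described above.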

The Qt team have long worked well in the broader Ubuntu community: we have great Qt representation at UDS every six months, the Kubuntu team have deep experience and interest in Qt packaging and maintenance, and there is lots of good technical exchange between Qt upstream and various parts of the Ubuntu community, including Canonical. For example, Qt folks are working to integrate uTouch.

I’d draw a distinction between “Qt” and “KDE” in the obvious places. A KDE app doesn’t know anything about the dconf system configuration, and can’t easily integrate with the Ubuntu desktop as a result. So we’re not going to be proposing Amarok to replace Banshee any time soon! But I think it’s entirely plausible that dconf, once it has great Qt bindings, will be considered by the KDE community. There are better people to lead that conversation if they want to, so I’ll not push the idea further here :-). Nevertheless, should a KDE app learn to talk dconf in addition to the standard KDE mechanisms, which should be straightforward, it would be a candidate for the Ubuntu default install.

The decision to be open to Qt is in no way a criticism of GNOME. It’s a celebration of free software’s diversity and complexity. Those values of ease of use and integration remain shared values with GNOME, and a great basis for collaboration with GNOME developers and project members. Perhaps GNOME itself will embrace Qt, perhaps not, but if it does then our willingness to blaze this trail would be a contribution in leadership. It’s much easier to make a vibrant ecosystem if you accept a certain amount of divergence from the canonical way, so to speak 😉. Our work on design is centred around GNOME, with settings and preferences the current focus as we move to GNOME 3.0 and gtk3.

Of course, this is a perfect opportunity for those who would poke fun at that relationship to do so, but in my view what matters most is the solid relationship we have with people who actually write applications under the GNOME banner. We want to be the very best way to make the hard work of those free software developers *matter*, by which we mean, the best way to ensure it makes a real difference in millions of lives every day, and the best way to connect them to their users.

To the good folks at Trolltech, now Nokia, who have made Qt a great toolkit – thank you. To developers who wish to use it and be part of the Ubuntu experience – welcome.

A kind invitation

Thursday, September 23rd, 2010

Delighted to receive this today, and to proxy it through to Planets U and G:

Dear Ubuntu Community Council members,

on behalf of the openSUSE Board, I would like to extend this
invitation to you and your community to join us at the
openSUSE Conference in Nuremberg, Germany October 20-23, 2010.

This year more than seventy talks and workshops explore the theme of
‘Collaboration Across Borders‘ in Free and Open Source software
communities, administration and development. We believe that the
program, which includes tracks about distributions, the free desktop and
community, reaches across the borders between our projects and we
would like to ask you to encourage your community to visit the
conference so we get the chance to meet face to face, talk to and
inspire each other.

More information including the program and details about the event you
can find in our announcement at http://bit.ly/oconf2010

Thank you in advance and see you in October! 🙂

Henne Vogelsang
openSUSE Board Member

We’ll gladly sponsor a member of the Ubuntu community council to go, and we’re busy finding out if anyone can make it. I can’t, but I appreciate the sentiment and the action, and think it would be great if members of the Ubuntu community can take up the invitation.

Regardless, best wishes for the conference!

Prompted in part by the critique of Canonical’s code contributions to the kernel and core GNOME infrastructure, I’ve been pondering whether or not I feel good about what I do every day, and how I do it. It’s important for me to feel that what I do is of service to others and makes the world a better place for it having been done. And in my case, that it’s a contribution commensurate with the good fortune I’ve had in life.

Two notes defined for me what I feel I contribute, in this last month. One was a thank-you from New Zealand, from someone who is watching Ubuntu 10.04 make a real difference in their family’s life. For them it seems like a small miracle of human generosity that this entire, integrated, working environment exists and is cared for by thousands of people. The other was a support contract for tens of thousands of desktops running Ubuntu 10.04 in a company. Between those two, we have the twin pillars of the Ubuntu Project and Canonical: to bring all the extraordinary generosity of the free software community to the world at large, as a gift, free of charge, unencumbered and uncrippled, and to do so sustainably.

The first story, from New Zealand, is about someone who is teaching their children to use computers from a young age, and who has observed how much more they get done with Ubuntu than with Windows, and how much more affordable it is to bring computing to all the kids in their community with Ubuntu. For them, the fact that Ubuntu brings them this whole world of free software in one neat package is extraordinary, a breakthrough, and something for which they are very grateful.

It’s a story that I hope to see replicated a hundred million times. And it’s a story which brings credit and satisfaction not just to me, and not just to the people who make Ubuntu the focus of their love and energy, but to all of those who participate in free software at large. Ubuntu doesn’t deserve all the credit, it’s part of a big and complex ecosystem, but without it that delivery of free software just wouldn’t have the same reach and values.

We all understand that the body of free software needs many organs, many cells, each of which has its own priorities and interests. The body can only exist thanks to all of them. We are one small part of the whole; it’s a privilege for us to take up the responsibilities that we do as a distribution. We have the responsibility of choosing a starting point for those who will begin their free software journey with Ubuntu, and we work hard to make sure that all of those pieces fit well together.

Ubuntu, and the possibilities it creates, could not have come about without the extraordinary Linux community, which wouldn’t exist without the GNU community, and couldn’t have risen to prominence without the efforts of companies like IBM and Red Hat. And it would be a very different story if it weren’t for the Mozilla folks and Netscape before them, and GNOME and KDE, and Debian, and Google and everyone else who have exercised that stack in so many different ways, making it better along the way. There are tens of thousands of people who are not in any way, shape or form associated with Ubuntu who make this story real. Many of them have been working at it for more than a decade – it takes a long time to make an overnight success 🙂 – while Ubuntu has only been on the scene six years. So Ubuntu cannot be credited solely for the delight of its users.

Nevertheless, the Ubuntu Project does bring something unique, special and important to free software: a total commitment to everyday users and use cases, the idea that free software should be “for everyone” both economically and in ease of use, and a willingness to chase down the problems that stand between here and there. I feel that commitment is a gift back to the people who built every one of those packages. If we can bring free software to ten times the audience, we have amplified the value of your generosity by a factor of ten; we have made every hour spent fixing an issue or making something amazing ten times as valuable. I’m very proud to be spending the time and energy on Ubuntu that I do. Yes, I could do many other things, but I can’t think of another course which would have the same impact on the world.

I recognise that not everybody will feel the same way. Bringing their work to ten times the audience without contributing features might just feel like leeching, or increasing the flow of bug reports 10x. I suppose you could say that no matter how generous we are to downstream users, if upstream is only measuring code, then any generosity other than code won’t be registered. I don’t really know what to do about that – I didn’t found Ubuntu as a vehicle for getting lots of code written; that didn’t seem to me to be what the world needed. The world needed a vehicle for getting it out there, one that cares about delivering the code we already have in a state of high quality and reliability. Most of the pieces of the desktop were in place – and code was flowing in – it just wasn’t being delivered in a way that would take it beyond the server, or to the general public.

The second email I can’t quote from, but it was essentially a contract for services from Canonical to help a company move more than 20,000 desktops from Windows to Ubuntu. There have been several engagements recently of a similar scale, the pace is accelerating as confidence in Ubuntu grows. While Linux has long proven itself a fine desktop for the inspired and self-motivated developer, there is a gap between that and the needs of large-scale organisations. There isn’t another company that I’m aware of which is definitively committed to the free software desktop, and so I’m very proud that Canonical is playing that role in the free software ecosystem. It would be sad for me if all the effort the free software community puts into desktop applications didn’t have a conduit to those users.

There’s nothing proprietary or secret that goes into the desktops that Canonical supports inside large organisations. The true wonder for me is that the story from New Zealand, and the corporate story, both involve exactly the same code. That to me is the true promise of free software; when I have participated in open source projects myself, I’ve always been delighted that my work might serve my needs but then also be of use to as many other people as possible.

Ubuntu is a small part of that huge ecosystem, but I feel proud that we have stepped up to tackle these challenges.

Canonical takes a different approach to the other companies that work in Linux, not as an implicit criticism of the others, but simply because that’s the set of values we hold. Open source is strengthened by the fact that there are so many different companies pursuing so many different, important goals.

In recent weeks it’s been suggested that Canonical’s efforts are self-directed and not of benefit to the broader open source community. That’s a stinging criticism because most of us feel completely the opposite, we’re motivated to do as much as we can to further the cause of free software to the benefit both of end-users and the community that makes it, and we’re convinced that building Ubuntu and working for Canonical are the best ways to achieve that end. It’s prompted a lot of discussion and consideration for each of us and for Canonical as a whole. And this post is a product of that consideration: a statement for myself of what I feel I contribute, and why I feel proud of the effort I put in every day.

What do we do for free software? And what do I do myself?

For a start, we deliver it. We reduce the friction and inertia that prevent people trying free software and deciding for themselves if they like it enough to immerse themselves in it. Hundreds of today’s free software developers, translators, designers and advocates got the opportunity to be part of our movement because it was easy for them to dip their toe in the water. And that’s not easy work. Consider the effort over many years to produce a simple installer for Linux (http://www.techdrivein.com/2010/08/massive-changes-coming-to-ubuntu-1010.html), the culmination of huge amounts of work from many groups, but which simply would not have happened without Canonical and Ubuntu.

There are thousands of people who are content to build free software for themselves, and that’s no crime. But the willingness to shape it into something that others will find, explore and delight in needs to be celebrated too. And that’s a value which is celebrated very highly in the Ubuntu community: if you read planet.ubuntu.com you’ll see a celebration of *people using free software*. As a community we are deeply satisfied to see people *using* it to solve problems in their lives. That’s more satisfying to us than stories about how we made it faster or added a feature. Of course we do bits of both, but this is a community that measures impact in the world rather than impact on the code. They are very generous with their time and expertise, with that as the reward. I’m proud of the fact that Ubuntu attracts people who are generous in their contributions: they feel their contributions are worth more if they are remixed by others, not less. So we celebrate Kubuntu and Xubuntu and Puppy and Linux Mint. They don’t ride on our coattails, they stand on our shoulders, just as we stand on the shoulders of giants. And that’s a good thing. Our work is more meaningful and more valuable because their work reaches users that ours alone could not.

What else?

We fix it, too. Consider the Papercuts project (https://wiki.ubuntu.com/PaperCut), born of the recognition that all the incredible technology and effort that goes into making something as complex as the Linux kernel is somehow diminished if the average user gets an incomprehensible result when they do something that should Just Work. Hundreds of papercuts have been fixed, across many different applications, benefiting not just Ubuntu but also every other distribution that ships those applications. If you think that’s easy, consider the effort involved in triaging and considering each of thousands of suggestions, coordinating a fix and the sharing of it. The tireless efforts of a large team have made an enormous difference. Consider this: saving millions of users one hour a week is a treasury of energy saved to do better things with free software. While the Canonical Design team played a leading role in setting up the Papercuts project, the real stars are people like Vish and Sense (http://www.omgubuntu.co.uk/2010/06/maverick-papercut-hunting-season-opens.html) who rally the broader papercuts team to make a difference. Every fix makes a difference, on the desktop and the server (http://ubuntuserver.wordpress.com/2010/01/20/ubuntu-server-papercuts-project/).

At a more personal level, a key thing I put energy into is leadership, governance and community structure. When we started Ubuntu, I spent a lot of time looking at different communities that existed at the time, and how they managed the inevitable tensions and differences of opinion that arise when you have lots of sharp people collaborating. We conceived the idea of a code of conduct that would ensure that our passions for the technology or the work never overwhelmed the primary goal of bringing diverse people together to collaborate on a common platform. I’m delighted that the idea has spread to other projects: we don’t want to hoard ideas or designs or concepts, that would be contrary to our very purpose.

We set up a simple structure: a technical board and a community council. That approach is now common in many other projects too. As Ubuntu has grown, that governance has evolved: there are now multiple leadership teams for groups like Kubuntu and the Forums and IRC, who provide counsel and guidance for teams of LoCos and moderators and ops and developers, who in turn strive for technical perfection and social agility as part of an enormous global community. That’s amazing. When people start participating in Ubuntu they are usually motivated as much by the desire to be part of a wonderful community as by the wish to fix a specific problem or ease a specific burden. Over time, some of those folks find that they have a gift for helping others be more productive, resolving differences of opinion, doing the work to organise a group so that much more can be achieved than any one individual could hope to do. Ubuntu’s governance structures create opportunities for those folks to shine: they provide the backbone and structure which make this community able to scale and stay productive and happy.

A project like Ubuntu needs constant care in order to defend its values. When you are tiny and you put up a flag saying “this is what we care about”, you tend to attract only people who care about those things. When the project grows into something potent and visible, though, you tend to attract EVERYONE, because people want to be where the action is. And then the values can easily be watered down. So I continue to put energy into working with the Ubuntu community council, and the Canonical community team, both of which are profoundly insightful and hard-working, which makes that part of my work a real pleasure. The Ubuntu community council take their responsibility as custodian of the project’s community values very seriously indeed. The CC is largely composed of people who are not affiliated with Canonical, but who nevertheless believe that the Ubuntu project is important to free software as a whole. And the awesome Jono Bacon, the delightful Daniel Holbach, and the unflappable Jorge Castro are professionals who understand how to make communities productive and happy places to work.

Something as big as the Ubuntu community cannot be to the credit of me or any other individual, but I’m proud of the role I’ve played, and motivated to continue to play a role as needed.

In more recent years I’ve come to focus more on championing the role of design in free software. I believe that open source produces the best quality software over time, but I think we need a lot more cogent conversations about the experiences we want to create for our users, whether it’s on the desktop, the netbook or the server. So I’ve put a lot of my leadership energy into encouraging various communities – both Ubuntu and upstream – to be welcoming of those who see software through the eyes of the new user rather than the experienced hacker. This is a sea change in the values of open source, and is not something I can hope to achieve alone, but I’m nevertheless proud to be a champion of that approach and glad that it’s steadily becoming accepted.

There were designers working in free software before we made this push. I hope they feel that Canonical’s emphasis on the design-led approach has made their lives easier, and the community at large more appreciative of their efforts and receptive to their ideas. But still, if you *really* care about design in free software, the Canonical design team is the place to be.

I do some design work myself, and have participated most heavily in the detailed design of Unity, the interface for Ubuntu Netbook Edition 10.10. That’s an evolution of the older UNR interface; most importantly, it’s a statement that Linux desktops don’t need to be stuck in the 90s – we can and will attempt to build new and efficient ways of working with computers. I’ve been delighted with the speed at which some of Unity’s facilities have been adopted by hundreds of projects. The goal is to make using Linux easier and classier for everyone, so that pace of adoption is a measure of the speed at which we are reducing the friction for new users discovering a better way to use their PCs.

Design without implementation would leave us open to accusations of wanting others to do our work for us, so I’m proud also to lead a wonderful team that is implementing some of those key components. Things like dbusmenu have proven useful for bringing consistency to the interfaces of both GNOME and KDE applications running under Unity, and I very much hope they are adopted by other projects that need exactly the facilities they provide. I’d credit that engineering team with their focus on quality and testability, and their desire to provide developers with clean APIs and good guidance on how best to use them. If you’ve used the full set of Indicators in 10.10 then you know how this quiet, persistent work, which has engaged many different projects, has transformed the panel into something crisp and efficient. Utouch is coming up for its first release, and will continue to evolve, so that Ubuntu and GNOME and KDE can have an easy road to multi-touch gesture interface goodness.

Beyond my own personal time, I also support various projects through funding. Putting money into free software needs to meet a key test: could that money achieve a better outcome for more people if it were directed elsewhere? There are lots of ways to help people: $100,000 can put a lot of people through school, clothed and fed. So I really need to be confident that the money is having a real, measurable impact on people’s lives. The thank-you notes I get every week for Ubuntu help sustain that confidence. More importantly, my own observation of the catalytic effect that Ubuntu has had on the broader open source ecosystem, in terms of new developers attracted, new platforms created, new businesses launched and new participants acknowledged, makes me certain that the funding I provide is having a meaningful consequence.

When Ubuntu was conceived, the Linux ecosystem was in a sense fully formed. We had a kernel. We had GNOME and KDE. We had X and libc and GCC and all the other familiar tools. Sure, they had bugs and shortcomings and roadmaps to address them. But there was something missing: sometimes it got articulated as “marketing”, sometimes as “end-user focus”. I remember thinking “that’s what I could bring”. So Ubuntu, and Canonical, have quite explicitly NOT put effort into things which are obviously working well; instead, we’ve tried to focus on new ideas and new tools and new components. I see that as an invigorating contribution to the broader open source ecosystem, and I hear from many people that they perceive it the same way. Those who say “but Canonical doesn’t do X” may be right, but that misses all the things we do which weren’t on the map beforehand. Of course, there’s little that we do exclusively, and little that we do that others couldn’t if they made it their mission, but I think the passion of the Ubuntu community, and the enthusiasm of its users, reflect the fact that there is something definitively new and distinctive about the project. That’s something to celebrate, something to be proud of, and something to motivate us to continue.

Free software is bigger than any one project. It’s bigger than the Linux kernel, it’s bigger than GNU, it’s bigger than GNOME and KDE, it’s bigger than Ubuntu and Fedora and Debian. Each of those projects plays a role, but it is the whole which is really changing the world. So when we start to argue with one another from the perspective of any one slice of free software, we run the risk of missing the bigger picture. That’s a bit like an auto-immune disease, where the body starts to attack itself. By definition, someone else who is working hard all day long to bring free software to a wider audience is on the same side as me, compared to 99% of the rest of the world, if I want to think in terms of sides. I admire and respect everyone who puts energy into advancing the cause of free software, even if occasionally I might differ on the detail of how it can be done.

Six-month cycles are great. Now let’s talk about meta-cycles: broader release cycles for major work. I’m very interested in a cross-community conversation about this, so will sketch out some ideas and then encourage people from as many different free software communities as possible to comment here. I’ll summarise those comments in a follow-up post, which will no doubt be a lot wiser and more insightful than this one 🙂

Background: building on the best practice of cadence

The practice of regular releases, and now time-based releases, is becoming widespread within the free software community. From the kernel to GNOME and KDE to X, and in distributions like Ubuntu and Fedora, the idea of a regular, predictable cycle is now better understood and widely embraced. Many folks smarter than me have articulated the benefits of such a cadence: energising the whole community, REALLY releasing early and often, shaking out good and bad code, and rapid course correction.

There has been some experimentation with different cycles. I’m involved in projects that have 1 month, 3 month and 6 month cycles, for different reasons. They all work well.

…but addressing the needs of the longer term

But there are also weaknesses to the six-month cycle:

  • It’s hard to communicate to your users that you have made some definitive, significant change;
  • It’s hard to know what to support, and for how long; you obviously can’t support every release indefinitely.

I think there is growing insight into this, on both sides of the original “cadence” debate.

A tale of two philosophies, perhaps with a unifying theory

A few years back, at AKademy in Glasgow, I was in the middle of a great discussion about six month cycles. I was a passionate advocate of the six month cycle, and interested in the arguments against it. The strongest one was the challenge of making “big bold moves”.

“You just can’t do some things in six months” was the common refrain. “You need to be able to take a longer view, and you need a plan for the big change.” There was a lot of criticism of GNOME for having “stagnated” due to the inability to make tough choices inside a six month cycle (and with perpetual backward compatibility guarantees). Such discussions often become ideological, with folks on one side saying “you can evolve anything incrementally” and others saying “you need to make a clean break”.

At the time of course, KDE was gearing up for KDE 4.0, a significant and bold move indeed. And GNOME was quite happily making its regular releases. When the KDE release arrived, it was beautiful, but it had real issues. Somewhat predictably, the regular-release crowd said “see, we told you, BIG releases don’t work”. But since then KDE has knuckled down with regular, well-managed, incremental improvements, and KDE is looking fantastic. Suddenly, the big bold move comes into focus, and the benefits become clear. Well done KDE 🙂

On the other side of the fence, GNOME is now more aware of the limitations of indefinite regular releases. I’m very excited by the zest and spirit with which the “user experience MATTERS” campaign is being taken up in GNOME; there’s a real desire to deliver breakthrough changes. This kicked off at the excellent GNOME usability summit last year, which I enjoyed and in which quite a few of the Canonical usability and design folks participated, and the fruits of that are shaping up in things like the new Activities shell.

But it’s become clear that a change like this represents a definitive break with the past, and might take more than a single six month release to achieve. And most important of all, that this is an opportunity to make other, significant, distinctive changes. A break with the past. A big bold move. And so there’s been a series of conversations about how to “do a 3.0”, in effect, how to break with the tradition of incremental change, in order to make this vision possible.

It strikes me that both projects are converging on a common set of ideas:

  • Rapid, predictable releases are super for keeping energy high and code evolving cleanly and efficiently; they keep people out of a death-march scenario, they tighten things up, and they allow for a shakeout of good and bad ideas in a coordinated, managed fashion.
  • Big releases are energising too. They are motivational: they make people feel like it’s possible to change anything, they release a lot of creative energy, and they generate a lot of healthy discussion. But they can be a bit messy and things can break on the way, and that’s a healthy thing.

Anecdotally, there are other interesting stories that feed into this.

Recently, the Python community decided that Python 3.0 will have a shorter cycle than the usual Python release. The 3.0 release is serving to shake out the ideas and code for 3.x, but it won’t be heavily adopted itself, so it doesn’t really make sense to put a lot of effort into maintaining it: get it out there, have a short cycle, and then invest in quality for the next cycle, because 3.x will be much more heavily used than 3.0. This reminds me a lot of KDE 4.0.

So, I’m interested in gathering opinions, challenges, ideas, commitments, hypotheses etc. about the idea of meta-cycles and how we could organise ourselves to make the most of them. I suspect that we can define a best practice which includes regular releases for continuous improvement on a predictable schedule, and ALSO defines a good practice for how MAJOR releases fit into that cadence, in a well structured and manageable fashion. I think we can draw on the experiences of both GNOME and KDE, and of other projects, to shape that thinking.

This is important for distributions, too

The major distributions tend to have big releases, as well as more frequent releases. RHEL has Fedora, Ubuntu makes LTS releases, Debian takes cadence to its logical continuous integration extreme with Sid and Testing :-).

When we did Ubuntu 6.06 LTS we said we’d do another LTS in “2 to 3 years”. When we did 8.04 LTS we said that the benefits of predictability for LTS releases are such that it would be good to say in advance when the next LTS would be. I said I would like that to be 10.04 LTS, a major cycle of 2 years, unless the opportunity came up to coordinate major releases with one or two other major distributions – Debian, Suse or Red Hat.

I’ve spoken with folks at Novell, and it doesn’t look like there’s an opportunity to coordinate for the moment. In conversations with Steve McIntyre, the current Debian Project Leader, however, we’ve identified an interesting opportunity to collaborate. Debian is aiming for an 18 month cycle, which would put their next release around October 2010 – the same time as the Ubuntu 10.10 release. Potentially, then, we could defer the Ubuntu LTS till 10.10, coordinating and collaborating with the Debian project for a release with very similar choices of core infrastructure. That would make sharing patches a lot easier, a benefit both ways. Since there will be a lot of folks from Ubuntu at Debconf, and hopefully a number of Debian developers at UDS in Barcelona in May, we will have good opportunities to examine the idea in detail. If there is goodwill, excitement and broad commitment to such an idea from Debian, I would be willing to promote the idea of deferring the LTS from 10.04 to 10.10.

Questions and options

So, what would the “best practices” of a meta-cycle be? What sorts of things should be considered in planning for these meta-cycles? What problems do they cause, and how are those best addressed? How do short term (3 month, 6 month) cycles fit into a broader meta-cycle? Asking these questions across multiple communities will help test the ideas and generate better ones.

What’s a good name for such a meta-cycle? “Meta-cycle” seems… very meta.

Is it true that the “first release of the major cycle” (KDE 4.0, Python 3.0) is best done as a short cycle that does not get long term attention? Are there counter-examples, or better examples, of this?

Which release in the major cycle is best for long-term support? Is it the last of the releases before major new changes begin (Python 2.6? GNOME 2.28?), or is it the result of a couple of quick iterations on the X.0 release (KDE 4.2? GNOME 3.2?)? Does it matter? I do believe that it’s worthwhile for upstreams to support an occasional release for a longer time than usual, because that’s what large organisations want.

Is a cycle measured in whole years beneficial? For example, is 2.5 years a good idea? Personally, I think not. Conferences and holidays tend to happen at the same time of the year every year, so it’s much, much easier to think in terms of a whole number of years. But in informal conversations about this, some people have said 18 months, and others have said 30 months (2.5 years) might suit them. I think they’re craaaazy – what do you think?

If it’s 2 years or 3 years, which is better for you? Hardware guys tend to say “2 years!” to get the benefit of new hardware, sooner. Software guys say “3 years!” so that they have less change to deal with. Personally, I am in the 2 years camp, but I think it’s more important to be aligned with the pulse of the community, and if GNOME / KDE / Kernel wanted 3 years, I’d be happy to go with it.

How do the meta-cycles of different projects come together? Does it make sense to have low-level, hardware-related things on a different cycle to high-level, user visible things? Or does it make more sense to have a rhythm of life that’s shared from the top to the bottom of the stack?

Would it make more sense to stagger long term releases based on how they depend on one another, like GCC then X then OpenOffice? Or would it make more sense to have them all follow the same meta-cycle, so that we get big breakage across the stack at times, and big stability across the stack at others?

Are any projects out there already doing this?

Is there any established theory or practice for this?

A cross-community conversation

If you’ve read this far, thank you! Please do comment, and if you are interested then please do take up these questions in the communities that you care about, and bring the results of those discussions back here as comments. I’m pretty sure that we can take the art of software to a whole new level if we take advantage of the fact that we are NOT proprietary, and this is one of the key ways we can do it.

New notification work lands in Jaunty

Saturday, February 21st, 2009

Thanks to the concerted efforts of Martin Pitt, Sebastien Bacher and several others, notify-osd and several related components landed in Jaunty last week. Thanks very much to all involved! And thanks to David Barth, Mirco Muller and Ted Gould, who led the development of notify-osd and the related messaging indicator.

Notify-OSD handles both application notifications and keyboard special keys like brightness and volume

MPT has posted an overview of the conceptual framework for “attention management” at https://wiki.ubuntu.com/NotificationDesignGuidelines, which puts ephemeral notification into context as just one of several distinct tools that applications can use when they don’t have the focus but need to make users aware of something. That’s a draft, and when it’s at 1.0 we’ll move it to a new site which will host design patterns on Canonical.com.

There is also a detailed specification for our implementation of the notification display agent, notify-osd, which can be found at https://wiki.ubuntu.com/NotifyOSD and which defines not only the expected behaviour of notify-osd but also all of the consequential updates we need to make across the packages in main and universe to ensure that those applications use notifications and other techniques consistently.
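To make that concrete, here is a minimal sketch (not taken from the spec; the application name, text and icon below are purely illustrative) of how an application posts an ephemeral notification through the standard freedesktop notification interface that notify-osd implements, using the pynotify bindings available in Jaunty:

    # Minimal sketch: post an ephemeral notification via libnotify's
    # Python bindings. notify-osd, as the display agent, decides how
    # and when to render it. App name, text and icon are illustrative.
    import pynotify

    pynotify.init("example-app")  # hypothetical application name

    n = pynotify.Notification(
        "Wireless network connected",                # summary
        "You are now connected to 'example-wifi'.",  # body
        "notification-network-wireless-full")        # themed icon name
    n.show()

Because the display agent owns presentation, the application doesn’t choose position or timing; that separation is exactly what lets us make the experience consistent across the whole desktop.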

There are at least 35 apps that need tweaking, and there may well be others! If you find an app that isn’t using notifications elegantly, please add it to the notification design guidelines page, and if you file a bug on the package, please tag it “notifications” so we can track these issues in a single consistent way.

Together with notify-osd, we’ve uploaded a new panel indicator which provides a way to respond to messaging events, such as email and IRC pings. If someone IMs you, then you should see an ephemeral notification, and the messaging indicator will give you a way to respond immediately. Same for email. Pidgin and Evolution are the primary focuses of the work; over time we’ll broaden that to the full complement of IM and email apps in the archive – patches welcome 🙂

There will be rough patches. Apps which don’t comply with the FreeDesktop.org spec, and send actions on notifications even when the display agent says it does not support them, will have their notifications translated into alerts. The primary focus of the effort now is to find and fix those apps. Also, we know there are several cases where a persistent response framework is required. The messaging indicator covers most of them, and we will have additional persistent tools in place for Karmic in October.
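The well-behaved pattern is for an application to ask the display agent what it supports before attaching actions. A hedged sketch of that check, again with the pynotify bindings (the action name and callback below are hypothetical):

    import pynotify

    def on_reply(notification, action):
        # Hypothetical handler: open the reply window here.
        pass

    pynotify.init("example-im-client")  # hypothetical app name

    n = pynotify.Notification("Alice", "Hi, are you around?",
                              "notification-message-im")

    # notify-osd deliberately does not advertise the "actions"
    # capability, so a compliant app falls back to a plain ephemeral
    # bubble (with the messaging indicator as the response path)
    # instead of forcing the agent to translate it into an alert.
    if "actions" in (pynotify.get_server_caps() or []):
        n.add_action("reply", "Reply", on_reply)
    n.show()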

When you present yourself on the web, you have 15 seconds to make an impression, so aspiring champions of the web 2.0 industry have converged on a good recipe for success:

  1. Make your site visually appealing,
  2. Do something different and do it very, very well,
  3. Call users to action and give them an immediate, rewarding experience.

We need the same urgency, immediacy and elegance as part of the free software desktop experience, and that is an area where Canonical will, I hope, make a significant contribution. We are hiring designers, user experience champions and interaction design visionaries, and challenging them to lead not only Canonical’s distinctive projects but also to participate in GNOME, KDE and other upstream efforts to improve FLOSS usability.

Fortunately, we won’t be working in a vacuum. This is an idea that is already being widely explored. It’s great to see that communities like GNOME and KDE have embraced user experience as a powerful driver of evolution in their platforms. Partly because of the web-2.0 phenomenon and the iPhone, there’s a widely held desire to see FLOSS leap forward in usability and design. We want to participate and help drive that forward.

There’s also recognition for the scale of the challenge that faces us. When I laid out the goal of “delivering a user experience that can compete with Apple in two years” at OSCON, I had many questions afterwards about how on earth we could achieve that. “Everyone scratches their own itch, how can you possibly make the UI consistent?” was a common theme. And it’s true – the free software desktop is often patchy and inconsistent. But I see the lack of consistency as both a weakness (GNOME, OpenOffice and Firefox all have different UI toolkits, and it’s very difficult to make them seamless) and as a strength – people are free to innovate, and the results are world-leading. Our challenge is to get the best of both of those worlds.

I don’t have answers to all of those questions. I do, however, have a deep belief in the power of the free software process to solve seemingly intractable problems, especially in the long tail. If we articulate a comprehensive design ethic, a next-generation HIG, we can harness the wisdom of crowds to find corner cases and inconsistencies across a much broader portfolio of applications than one person or company could do alone. That’s why it’s so important to me that Canonical’s design and user experience team also participate in upstream projects across the board.

In Ubuntu we have in general considered upstream to be “our ROCK”, by which we mean that we want upstream to be happy with the way we express their ideas and their work. More than happy – we want upstream to be delighted! We focus most of our effort on integration. Our competitors turn that into “Canonical doesn’t contribute” but it’s more accurate to say we measure our contribution in the effectiveness with which we get the latest stable work of upstream, with security maintenance, to the widest possible audience for testing and love. To my mind, that’s a huge contribution.

Increasingly, though, Canonical is in a position to drive real change in the software that is part of Ubuntu. If we just showed up with pictures and prototypes and asked people to shape their projects differently, I can’t imagine that being well received! So we are also hiring a team who will work on X, OpenGL, Gtk, Qt, GNOME and KDE, with a view to doing some of the heavy lifting required to turn those desktop experience ideas into reality. Those teams will publish their Bzr branches in Launchpad and of course submit their work upstream, and participate in upstream sprints and events. Some of the folks we have hired into those positions are familiar contributors in the FLOSS world, others will be developers with relevant technical expertise from other industries.

One strong meme we want to preserve is the idea that Ubuntu, the platform team, is still primarily focused on integration and distribution. We will keep that team and the upstream work distinct to minimise the conflict of interest inherent in choosing the patches and the changes and the applications that actually ship each six months as part of an Ubuntu release.

Of course, there’s a risk to participation, because you can’t easily participate without expressing opinions, visions, desires, goals, and those can clash with other participants. It’s hard to drive change, even when people agree that change is needed. I hope we can find ways to explore and experiment with new ideas without blocking on consensus across diverse and distributed teams. We have to play to our strengths, which include the ability to diverge for experimental purposes to see what really works before we commit everyone to a course of action. It will be a challenge, but I think it’s achievable.

All of this has me tapdancing to work in the mornings, because we’re sketching out really interesting ideas for user interaction in Launchpad and in the desktop. The team has come together very nicely, and I’m thoroughly enjoying the processes, brainstorming and prototyping. I can’t wait to see those ideas landing in production!

I had the opportunity to present at the Linux Symposium on Friday, and talked further about my hope that we can improve the coordination and cadence of the entire free software stack. I tried to present both the obvious benefits and the controversies the idea has thrown up.

Afterwards, a number of people came up to talk about it further, with generally positive feedback.

Christopher Curtis, for example, emailed to say that the idea of economic clustering in the motor car industry goes far further than the location of car dealerships. He writes:

Firstly, every car maker releases their new models at about the same time. Each car maker has similar products – economy, sedan, light truck. They copy each other prolifically. Eventually, they all adopt a certain baseline – seatbelts, bumpers, airbags, anti-lock brakes. Yet they compete fiercely (OnStar from GM; Microsoft Sync from Ford) and people remain brand loyal. This isn’t going to change in the Linux world. Even better, relations like Debian->Ubuntu match car maker relations like Toyota->Lexus.

I agree with him wholeheartedly. Linux distributions and car manufacturers are very similar: we’re selling products that reach the same basic audience (there are niche specialists in real-time or embedded or regional markets) with a similar range (desktop, workstation, server, mobile), and we use many of the same components just as the motor industry uses common suppliers. That commonality and coordination benefits the motor industry, and yet individual brands and products retain their identity.

Let’s do a small thought experiment. Can you name, for the last major enterprise release of your favourite distribution, the specific major versions of kernel, gcc, X, GNOME, KDE, OpenOffice.org or Mozilla that were shipped? And can you say whether those major versions were the same as, or different from, those in the enterprise releases of Ubuntu, SLES, Debian or RHEL which shipped at roughly the same time? I’m willing to bet that any particular customer would say that they can’t remember which versions were involved or how those stacked up against the competition, and don’t care either. So, looking backwards, differences in versions weren’t a customer-differentiating item.

We can do the same thought experiment looking forwards. WHAT IF you knew that the next long-term supported releases of Ubuntu, Debian, Red Hat and Novell Linux would all have the same major versions of kernel, GCC, X, GNOME, KDE, OO.o and Mozilla? Would that make a major difference for you? I’m willing to bet not: from a customer view, folks who prefer X will still prefer X, and a person who prefers Red Hat will stick with Red Hat. But from a developer view, would it make collaboration easier? Dramatically so.

Another member of the audience came up to talk about the fashion industry, which has also converged on a highly coordinated model: fabrics and technologies “release” first, then designers introduce their work simultaneously at fashion shows around the world. “Spring 2009” sees new collections from all the major houses, many re-using similar ideas or components. That hasn’t hurt their industry; rather, it helps to build awareness amongst the potential audience.

The ultimate laboratory, nature, has also adopted release coordination. Anil Somayaji, who was in the audience for the keynote, subsequently emailed this:

Basically, trees of a given species will synchronize their seed releases in time and in amount, potentially to overwhelm predators and to coordinate with environmental conditions. In effect, synchronized seed releases is a strategy for competitors to work together to ensure that they all have the best chance of succeeding. In a similar fashion, if free software were to “release its seeds” in a synchronized fashion (with similar types of software or distributions having coordinated schedules, but software in different niches having different schedules), it might maximize the chances of all of their survival and prosperity.

There’s no doubt in my mind that the stronger the “pulse” we are able to create, by coordinating the freezes and releases of major pieces of the free software stack, the stronger our impact on the global software market will be, and the better for all companies – from MySQL to Alfresco, from Zimbra to OBM, from Red Hat to Ubuntu.