Archive for the 'free software' Category

Note to the impatient: this is a long post and it only gets to free software ecosystem dynamics towards the end. The short version is that we need to empower software companies to participate in the GNU/Linux ecosystem, and not fear them. Rather than undermining their power, we need to balance it through competition.

Church schools in apartheid South Africa needed to find creative ways to teach pupils about the wrongs of that system. They couldn’t actively foment revolt, but they could teach alternative approaches to governance. That’s how, as a kid in South Africa, I spent a lot of time studying the foundations of the United States, a system of governance defined by underdogs who wanted to defend not just against the abuses of the current power, but abuses of power in general.

My favourite insight in that regard comes from James Madison in the Federalist Papers, where he describes the need to understand and harness human nature as a force: to pit ambition against ambition, as it is often described. The relevant text is worth a read if you don’t have time for the whole letter:

But the great security against a gradual concentration of the several powers in the same department, consists in giving to those who administer each department the necessary constitutional means and personal motives to resist encroachments of the others. The provision for defense must in this, as in all other cases, be made commensurate to the danger of attack. Ambition must be made to counteract ambition. The interest of the man must be connected with the constitutional rights of the place. It may be a reflection on human nature, that such devices should be necessary to control the abuses of government. But what is government itself, but the greatest of all reflections on human nature? If men were angels, no government would be necessary. If angels were to govern men, neither external nor internal controls on government would be necessary. In framing a government which is to be administered by men over men, the great difficulty lies in this: you must first enable the government to control the governed; and in the next place oblige it to control itself. A dependence on the people is, no doubt, the primary control on the government; but experience has taught mankind the necessity of auxiliary precautions.

When we debate our goals, principles and practices in the FLOSS community, we devote a great deal of energy to “how things should be”, and to the fact that “men are not angels”. I think the approach of James Madison is highly relevant to those discussions.

The conservation of power

Just as energy, momentum, charge and other physical properties of a system are conserved, so in a sense is power. If your goal is to reduce the power of one agency in government, the most effective strategy is to strengthen the position of another. We know that absolute monarchies are bad: they represent unbalanced power.

Within a system, power will tend to consolidate. We have antitrust agencies specifically to monitor the consolidation of economic power and to do something about it. We set up independent branches of government to ensure that some kinds of power simply cannot be consolidated.

Undermining power in one section of an ecosystem inevitably strengthens the others.

Since we humans tend to think the grass is greener on the other side of the fence, and since power takes a little while to get properly abused, you can often see societies oscillate in the allocation of power. When things seem a little out of control, we give more power to the police and other securocrats. Then, when they become a little thuggish, we squeeze their power through regulation and oversight, and civil liberties gain in power, until the pendulum swings again.

The necessity of concentrated power

Any power can be abused. I had a very wise headmaster at that same school who used to say that the only power worth having was power that was worth abusing. This was not a call to the abuse of power, you understand, merely a reflection on the fact that power comes with the real responsibility of restraint.

So, if power can be abused, why do we tolerate it at all? Why not dissolve authority down to the individual? Because the absence of power leads to chaos, which ironically is an easy place to establish despotic authority. Power isn’t seized – it’s given. We give people power over us. And in a state of chaos, all it takes is a few people to gain some power and they have a big advantage over everyone else. That’s why early leaders in new ecosystems tend to become unbeatable very quickly.

Also, power clears the path for action. In a world with no power, little gets done at all. We are better off with large companies that have the power to organise themselves around a goal than trying to achieve the same goal with a collection of individuals; try making a Boeing from an equivalent group of artisans, and you’ll see what I mean. Artisans form guilds and companies to increase their reach and impact. Individual volunteers join professional institutions to get more effective: consider the impact of handing out food yourself, versus helping sustain a network of soup kitchens, even in the purely non-profit world. Having some clout on your side is nothing to sniff at, even if you have purely philanthropic goals.

Power and innovation

If you have all the power already, there’s no spur to innovate. So kingdoms stagnate, eventually.

But power makes space for good things, too. It’s the powerful (and rich) who fund the arts in most societies. Innovation needs breathing space; companies with economic power can incubate new ideas to the point where they become productive.

Too much competition can thus limit innovation: look how difficult it has been for the Windows-based PC manufacturers, who live in a brutally competitive world and have little margin, to innovate. They are trapped between a highly efficient parts supply ecosystem, which feeds them all the same stuff at the same price, and a consumer market that requires them all to provide PCs which run the same stuff the same way. As a result, they have little power, little margin, little innovation.

The trick is not to fear power itself, but instead, to shape, balance and channel it. You don’t want to aim for the absence of power, you want the Goldilocks effect of having “just enough”. And that was James Madison’s genius.

Verticals, competition and the balance of power

Of course, competition between rivals is the balance of power in business. We resent monopolies because they are either abusing their power, or stagnating.

In economics, we talk about “verticals” as the set of supply dependencies needed for a particular good. So, to make an aircraft, you need various things like engines and alloys, and those suppliers all feed the same pool of aircraft manufacturers.

In order to have a healthy ecosystem, you need a balance of power both between suppliers at the same level of the stack, and vertically, between the providers of parts and providers of the finished product. That’s because innovation needs both competition AND margin to stimulate and nurture it.

In the PC case, the low margins in the PC sector helped reinforce the Windows monopoly. Not only was there no competition for Microsoft, there was no ability for a supplier further down the chain to innovate around them. The only player in that ecosystem that had the margin to innovate was Microsoft, and since they faced no competition, there was little stimulus to embrace their own R&D, no matter how much they spent on it.

Power in the FLOSS ecosystem: upstreams and distributions

So, where do we stand in the free software and open source ecosystem?

The lines between upstreams and distributions aren’t perfectly clear, of course. Simplistic versions of that picture are often used to prove points, but in fact, all the distributions are also in some sense upstreams, and even derivative distributions end up being leaders of those they derive from in some pieces or markets. Nevertheless, I think it’s worth looking at the balance of power between upstream projects and distributions, as it is today and as it could be.

Also, I think it’s worth looking at related parties, companies and institutions which work a lot with FLOSS but have orthogonal interests.

If one uses margin, or profit, as an indicator of power, it’s clear that the distributions today are in a far stronger position than most individual projects or upstreams. The vast majority of software-related revenue in the FLOSS ecosystem goes to distributions.

Within that segment, Red Hat claims 80% market share of paid Linux, a number that is probably accurate. Novell, the de facto #2, is in the midst of some transition, but indicators are that it continues to weaken. Oracle’s entry into the RHEL market has had at best marginal impact on RHEL economics (the substantial price rises in RHEL 6 are a fairly clear signal of the degree to which Red Hat believes it faces real competition). The existence of “unpaid RHEL” in the form of CentOS, as well as OEL, essentially strengthens the position of RHEL itself. Ubuntu and Debian have large combined levels of adoption, but low revenue.

So clearly, there is work to do just to balance power in the distribution market. And it will take work – historically, platforms tend toward monopoly, and in the absence of a definitive countervailing force that establishes strength outside the RHEL gravity well, that’s what we’ll have. But that’s not the most interesting piece. What’s more interesting is the dynamic between distributions and upstreams.

Today, most upstreams are weak. They have little institutional strength. It’s generally difficult to negotiate and do business with an upstream. In many cases, that’s by design – the teams behind a project are simply not interested, or they are explicitly non-profit, as in the case of the FSF, which makes them good leaders of specific values, but difficult to engage with commercially.

As a result, those who need to do business with open source go to distributions, even in cases where they really want to be focused on a particular component. This greatly amplifies the power of the distributions: they essentially are the commercial vehicles for ALL of open source. The weakness of individual upstreams turns into greater strength for distributions.

You can imagine that distributions like it that way, and it would be surprising to see a distribution, or company that backs a distribution, arguing for stronger upstreams. But that’s exactly the position I take: FLOSS needs stronger upstreams, and as a consequence, weaker distributions.

Stronger upstreams will result in more innovation in FLOSS than stronger distributions. Essentially, like Microsoft, a distribution receives cash for the whole platform and allocates it to specific areas of R&D. That means the number of good ideas that receive funding in our ecosystem, today, is dependent on the insights of a very few companies. Just as Microsoft invested a lot in R&D and yet seemed to fall behind, upstream innovation will be strangled if it’s totally dependent on cash flow via distributions.

It’s not just innovation that suffers because we don’t have more power, or economic leverage, in the hands of upstreams. It’s also the myriad things beyond the code itself. When you have a company behind a project, they tend to take care of a lot more than just the code: QA, documentation, testing, promotion. It’s easy, as a developer, to undervalue those things, or to see them as competing for resources with the “real work” of code. But that competition is necessary, and they make a great contribution to the dynamism of the final product.

Consider the upstream projects which have been very successful over the long term. Qt and MySQL, for example, both had companies behind them that maintained strong leverage over the product. That leverage was often unpopular, but the result was products available to all of us under a free license that continued to grow in stature, quality and capability despite the ups and downs of the broader market, and without being too dependent on the roving spotlight of “coolness”, which tends to move quickly from project to project.

There are of course successful upstream projects which do not have such companies. The best example is probably the Linux kernel itself. However, those projects fall into a rather unusual category: they are critical to some large number of companies that make money in non-software ways, and those companies are thus forced to engage with the project and contribute. In the case of the kernel, hardware companies directly and indirectly underwrite the vast majority of the boring but critical work that, in other projects, would be covered by the sponsoring institution. And despite that, there are many gaps in the kernel. You don’t have to dig very hard to find comments from key participants bemoaning the lack of testing and documentation. Nevertheless, it gets by quite well under the circumstances.

But most ecosystems will have very few projects that are at such a confluence. Most upstream projects are the work of a few people, the “coolness” spotlight shines on them briefly if at all. They need either long term generosity from core contributors, or an institution to house and care for them, if they want to go the distance. The former rarely works for more than a few years.

Projects which depend on indirect interests, such as those sponsored by hardware companies, have another problem. Their sponsoring institutions are generally not passionate about software. They don’t really need or want to produce GREAT software. And if you look at the projects which get a lot of such contributions, that becomes very obvious. Compare the quality of apps from companies which deeply care about software with those which come from hardware companies, and you’ll see what I mean.

We FLOSS folk like to tell ourselves that the Windows hegemony was purely a result of the manipulations of its sponsor, and that FLOSS as we do it today is capable of doing much more if it only had a fair chance. I don’t think, having watched the success of iOS and Android as new ecosystems, that we can justify that position any longer. I think we have to be willing to think hard about what we are willing to change if we want the chance of building an ecosystem as strong, but around GNU/Linux. Since that’s my goal, I’m thinking very hard about that, and creatively. I think it’s possible, but not without challenging some sacred cows and figuring out which values we want to preserve and which we can remould.

Power is worth having in your ecosystem, despite its occasional abuse

There’s no doubt that power created will be abused. That’s true of a lot of important rights and powers. For example, we know that free speech is often abused, but we nevertheless value it highly in many societies that are also big contributors to FLOSS. You probably know the expression, “I disagree with what you are saying entirely, but I will defend to the death your right to say it”.

Similarly, in our ecosystem, power will be abused. But it’s still worth helping institutions acquire it, even those we dislike or distrust, or those we compete with. At Canonical, we’ve directly and indirectly helped lots of institutions that you could describe that way – Oracle, Novell, Red Hat, Intel and many others. The kneejerk reaction is usually “no way”, but upon deeper thought, we figured that it is better to have an ecosystem of stronger players, considering the scale of the battle with the non-FLOSS world.

I often find people saying “I would help an institution if I thought I could trust it”. And I think that’s a red herring, because just as power will be abused, trust will be abused too. If you believe that this is a battle of ecosystems and platforms, you want to have as many powerful competitors in your ecosystem as possible, even though you probably cannot trust any of them in the very long term. It’s the competition between them that really creates long term stability, to come back to the thinking of James Madison. It’s pitting ambition against ambition, not finding angels, which makes that ecosystem a winner. If you care about databases, don’t try to weaken MySQL, because you’ll need it to be strong when you come to rely on it. Rather, figure out how to strengthen PostgreSQL alongside it.

How Canonical fits in

Canonical is in an interesting position with regard to all of this. As a distribution, we could stay silent on the issue, and reasonably expect to grow in power over time, on the same basis that Red Hat has. And there are many voices in Canonical that say exactly that: don’t rock the boat, essentially.

However, perhaps unlike other Linux distributions, Canonical very much wants to see end users running free software, and not just IT professionals. That raises the bar dramatically in terms of the quality of the individual pieces. It means that it’s not good enough for us to work in an ecosystem which produces prototype or rough cut products, which we then aggregate and polish at the distribution level. Unlike those who have gone before, we don’t want to be the sole guarantor of quality in our ecosystem, because that will not scale.

For that reason, looking at the longer term, it’s very important to me that we figure out how to give more power to upstreams, so that they in turn can invest in producing components or works which have the completeness and quality that end-users expect. I enjoy working with strong commercial institutions in the open source ecosystem – while they always represent some competitive tension, they also represent the opportunity to help our ecosystem scale and out-compete the proprietary world. So I’d like to find ways to strengthen the companies that have products under free software, and encourage more that have proprietary projects to make them available under free licenses, even if that’s not the only way they publish them.

If you’ve read this far, you probably have a good idea where I’m going with this. But I have a few more steps before actually getting there. More soon.

Till then, I’m interested in how people think we can empower upstream projects to be stronger institutionally.

There are a couple of things that are obvious and yet don’t work. For example, lots of upstreams think they should form a non-profit institution to house their work. The track record of those is poor: they get set up, and they fail as soon as they have to file their annual paperwork, leaving folks like the SFLC to clean up the mess. Not cool. At the end of the day, such new institutions add paperwork without adding funding or other sources of energy. They don’t broaden out the project the same way a company writing documentation and selling services usually does. On the other hand, non-profits like the FSF which have critical mass are very important, which is why on occasion we’ve been happy to contribute to them in various ways.

Also, I’m interested in how we can reshape our attitudes to power. Today, the tenor of discussion in most FLOSS debates is simplistic: we fear power, and we always attempt to squash it, celebrating the individual. But that misses the point that we are merely strengthening the power elsewhere: in distributions, in other ecosystems. We need a richer language for describing the “Goldilocks” balance of power, and how we can move beyond FUD.

So, what do you think we could do to create more Mozillas, more MySQLs, more Qts and more OpenStacks?

I’ll summarise interesting comments and threads in the next post.

Celebrating Gnome 3.0

Thursday, April 7th, 2011

Congratulations to everyone who has worked so hard to make Gnome 3.0 a reality. It’s a great accomplishment, excellent work by many people, and worthy of celebration. I know the PPA is popular and I’m sure it will be a hit in 11.10. Well done all!

All the other guys are not wrong

Sunday, March 13th, 2011

The discussion in blogs and comments on collaboration and standards is really important. It’s also easy to cast as “GNOME vs Canonical vs KDE”, and that would be incorrect. My critique here is not of the body of GNOME developers, it’s of specific decisions and processes, which in my view have let GNOME down.

The reason I care is, to state the obvious, a well-functioning GNOME is important to Ubuntu and Canonical. And I don’t think we’re there. Alternatively, a well-functioning FreeDesktop.org is important, and we’re not there either.

Dave Neary, who to his credit has been trying to understand how matters reached this point, blogged his conclusions to date. They warrant a response.

He summarises:

FreeDesktop.org is broken as a standards body

That may be true today, but I think we should fix it. With MeeGo around, and other efforts that are lower profile, there are now even more reasons to have well-specified standards for desktop interoperability. They won’t all work, but they deserve better respect and quality than they have today. So my response to Dave is: let’s fix that, rather than pretending “it’s broke so it don’t matter”.

Mark Shuttleworth doesn’t understand how GNOME works

Fortunately I’m apparently in good company because his next conclusion is GNOME is not easy to understand. Perhaps a more accurate summation would be “Gnome is not self-consistent, or deterministic, so it can often come to two quite contradictory conclusions at the same time, and Mr Shuttleworth didn’t understand the one we’d now like to promote.”

Dave mailed me to say that he’d got a “definitive” perspective on how the appindicator APIs came to be rejected. It includes pointers as to the requirements for submitting external dependencies. These are “make the case for the dependency, which should be a few sentences or so, and wait a short while for people to check it out (e.g. making sure it builds)”. Now, a reading of the correspondence around the APIs suggests that Ted and others did a lot more than the “few sentences and make sure it builds”, yet the proposal was rejected.

In addition, Dave got commentary from two members of the Gnome release team, who make these decisions. The two views disagree with one another.

I’m not sure what to think, then, about Dave’s assertion that I don’t understand Gnome. Seems the follow-on conclusion is closer to the truth.

Dave says:

[...] to get things done in GNOME, you need to talk to the right people. That means, defining your problem, and identifying the stakeholders who are also interested in that problem, and working out a solution with them (am I repeating myself?). Mark seems to want GNOME to behave like a company, so that he can get “his people” to talk to “our people” and make it happen. I think that this misunderstanding of how to wield influence within the GNOME project is a key problem.

But then again, over the years I have heard similar feedback from GNOME Mobile participants, and people in Nokia – so it’s not all Mark’s fault. As Jono says here: GNOME does have a reputation of being hard to work with for companies – no point in denying it (then again, so does the kernel, and they seem to get along fine).

Hold on a sec. There’s been ample documentation of conversations. Dave can’t even point to two stakeholders in the release team who agree with each other!

I also understand that there is an interest in putting on a good face, and not airing your dirty laundry in public (ironic, eh?) – for the past few years, the party line in Canonical has been “We love GNOME, we’re a GNOME shop” while behind the scenes there have been heartfelt conversations about the various problems which exist in GNOME & how to address them. The problem is, because these discussions happen behind the scenes, they stay there. We never get beyond discussions, agreeing there is a problem, but never working together on a solution.

Yes, the stated line from Canonical has been exactly what Dave describes – we’ve worked hard to stay within the Gnome umbrella, even where we’ve felt that the deck was stacked against us. It’s tiring. After a year or so of being the public whipping boy for cutting commentary from competitors under the Gnome banner, a franker line is needed.

Owen Taylor, desktop lead at Red Hat and the person to whom Jon McCann referred the app indicators API decision, then weighed in. He suggests several things:


Mark argues that GNOME should be a place where we have internal competition. But his idea of internal competition seems to be competition between different end-user experiences. His entrant into the competition is Unity, an environment with a user experience designed completely in isolation from GNOME. The other entrant would, I suppose, be the GNOME 3 desktop that GNOME has created.

This competition doesn’t make sense to me: what would be left of GNOME if Unity “won” that competition? Not even the libraries are left, because every decision that is made about what goes into library should be driven by that same question “what does the user see?” No widget should go into GTK+ unless it makes sense in a GNOME application. GNOME cannot cede the user experience and still work as a project.

This is exactly why we proposed the app indicator APIs as *external* dependencies. They are capabilities which GNOME apps can take advantage of if they are around, but which are not essential if they are not there. Unity could quite easily move to the fore in GNOME, if it won this competition, just like lots of other ideas and pieces of code have started outside the core of GNOME but become essential to it.
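
To make that concrete, here is a minimal sketch of the pattern we had in mind. This is not code from any real project: the application id and icon name are made up, and it uses today’s GObject-introspection bindings rather than the 2011-era ones. The point is simply that the app lights up when the indicator capability is present, and carries on unharmed when it is not:

import gi
gi.require_version("Gtk", "3.0")
from gi.repository import Gtk

def build_menu():
    # The same menu backs either presentation.
    menu = Gtk.Menu()
    quit_item = Gtk.MenuItem(label="Quit")
    quit_item.connect("activate", Gtk.main_quit)
    menu.append(quit_item)
    menu.show_all()
    return menu

try:
    # Optional capability: only used if the indicator bindings are installed.
    gi.require_version("AppIndicator3", "0.1")
    from gi.repository import AppIndicator3
    indicator = AppIndicator3.Indicator.new(
        "example-app",            # hypothetical application id
        "indicator-messages",     # stock icon name, for illustration
        AppIndicator3.IndicatorCategory.APPLICATION_STATUS)
    indicator.set_status(AppIndicator3.IndicatorStatus.ACTIVE)
    indicator.set_menu(build_menu())
except (ValueError, ImportError):
    # No indicator service on this platform: fall back to a plain status icon.
    icon = Gtk.StatusIcon.new_from_icon_name("indicator-messages")
    icon.connect("popup-menu",
                 lambda i, button, time: build_menu().popup(
                     None, None, None, None, button, time))

Gtk.main()

Nothing in the app breaks when the try branch fails, which is exactly the sense in which an external dependency is optional.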

Owen’s argument reinforces the idea (which is in my view broken) that the only ideas that matter are the ones generated internally to the GNOME project (defined very tightly by folks who maintain core modules or have core responsibilities). It’s precisely this inward view that I think is leading GNOME astray.

Owen’s point that “no widget should go into Gtk if it is not needed by a GNOME application” is unlikely to be comforting to the XFCE folk, or other desktop environments which build on GNOME. If anything, it will make them feel that things in “core GNOME” are likely to be difficult to adopt and collaborate with, because their needs, apparently, don’t matter.

He also says “But I’ve never seen Canonical make the leap and realize that they could actually dive in and make GNOME itself better”, which is rather insulting to all the people from Canonical who spend a lot of their day doing exactly that.

He talks about App Indicators, saying that “They didn’t even propose changes to core GNOME components to support application indicators.” Actually, we did, and those changes required App Indicators to be an external dependency. So we proposed that, and it was rejected. Repeat ad absurdum.

In the end, in comments, Owen says that “[It] is a common misperception [that Gnome Shell and Gnome 3 can be separated]. That somehow the work we’ve done on GNOME Shell is somehow separable from the rest of GNOME 3. The work on the GTK+ theme and the window manager theme is done together. The work on GNOME Shell is done together with work on System Settings and on the internal gnome-settings-daemon and gnome-session components.” Well, that’s convenient. Define one piece – your piece – as critical, therefore making it above competition. I expect that Ubuntu will ship Gnome 3 perfectly well with Unity.

Also in comments, Owen points out that the work Shell developers did around messaging was done as an update to an FD.o spec. An update that AFAICT was not discussed, just amended and pushed. He says, in a triumph of understatement, “We certainly haven’t done a good job discussing the small additions to the specification we needed”.

Finally, Owen concludes that having Unity and Gnome Shell as separate desktops may be the only way forward. Seems like he’s worked hard to ensure that’s the case.

Next up, Jeff Waugh is writing a set of related blog posts. One of them walks through the app indicators timeline, and is relatively comprehensive in doing so.

Jeff draws the conclusion that we’re mistaken in feeling that App Indicators were deliberately blocked because Unity presented a risk of competition with Gnome Shell; but he draws it purely from the timing of the conversation proposing App Indicators as an external dependency, which took place four days *before* the name Unity was introduced.

Yes, that’s true. But Unity was simply the new name for work which has been ongoing since 2007: The Ubuntu Netbook Remix interface. That work was very much in the frame throughout, and it’s been suggested that it was that work which catalysed Gnome Shell in the first place.

So even though Jeff is right on the claim that app indicators were discussed *before* the Unity name was revealed, that’s not in any way material to a discussion of the motives of those who rejected app indicators. This was an API from Canonical, related to work in the Ubuntu Netbook interface, and it got rejected for reasons which are unlike any other response to a proposed external dependency.

Perhaps, in the light of changed circumstances, Jeff will change his opinion. Good commentators do.

Jeff also goes on to talk about Ted and Aurelien, who were proposing the app indicators work in GNOME and KDE respectively. KDE apps worked smoothly; Gnome rejected Ted’s proposal. Jeff says “I don’t believe the key difference here is between GNOME and KDE, it is in Canonical’s approach to engagement, and its commitment of developers to the task.” It’s worth pointing out that Ted and Aurelien were both working for the same manager under the same guidance with the same goals. Jeff draws the conclusion that Canonical could have done things differently. I would have drawn the conclusion that Gnome was less open to collaboration than KDE.

Finally, Jeff looks at the requirements for dependencies, and notes that Canonical didn’t need to engage at all, though he says (and we recognised the same) that this would have caused other problems. He concludes:

Canonical barely made an effort to have libappindicator “accepted into GNOME” (which, in the context of his criticism, Mark believed was important or necessary)

As I’ve shown above, the stated requirements are a very low bar. We did that, and more, yet the App Indicator APIs were rejected. As I’ve said before, it’s bizarre that such a different standard was applied to this API, and not to others. The only rational explanation is that the decision had nothing to do with the APIs, and everything to do with politics. Those politics are bad for Gnome in the long run. I want Gnome to be healthy in the long run; ergo, my critique.

It did not even need to go through this process anyway, because it did not need to be an “external dependency” in order to achieve Mark’s stated goals (ie. application adoption of the API)

Unfortunately for Jeff, we’d been told in no uncertain terms that module owners and core apps were under pressure about these APIs. They wanted to see the external dependency blessed. So we proposed it. Owen says we “didn’t try to propose changes to core apps”… we did, and the external dependency was the blocker.

So the rejection of libappindicator, for all the sturm und drang, is essentially meaningless — it had no bearing on the opportunity for collaboration between Canonical and GNOME

In fact, it’s exactly that which has left the collaboration in limbo. What to do with all the patches produced for GNOME apps that make them work with app indicators? Hmm… that would be collaboration, but the uncertainty created by the rejection as an external dependency creates a barrier to it. As Jeff says, those patches can land without any problems. But to land such a patch, after the refusal, takes some guts on the part of the maintainer. Lots have done it (thank you!) but some have said they are concerned they will be criticised for doing so.

Unity did not exist in the public sphere when libappindicator was declared irrelevant to GNOME Shell, and was not ever the reason why: (a) there wasn’t much interest in libappindicator, or (b) GNOME Shell developers decided they were on a different path

Correct, *Unity* was not the public name of the work at the time, Ubuntu Netbook Remix was.

Not proven in this part of the series, but worth noting: the person Mark specifically chose to attack, Jon McCann, did not decide to exclude libappindicator because — being a design contributor to GNOME Shell — he felt threatened by Unity. In fact, he had no part in the decision, and didn’t know Unity existed!

Jon certainly knew a great deal of work on interfaces was being done. That became branded Unity later, but the timing of the change in name is irrelevant.

Phew.

There are good faith efforts being made to bridge divides all over the show, for which I’m grateful and to which we’re contributing. My comments here are to address what I see as convenient papering over, which will not stand the test of time. It’s important – to me, to the members of the community working on Unity and Ubuntu (and there are substantial communities in both) that simplistic accusations against us are not left to stand unchallenged.

The goal – for everyone, I think – is great free software. I know we’re committed to that, and doing what we think is needed to achieve it.

Competition is tough on the contestants, but it gets great results for everyone else. We like competitive markets, competitive technologies, competitive sports, because we feel the end result for consumers or the audience is as good as it possibly could be.

In larger organisations, you get an interesting effect. You get *internal* competition as well as external competition. And that’s healthy too. You get individuals competing for responsibility, and of course you want to make sure that the people who will make the wisest choices carry the responsibilities that demand wisdom, while those who have the most energy carry the responsibilities for which that’s most needed. You get teams competing with each other too – for resources, for attention, for support elsewhere, for moral authority, for publicity. And THAT’s the hardest kind of competition to manage, because it can be destructive to the organisation as a whole.

Even though it’s difficult to manage, internal competition is extremely important, and should not be avoided out of fear. The up side is that you get to keep the best ideas because you allowed them to compete internally. If you try to avoid it, you crowd out new ideas, and end up having to catch up to them. Usually, what goes wrong is that one group gets control of the organisation’s thinking, and takes the view that any ideas which do not come from within that group are a threat, and should be stopped. That’s very dangerous – it’s how great civilisations crash; they fail to embrace new ideas which are not generated at the core.

In Ubuntu, we have a lot of internal competition. Ubuntu and Kubuntu and Xubuntu and Edubuntu and *buntu-at-large have to collaborate and also, to a certain extent, compete. We handle that very well, I think, though occasionally some muppet calls Kubuntu the blue-headed-stepchild etc etc. It’s absolutely clear to everyone, though, that we have a shared interest in delivering ALL these experiences together with as much shared vision and commonality as possible. I consider the competition between these teams healthy and constructive and worth maintaining, even though it requires some fancy footwork and causes occasional strains.

The challenge for Gnome leadership

The sound and fury writ large in blog comments this week is all about how competition is managed.

Gnome is, or should be, bigger than any of the contributing individuals or companies. Gnome leadership should be in a position to harness competition effectively for the good of the project. That does, however, require very firm leadership, and very gutsy decisions. And it requires vigilance against inward thinking. For example, I’ve seen the meme reiterated multiple times that “one should not expect Gnome to embrace ideas which were not generated and hosted purely within Gnome”. That’s chronic inward thinking. Think of all the amazing bits of goodness in the free software stack which were NOT invented in Gnome but are a part of it today. Think how much better it is when goodness is adopted across multiple desktop environments, and how much harder it is to achieve that when something is branded “K” or “G”.

When we articulated our vision for Unity, we were very clear that we wanted to deliver it under the umbrella of Gnome. We picked Gnome-friendly technologies by and large, and where we felt we needed to do something different, that decision required substantial review. We described Unity as “a shell for Gnome” from the beginning, and we have been sincere in that view. We have worked successfully and happily with many, many Gnome projects to integrate Unity APIs into their codebase.

This is because we wanted to be sure that whatever competitive dynamics arose were *internal* to Gnome, and thus contributing to a better result overall in Gnome in the long term.

We’ve failed.

Much of the language, and much of the decision making I’ve observed within Gnome, is based on the idea that Unity is competition WITH Gnome, rather than WITHIN Gnome.

The key example of that is the rejection of Unity’s indicator APIs as external dependencies. That was the opportunity to say “let’s host this competition inside Gnome”. Even now, there’s a lack of clarity as to what was intended by that rejection, with some saying “it was just a reflection of the fact that the APIs were new and not used in any apps”. If that were the case, there would be no need for prior approval as an external dependency; the rejection was clearly an attempt to prevent Gnome applications from engaging around these APIs. It’s substantially failed, as many apps have happily done the work to blend in beautifully in the Unity environment, but there has been a clear attempt to prevent that by those who feel that Unity is a threat to Gnome rather than an opportunity for it.

Dave Neary has, to his credit, started to ask “what’s really going on here?”

In his blog post, he quoted the rationale given for the rejection of Canonical’s indicator APIs, which I’ll re-quote here and analyse in this light:

it doesn’t integrate with gnome-shell

That’s it – right there. Remember, this was a proposal for the indicator APIs to be an *external* dependency for Gnome. That means Gnome apps can use those APIs *optionally* when they are being run on a platform where they are useful. It has NOTHING to do with the core Gnome vision. External APIs exist precisely BECAUSE it’s useful to encourage people to use Gnome apps on all sorts of platforms, including proprietary ones like Windows, MacOS and Solaris, and they should shine there too.

So the premier reason given for the rejection of these APIs is a reason that, as best we can tell, has never been used against an external dependency proposal before: “it’s different to Gnome”. At the heart of this statement is something deeper: “it’s competition with an idea someone in Gnome wants to pursue”.

What made this single statement heartbreaking for me to see was that it spoke clearly to the end of one of Gnome’s core values: code talks. Here we had APIs which were real, tested code, with patches to many Gnome apps available, that implemented a spec that had been extensively discussed on FreeDesktop.org. This was real code. Yet it was blocked because someone – a Gnome Shell designer – wanted to explore other ideas, ideas which at the time were not working code at all. There’s been a lot of commentary on that decision. Most recently, Aaron Seigo pointed out that this decision was as much a rejection of cross-desktop standards as it was a rejection of Canonical’s code.

Now, I can tell you that I was pretty disgusted with this result.

We had described the work we wanted to do (cleaning up the panel, turning panel icons into menus) to the Gnome Shell designers at the 2008 UX hackfest. McCann denies knowledge today, but it was a clear decision on our part to talk about this work with him at the time; it was reported to me that the conversation had happened, and that we’d received the assurance that such work would be “a valued contribution to the shell”. Clearly, by the time it was delivered, McCann had decided that such assurances were not binding, and that his interest in an alternative panel story trumped both that assurance and the now-extant FreeDesktop.org discussions and spec.

But that’s not the focus of this blog. My focus here is on the management of healthy competition. And external dependencies are the perfect way to do so: they signal that there is a core strategy (in this case whatever Jon McCann wants to do with the panel) and yet there are also other, valid approaches which Gnome apps can embrace. This decision failed to grab that opportunity with both hands. It said “we don’t want this competition WITHIN Gnome”. But the decision cannot remove the competitive force. What that means is that the balance was shifted to competition WITH Gnome.

probably depends on GtkApplication, and would need integration in GTK+ itself

Clearly, both of these positions are flawed. The architecture of the indicator work was designed both for backward compatibility with the systray of the time, and for easy adoption. We have lots of apps using the APIs without either of these points being the case.

we wished there was some constructive discussion around it, pushed by the libappindicator developers; but it didn’t happen

We made the proposal; it was rejected. I can tell you that the people who worked on the proposal consider themselves Gnome people, and they feel they did what was required, and stopped when it was clear they were not going to be accepted. I’ve had people point to this bullet and say “you should have pushed harder”. But proposing an *external* dependency is not the same as trying to convince Shell to adopt something as the mainstream effort. It’s saying “hey, here’s a valid set of APIs apps might want to embrace, let’s let them do so”.

there’s nothing in GNOME needing it

This is a very interesting comment. It’s saying “no Gnome apps have used these APIs”. But the Gnome apps in question were looking to this very process for approval of their desire to use the APIs. You cannot have a process to pre-approve APIs, then decline to do so because “nobody has used the APIs which are not yet approved”. You’re either saying “we just rubber stamp stuff here, go ahead and use whatever you want”, or you’re being asinine.

It’s also saying that Unity is not “in GNOME”. Clearly, a lot of Unity work depends on the adoption of these APIs for a smooth and well-designed panel experience. So once again, we have a statement that Unity is “competition with Gnome” and not “competition within Gnome”.

And finally, it’s predicating this decision on the idea that being “in Gnome” is the sole criterion of goodness. There is a cross-desktop specification which defines the appindicator work clearly. The fact that KDE apps Just Work on Unity is thanks to the work done to make this a standard. Gnome devs participated in the process, but appeared not to stick with it. Many or most of the issues they raised were either addressed in the spec or in the implementations of it. They say now that they were not taken seriously, but a reading of the mailing list threads suggests otherwise.
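
As an aside, the cross-desktop mechanism is easy to observe for yourself. The spec puts a well-known watcher name on the session bus, and applications register their items with it; that’s why a KDE app lights up correctly under Unity without knowing anything about its host. Here is a rough sketch of the detection step, using the dbus-python bindings (the bus and method names come from the FreeDesktop.org draft spec; the rest is purely illustrative):

import dbus

bus = dbus.SessionBus()
if bus.name_has_owner("org.kde.StatusNotifierWatcher"):
    # A watcher (Unity, KDE Plasma, ...) is running: an app would now
    # call RegisterStatusNotifierItem on it and skip the legacy tray.
    print("StatusNotifier host available")
else:
    # No watcher on the bus: fall back to the old XEmbed system tray.
    print("no host; falling back to the legacy systray")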

It’s my view that cross-desktop standards are really important. We host both Kubuntu and Ubuntu under our banner, and without such collaboration, that would be virtually impossible. I want Banshee to work as well under Kubuntu as Amarok can under Ubuntu.

What can be done?

This is a critical juncture for the leadership of Gnome. I’ll state plainly that I feel the long tail of good-hearted contributors to Gnome and Gnome applications are being let down by a decision-making process that has let competitive dynamics diminish the scope of Gnome itself. Ideas that are not generated “at the core” have to fight incredibly and unnecessarily hard to get oxygen. Ask the Zeitgeist team. Federico is a hero, but getting room for ideas to be explored should not feel like a frontal assault on a machine gun post.

This is no way to lead a project. This is a recipe for a project that loses great people to environments that are more open to different ways of seeing the world. Elementary. Unity.

Embracing those other ideas and allowing them to compete happily and healthily is the only way to keep the innovation they bring inside your brand. Otherwise, you’re doomed to watching them innovate and then having to “relayout” your own efforts to keep up, badmouthing them in the process.

We started this with a strong, clear statement: Unity is a shell for Gnome. Now Gnome leadership have to decide if they want the fruit of that competition to be an asset to Gnome, or not.

A blessing in disguise

Aaron’s blog post made me think that the right way forward might be to bolster and strengthen the forum for cross-desktop collaboration: FreeDesktop.org.

I have little optimism that the internal code dynamics of Gnome can be fixed – I have seen too many cases where a patch which implements something needed by Unity is dissed, then reimplemented differently, or simply left to rot, to believe that the maintainers in Gnome who have a competitive interest on one side or the other will provide a level playing field for this competition.

However, we have shown a good ability to collaborate around FD.o with KDE and other projects. Perhaps we could strengthen FreeDesktop.org and focus our efforts at collaboration around the definition of standards there. Gnome has failed to take that forum seriously, as evidenced by the frustrations expressed elsewhere. But perhaps if we had both Unity and KDE working well there, Gnome might take a different view. And that would be very good for the free software desktop.

Next after Natty?

Monday, March 7th, 2011

The naming of cats is a difficult matter
It isn’t just one of your holiday games.
- T S Eliot, The Naming of Cats

For the next cycle, I think we’ll leave the oceanic theme behind. The “oddball octopus”, for example, is a great name, but not one we’ll adopt this time around. Perhaps in 13 years’ time, though!

The objective is to capture the essence of our next six months work in a simple name. Inevitably there’s an obliquity, or offbeat opportunism in the result. And perhaps this next release more than most requires something other than orthodoxy – the skunkworks are in high gear right now. Fortunately I’m assured that if one of Natty’s successors is a skunk, it would at least be a sassy skunk!

So we’re looking for a name that conveys mysterious possibility, with perhaps an ounce of overt oracular content too. Nothing too opaque, ornate, odious or orotund. Something with an orderly ring to it, in celebration of the crisp clean cadence by which we the community bring Ubuntu forth.

There’s something neat in the idea that 11.10 will mark eight years since Ubuntu was conceived (it took a little longer to be born). So “octennial” might suit… but that would be looking backwards, and we should have an eye on the future, not the past. Hmm… an eye on the future, perhaps ocular? Or oculate? We’re certainly making our way up the S-curve of adoption, so perhaps ogee would do the trick?

Alternatively, we could celebrate the visual language of Ubuntu with the “orange okapi”, or the welcoming nature of our community with the “osculant orangutan”. Nothing hugs quite like dholbach, though, and he’s no hairy ape.

What we want is something imaginative, something dreamy. Something sleek and neat, too. Something that has all the precision of T S Eliot’s poetry, matched with the “effable ineffability” of our shared values, friendship and expertise. Something that captures both the competence of ubuntu-devel with the imagination of ayatana.

Which leads us neatly to the Oneiric Ocelot.

Oneiric means “dreamy”, and the combination with Ocelot reminds me of the way innovation happens: part daydream, part discipline.

We’ll need to keep up the pace of innovation on all fronts post-Natty. Our desktop has come together beautifully, and in the next release we’ll complete the cycle of making it available to all users, with a 2D experience to complement the OpenGL-based Unity for those with the hardware to handle it. The introduction of Qt means we’ll be giving developers even more options for how they can produce interfaces that are both functional and aesthetically delightful.

In the cloud, we’ll have to tighten up and make some firm decisions about the platforms we can support for 12.04 LTS. UDS in Budapest will be full of feisty debate on that front, I’m sure, but I’m equally sure we can reach a pragmatic consensus and start to focus our energies on delivering the platform for widespread cloud computing on free and flexible terms.

Ubuntu is now shipping on millions of systems from multiple providers every year. It makes a real difference in the lives of millions, perhaps tens of millions, of people. As MPT said, “what we do is not only art, it’s performance art”. Every six months the curtains part, and we have to be ready for the performance. I’d like to thank the thousands of people who are actively participating in the production of Natty: take the initiative, take responsibility, take action, and your work will make a difference to all of those users. There are very few places in the world where a personal intellectual contribution can have that kind of impact. And very few places where we have such a strong social fabric around those intellectual challenges, too. We each do what we do for our own reasons, but it’s the global impact of Ubuntu which gives meaning to that action.

Natty is a stretch release: we set out to redefine the look and feel of the free desktop. We’ll need all the feedback we can get, so please test today’s daily, or A3, and file bug reports! Keep up the discipline and focus on the Narwhal, and let’s direct our daydreaming to the Ocelot.

A wit said of Google Wave “if your project depends on reinventing scrollbars, you are doing something wrong.” But occasionally, just occasionally, one gets to do exactly that.

Under the Ayatana banner, we’ve been on a mission to make the desktop have less chrome and more content. The goal is to help people immerse themselves in their stuff, without being burdened with large amounts of widgetry which isn’t relevant to the thing they are trying to concentrate on. And when it comes to widgetry, there are few things left which take up more space than scrollbars.

For example, I spend plenty of time in a full screen terminal, and it’s lovely to see how clean that experience is on Natty today:

…but that scrollbar on the right seems heavy and outdated. We took inspiration from mobile devices, and started exploring the idea of making scrollbars be more symbolic, and less physical. Something like this:

Of course, since the desktop isn’t often a touch device, we need to think through pointer interactions. We wanted to preserve the idea of keeping content exposed as much as possible, while still providing for pointer interaction when needed. We also decided to drop the “one line scroll” capability, while preserving the ability to page up and down. Take a look at the result:

Overlay Scrollbars in Unity – implementation from Canonical Design on Vimeo.

The design work behind this has been done by Christian Giordano, who worked through the corner cases (literally) and provided a mockup for testing purposes. And the heavy lifting for Natty is being done by the indefatigable Andrea Cimitan, who is currently polishing up a GTK+ implementation of the concept for the release. Christian put together a blog post on the subject, and a great video which talks through the design process and a few of the challenges and solutions found:

Overlay Scrollbars in Unity from Canonical Design on Vimeo.

Code is available on Launchpad (bzr branch lp:ayatana-scrollbar) and in a PPA:

# add the PPA and refresh the package indexes
sudo add-apt-repository ppa:ayatana-scrollbar-team/release; sudo apt-get update
# install the overlay scrollbar library
sudo apt-get install liboverlay-scrollbar-0.1-0
# then run an app with the overlay scrollbar loaded, for example:
LIBOVERLAY_SCROLLBAR=foo gnome-appearance-properties

Well done, guys.

We made some mistakes in our handling of the discussion around revenue share with the Banshee team. Thanks to everyone who helped make sure we were aware of ‘em ;-)

Money is particularly contentious in a community that mixes volunteer and paid effort. We should have anticipated that, and been extra careful to have the difficult conversations that were inevitable, up front and in public, at UDS, when we were talking about the possibility of Banshee being the default media player in Ubuntu. We didn’t, and I apologise for the consequent confusion and upset.

The principles from which we derive our policy are straightforward:

The bulk of the direct cost in creating the audience of Ubuntu users is carried by Canonical. There are many, many indirect costs and contributions that are carried by others, both inside the Ubuntu community and in other communities, without which Ubuntu would not be possible. But that doesn’t diminish the substantial investment made by Canonical in a product that is in turn made available free of charge to millions of users and developers.

The business model which justifies this investment, and which we hope will ultimately sustain that effort for the desktop without dependence on me, is that fee-generating services which are optional for users provide revenue to Canonical. This enables us to make the desktop available in a high quality, fully maintained form, without any royalties or license fees. By contrast, every other commercial Linux desktop is a licensed product – you can’t legally use it for free, the terms for binaries are similar to those for Windows or the MacOS. They’re entitled to do it their way, we think it’s good in the world that we choose to do it our way too.

We know that we need a healthy and vibrant ecosystem of application developers. We think services should work for them too, and we’re committed to sharing revenue with them. We want to be entirely aligned in our interests: better code means a better result for both of us, better revenue means more resources to do what we love even better. Our interests, and upstream interests, should be perfectly aligned in this. So we have consistently had the view that revenue we can attribute to a particular upstream should create a revenue share for that upstream. We support Mozilla in this way, for example. The numbers are not vast, but nor are they insubstantial, and while we are not obliged to do so, we do so happily.

Those are the principles, the policy is straightforward: Canonical seeks to earn revenue from services delivered to Ubuntu, and we will share a portion of that revenue with relevant projects who help make that possible. Our interests, and those of the projects, should be aligned to the greatest extent possible.

In engaging with the Banshee leads at UDS, we should have been absolutely clear about our expectations and commitment. Apparently, we weren’t, and for that I apologise. There was certainly no conspiring or maliciousness; it apparently just never came up. But it was my expectation that we would share revenue with Banshee. I mentioned it briefly to someone closer to the conversation, but failed to follow up until I heard rumours of a potential disagreement on the subject in recent days.

We also made a mistake, I believe, as this blew up in private conversations: a well-meaning person presented a choice to the Banshee developers, who then, of course, made a choice. But our position isn’t at all what was communicated. Our position is that we’ll deliver the best overall experience to users, we’ll derive services revenue from that, and we’ll share it with upstreams where we can attribute it efficiently. It wasn’t in the mandate of that person to offer a choice outside of that framework, but it was an honest mistake.

So, every free software project out there should be confident of a few things:

Canonical would like you to succeed, would like to make it as easy as possible for many, many users to adopt your software, and is willing to share the benefits of that with you. Whether your software is promoted as the default in Ubuntu, or simply neatly packaged for easy consumption, we’d like our interests to be well aligned. We have a bug tracker that helps us pass issues to you if they are reported in Ubuntu first, we have a revenue model which matches that with passing through a share of revenues, too. And that goes for any kind of revenue that we can attribute to your project; for example, if we offer a support service specially tailored to people using your code, you can reasonably expect to agree a revenue share of that with us.

Canonical invests heavily in creating a big, addressable ecosystem that you can easily reach. That’s worth something. We also want a big, vibrant upstream community that innovates and makes its own investments. We know that contributions come both from volunteers and paid staff, and it’s good to be able to have a bit of both in the mix, for the sake of both the volunteers and the paid staff!

Documenting this position is obviously a priority; we should have done so previously, but we just relied on internal precedent, which is a dumb idea when you’ve grown as quickly as we have in the past few years. So we’ll do that.

As for the revenue share we’ve offered the Banshee team, I would love to see them use it to make Banshee even better. That’s what it should be for. Don’t be shy, and don’t be nervous of taking the money and using it for your own project. Canonical has already provided far more funding to the GNOME Foundation than this share is likely to amount to, through initiatives like the bugzilla.gnome.org work that we funded and many other forms of support. I think money generated by an app should go towards making that app rock even harder. But the offer stands for the Banshee devs to take up if they’d like, and to use as they’d like. If they don’t want it, we’ll put it to good use.

This certainly won’t be the last word on the subject. I expect these situations to become more common, not less. But I think that represents a great opportunity to see sustained investment in desktop free software, which has been sorely lacking. I think our model gives projects a nice, clear roadmap: build awesome stuff, partner with Canonical, and be confident you will share in the success of Ubuntu. This is the model which catalysed the founding of Ubuntu seven years ago, and it’s what we’re here to do: make free software available freely, in the best quality, to the widest audience we can. That’s an opportunity for every project that cares about how many people get to use their stuff, and under what terms.

Qt apps on Ubuntu

Tuesday, January 18th, 2011

As part of our planning for Natty+1, we’ll need to find some space on the CD for Qt libraries, and we will evaluate applications developed with Qt for inclusion on the CD and default install of Ubuntu.

Ease of use, and effective integration, are key values in our user experience. We care that the applications we choose are harmonious with one another and the system as a whole. Historically, that has meant that we’ve given very strong preference to applications written using Gtk, because a certain amount of harmony comes by default from the use of the same developer toolkit. That said, with OpenOffice and Firefox having been there from the start, Gtk is clearly not an absolute requirement. What I’m arguing now is that it’s the values which are important, and the toolkit is only a means to that end. We should evaluate apps on the basis of how well they meet the requirement, not prejudice them on the basis of technical choices made by the developer.

In evaluating an app for the Ubuntu default install, we should ask:

* is it free software?
* is it best-in-class?
* does it integrate with the system settings and preferences?
* does it integrate with other applications?
* is it accessible to people who cannot use a mouse or keyboard?
* does it look and feel consistent with the rest of the system?

Of course, the developer’s choice of Qt has no influence on the first two. Qt itself has been available under the GPL for a long time, and more recently became available under the LGPL. And there’s plenty of best-in-class software written with Qt; it’s a very capable toolkit.

System settings and prefs, however, have long been a cause of friction between Qt and Gtk. Integration with system settings and preferences is critical to the sense of an application “belonging” on the system. It affects the ability to manage that application using the same tools one uses to manage all the other applications, and the sorts of settings-and-preference experience that users can have with the app. This has traditionally been a problem with Qt / KDE applications on Ubuntu, because Gtk apps all use a centrally-manageable preferences store, and KDE apps do things differently.

To address this, Canonical is driving the development of dconf bindings for Qt, so that it is possible to write a Qt app that uses the same settings framework as everything else in Ubuntu. We’ve contracted with Ryan Lortie, who obviously knows dconf very well, and he’ll work with some folks at Canonical who have been using Qt for custom development work for customers. We’re confident the result will be natural for Qt developers, and a complete expression of dconf’s semantics and style.
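
To make that concrete, here’s a minimal sketch (in C, against the GSettings API in GLib, which is the interface Gtk apps already use over dconf) of the read/write semantics the Qt bindings would need to mirror. The org.gnome.desktop.interface schema and its font-name key are standard GNOME fixtures, used here purely for illustration:

    /* settings-demo.c – a minimal sketch of GSettings, the GLib API over dconf.
     * Build: gcc settings-demo.c `pkg-config --cflags --libs gio-2.0` -o settings-demo
     */
    #include <gio/gio.h>
    #include <stdio.h>

    int main(void)
    {
        /* Opens a schema; note that g_settings_new() aborts if the schema
         * isn't installed, so this sketch assumes a GNOME desktop. */
        GSettings *settings = g_settings_new("org.gnome.desktop.interface");

        /* Reads go through dconf, so system defaults, user overrides and
         * administrator lockdown are all handled centrally. */
        gchar *font = g_settings_get_string(settings, "font-name");
        printf("Current interface font: %s\n", font);
        g_free(font);

        /* Writes are visible to every other app watching the same key;
         * apps can subscribe to changes with, e.g.,
         * g_signal_connect(settings, "changed::font-name", ...). */
        g_settings_set_string(settings, "font-name", "Ubuntu 11");

        g_object_unref(settings);
        return 0;
    }

The goal of the dconf work for Qt is that a Qt application could get the same behaviour, natively, without dragging in Gtk.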

The Qt team have long worked well in the broader Ubuntu community – we have great Qt representation at UDS every six months, the Kubuntu team have deep experience and interest in Qt packaging and maintenance, and there is lots of good technical exchange between Qt upstream and various parts of the Ubuntu community, including Canonical. For example, Qt folks are working to integrate uTouch.

I’d draw a distinction between “Qt” and “KDE” in the obvious places. A KDE app doesn’t know anything about the dconf system configuration, and can’t easily integrate with the Ubuntu desktop as a result. So we’re not going to be proposing Amarok to replace Banshee any time soon! But I think it’s entirely plausible that dconf, once it has great Qt bindings, could be considered by the KDE community. There are better people to lead that conversation if they want to, so I’ll not push the idea further here :-). Nevertheless, should a KDE app learn to talk dconf in addition to the standard KDE mechanisms, which should be straightforward, it would be a candidate for the Ubuntu default install.

The decision to be open to Qt is in no way a criticism of GNOME. It’s a celebration of free software’s diversity and complexity. Those values of ease of use and integration remain values we share with GNOME, and a great basis for collaboration with GNOME developers and project members. Perhaps GNOME itself will embrace Qt, perhaps not, but if it does then our willingness to blaze this trail would be a contribution in leadership. It’s much easier to make a vibrant ecosystem if you accept a certain amount of divergence from the canonical way, so to speak ;-). Our work on design is centred around GNOME, with settings and preferences the current focus as we move to GNOME 3.0 and gtk3.

Of course, this is a perfect opportunity for those who would poke fun at that relationship to do so, but in my view what matters most is the solid relationship we have with people who actually write applications under the GNOME banner. We want to be the very best way to make the hard work of those free software developers *matter*, by which we mean, the best way to ensure it makes a real difference in millions of lives every day, and the best way to connect them to their users.

To the good folks at Trolltech, now Nokia, who have made Qt a great toolkit – thank you. To developers who wish to use it and be part of the Ubuntu experience – welcome.

Linaro at work: porting, testing, and Android

Thursday, November 11th, 2010

Congratulations to Team Linaro on their first full release yesterday. For those not yet in the know, Linaro is a collaborative forum with dedicated engineers making sure that Linux rocks on ARM (and potentially other architectures). Staffed by a combination of Canonical and new Linaro engineers, together with secondees from the major ARM silicon vendors, it’s solving the problems of fragmentation in Linux across that ecosystem and reducing the time to market for ARM devices.

Linaro uses the same cadence as Ubuntu and we’re able to collaborate on the selection, integration and debugging of key components like the kernel, toolchain, X.org (still ;-)), and hundreds of small-but-important libraries and tools in between. Team Linaro was @UDS and it was very cool to see the extent to which their sessions drew attendance from the wider Ubuntu community – I think there’s a growing interest in efficient computing across the Ubuntu landscape.

The Linaro team is pleased to announce the release of Linaro 10.11. 10.11 is the first public release that brings together the huge amount of engineering effort that has occurred within Linaro over the past 6 months. In addition to officially supporting the TI OMAP3 (Beagle Board and Beagle Board XM) and ARM Versatile Express platforms, the images have been tested and verified on a total of 7 different platforms including TI OMAP4 Panda Board, IGEPv2, Freescale iMX51 and ST-E U8500.

The advances that have happened in this cycle are numerous but include a completely rebuilt archive using GCC 4.4.4 and the latest ARM optimised tool chain, the Linux kernel version 2.6.35, support for cross-compiling, a new hardware pack way of building images, 3D acceleration improvements, u-boot enhancements and initial device tree support, a new QA tracking structure, the list goes on.
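
“Support for cross-compiling” deserves a word of explanation: it means building ARM binaries on a fast x86 host with the Linaro toolchain, rather than compiling natively on the board. A minimal sketch, assuming the arm-linux-gnueabi cross toolchain packages are installed (the exact triplet varies by distribution):

    /* hello-arm.c – trivial program to demonstrate cross-compilation.
     * On an x86 host with the cross toolchain installed, build with
     * something like:
     *   arm-linux-gnueabi-gcc -O2 hello-arm.c -o hello-arm
     * `file hello-arm` should then report an ARM executable, which can
     * be copied to a Beagle Board or similar target and run there.
     */
    #include <stdio.h>

    int main(void)
    {
        printf("Hello from an ARM build\n");
        return 0;
    }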

Android in the house

The road ahead looks even more interesting. For the next cycle, the Linaro team is going to build an Android environment on the same kernel and toolchain that we collaborate on with Ubuntu. For folks building devices, picking a board that’s part of the Linaro process means you’ll be able to get either an Ubuntu-style or Android-style core environment up and running at Day 1, which should reduce time to market for everyone.

If the Linaro team pulls this off, it will mean that Linaro provides an intersection point for much of the consumer electronics ecosystem, across both x86 and ARM, regardless of the end OS. I’m sure over time we’ll find more groups that are interested in joining the process, and I see no reason why they couldn’t be accommodated in this cadence-driven model.

More players, more diversity in services

It was also good to see folks from MontaVista and Mentor at Linaro@UDS this year. Whether the Linaro kernel and toolchain plug into their own distros, or they start to offer their services around the Linaro/Ubuntu/Android BSPs, the result is a healthier ecosystem with fewer snags and gotchas for device makers.

One group asked me explicitly if Linaro was a Canonical show, and I was glad to say it isn’t. Canonical can’t possibly do everything that embedded Linux needs done, but our competence in cadence and release management makes us good custodians of a public project, which is what we do with Ubuntu itself. Participation and collaboration are at the heart, and they benefit from being partnered with a commitment to delivery and deadlines. We can’t do everything in a single cycle, but we can provide a roadmap for things like kernel defragmentation, the device-tree work, enablement of an ever-increasing cross-section of the ARM ecosystem, and transitions between versions of GCC or Python or X or even Wayland. So Canonical makes a good anchor, but Linaro has room for lots of other service-providers. Having multiple companies participate in Linaro means that the products we’re all shipping get better, faster.

Testing

The Linaro team is also going to focus on repeatable, rigorous testing of the core platform in the next cycle. That harmonises nicely with our growing focus on quality in Ubuntu, and the need for better quality and testing in open source in general. I’m interested to see what tools and results the Linaro team can produce in the next six months. Open source *can* be bulletproof, but it can also degrade in quality if we don’t put the right processes in place upstream and downstream, so this is a very welcome initiative.

Unity on Wayland

Thursday, November 4th, 2010

The next major transition for Unity will be to deliver it on Wayland, the OpenGL-based display management system. We’d like to embrace Wayland early, as much of the work we’re doing on uTouch and other input systems will be relevant for Wayland, and it’s an area where we can make a useful contribution to the project.

We’re confident we’ll be able to retain the ability to run X applications in a compatibility mode, so this is not a transition that needs to reset the world of desktop free software. Nor is it a transition everyone needs to make at the same time: for the same reason, we’ll keep investing in the 2D experience on Ubuntu, despite also believing that Unity, with all its GL dependencies, is the best interface for the desktop. We’ll help GNOME and KDE with the transition; there’s no reason for them not to be there on day one either.

Timeframes are difficult. I’m sure we could deliver *something* in six months, but I think a year is more realistic for the first images that will be widely useful in our community. I’d love to be proven conservative on that :-) but I suspect it’s more likely to err the other way. It might take four or more years to really move the ecosystem. Progress on Wayland itself is sufficient for me to be confident that no other initiative could outrun it, especially if we deliver things like Unity and uTouch with it. And also if we make an early public statement in support of the project. Which this is!

In coming to this view, several scenarios were considered.

One is the continued improvement of X, which is a more vibrant project these days than it once was. X will be around a long time, hence the importance of our confidence in a compatibility environment. But we don’t believe X is set up to deliver the user experience we want, with super-smooth graphics and effects. I understand that it’s *possible* to get amazing results with X, but it’s extremely hard, and isn’t going to get easier. Some of the core goals of X make it harder to achieve these user experiences on X than on native GL; we’re choosing to prioritise the quality of experience over those original values, like network transparency.

We considered the Android compositing environment. It’s great for Android, but we felt it would be more difficult to bring the whole free software stack along with us if we pursued that direction.

We considered several proprietary options, and spoke with the companies behind them, on the basis that they might be persuaded to open source their work for a new push. We also evaluated the cost of building a new display manager, informed by the lessons learned in Wayland. We came to the conclusion that any such effort would only create a hard split in the world, and wouldn’t be worth that cost. There are issues with Wayland, but they seem solvable; we’d rather be part of solving them than chase a better alternative. So Wayland it is.

In general, this will all be fine – actually *great* – for folks who have good open source drivers for their graphics hardware. Wayland depends on things they are all moving to support: kernel modesetting, gem buffers and so on. The requirement of EGL is new but consistent with industry standards from Khronos – both GLES and GL will be supported. We’d like to hear from vendors for whom this would be problematic, but hope it provides yet another (and perhaps definitive) motive to move to open source drivers for all Linux work.
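
For those wondering what “the requirement of EGL” means in practice: EGL is the Khronos binding between GL and the window system, and it’s what a Wayland client or compositor uses to obtain a rendering context, where an X application today would use GLX. Here’s a minimal sketch of an EGL probe in C; note that eglGetDisplay(EGL_DEFAULT_DISPLAY) is a platform-dependent shortcut used only to keep the example self-contained, since a real Wayland client would hand EGL its wl_display instead:

    /* egl-probe.c – initialise EGL and report what the driver offers.
     * Build: gcc egl-probe.c -lEGL -o egl-probe
     */
    #include <EGL/egl.h>
    #include <stdio.h>

    int main(void)
    {
        EGLDisplay dpy = eglGetDisplay(EGL_DEFAULT_DISPLAY);
        if (dpy == EGL_NO_DISPLAY) {
            fprintf(stderr, "no EGL display available\n");
            return 1;
        }

        EGLint major, minor;
        if (!eglInitialize(dpy, &major, &minor)) {
            fprintf(stderr, "eglInitialize failed\n");
            return 1;
        }
        printf("EGL %d.%d, vendor: %s\n", major, minor,
               eglQueryString(dpy, EGL_VENDOR));

        /* Both GLES and desktop GL are catered for: an app binds
         * whichever API it wants before creating a context. */
        eglBindAPI(EGL_OPENGL_ES_API); /* or EGL_OPENGL_API */

        eglTerminate(dpy);
        return 0;
    }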