Archive for the 'thoughts' Category

The responsibilities of ownership

Friday, July 22nd, 2011

In the open source community we make a very big deal about the rights of ownership. But what about the responsibilities?

Any asset comes with attendant costs, risks and responsibilities. And anybody who doesn’t take those seriously is a poor steward of the asset.

In the physical world, we know this very well. If you own a house, there are taxes to pay every year, there will be some bills for energy and maintenance, and there’s paperwork to fill out. I was rudely reminded of this when I got an SMS at 2am this morning, care of British Gas, helpfully reminding me to settle up the gas bill for a tenant of mine. If we fail to take care of these responsibilities, we’re at risk of having the asset degraded or taken away from our care. An abandoned building will eventually be condemned and demolished rather than staying around as a health hazard. A car which has not been tested and licensed cannot legally be driven on public roads. In short, ownership comes with a certain amount of work, and that work has to be handled well.

In the intellectual and digital world, things are a little different. There isn’t an obvious lawn to trim or wall to paint. But there are still responsibilities. For example, trademarks need to be defended or they are deemed to be lost. Questions need to be answered. Healthy projects grow and adapt over time in a dynamic world; change is inevitable and needs to be accommodated.

Maintaining a piece of free software is a non-trivial effort. The rest of the stack is continuously changing – compilers change, dependencies change, conventions change. The responsibility for maintenance should not be shirked, if you want your project to stay relevant and useful. But maintainership is very often the responsibility of “core” developers, not light contributors. Casual contributors who have scratched their own itch or met a work obligation by writing a patch often give, as a reason for the contribution, their desire to have that maintenance burden carried by the project, and not by themselves.

When a maintainer adds a patch to a work, they are also accepting responsibility for its maintenance, unless there is some special circumstance, such as when the patch is a plugin that is essentially maintained by the contributor. In the general case, adding the patch is like mixing paint – it adds to the general body of maintenance in a way that cannot easily be undone or compartmentalised.

And owning an asset can create real liabilities. For example, in some countries, if you own a house and someone slips on the stairs, you can be held liable. If you own a car and it’s being borrowed, and the brakes fail, you can be held liable. In the case of code, accepting a patch implies, like it or not, accepting some liability for that patch. Whether it turns out to be a real liability, or just a contingent one, is something only time will tell. But ownership requires defence in the case of an attack, and that can be expensive, even if it turns out the attack is baseless.

So, one of the reasons I’m happy to donate (fully and irreversibly) a patch to a maintainer, and why Canonical generally does assign patches to upstreams who ask for it, is that I think the rights and responsibilities of ownership should be matched. If I want someone else to handle the work – the responsibility – of maintenance, then I’m quite happy for them to carry the rights as well. That only seems balanced. In the common case, that maintenance turns out to be as much work as the original crafting of the patch, and frankly, it’s the “boring work” part, while the fun part was solving the problem immediately at hand.

Of course, there are uncommon cases too.

One of the legendary fights over code ownership, between Sun and Novell, revolved around a plugin for OpenOffice that did some very cool stuff. Sun ended up re-creating that work because Novell would not give it to Sun. Frankly, I think Sun was silly. The plugin was a whole work, that served a coherent purpose all by itself. Novell had designed and implemented that component, and was perfectly willing and motivated to maintain it. In that case, it makes sense to me that Sun should have been willing to make space for Novell’s great work, leaving it as Novell’s. Instead, they ended up redoing that work, and lots of people felt hard done by. But that’s an uncommon case. The more usual scenario is that a contribution enhances the core, but is not in itself valuable without the rest of the code in the project being there.

Of course, “value” is relative. A patch that only applies against an existing codebase still has value in its ability to teach others how to do that kind of work. And it has value as art – you can put it on a t-shirt, or a wall.

But contributing – really contributing, actually donating – a patch to a maintainer doesn’t have to reduce those kinds of value to the original creator. I consider it best practice that a donation be matched by a wide license back. In other words, if I give my patch to the maintainer, it’s nice if they grant me a full set of rights back. While it may do a bad job with many other things, the Canonical contribution agreement does this: when you make a contribution under it, you get a wide license back. So that way, the creator retains all the useful rights, including the ability to publish, relicense, sell, or make a t-shirt, without also carrying all the responsibilities that go with ownership.

So a well-done contribution agreement can make clear who carries which responsibilities, and not materially diminish the rights of those who contribute. And a well-done policy of contribution would recognise that there are uncommon cases, where a contribution is in fact a whole piece in itself, and not require donation of that whole piece in order to be part of an aggregate whole.

What about cases where there is no clear maintainer or owner?

Well, outside of the world of copyright and code, we do have some models to refer to. For example, companies issue shares to their shareholders, reflecting their relative contribution and therefore relative shared ownership in the company. Those companies end up with diverse owners, each of whom is going to have their own opinions, preferences, ideals and constraints.

We would never think to require consensus on every decision of the board, or the company, among all shareholders. That would be unworkable – in fact, much of the apparatus of corporate governance exists specifically to give voice to the desires of shareholders while at the same time keeping institutions functional. That’s not to say that shareholders don’t get abused – there are enough case studies of management taking advantage of their position to fill a long and morbidly interesting book. Rules on corporate governance, and especially the protection of minority interests in companies, as well as the state of the art of constructing shareholder agreements to achieve the same goals, are constantly evolving. But at the end of the day, decisions need to be taken which are binding on the company and thus binding on the shareholders. The rights of ownership extend to the right to show up and be represented, and to participate in the discussion, and usually a vote of some sort. Thereafter, the decision is taken and (usually) the will of the majority carries.

In our absolutist mentality, we tend to think that a single line of code, or a single small patch, carries the same weight as the rest of a coherent codebase. It’s easy to feel that way: when a small patch is shared, but not donated, the creator retains sole ownership of that patch. So in theory, any change in the state of the whole must require the agreement of every owner. This is more than theory – it’s law in many places.

But in practice, that approach has not withstood any hard tests.

There are multiple cases where huge bodies of work, composed of the aggregate “patches” of many different owners, have been relicensed. Mozilla, the Ubuntu wiki, and I think even Wikipedia have all gone through public processes to figure out how to move the license of an aggregate work to something that the project leadership considered more appropriate.

I’d be willing to bet that, if some fatal legal flaw were discovered in the GPLv2, Linus would lead a process of review and discussion and debate about what to do about the Linux kernel; it would be testy and contentious, but in the end he would take a decision and most would follow to a new and better license. Personally, I’d be an advocate of GPLv3, but in the end it’s well known that I’m not a big shareholder in that particular company, so to speak, so I wouldn’t expect to have any say ;-) Those who did not want to follow would resign themselves to having their contributions replaced, and most would not bother to turn up for the meeting, giving tacit assent.

So our pedantic view that every line of code is sacred just would not hold up to real-world pressure. Projects have GOT to respond to major changes in the world around them. It would be unwise to loan a patch to a project in the belief that the project will never, under any circumstances, take a decision that is different to your personal views. Life’s just not like that. Change is inevitable, and we’re all only going to be thrilled about some subset of that change.

And that’s as it should be. Clinging to something small that’s part of someone else’s life and livelihood just isn’t healthy. It’s better either to commit to a reasonable shared ownership approach, which involves being willing to show up at meetings, contribute to maintenance and accept the will of the majority on major moves that might be unpalatable anyway, or to make a true gift that comes with no strings attached.

Sometimes I see people saying they are happy to make a donation as long as it has some constraints associated with it.

There was a family in SA that lived under weird circumstances for generations because a wealthy ancestor specified that they had to do that if they wanted access to their inheritance. It’s called “ruling from the grave”, and it’s really quite antisocial. Either you give someone what you’re giving them, and accept that they will carry those rights and responsibilities wisely and well, or you don’t give it to them at all. You’re not going to be around after your will is executed, and it’s impossible to anticipate everything that might happen. It’s exceedingly uncool, in my view, to leave people stuck.

It’s difficult to predict, in 50 or 100 years’ time, what the definition of “openness” will be, and who will have the definition that you might most agree with. In the short term we all have favourites, but every five or ten years the world changes and that precipitates a new round of definitions, licenses, concepts. Consider GPLv2 and GPLv3, where there turned out to be a real need to address new challenges in the way free software is being used. Or the Franklin Street Declaration, on web services. Despite having options like AGPL around, there still isn’t any real consensus on how best to handle those scenarios.

One can pick institutions, but institutions change too. Go back and look at the historical positions of any long-term political party, like the UK Whigs, and you’ll be amazed at how a group can shift their positions over a succession of leaders. I have complete trust in the FSF today, but little idea what they’ll be up to in 100 years’ time. That’s no insult to the FSF, it’s just a lesson I’ve learned from looking at the patterns of behaviour of institutions over the long term. It’s the same for the OSI or Creative Commons or any other political or ideological or corporate group. People move on, people die, times change, priorities shift, economics ebb and flow, affiliations and alliances and competition shift the terrain to the point that today’s liberal group are tomorrow’s conservatives or the other way around.

So, if one is going to put strings attached to a donation, what does one do? Pick a particular license? No current license will remain perfectly relevant or useful or attractive indefinitely. Pick an institution? No institution is free of human dynamics in the long term.

If there’s a natural place to put the patch, it’s with the code it patches. And usually, that means with the institution that is the anchor tenant, for better or worse. And yes, that creates real rights which can be really abused, or at least used in ways that one would not choose for one’s own projects.

And that brings us to the toughest issue. How should we feel about the fact that a company which owns a codebase can create both proprietary and open products from that code?

And the “grave” scenario really is an issue, in the case of copyright too. When people have discussed changes to codebases that have been around for any length of time, it’s a sad but real likelihood that there are contributors who have died, and left no provision for how their work is to be managed. More often than not, the estate in question isn’t sufficiently funded to cover the cost of the legal questions concerned.

The first time I was asked to sign a contribution agreement on behalf of Canonical, it was for a competitor, and I declined. That night, it preyed on my conscience. We had the benefit of a substantial amount of work from this competitor, and yet I had refused to give them back *properly* our own modest contribution. I frankly felt terrible, and the next day signed the agreement, and changed it to be our policy that we will do so, regardless of what we think about the company itself. So we’ve now done them for almost all our competitors, and I feel good about it.

That’s the magical thing about creation and ownership. It creates the possibility for generosity. You can’t really give something you don’t own, but if you do own it and give it away, you’ve made a genuine contribution. A gift is different from a loan. It imposes no strings, it empowers the recipient and it frees the giver of the responsibilities of ownership. We tend to think that solving our own problems to produce a patch which is interesting to us and useful for us is the generosity. It isn’t. The opportunity for generosity comes thereafter. And in our ecosystem, generosity is important. It’s at the heart of the Ubuntu ethic, and it’s important even between competitors, because the competitors outside our ecosystem are impossible to beat if we are not supportive of one another.

Fantastic engineering management is…

Tuesday, July 12th, 2011

I’m going to write a series of posts on different career tracks in software engineering and design over the next few months. This is the first of ’em; I don’t have a timeline for the rest, but will get to them all in due course, and am happy to take requests in the comments ;-)

Recently, I wrote up two Canonical engineering management job descriptions – one for those managing a team of software engineers directly, another for folk coordinating the work of groups of teams – software engineering directors. In both cases, the emphasis is on organisation, social coordination, roadmap planning and inter-team connectivity, and not in any way about engineering prowess. Defining some of these roles for one of my teams got me motivated to blog about the things that make for truly great management, as opposed to other kinds of engineering leadership.

The art of software engineering management is so different from software engineering that it should be an entirely separate career track, with equal kudos and remuneration available on either path.

This is because developing, and managing developers, are at opposite ends of the interrupt scale. Great engineering depends on deep, uninterrupted focus. But great management is all about handling interrupts efficiently so that engineers don’t have to. Companies need to recognise that difference, and create career paths on both sides of that scale, rather than expecting folk to leap from the one end to the other. It’s crazy to think that someone who loves deep focused thought should have to become a multithreaded interrupt driven manager to advance their career.

Very occasionally someone is both a fantastic developer and a fantastic manager, but that’s the exception rather than the rule. In recognition of that, we should design our teams to work well without depending on a miracle each and every time we put one together.

Great engineering managers are like coaches – they get their deepest thrill from seeing a team perform at the top of its game, not from performing vicariously. They understand that they are not going to be on the field between the starting and finishing whistle. They understand that there will be decisions to be taken on the field that the players will have to make for themselves, and their job is to prepare the team physically and mentally for the game, rather than to try and play from the sidelines. A great coach isn’t trying to steer the movement of the ball during the game, she’s making notes about the coaching and team selections needed between this match and the next. A terrible coach is a player who won’t let go of the game, wants to be out there in the thick of it, and loses themselves in the details of the game itself.

An engineering manager is an organiser and a mentor and a coach, not a veteran star player. They need to love winning, and love the sport, and know that they help most by making the team into a winning team. The way they get code written is by making an environment which is conducive to that; the way they create quality is by fostering a passion for quality and making space in the schedule and the team for work which serves only that goal.

When I’m hiring a manager, I look for people who love to keep other people productive. That means handling all the productivity killers in an engineering team: hiring and firing, inter-team meetings, customer presentations, reporting up and out and sideways, planning, travel coordination, conferences, expenses… all those things which we don’t want engineers to spend much time on or have rattling around at the back of their minds. It also means caring about people, and being that gregarious and nosy type of person who knows what everyone is doing, and why, and also what’s going on outside the workplace.

An engineering manager is doing well if every single member of their team can answer these questions, all the time:

* what are my key goals, in order of importance, in this cycle?
* what are the key delivery dates, in this cycle?
* how am I doing, generally? and what is the company view of my strengths and not-so-strengths?
* how do I fit into the team, who are my counterparts, and how do I complement them?

Also, the manager is doing well if they know, for each member of their team:

* what personal stresses or other circumstances might be a distraction for them,
* what the interpersonal dynamics are between that member, and other counterparts or team members,
* what that member’s best contributions are, and strongest interests outside of the assigned goals

For the team as a whole, the manager should know:

* what the team is good at, and weak at, and what their plan is to bolster what needs bolstering,
* what the cycle looks like, in terms of goals and progress against them
* what the next cycle is shaping up to look like, and how that fits with long term goals

Really great management makes a company a joy to work in, as a developer. It’s something we should celebrate and cultivate, teach and select for, not just be the natural upward path for people who have been around a while. If you truly love technology, there are lots of careers that take you to the top of the tech game without having to move into management. And conversely, if you love organising and leading, it’s possible to get started on a management career in software without being the world’s greatest coder first. If you think that’s you, you’ll love being an engineering manager at Canonical.

Note to the impatient: this is a long post and it only gets to free software ecosystem dynamics towards the end. The short version is that we need to empower software companies to participate in the GNU/Linux ecosystem, and not fear them. Rather than undermining their power, we need to balance it through competition.

Church schools in apartheid South Africa needed to find creative ways to teach pupils about the wrongs of that system. They couldn’t actively foment revolt, but they could teach alternative approaches to governance. That’s how, as a kid in South Africa, I spent a lot of time studying the foundations of the United States, a system of governance defined by underdogs who wanted to defend not just against the abuses of the current power, but abuses of power in general.

My favourite insight in that regard comes from James Madison in the Federalist Papers, where he describes the need to understand and harness human nature as a force: to pit ambition against ambition, as it is often described. The relevant text is worth a read if you don’t have time for the whole letter:

But the great security against a gradual concentration of the several powers in the same department, consists in giving to those who administer each department the necessary constitutional means and personal motives to resist encroachments of the others. The provision for defense must in this, as in all other cases, be made commensurate to the danger of attack. Ambition must be made to counteract ambition. The interest of the man must be connected with the constitutional rights of the place. It may be a reflection on human nature, that such devices should be necessary to control the abuses of government. But what is government itself, but the greatest of all reflections on human nature? If men were angels, no government would be necessary. If angels were to govern men, neither external nor internal controls on government would be necessary. In framing a government which is to be administered by men over men, the great difficulty lies in this: you must first enable the government to control the governed; and in the next place oblige it to control itself. A dependence on the people is, no doubt, the primary control on the government; but experience has taught mankind the necessity of auxiliary precautions.

When we debate our goals, principles and practices in the FLOSS community, we devote a great deal of energy to “how things should be”, and to the fact that “men are not angels”. I think the approach of James Madison is highly relevant to those discussions.

The conservation of power

Just as energy, momentum, charge and other physical properties of a system are conserved, so in a sense is power. If your goal is to reduce the power of one agency in government, the most effective strategy is to strengthen the position of another. We know that absolute monarchies are bad: they represent unbalanced power.

Within a system, power will tend to consolidate. We have antitrust agencies specifically to monitor the consolidation of economic power and to do something about it. We set up independent branches of government to ensure that some kinds of power simply cannot be consolidated.

Undermining power in one section of an ecosystem inevitably strengthens the others.

Since we humans tend to think the grass is greener on the other side of the fence, and since power takes a little while to get properly abused, you can often see societies oscillate in the allocation of power. When things seem a little out of control, we give more power to the police and other securocrats. Then, when they become a little thuggish, we squeeze their power through regulation and oversight, and civil liberties gain in power, until the pendulum swings again.

The necessity of concentrated power

Any power can be abused. I had a very wise headmaster at that same school who used to say that the only power worth having was power that was worth abusing. This was not a call to the abuse of power, you understand, merely a reflection on the fact that power comes with the real responsibility of restraint.

So, if power can be abused, why do we tolerate it at all? Why not dissolve authority down to the individual? Because the absence of power leads to chaos, which ironically is an easy place to establish despotic authority. Power isn’t seized – it’s given. We give people power over us. And in a state of chaos, all it takes is a few people to gain some power and they have a big advantage over everyone else. That’s why early leaders in new ecosystems tend to become unbeatable very quickly.

Also, power clears the path for action. In a world with no power, little gets done at all. We are better off with large companies that have the power to organise themselves around a goal than trying to achieve the same goal with a collection of individuals; try making a Boeing from an equivalent group of artisans, and you’ll see what I mean. Artisans form guilds and companies to increase their reach and impact. Individual volunteers join professional institutions to get more effective: consider the impact of handing out food yourself, versus helping sustain a network of soup kitchens, even in the purely non-profit world. Having some clout on your side is nothing to sniff at, even if you have purely philanthropic goals.

Power and innovation

If you have all the power already, there’s no spur to innovate. So kingdoms stagnate, eventually.

But power makes space for good things, too. It’s the powerful (and rich) who fund the arts in most societies. Innovation needs breathing space; companies with economic power can incubate new ideas to the point where they become productive.

Too much competition can thus limit innovation: look how difficult it has been for the Windows-based PC manufacturers, who live in a brutally competitive world and have little margin, to innovate. They are trapped between a highly efficient parts supply ecosystem, which feeds them all the same stuff at the same price, and a consumer market that requires them all to provide PC’s which run the same stuff the same way. As a result, they have little power, little margin, little innovation.

The trick is not to fear power itself, but instead, to shape, balance and channel it. You don’t want to aim for the absence of power, you want the Goldilocks effect of having “just enough”. And that was James Madison’s genius.

Verticals, competition and the balance of power

Of course, competition between rivals is the balance of power in business. We resent monopolies because they are either abusing their power, or stagnating.

In economics, we talk about “verticals” as the set of supply dependencies needed for a particular good. So, to make an aircraft, you need various things like engines and alloys, and those suppliers all feed the same pool of aircraft manufacturers.

In order to have a healthy ecosystem, you need a balance of power both between suppliers at the same level of the stack, and vertically, between the providers of parts and providers of the finished product. That’s because innovation needs both competition AND margin to stimulate and nurture it.

In the PC case, the low margins in the PC sector helped reinforce the Windows monopoly. Not only was there no competition for Microsoft, there was no ability for a supplier further down the chain to innovate around them. The only player in that ecosystem that had the margin to innovate was Microsoft, and since they faced no competition, there was little stimulus to embrace their own R&D, no matter how much they spent on it.

Power in the FLOSS ecosystem: upstreams and distributions

So, where do we stand in the free software and open source ecosystem?

The lines between upstreams and distributions aren’t perfectly clear, of course. Simplistic versions of that picture are often used to prove points, but in fact, all the distributions are also in some sense upstreams, and even derivative distributions end up being leaders of those they derive from in some pieces or markets. Nevertheless, I think it’s worth looking at the balance of power between upstream projects and distributions, as it is today and as it could be.

Also, I think it’s worth looking at related parties, companies and institutions which work a lot with FLOSS but have orthogonal interests.

If one uses margin, or profit, as an indicator of power, it’s clear that the distributions today are in a far stronger position than most individual projects or upstreams. The vast majority of software-related revenue in the FLOSS ecosystem goes to distributions.

Within that segment, Red Hat claims 80% market share of paid Linux, a number that is probably accurate. Novell, the de facto #2, is in the midst of some transition, but indicators are that it continues to weaken. Oracle’s entry into the RHEL market has had at best marginal impact on RHEL economics (the substantial price rises in RHEL 6 are a fairly clear signal of the degree to which Red Hat believes it faces real competition). The existence of “unpaid RHEL” in the form of CentOS, as well as OEL, essentially strengthens the position of RHEL itself. Ubuntu and Debian have large combined levels of adoption, but low revenue.

So clearly, there is work to do just to balance power in the distribution market. And it will take work – historically, platforms tend towards monopolies, and in the absence of a definitive countervailing force that establishes strength outside the RHEL gravity well, that’s what we’ll have. But that’s not the most interesting piece. What’s more interesting is the dynamic between distributions and upstreams.

Today, most upstreams are weak. They have little institutional strength. It’s generally difficult to negotiate and do business with an upstream. In many cases, that’s by design – the teams behind a project are simply not interested, or they are explicitly non-profit, as in the case of the FSF, which makes them good leaders of specific values, but difficult to engage with commercially.

As a result, those who need to do business with open source go to distributions, even in cases where they really want to be focused on a particular component. This greatly amplifies the power of the distributions: they essentially are the commercial vehicles for ALL of open source. The weakness of individual upstreams turns into greater strength for distributions.

You can imagine that distributions like it that way, and it would be surprising to see a distribution, or company that backs a distribution, arguing for stronger upstreams. But that’s exactly the position I take: FLOSS needs stronger upstreams, and as a consequence, weaker distributions.

Stronger upstreams will result in more innovation in FLOSS than stronger distributions. Essentially, like Microsoft, a distribution receives cash for the whole platform and allocates it to specific areas of R&D. That means the number of good ideas that receive funding in our ecosystem, today, is dependent on the insights of a very few companies. Just as Microsoft invested a lot in R&D and yet seemed to fall behind, upstream innovation will be strangled if it’s totally dependent on cash flow via distributions.

It’s not just innovation that suffers because we don’t have more power, or economic leverage, in the hands of upstreams. It’s also the myriad of things beyond code itself. When you have a company behind a project, they tend to take care of a lot more than just the code: QA, documentation, testing, promotion. It’s easy, as a developer, to undervalue those things, or to see them as competing for resources with the “real work” of code. But that competition is necessary, and they make a great contribution to the dynamism of the final product.

Consider the upstream projects which have been very successful over the long term. Qt and MySQL, for example, both had companies behind them that maintained strong leverage over the product. That leverage was often unpopular, but the result was products available to all of us under a free license that continued to grow in stature, quality and capability despite the ups and downs of the broader market, and without being too dependent on the roving spotlight of “coolness”, which tends to move quickly from project to project.

There are of course successful upstream projects which do not have such companies. The best example is probably the Linux kernel itself. However, those projects fall into a rather unusual category: they are critical to some large number of companies that make money in non-software ways, and those companies are thus forced to engage with the project and contribute. In the case of the kernel, hardware companies directly and indirectly underwrite the vast majority of the boring but critical work that, in other projects, would be covered by the sponsoring institution. And despite that, there are many gaps in the kernel. You don’t have to dig very hard to find comments from key participants bemoaning the lack of testing and documentation. Nevertheless, it gets by quite well under the circumstances.

But most ecosystems will have very few projects that are at such a confluence. Most upstream projects are the work of a few people, the “coolness” spotlight shines on them briefly if at all. They need either long term generosity from core contributors, or an institution to house and care for them, if they want to go the distance. The former rarely works for more than a few years.

Projects which depend on indirect interests, such as those sponsored by hardware companies, have another problem. Their sponsoring institutions are generally not passionate about software. They don’t really need or want to produce GREAT software. And if you look at the projects which get a lot of such contributions, that becomes very obvious. Compare and contrast the quality of apps from companies which deeply care about software with those which come from hardware companies, and you see what I mean.

We FLOSS folk like to tell ourselves that the Windows hegemony was purely a result of the manipulations of its sponsor, and that FLOSS as we do it today is capable of doing much more if only it had a fair chance. I don’t think, having watched the success of iOS and Android as new ecosystems, that we can justify that position any longer. I think we have to be willing to think hard about what we are willing to change if we want to have the chance of building an ecosystem as strong, but around GNU/Linux. Since that’s my goal, I’m thinking very hard about that, and creatively. I think it’s possible, but not without challenging some sacred cows and figuring out what values we want to preserve and which we can remould.

Power is worth having in your ecosystem, despite its occasional abuse

There’s no doubt that power created will be abused. That’s true of a lot of important rights and powers. For example, we know that free speech is often abused, but we nevertheless value it highly in many societies that are also big contributors to FLOSS. You probably know the expression, “I disagree with what you are saying entirely, but I will defend to the death your right to say it”.

Similarly, in our ecosystem, power will be abused. But it’s still worth helping institutions acquire it, even those we dislike or distrust, or those we compete with. At Canonical, we’ve directly and indirectly helped lots of institutions that you could describe that way – Oracle, Novell, Red Hat, Intel and many others. The kneejerk reaction is usually “no way”, but upon deeper thought, we figured that it is better to have an ecosystem of stronger players, considering the scale of the battle with the non-FLOSS world.

I often find people saying “I would help an institution if I thought I could trust it”. And I think that’s a red herring, because just as power will be abused, trust will be abused too. If you believe that this is a battle of ecosystems and platforms, you want to have as many powerful competitors in your ecosystem as possible, even though you probably cannot trust any of them in the very long term. It’s the competition between them that really creates long term stability, to come back to the thinking of James Madison. It’s pitting ambition against ambition, not finding angels, which makes that ecosystem a winner. If you care about databases, don’t try to weaken MySQL, because you need it strong when you need it. Rather figure out how to strengthen PostgreSQL alongside it.

How Canonical fits in

Canonical is in an interesting position with regard to all of this. As a distribution, we could stay silent on the issue, and reasonably expect to grow in power over time, on the same basis that Red Hat has. And there are many voices in Canonical that say exactly that: don’t rock the boat, essentially.

However, perhaps unlike other Linux distributions, Canonical very much wants to see end users running free software, and not just IT professionals. That raises the bar dramatically in terms of the quality of the individual pieces. It means that it’s not good enough for us to work in an ecosystem which produces prototype or rough cut products, which we then aggregate and polish at the distribution level. Unlike those who have gone before, we don’t want to be the sole guarantor of quality in our ecosystem, because that will not scale.

For that reason, looking at the longer term, it’s very important to me that we figure out how to give more power to upstreams, so that they in turn can invest in producing components or works which have the completeness and quality that end-users expect. I enjoy working with strong commercial institutions in the open source ecosystem – while they always represent some competitive tension, they also represent the opportunity to help our ecosystem scale and out-compete the proprietary world. So I’d like to find ways to strengthen the companies that have products under free software, and encourage more that have proprietary projects to make them available under free licenses, even if that’s not the only way they publish them.

If you’ve read this far, you probably have a good idea where I’m going with this. But I have a few more steps before actually getting there. More soon.

Till then, I’m interested in how people think we can empower upstream projects to be stronger institutionally.

There are a couple of things that are obvious and yet don’t work. For example, lots of upstreams think they should form a non-profit institution to house their work. The track record of those is poor: they get set up, and they fail as soon as they have to file their annual paperwork, leaving folks like the SFLC to clean up the mess. Not cool. At the end of the day, such new institutions add paperwork without adding funding or other sources of energy. They don’t broaden out the project the same way a company writing documentation and selling services usually does. On the other hand, non-profits like the FSF, which have critical mass, are very important, which is why on occasion we’ve been happy to contribute to them in various ways.

Also, I’m interested in how we can reshape our attitudes to power. Today, the tenor of discussion in most FLOSS debates is simplistic: we fear power, and we attempt to squash it always, celebrating the individual. But that misses the point that we are merely strengthening the power elsewhere; in distributions, in other ecosystems. We need a richer language for describing “the Goldilocks power” balance, and how we can move beyond FUD.

So, what do you think we could do to create more Mozillas, more MySQLs, more Qts and more OpenStacks?

I’ll summarise interesting comments and threads in the next post.

Competition is tough on the contestants, but it gets great results for everyone else. We like competitive markets, competitive technologies, competitive sports, because we feel the end result for consumers or the audience is as good as it possibly could be.

In larger organisations, you get an interesting effect. You get *internal* competition as well as external competition. And that’s healthy too. You get individuals competing for responsibility, and of course you want to make sure that the people who will make the wisest choices carry the responsibilities that demand wisdom, while those who have the most energy carry the responsibilities for which that’s most needed. You get teams competing with each other too – for resources, for attention, for support elsewhere, for moral authority, for publicity. And THAT’s the hardest kind of competition to manage, because it can be destructive to the organisation as a whole.

Even though it’s difficult to manage, internal competition is extremely important, and should not be avoided out of fear. The up side is that you get to keep the best ideas because you allowed them to compete internally. If you try and avoid it, you crowd out new ideas, and end up having to catch up to them. Usually, what goes wrong is that one group gets control of the organisation’s thinking, and takes the view that any ideas which do not come from within that group are a threat, and should be stopped. That’s very dangerous – it’s how great civilisations crash; they fail to embrace new ideas which are not generated at the core.

In Ubuntu, we have a lot of internal competition. Ubuntu and Kubuntu and Xubuntu and Edubuntu and *buntu-at-large have to collaborate and also, to a certain extent, compete. We handle that very well, I think, though occasionally some muppet calls Kubuntu the blue-headed-stepchild etc etc. It’s absolutely clear to everyone, though, that we have a shared interest in delivering ALL these experiences together with as much shared vision and commonality as possible. I consider the competition between these teams healthy and constructive and worth maintaining, even though it requires some fancy footwork and causes occasional strains.

The challenge for Gnome leadership

The sound and fury writ large in blog comments this week is all about how competition is managed.

Gnome is, or should be, bigger than any of the contributing individuals or companies. Gnome leadership should be in a position to harness competition effectively for the good of the project. That does, however, require very firm leadership, and very gutsy decisions. And it requires vigilance against inward thinking. For example, I’ve seen the meme reiterated multiple times that “one should not expect Gnome to embrace ideas which were not generated and hosted purely within Gnome”. That’s chronic inward thinking. Think of all the amazing bits of goodness in the free software stack which were NOT invented in Gnome but are a part of it today. Think how much better it is when goodness is adopted across multiple desktop environments, and how much harder it is to achieve that when something is branded “K” or “G”.

When we articulated our vision for Unity, we were very clear that we wanted to deliver it under the umbrella of Gnome. We picked Gnome-friendly technologies by and large, and where we felt we needed to do something different, that decision required substantial review. We described Unity as “a shell for Gnome” from the beginning, and we have been sincere in that view. We have worked successfully and happily with many, many Gnome projects to integrate Unity API’s into their codebase.

This is because we wanted to be sure that whatever competitive dynamics arose were *internal* to Gnome, and thus contributing to a better result overall in Gnome in the long term.

We’ve failed.

Much of the language, and much of the decision making I’ve observed within Gnome, is based on the idea that Unity is competition WITH Gnome, rather than WITHIN Gnome.

The key example of that is the rejection of Unity’s indicator API’s as external dependencies. That was the opportunity to say “let’s host this competition inside Gnome”. Even now, there’s a lack of clarity as to what was intended by that rejection, with some saying “it was just a reflection of the fact that the API’s were new and not used in any apps”. If that were the case, there would be no need for prior approval as an external dependency; the rejection was clearly an attempt to prevent Gnome applications from engaging around these API’s. It’s substantially failed, as many apps have happily done the work to blend in beautifully in the Unity environment, but there has been a clear attempt to prevent that by those who feel that Unity is a threat to Gnome rather than an opportunity for it.

Dave Neary has, to his credit, started to ask “what’s really going on here?”

In his blog post, he quoted the rationale given for the rejection of Canonical’s indicator API’s, which I’ll re-quote here and analyze in this light:

it doesn’t integrate with gnome-shell

That’s it – right there. Remember, this was a proposal for the indicator API’s to be an *external* dependency for Gnome. That means, Gnome apps can use those API’s *optionally* when they are being run on a platform where they are useful. It has NOTHING to do with the core Gnome vision. External API’s exist precisely BECAUSE it’s useful to encourage people to use Gnome apps on all sorts of platforms, including proprietary ones like Windows and MacOS and Solaris, and they should shine there too.

So the premier reason given for the rejection of these API’s is a reason that, as best we can tell, has never been used against an external dependency proposal before: “it’s different to Gnome”. At the heart of this statement is something deeper: “it’s competition with an idea someone in Gnome wants to pursue”.

What made this single statement heartbreaking for me to see was that it spoke clearly to the end of one of Gnome’s core values: code talks. Here we had API’s which were real, tested code, with patches to many Gnome apps available, that implemented a spec that had been extensively discussed on FreeDesktop.org. This was real code. Yet it was blocked because someone – a Gnome Shell designer – wanted to explore other ideas, ideas which at the time were not working code at all. There’s been a lot of commentary on that decision. Most recently, Aaron Seigo pointed out that this decision was as much a rejection of cross-desktop standards as it was a rejection of Canonical’s code.

Now, I can tell you that I was pretty disgusted with this result.

We had described the work we wanted to do (cleaning up the panel, turning panel icons into menus) to the Gnome Shell designers at the 2008 UX hackfest. McCann denies knowledge today, but it was a clear decision on our part to talk about this work with him at the time; it was reported to me that the conversation had happened, and that we’d received the assurance that such work would be “a valued contribution to the shell”. Clearly, by the time it was delivered, McCann had decided that such assurances were not binding, and that his interest in an alternative panel story trumped both that assurance and the now-extant FreeDesktop.org discussions and spec.

But that’s not the focus of this blog. My focus here is on the management of healthy competition. And external dependencies are the perfect way to do so: they signal that there is a core strategy (in this case whatever Jon McCann wants to do with the panel) and yet there are also other, valid approaches which Gnome apps can embrace. This decision failed to grab that opportunity with both hands. It said “we don’t want this competition WITHIN Gnome”. But the decision cannot remove the competitive force. What that means is that the balance was shifted to competition WITH Gnome.

probably depends on GtkApplication, and would need integration in GTK+ itself

Clearly, both of these positions are flawed. The architecture of the indicator work was designed both for backward compatibility with the systray at the time, and for easy adoption. We have lots of apps using the API’s without either of these points being the case.
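To make the “easy adoption” point concrete, here is a minimal sketch of the pattern an application might use: adopt the indicator API where the platform provides it, and fall back to the classic systray icon everywhere else. This is illustrative only – it assumes the GObject Introspection bindings (AppIndicator3 with GTK 3), and the application id and icon name are hypothetical; exact binding names and versions varied across releases.

```python
import gi
gi.require_version("Gtk", "3.0")
from gi.repository import Gtk

# Probe for the indicator bindings; absence just means "use the fallback".
try:
    gi.require_version("AppIndicator3", "0.1")
    from gi.repository import AppIndicator3
    HAVE_APPINDICATOR = True
except (ImportError, ValueError):
    HAVE_APPINDICATOR = False


def build_menu():
    # The same menu serves both code paths.
    menu = Gtk.Menu()
    quit_item = Gtk.MenuItem(label="Quit")
    quit_item.connect("activate", Gtk.main_quit)
    menu.append(quit_item)
    menu.show_all()
    return menu


menu = build_menu()

if HAVE_APPINDICATOR:
    # Platforms with an indicator service (e.g. Unity) get an app indicator.
    indicator = AppIndicator3.Indicator.new(
        "example-app",                # hypothetical application id
        "application-default-icon",   # hypothetical icon name
        AppIndicator3.IndicatorCategory.APPLICATION_STATUS,
    )
    indicator.set_status(AppIndicator3.IndicatorStatus.ACTIVE)
    indicator.set_menu(menu)
else:
    # Elsewhere, fall back to the classic systray icon.
    status_icon = Gtk.StatusIcon.new_from_icon_name("application-default-icon")
    status_icon.connect(
        "popup-menu",
        lambda icon, button, time: menu.popup(None, None, None, None, button, time),
    )

Gtk.main()
```

Apps which followed this kind of pattern blend in on Unity and degrade gracefully elsewhere, which is exactly the behaviour an optional external dependency is meant to enable.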

we wished there was some constructive discussion around it, pushed by the libappindicator developers; but it didn’t happen

We made the proposal, it was rejected. I can tell you that the people who worked on the proposal consider themselves Gnome people, and they feel they did what was required, and stopped when it was clear they were not going to be accepted. I’ve had people point to this bullet and say “you should have pushed harder”. But proposing an *external* dependency is not the same as trying to convince Shell to adopt something as the mainstream effort. It’s saying “hey, here’s a valid set of API’s apps might want to embrace, let’s let them do so”.

there’s nothing in GNOME needing it

This is a very interesting comment. It’s saying “no Gnome apps have used these API’s”. But the Gnome apps in question were looking to this very process for approval of their desire to use the API’s. You cannot have a process to pre-approve API’s, then decline to do so because “nobody has used the API’s which are not yet approved”. You’re either saying “we just rubber stamp stuff here, go ahead and use whatever you want”, or you’re being asinine.

It’s also saying that Unity is not “in GNOME”. Clearly, a lot of Unity work depends on the adoption of these API’s for a smooth and well-designed panel experience. So once again, we have a statement that Unity is “competition with Gnome” and not “competition within Gnome”.

And finally, it’s predicating this decision on the idea that being “in Gnome” is the sole criterion of goodness. There is a cross-desktop specification which defines the appindicator work clearly. The fact that KDE apps Just Work on Unity is thanks to the work done to make this a standard. Gnome devs participated in the process, but appeared not to stick with it. Many or most of the issues they raised were either addressed in the spec or in the implementations of it. They say now that they were not taken seriously, but a reading of the mailing list threads suggests otherwise.

It’s my view that cross-desktop standards are really important. We host both Kubuntu and Ubuntu under our banner, and without such collaboration, that would be virtually impossible. I want Banshee to work as well under Kubuntu as Amarok can under Ubuntu.

What can be done?

This is a critical juncture for the leadership of Gnome. I’ll state plainly that I feel the long tail of good-hearted contributors to Gnome and Gnome applications are being let down by a decision-making process that has let competitive dynamics diminish the scope of Gnome itself. Ideas that are not generated “at the core” have to fight incredibly and unnecessarily hard to get oxygen. Ask the Zeitgeist team. Federico is a hero, but getting room for ideas to be explored should not feel like a frontal assault on a machine gun post.

This is no way to lead a project. This is a recipe for a project that loses great people to environments that are more open to different ways of seeing the world. Elementary. Unity.

Embracing those other ideas and allowing them to compete happily and healthily is the only way to keep the innovation they bring inside your brand. Otherwise, you’re doomed to watching them innovate and then having to “relayout” your own efforts to keep up, badmouthing them in the process.

We started this with a strong, clear statement: Unity is a shell for Gnome. Now Gnome leadership have to decide if they want the fruit of that competition to be an asset to Gnome, or not.

A blessing in disguise

Aaron’s blog post made me think that the right way forward might be to bolster and strengthen the forum for cross-desktop collaboration: FreeDesktop.org.

I have little optimism that the internal code dynamics of Gnome can be fixed – I have seen too many cases where a patch which implements something needed by Unity is dissed, then reimplemented differently, or simply left to rot, to believe that the maintainers in Gnome who have a competitive interest on one side or the other will provide a level playing field for this competition.

However, we have shown a good ability to collaborate around FD.o with KDE and other projects. Perhaps we could strengthen FreeDesktop.org and focus our efforts at collaboration around the definition of standards there. Gnome has failed to take that forum seriously, as evidenced by the frustrations expressed elsewhere. But perhaps if we had both Unity and KDE working well there, Gnome might take a different view. And that would be very good for the free software desktop.

I spent a lot of time observing our community, this release. For some reason I was curious to see how our teams work together, what the dynamic is, how they work and play together, how they celebrate and sadly, also how they mourn. So I spent a fair amount more time this cycle reading lists from various Ubuntu teams, reading minutes from governance meetings for our various councils, watching IRC channels without participating, just to get a finger on the pulse.

Everywhere I looked I saw goodness: organised, motivated, cheerful and constructive conversations. Building a free OS involves an extraordinary diversity of skills, and what’s harder is that it requires merging the contributions from so many diverse disciplines and art forms. And yet, looking around the community, we seem to have found patterns for coordination and collaboration that buffer the natural gaps between all the different kinds of activities that go on.

There are definitely things we can work on. We have to stay mindful of the fact that Ubuntu is primarily a reflection of what gets done in the broader open source ecosystem, and stay committed to transmitting their work effectively, in high quality (and high definition :-)) to the Ubuntu audience. We have to remind those who are overly enthusiastic about Ubuntu that fanboyism isn’t cool; I saw a bit of “we rock, you suck”, and that’s not appropriate. But I also saw folks stepping in and reminding those who cross the line that our values as a community are important, and the code of conduct most important of all.

So I have a very big THANK YOU for everyone. This is our most valuable achievement: making Ubuntu a great place to get stuff done that has a positive impact on literally millions of people. Getting that right isn’t technical, but it’s hard and complex work. And that’s what makes the technical goodness flow.

In particular, I’d like to thank those who have stepped into responsibilities as leaders in large and small portions of our Ubuntu universe. Whether it’s organising a weekly newsletter, coordinating the news team, arranging the venue for a release party, reviewing translations from new translators in your language, moderating IRC or reviewing hard decisions by IRC moderators, planning Kubuntu or leading MOTU’s, the people who take on the responsibility of leadership are critical to keeping Ubuntu calm, happy and productive.

But I’d also like to say that what made me most proud was seeing folks who might not think of themselves as leaders, stepping up and showing leadership skills.

There are countless occasions when something needs to be said, or something needs to get done, but where it would be easy to stay silent or let it slip, and I’m most proud of the fact that many of the acts of leadership and initiative I saw weren’t by designated or recognised leaders, they were just part of the way teams stayed cohesive and productive. I saw one stroppy individual calmly asked to reconsider their choice of words and pointed to the code of conduct by a newcomer to Ubuntu. I saw someone else step up and lead a meeting when the designated chairman couldn’t make it. That’s what makes me confident Ubuntu will continue to grow and stay sane as it grows. That’s the really daunting thing for me – as it gets bigger, it depends on a steady supply of considerate and thoughtful people who are passionate about helping do something amazing that they couldn’t do on their own. It’s already far bigger than one person or one company – so we’re entirely dependent on broader community commitment to the values that define the project.

So, to everyone who participates, thank you and please feel empowered to show leadership whenever you think we could do better as a community. That’s what will keep us cohesive and positive. That’s what will make sure the effort everyone puts into it will reach the biggest possible audience.

With that said, well done everyone on a tight but crisp post-LTS release. Maverick was a challenge: we wanted to realign the cycle slightly, which compressed matters, but hopefully gives us a more balanced April / October cadence going forward, based on real data for real global holiday and weather patterns :-). There was an enormous amount of change embraced and also change deferred, wisely. You all did brilliantly. And so, ladies and gentlemen, I give you Mr Robbie Williamson and the Maverick Release Announcement. Grab your towel and let’s take the Meerkat out on a tour of the Galaxy ;-)

Tribalism is the enemy within

Friday, July 30th, 2010

Tribalism is when one group of people start to think people from another group are “wrong by default”. It’s the great-granddaddy of racism and sexism. And the most dangerous kind of tribalism is completely invisible: it has nothing to do with someone’s “birth tribe” and everything to do with their affiliations: where they work, which sports team they support, which linux distribution they love.

There are a couple of hallmarks of tribal argument:

1. “The other guys have never done anything useful”. Well, let’s think about that. All of us wake up every day, with very similar ambitions and goals. I’ve travelled the world and I’ve never met a single company, or country, or church, where *everybody* there did *nothing* useful. So if you see someone saying “Microsoft is totally evil”, that’s a big red flag for tribal thinking. It’s just like someone saying “All black people are [name your prejudice]”. It’s offensive nonsense, and you would be advised to distance yourself from it, even if it feels like it would be fun to wave that pitchfork for a while.

2. “Evidence contrary to my views doesn’t count.” So, for example, when a woman makes it to the top of her game, “it’s because she slept her way there”. Offensive nonsense. And similarly, when you see someone saying “Canonical didn’t actually sponsor that work by that Canonical employee, that was done in their spare time”, you should realize that’s likely to be offensive nonsense too.

Let’s be clear: tribalism makes you stupid. Just like it would be stupid not to hire someone super-smart and qualified because they’re purple, or because they are female, it would be stupid to refuse to hear and credit someone with great work just because they happen to be associated with another tribe.

The very uncool thing about being a fanboy (or fangirl) of a project is that you’re openly declaring both a tribal affiliation and a willingness to reject the work of others just because they belong to a different tribe.

One of the key values we hold in the Ubuntu project is that we expect everyone associated with Ubuntu to treat people with respect. It’s part of our code of conduct – it’s probably the reason we *pioneered* the use of codes of conduct in open source. I and others who founded Ubuntu have seen how easily open source projects descend into nasty, horrible and unproductive flamewars when you don’t exercise strong leadership away from tribal thinking.

Now, bad things happen everywhere. They happen in Ubuntu – and because we have a huge community, they are perhaps more likely to happen there than anywhere else. If we want to avoid human nature’s worst consequences, we have to work actively against them. That’s why we have strong leadership structures, which hopefully put people who are proven NOT to be tribal in nature into positions of responsibility. It takes hard work and commitment, but I’m grateful for the incredible efforts of all the moderators and council members and leaders in LoCo teams across this huge and wonderful project, for the leadership they exercise in keeping us focused on doing really good work.

It’s hard, but sometimes we have to critique people who are associated with Ubuntu, because they have been tribal. Hell, sometimes I and others have to critique ME for small-minded and tribal thinking. When someone who calls herself “an Ubuntu fan” stands up and slates the work of another distro, we quietly reach out to that person and point out that it’s not the Ubuntu way of doing things. We don’t spot them all, but it’s a consistent practice within the Ubuntu leadership team: our values are more important than winning or losing any given debate.

Do not be drawn into a tribal argument on Ubuntu’s behalf

Right now, for a number of reasons, there is a fever pitch of tribalism in plain sight in the free software world. It’s sad. It’s not constructive. It’s ultimately going to be embarrassing for the people involved, because the Internet doesn’t forget. It’s certainly not helping us lift free software to the forefront of public expectations of what software can be.

I would like to say this to everyone who feels associated with Ubuntu: hold fast to what you know to be true. You know your values. You know how hard you work. You know what an incredible difference your work has made. You know that you do it for a complex mix of love and money, some more the former, others more the latter, but fundamentally you are all part of Ubuntu because you think it’s the most profound and best way to spend your time. Be proud of that.

There is no need to get into a playground squabble about your values, your ethics, your capabilities or your contribution. If you can do better, figure out how to do that, but do it because you are inspired by what makes Ubuntu wonderful: free software, delivered freely, in a way that demonstrates real care for the end user. Don’t do it because you feel intimidated or threatened or belittled.

The Gregs are entitled to their opinions, and folks like Jono and Dylan have set an excellent example in how to rebut and move beyond them.

I’ve been lucky to be part of many amazing things in life. Ubuntu is, far and away, the best of them. We can be proud of the way we are providing leadership: on how communities can be a central part of open source companies, on how communities can be organised and conduct themselves, on how the economics of free software can benefit more than just the winning distribution, on how a properly designed user experience combined with free software can beat the best proprietary interfaces any day. But remember: we do all of those things because we believe in them, not because we want to prove anybody else wrong.

Linaro: Accelerating Linux on ARM

Thursday, June 3rd, 2010

At our last UDS in Belgium it was notable how many people were interested in the ARM architecture. There have always been sessions at UDS about lightweight environments for the consumer electronics and embedded community, but this felt tangibly different. I saw questions being asked about ARM in server and cloud tracks, for example, and in desktop tracks. That’s new.

So I’m very excited at today’s announcement of Linaro, an initiative by the ARM partner ecosystem including Freescale, IBM, Samsung, ST-Ericsson and TI, to accelerate and unify the field of Linux on ARM. That is going to make it much easier for developers to target ARM generally, and build solutions that can work with the amazing diversity of ARM hardware that exists today.

The ARM platform has historically been super-specialized and hence fragmented – multiple different ARM-based CPUs from multiple different ARM silicon partners all behaved differently enough that one needed to develop different software for each of them. Boot loaders, toolchains, kernels, drivers and middleware are all fragmented today, and of course there’s additional fragmentation associated with Android vs mainline on ARM, but Linaro will go a long way towards cleaning this up and making it possible to deliver a consistent platform experience across all of the major ARM hardware providers.

Having played with a prototype ARM netbook, I was amazed at how cool it felt. Even though it was just a prototype it was super-thin, and ran completely cool. It felt like a radical leap forward for the state of the art in netbooks. So I’m a fan of fanless computing, and can’t wait to get one off the shelf :-)

For product developers, the big benefit from Linaro will be reduced time to market and increased choice of hardware. If you can develop your software for “linux on ARM”, rather than a specific CPU, you can choose the right hardware for your project later in the development cycle, and reduce the time required for enablement of that hardware. Consumer electronics product development cycles should drop significantly as a result. That means that all of us get better gadgets, sooner, and great software can spread faster through the ecosystem.

Linaro is impressively open: www.linaro.org has details of open engineering summits, an open wiki, mailing lists etc. The teams behind the work are committed to upstreaming their output so it will appear in all the distributions, sooner or later. The images produced will all be royalty free. And we’re working closely with the Linaro team, so the cadence of the releases will be rigorous, with a six month cycle that enables Linaro to include all work that happens in Ubuntu in each release of Linaro. There isn’t a “whole new distribution”, because a lot of the work will happen upstream, and where bits are needed, they will be derived from Ubuntu and Debian, which is quite familiar to many developers.

The nature of the work seems to break down into four different areas.

First, there are teams focused on enabling specific new hardware from each of the participating vendors. Over time, we’ll see real convergence in the kernel used, with work like Grant Likely’s device tree forming the fabric by which differences can be accommodated in a unified kernel. As an aside, we think we can harness the same effort in Ubuntu on other architectures as well as ARM to solve many of the thorny problems in linux audio support.

Second, there are teams focused on the middleware which is common to all platforms: choosing APIs and ensuring that those are properly maintained and documented, so that people can deliver many different user experiences with best-of-breed open tools.

Third, there are teams focused on advancing the state of the art. For example, these teams might accelerate the evolution of the compiler technology, or the graphics subsystem, or provide new APIs for multitouch gestures, or geolocation. That work benefits the entire ecosystem equally.

And finally, there are teams aimed at providing out of the box “heads” for different user experiences. By “head” we mean a particular user experience, which might range from the minimalist (console, for developers) to the sophisticated (like KDE for a netbook). Over time, as more partners join, the set of supported “heads” will grow – ideally, in future you’ll be able to bring up a Gnome head, or a KDE head, or a Chrome OS head, or an Android head, or a MeeGo head, trivially. We already have good precedent for this in Ubuntu with support for KDE, Gnome, LXDE and server heads, so everyone’s confident this will work well.

The diversity in the Linux ecosystem is fantastic. In part, Linaro grows that diversity: there’s a new name that folks need to be aware of and think about. But importantly, Linaro also serves to simplify and unify pieces of the ecosystem that have historically been hard to bring together. If you know Ubuntu, then you’ll find Linaro instantly familiar: we’ll share repositories to a very large extent, so things that “just work” in Ubuntu will “just work” with Linaro too.

Six-month cycles are great. Now let’s talk about meta-cycles: broader release cycles for major work. I’m very interested in a cross-community conversation about this, so will sketch out some ideas and then encourage people from as many different free software communities as possible to comment here. I’ll summarise those comments in a follow-up post, which will no doubt be a lot wiser and more insightful than this one :-)

Background: building on the best practice of cadence

The practice of regular releases, and now time-based releases, is becoming widespread within the free software community. From the kernel, to GNOME and KDE, to X, and in distributions like Ubuntu and Fedora, the idea of a regular, predictable cycle is now better understood and widely embraced. Many smarter folks than me have articulated the benefits of such a cadence: energising the whole community, REALLY releasing early and often, shaking out good and bad code, rapid course correction.

There has been some experimentation with different cycles. I’m involved in projects that have 1 month, 3 month and 6 month cycles, for different reasons. They all work well.

…but addressing the needs of the longer term

But there are also weaknesses to the six-month cycle:

  • It’s hard to communicate to your users that you have made some definitive, significant change.
  • It’s hard to know what to support for how long; you obviously can’t support every release indefinitely.

I think there is growing insight into this, on both sides of the original “cadence” debate.

A tale of two philosophies, perhaps with a unifying theory

A few years back, at AKademy in Glasgow, I was in the middle of a great discussion about six month cycles. I was a passionate advocate of the six month cycle, and interested in the arguments against it. The strongest one was the challenge of making “big bold moves”.

“You just can’t do some things in six months” was the common refrain. “You need to be able to take a longer view, and you need a plan for the big change.” There was a lot of criticism of GNOME for having “stagnated” due to the inability to make tough choices inside a six month cycle (and with perpetual backward compatibility guarantees). Such discussions often become ideological, with folks on one side saying “you can evolve anything incrementally” and others saying “you need to make a clean break”.

At the time of course, KDE was gearing up for KDE 4.0, a significant and bold move indeed. And GNOME was quite happily making its regular releases. When the KDE release arrived, it was beautiful, but it had real issues. Somewhat predictably, the regular-release crowd said “see, we told you, BIG releases don’t work”. But since then KDE has knuckled down with regular, well managed, incremental improvements, and KDE is looking fantastic. Suddenly, the big bold move comes into focus, and the benefits become clear. Well done KDE :-)

On the other side of the fence, GNOME is now more aware of the limitations of indefinite regular releases. I’m very excited by the zest and spirit with which the “user experience MATTERS” campaign is being taken up in Gnome, there’s a real desire to deliver breakthrough changes. This kicked off at the excellent Gnome usability summit last year, which I enjoyed and which quite a few of the Canonical usability and design folks participated in, and the fruits of that are shaping up in things like the new Activities shell.

But it’s become clear that a change like this represents a definitive break with the past, and might take more than a single six month release to achieve. And most important of all, that this is an opportunity to make other, significant, distinctive changes. A break with the past. A big bold move. And so there’s been a series of conversations about how to “do a 3.0”, in effect, how to break with the tradition of incremental change, in order to make this vision possible.

It strikes me that both projects are converging on a common set of ideas:

  • Rapid, predictable releases are super for keeping energy high and code evolving cleanly and efficiently, they keep people out of a deathmarch scenario, they tighten things up and they allow for a shakeout of good and bad ideas in a coordinated, managed fashion.
  • Big releases are energising too. They are motivational, they make people feel like it’s possible to change anything, they release a lot of creative energy and generate a lot of healthy discussion. But they can be a bit messy, things can break on the way, and that’s a healthy thing.

Anecdotally, there are other interesting stories that feed into this.

Recently, the Python community decided that Python 3.0 would have a shorter cycle than the usual Python release. The 3.0 release is serving to shake out the ideas and code for 3.x, but it won’t be heavily adopted itself, so it doesn’t really make sense to put a lot of effort into maintaining it – get it out there, have a short cycle, and then invest in quality for the next cycle, because 3.x will be much more heavily used than 3.0. This reminds me a lot of KDE 4.0.

So, I’m interested in gathering opinions, challenges, ideas, commitments, hypotheses, etc. about the idea of meta-cycles and how we could organise ourselves to make the most of this. I suspect that we can define a best practice, which includes regular releases for continuous improvement on a predictable schedule, and ALSO defines a good practice for how MAJOR releases fit into that cadence, in a well structured and manageable fashion. I think we can draw on the experiences in both GNOME and KDE, and other projects, to shape that thinking.

This is important for distributions, too

The major distributions tend to have big releases, as well as more frequent releases. RHEL has Fedora, Ubuntu makes LTS releases, Debian takes cadence to its logical continuous integration extreme with Sid and Testing :-).

When we did Ubuntu 6.06 LTS we said we’d do another LTS in “2 to 3 years”. When we did 8.04 LTS we said that the benefits of predictability for LTS releases are such that it would be good to say in advance when the next LTS would be. I said I would like that to be 10.04 LTS, a major cycle of 2 years, unless the opportunity came up to coordinate major releases with one or two other major distributions – Debian, Suse or Red Hat.

I’ve spoken with folks at Novell, and it doesn’t look like there’s an opportunity to coordinate for the moment. In conversations with Steve McIntyre, the current Debian Project Leader, we’ve identified an interesting opportunity to collaborate. Debian is aiming for an 18 month cycle, which would put their next release around October 2010, the same time as the Ubuntu 10.10 release. Potentially, then, we could defer the Ubuntu LTS till 10.10, coordinating and collaborating with the Debian project for a release with very similar choices of core infrastructure. That would make sharing patches a lot easier, a benefit both ways. Since there will be a lot of folks from Ubuntu at Debconf, and hopefully a number of Debian developers at UDS in Barcelona in May, we will have good opportunities to examine this in detail. If there is goodwill, excitement and broad commitment to such an idea from Debian, I would be willing to promote the idea of deferring the LTS from 10.04 to 10.10 LTS.

Questions and options

So, what would the “best practices” of a meta-cycle be? What sorts of things should be considered in planning for these meta-cycles? What problems do they cause, and how are those best addressed? How do short term (3 month, 6 month) cycles fit into a broader meta-cycle? Asking these questions across multiple communities will help test the ideas and generate better ones.

What’s a good name for such a meta-cycle? Meta-cycle seems… very meta.

Is it true that the “first release of the major cycle” (KDE 4.0, Python 3.0) is best done as a short cycle that does not get long term attention? Are there counter-examples, or better examples, of this?

Which release in the major cycle is best for long term support? Is it the last of the releases before major new changes begin (Python 2.6? GNOME 2.28?), or is it the result of a couple of quick iterations on the X.0 release (KDE 4.2? GNOME 3.2?)? Does it matter? I do believe that it’s worthwhile for upstreams to support an occasional release for a longer time than usual, because that’s what large organisations want.

Does the cycle need to be a whole number of years? For example, is 2.5 years a good idea? Personally, I think not. Conferences and holidays tend to happen at the same time of the year every year, and it’s much, much easier to think in terms of whole-number-of-years cycles. But in informal conversations about this, some people have said 18 months, and others have said 30 months (2.5 years) might suit them. I think they’re craaaazy; what do you think?
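To make that calendar argument concrete, here is a minimal sketch in Python – my own illustration, not anything from the projects mentioned above – assuming a hypothetical project that releases in April 2010 and then repeats on a fixed cadence:

    # Hypothetical example: where releases land on the calendar for
    # different cycle lengths, starting from April 2010.
    def release_months(start_year, start_month, cycle_months, count):
        """Yield (year, month) for `count` releases on a fixed cadence."""
        year, month = start_year, start_month
        for _ in range(count):
            yield year, month
            index = year * 12 + (month - 1) + cycle_months
            year, month = index // 12, index % 12 + 1

    # Compare 1-year, 18-month, 2-year and 30-month cadences.
    for cycle in (12, 18, 24, 30):
        dates = ", ".join(f"{y}-{m:02d}"
                          for y, m in release_months(2010, 4, cycle, 5))
        print(f"{cycle:>2}-month cycle: {dates}")

Running it shows that whole-number-of-years cycles keep landing in the same month, while the 18- and 30-month cycles bounce between April and October, so every planning round falls in a different season from the last.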

If it’s 2 years or 3 years, which is better for you? Hardware guys tend to say “2 years!” to get the benefit of new hardware, sooner. Software guys say “3 years!” so that they have less change to deal with. Personally, I am in the 2 years camp, but I think it’s more important to be aligned with the pulse of the community, and if GNOME / KDE / Kernel wanted 3 years, I’d be happy to go with it.

How do the meta-cycles of different projects come together? Does it make sense to have low-level, hardware-related things on a different cycle to high-level, user visible things? Or does it make more sense to have a rhythm of life that’s shared from the top to the bottom of the stack?

Would it make more sense to stagger long term releases based on how they depend on one another, like GCC then X then OpenOffice? Or would it make more sense to have them all follow the same meta-cycle, so that we get big breakage across the stack at times, and big stability across the stack at others?

Are any projects out there already doing this?

Is there any established theory or practice for this?

A cross-community conversation

If you’ve read this far, thank you! Please do comment, and if you are interested then please do take up these questions in the communities that you care about, and bring the results of those discussions back here as comments. I’m pretty sure that we can take the art of software to a whole new level if we take advantage of the fact that we are NOT proprietary, and this is one of the key ways we can do it.

This is not the end of capitalism

Tuesday, November 4th, 2008

Some of the comments on my last post on the economic unwinding of 2008 suggested that people think we are witnessing the end of capitalism and the beginning of a new socialist era.

I certainly hope not.

I think a world without regulated capitalism would be a bleak one indeed. I had the great privilege to spend a year living in Russia in 2001/2002, and the visible evidence of the destruction wrought by central planning was still very much present. We are all ultimately human, with human failings, whether we work for a state planning agency or a private company, and those failings have consequences either way. To think that moving all private enterprise into state hands will somehow create a panacea of efficiency and sustainability is to ignore the stark lessons of the 20th century.

The leaders and decision makers in a centrally-planned economy are just as fallible as those in a capitalist one – they would probably be the same people! But state enterprises lack the forces of evolution that apply in a capitalist economy – state enterprises are rarely if ever allowed to fail. And hence bad ideas are perpetuated indefinitely, and an economy becomes dysfunctional to the point of systemic collapse. It is the fact that private enterprises fail which keeps industries vibrant. The tension between the imperative to innovate and the consequences of failure drives capitalist economies to evolve quickly. Despite all of the nasty consequences that we have seen, and those we have yet to see, of capitalism gone wrong, I am still firmly of the view that society must tap into its capitalist strengths if it wants to move forward.

But I chose my words carefully when I said “regulated capitalism”. I used to be a fan of Adam Smith’s invisible hand, and a great admirer of Ayn Rand’s vision. Now, I feel differently. Left to its own devices, the market will tend to reinforce the position of those who were successful in the past, at the exclusion of those who might create future successes. We see evidence of this all the time. The heavyweights that define an industry tend to do everything in their power to prevent innovation from changing the rules that enrich them.

A classic example of that is the RIAA’s behaviour – in the name of “saving the music industry” they have spent the past ten years desperately trying to keep it in the analog era to save their members, with DRM and morally unjustifiable special-interest lobbying around copyright rules that affect the whole of society.

Similarly, patent rules tend to evolve to suit the companies that hold many patents, rather than the people who might generate the NEXT set of innovative ideas. Of course, the lobbying is dressed up in language that describes it as being “in the interests of innovation”, but at heart it is really aimed at preserving the privileged position of the incumbent.

In South Africa, the incumbent monopoly telco, which was a state enterprise until it was partially privatized in 1996, has systematically delayed, interfered, challenged and obstructed the natural process of deregulation and the creation of a healthy competitive sector. Private interests act in their own interest, by definition, so powerful private interests tend to drive the system in ways that make THEM healthier rather than ways that make society healthier.

Left to their own devices, private companies will tend to gobble one another up, and create monopolies. Those monopolies will then undermine every potential new entrant, using whatever tactics they can dream up, from FUD to lobbying to thuggery.

So, I’m a fan of regulated capitalism.

We need regulation to ensure that society’s broader needs, like environmental sustainability, are met while private companies pursue their profits. We also need regulation to ensure that those who manage national and international infrastructure, whether it’s railways or power stations or financial systems, don’t cook the books in a way that lets them declare fat profits and fatter bonuses while driving those systems into crisis.

But effective regulation is not the same as state management and supervision. I would much rather have private companies managing power stations competitively, than state agencies doing so as part of a complacent government monopoly.

Good regulation is very hard. Over the years I’ve interacted with a few different regulatory authorities, and I sympathise with the problems they encounter.

First, to be an effective regulator, you need superb talent. And for that you need to pay – talent follows the money and the lights, whether we like it or not, so to design a system on other assumptions is to design it for failure. My ideal regulator is an insightful genius working for the common good, but since I’m never likely to meet that person, a practical goal is to encourage regulators to be small but very well funded, with key salaries and performance measures that are just behind the industries they are supposed to regulate. Regulators must be able to be fired – no sense in offering someone a private sector salary and public sector accountability. Unfortunately, most regulators end up going the other way, hiring more and more people of average competence, until they become both expensive and ineffective.

Second, a great regulator needs to be independent. You’re the guy who tells people to stop doing what will hurt society; it’s very hard to do that to your friends. A regulatory job is a lonely job, which is why you hear so many stories of regulators being wined and dined by the industries they regulate only to make sure they don’t look too hard in the back room. A great regulator needs to know a lot about an industry, but be independent of that industry. Again, my ideal is someone who has made a good living in a sector, knows it backwards, can justify their high price, but wants to make a contribution to society.

Third, a great regulator needs to have teeth and muscle. It has been very frustrating for me to watch the South African telecoms regulator get tied up in court by Telkom, and stymied by government department inadequacy. Regulators need to be able to drive things forward, they need to be able to change the way companies behave, and they cannot rely on moral suasion to do so.

And fourth, a regulator has to make very tough decisions about innovation, which amount to venture capital decisions – to make them well, you have to be able to tell the future. For example, when an industry changes, as all industries change, how should the rules evolve? When a new need for society is identified, like the need to address climate change early and systemically, how should the rules evolve? Regulators need to move forward as fast as the industries they regulate, and they need to make decisions about things we don’t yet understand. And even when you regulate, you may not be able to stop an impending crisis. It’s very easy to criticize Greenspan today for his light-touch regulation of hedge funds and derivatives, but it’s not at all clear to me that regulation would have made a difference; I think it would simply have moved the shadow global financial system offshore.

So regulation is extremely difficult, but also very much worth investing in if you are trying to run a healthy, vibrant, capitalist society.

Coming back to the original suggestion that sparked this blog – I’m sure we will see a lot of failed capitalists in the future. Hell, I might join their ranks; I wouldn’t be the first ;-). But that doesn’t spell the end of capitalism, only the opportunity to start again – smarter.

There is no victor of a flawed election

Monday, February 4th, 2008

The tragedy unfolding in Kenya is a reminder of the fact that a flawed election leaves the “winner” worse off than he would have been had he lost a fair contest.

Whoever is President at the conclusion of this increasingly nasty standoff inherits an economy that is wounded, a parliament that is angry and divided, and a populace that know their will has been disregarded. And he will face a much increased risk of personal harm at the hands of those who see assassination as no worse a crime than electoral fraud. That is at best a Pyrrhic victory. It will be extremely difficult to get anything done under those circumstances.

There is, however, some cause for optimism amidst all the gloom. It seems that many Kenyan MPs who were fingered for corruption during their previous terms were summarily dismissed by their constituencies, despite tribal affiliations. In other words, if your constituents think you’re a crook, they will vote you out even if you share their ethnicity.

That shows the beginnings of independent-minded political accountability – it shows that voting citizens in Kenya want leaders who are not tainted with corruption, even if that means giving someone from a different tribe their vote. And that is the key shift that is needed in African countries, to give democracy teeth. Ousted MPs and former presidents are subject to investigation and trial, and no amount of ill-gotten loot in the bank is worth the indignity of a stint in jail at the hands of your successor. As Frederick Chiluba has learned, there’s no such thing as an easy retirement from a corrupt administration.

Of course, that makes it likely that those with skeletons in their closets will try even harder to cling to power, for fear of the consequences if they lose their grip on it. Robert Mugabe is no doubt of the opinion that a bitter time in power is preferable to a bitter time after power. But increasingly, voters in Africa are learning that they really can vote for change. And neighboring countries are learning that it hurts their own investment and economic profiles to certify elections as free and fair when they are far from it. It would be much harder for Robert Mugabe to stay in power illegally if he didn’t have *nearly* enough votes to stay there legally. You can fudge an election a little, but it’s very difficult to fudge it when the whole electorate abandons you, and when nobody will lend you their credibility.

The best hope a current president has of a happy retirement is to ensure that the institutions which will pass judgement on him (or her) in future are independent and competent, to ensure that they will stay that way, and to keep their hands clean. It will take time, but I think we are on track to see healthy changes in governance becoming the norm and not the exception in Africa.