Archive for the 'thoughts' Category

ACPI, firmware and your security

Monday, March 17th, 2014

ACPI comes from an era when the operating system was proprietary and couldn’t be changed by the hardware manufacturer.

We don’t live in that era any more.

However, we DO live in an era where any firmware code running on your phone, tablet, PC, TV, wifi router, washing machine, server, or the server running the cloud that hosts your SaaS app, is a threat vector against you.

If you read the catalogue of spy tools and digital weaponry provided to us by Edward Snowden, you’ll see that firmware on your device is the NSA’s best friend. Your biggest mistake might be to assume that the NSA is the only institution abusing this position of trust – in fact, it’s reasonable to assume that all firmware is a cesspool of insecurity courtesy of incompetence of the worst degree from manufacturers, and competence of the highest degree from a very wide range of such agencies.

In ye olden days, a manufacturer would ship Windows, which could not be changed, and they wanted to innovate on the motherboard, so they used firmware to present a standard interface for things like power management to a platform that could not be modified to accommodate their innovation.

Today, that same manufacturer can innovate on the hardware and publish a patch for Linux to express that innovation – and Linux is almost certainly the platform that matters. If Windows enters this market then the Windows driver model can evolve to give manufacturers this same ability to innovate in the Windows world, where proprietary unverifiable blobs are the norm.

Arguing for ACPI on your next-generation device is arguing for a trojan horse of monumental proportions to be installed in your living room and in your data centre. I’ve been to Troy, there is not much left.

We’ve spent a good deal of time working towards a world where you can inspect the code that is running on any device you run. In Ubuntu we work hard to make sure that any issues in that code can be fixed and delivered right away to millions of users. Bruce Schneier wisely calls security a process, not a product. But the processes for finding and fixing problems in firmware are non-existent and not improving.

I would very much like to be part of FIXING the security problem we engineers have created in our rush to ship products in the olden days. I’m totally committed to that.

So from my perspective:

  • Upstream kernel is the place to deliver the software portion of the innovation you’re selling. We have great processes now to deliver that innovation to users, and the same processes help us improve security and efficiency too.
  • Declarative firmware that describes hardware linkages and dependencies but doesn’t include executable code is the best chance we have of real bottom-up security. The Linux device tree is a very good starting point. We have work to do to improve it, and we need to recognise the importance of being able to fix declarations over the life of a product, but we must not introduce blobs in order to shortcut that process (a toy sketch of the distinction follows this list).
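
To make the distinction concrete, here is a toy sketch (invented node names and properties, not an actual device tree binding, and expressed as Python data purely for illustration): a declarative hardware description is plain data the platform can inspect and validate, with nothing in it that the kernel is ever asked to execute.

    # Toy illustration of "declarative, not executable". The node below is
    # invented for this sketch; a real description would be written as
    # device tree source, but the point is the same: it is pure data.
    SENSOR_NODE = {
        "compatible": "acme,thermal-sensor",  # which driver should bind to this
        "reg": [0x4a000000, 0x100],           # MMIO base address and size
        "interrupts": [42],
        "vdd-supply": "regulator-3v3",        # linkage to another node
    }

    def is_declarative(node):
        """True if the description contains only plain values: nothing
        callable, nothing the platform would have to run on trust."""
        if callable(node):
            return False
        if isinstance(node, dict):
            return all(is_declarative(v) for v in node.values())
        if isinstance(node, (list, tuple)):
            return all(is_declarative(v) for v in node)
        return isinstance(node, (str, int))

    print(is_declarative(SENSOR_NODE))  # True: the kernel interprets it, never executes it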

Let’s do this right. Each generation gets its turn to define the platforms it wants to pass on – let’s pass on something we can be proud of.

Our mission in Ubuntu is to give the world’s people a free platform they can trust.  I suspect a lot of the Linux community is motivated by the same goal regardless of their distro. That also means finding ways to ensure that those trustworthy platforms can’t be compromised elsewhere. We can help vendors innovate AND ensure that users have a fighting chance of privacy and security in this brave new world. But we can’t do that if we cling to the tools of the past. Don’t cave in to expediency. Design a better future, it really can be much healthier than the present if we care and act accordingly.

 

Mistakes made and addressed

Sunday, November 10th, 2013

Occasionally we make mistakes. When we do it’s appropriate to apologise, address them, and take steps to ensure they don’t happen again.

Last week, someone at Canonical made a mistake: from the range of responses we usually take to a trademark issue, they sent out the wrong one. That has been addressed, and steps are being taken to reduce the likelihood of a repeat.

By way of background, there are a number of trademarks around the Ubuntu name and logo which we are required to “enforce” or risk losing them altogether. In normal companies, the rule is that nobody else gets to use your logo. In Canonical, we have a policy that says that there are lots of cases where people DO get to use our name and logo; this is because our policy takes the internet-friendly view that communities need to have rights to a name if they want to feel like they are part of something. We go even further and explicitly allow the use of our name for elements of satire and mirth around Ubuntu. Every country has different rules about trademarks and free speech; our global policy is, by default, more generous than most jurisdictions require.

We do have to “enforce” those trademarks, or we lose them. That means:

  • we have an email address, trademarks@ubuntu.com, where people can request permission to use the name and logo
  • we actively monitor, mostly using standard services, use of the name and logo
  • we aim to ensure that every use of the name and logo is supported by a “license” or grant of permission

As you can imagine, that is a lot of work. A lot of what we find out there is fine, fun, harmless or constructive. Sometimes however it’s pretty nasty: we have had OEMs forging Ubuntu certifications to meet requirements for government tenders, for example.

In order to make the amount of correspondence manageable, we have a range of standard templates for correspondence. They range from the “we see you, what you are doing is fine, here is a license to use the name and logo which you need to have, no need for further correspondence”, through “please make sure you state you are speaking for yourself and not on behalf of the company or the product”, to the “please do not use the logo without permission, which we are not granting unless you actually certify those machines”, and “please do not use Ubuntu in that domain to pretend you are part of the project when you are not”.

Last week, the less-than-a-month-at-Canonical new guy sent out the toughest template letter to the folks behind a “sucks” site. Now, that was not a decision based on policy or guidance; as I said, Canonical’s trademark policy is unusually generous relative to corporate norms in explicitly allowing for this sort of usage. It was a mistake, and there is no question that the various people in the line of responsibility know and agree that it was a mistake. It was no different, however, than a bug in a line of code, which I think most developers would agree happens to the best of us. It just happened to be, in that analogy, a zero-day remote root bug.

The internets went wild, Wired ran a headline accusing Canonical of a campaign to suppress critics, Debian started arguing about whether it should remove all references to the distro-that-shall-not-be-named but then decided to argue about whether it should enforce its own trademarks, which led to an argument about… oh never mind. The point is, people are judging Canonical over this, which is fine and correct in my view, because I am judging Canonical over this too.

Here’s how I’m judging Canonical. Your framework may vary, but I think this is quite a defensible one.

Judge the policy. In this case Canonical has a trademark policy that enables community members to use the marks (good) and allows for satire and sucks sites even in jurisdictions where the local law does not (great!). Failing to have a policy would not be a bonus point in this review :)

Judge the execution of the policy. Canonical does the work needed to maintain the marks; it monitors and responds to requests and notifications around the marks (good). In this case, the wrong action was taken – a new employee was clearly not properly briefed about policy and sensitivities in a key audience for the company (bad).

Judge the response to the incident. Within hours of the publication of a response to our letter, the CEO, COO and legal team reviewed the decision, corrected the action and addressed the matter publicly. I apologised the moment I was made aware of the incident. And I’m reassured that the team in question is taking steps in training and process to minimise the risk of a recurrence.

For those carrying pitchforks and torches on this issue, ask yourself if that would be appropriate to a bug in a line of code in one of many thousands of changes being made monthly by a large team. No? Think about it.

 

On another, more personal note, I made a mistake myself when I used the label “open source tea party” to refer to the vocal non-technical critics of work that Canonical does. That was unnecessary and quite possibly equally offensive to members of the real Tea Party (hi there!) and the people with vocal non-technical criticism of work that Canonical does (hello there!).

For the record, technical critique of open source software is part of what makes open source software so good. It is welcome and appreciated very much at Canonical; getting reviews and feedback and suggestions for improvement from smart people who care is part of why we enjoy writing open source software. There isn’t anything in what I said to suggest that I don’t welcome such technical feedback, but some assumed I was rejecting all feedback, including technical commentary. I was not – I was talking about criticism of software which does not centre on the software itself, but rather on some combination of the motivations of the people who wrote it, or the particular free software license under which it is published, or the policies of the company, or the nationality of the company behind it. Unless critique is focused on improving the software in question, it is pretty much a waste of the time of the people who are trying to improve it. That waste of time is what I had in mind with the comment; nevertheless, it was a thoughtless use of an irrelevant label. Please accept my apologies if you have been a vocal non-technical critic of Canonical’s software and felt offended by the label.

Ubuntu in 2013

Wednesday, December 26th, 2012

This is a time of year to ponder what matters most and choose what we’ll focus on in the year to come. Each of us has our own priorities and perspective, so your goals may be very different to mine. Nevertheless, for everyone in the Ubuntu project, here’s what I’ll be working towards in the coming year, and why.

First, what matters most?

It matters that we not exclude people from our audience. From the artist making scenes for the next blockbuster, to the person who needs a safe way to surf the web once a day, it’s important to me, and to the wider Ubuntu community, that people be able to derive some benefit from our efforts. Some of that benefit might be oblique – when someone prefers XFCE to Unity, they are still benefiting from enormous efforts by hundreds of people to make the core Ubuntu platform, as well as the Xubuntu team’s unique flourish. Even in the rare case where the gift is received ungraciously, the joy is in the giving, and it matters that our efforts paid dividends for others.

In this sense, it matters most that we bring the benefits of free software to an audience which would not previously have had the confidence to be different. If you’ve been arguing over software licenses for the best part of 15 years then you would probably be fine with whatever came before Ubuntu. And perhaps the thing you really need is the ability to share your insights and experience with all the people in your life who wouldn’t previously have been able to relate to the things you care about. So we have that interest in common.

It matters that we make a platform which can be USED by anybody. That’s why we’ve invested so much into research and thinking about how people use their software, what kinds of tools they need handy access to, and what the future looks like. We know that there are plenty of smart people whose needs are well served by what existed in the past. We continue to maintain older versions of Ubuntu so that they can enjoy those tools on a stable platform. But we want to shape the future, which means exploring territory that is unfamiliar, uncertain and easy to criticise. And in this regard, we know, scientifically, that Ubuntu with Unity is better than anything else out there. That’s not to diminish the works of others, or the opinions of those that prefer something else; it’s to celebrate that the world of free software now has a face that will be friendly to anybody you care to recommend it to.

It also matters that we be relevant for the kinds of computing that people want to do every day.

That’s why Unity in 2013 will be all about mobile – bringing Ubuntu to phones and tablets. Shaping Unity to provide the things we’ve learned are most important across all form factors, beautifully. Broadening the Ubuntu community to include mobile developers who need new tools and frameworks to create mobile software. Defining new form factors that enable new kinds of work and play altogether. Bringing clearly into focus the driving forces that have shaped our new desktop into one facet of a bigger gem.

It’s also why we’ll push deeper into the cloud, making it even easier, faster and more cost-effective to scale out modern infrastructure on the cloud of your choice, or create clouds for your own consumption and commerce. Whether you’re building out a big data cluster or a super-scaled storage solution, you’ll get it done faster on Ubuntu than on any other platform, thanks to the amazing work of our cloud community. Whatever your UI of choice, having the same core tools and libraries from your phone to your desktop to your server and your cloud instances makes life infinitely easier. Consider it a gift from all of us at Ubuntu.

There will always be things that we differ on between ourselves, and those who want to define themselves by their differences to us on particular points. We can’t help them every time, or convince them of our integrity when it doesn’t suit their world view. What we can do is step back and look at that backdrop: the biggest community in free software, totally global, diverse in their needs and interests, but united in a desire to make it possible for anybody to get a high quality computing experience that is first class in every sense. Wow. Thank you. That’s why I’ll devote most of my time and energy to bringing that vision to fruition. Here’s to a great 2013.

Holistic UI is smarter UX

Tuesday, March 27th, 2012

In the open source community, we celebrate having pieces that “do one thing well”, with lots of orthogonal tools compounding to give great flexibility. But that same philosophy leads to shortcomings on the GUI / UX front, where we want all the pieces to be aware of each other in a deeper way.

For example, we consciously place the notifications in the top right of the screen, avoiding space that is particularly precious (like new tab titles, and search boxes). But the indicators are also in the top right, and they make menus, which drop down into the same space a notification might occupy.

Since notifications are queued, no notification is guaranteed to be displayed instantly anyway, so a smarter notification experience would stay out of the way while you were using indicator menus, or get out of the way when you invoke them. The design story of Ayatana, where we balance the need for focus with the need for awareness, would suggest that we should suppress awareness-oriented things in favour of focus things. So when you’re interacting with an indicator menu, we shouldn’t pop up the notification. But since the notification system and the indicator menu system are separate parts, the UNIX philosophy sells us short in designing a smart, smooth experience, because it says they should each do their thing individually.

Going further, it’s silly that the sound menu next/previous track buttons pop up a notification, because the same menu shows the new track immediately anyway. So the notification, which is purely for background awareness, is distracting from your focus, which is conveying exactly the same information!

But it’s not just the system menus. Apps can play in that space too, and we could be better about shaping the relationship between them. For example, if I’m moving the mouse around in the area of a notification, we should be willing to defer it a few seconds to stay out of the focus. When I stop moving the mouse, or typing in a window in that region, then it’s OK to pop up the notification.
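
As a minimal sketch of that deferral rule (not the real notification-daemon code; the class, names and timings here are invented for illustration), a queue could simply refuse to release a notification until the corner it would occupy has been idle for a moment:

    import time
    from collections import deque

    class NotificationQueue(object):
        DEFER_SECONDS = 2.0           # how long to stay out of the way after activity

        def __init__(self):
            self.pending = deque()
            self.menu_open = False    # an indicator menu is occupying that screen space
            self.last_activity = 0.0  # last pointer/keyboard activity near that space

        def notify(self, summary):
            self.pending.append(summary)

        def activity_in_corner(self):
            self.last_activity = time.time()

        def tick(self):
            """Called periodically: show the next notification only when the
            top-right corner is genuinely idle, otherwise keep deferring."""
            if not self.pending or self.menu_open:
                return
            if time.time() - self.last_activity < self.DEFER_SECONDS:
                return
            print("show: %s" % self.pending.popleft())

    q = NotificationQueue()
    q.notify("New mail from Alice")
    q.activity_in_corner()   # the user is busy in that corner right now
    q.tick()                 # nothing shown yet: we defer
    time.sleep(q.DEFER_SECONDS)
    q.tick()                 # corner has been idle long enough: shows the notification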

It’s only by looking at the whole that we can design great experiences. And it’s only by building a community of both system and application developers who care about the whole that we can make those designs real. So, thank you to all of you who approach things this way; we’ve made huge progress, and hopefully there are some ideas here for low-hanging improvements too :)

Government use of Ubuntu

Thursday, March 8th, 2012

Governments are making increasingly effective use of Ubuntu in large-scale projects, from big data to little schools. There is growing confidence in open source in government quarters, and growing sophistication in how they engage with it.

But adopting open source is not just about replacing one kind of part with another. Open source is not just a substitute for shrink-wrapped proprietary software. It’s much more malleable in the hands of industry and users, and you can engage with it very differently as a result. I’m interested in hearing from thought leaders in the civil service on ways they think governments could get much more value from open source by embracing that flexibility. For example, rather than one-size-fits-all software, why can’t we deliver custom versions of Ubuntu for different regions or countries or even departments and purposes? Could we enable the city government of Frankfurt to order PCs with the Ubuntu German Edition pre-installed?

Or could we go further, and enable those governments to participate in the definition and production and certification process? So rather than having to certify exactly the same bits which everyone else is using, they could create a flavour which is still “certified Ubuntu” and fully compatible with the whole Ubuntu ecosystem, can still be ordered pre-installed from global providers like Dell and Lenovo, but has the locally-certified collection of software, customizations, and certifications layered on top?

If we expand our thinking beyond “replacing what went before”, how could we make it possible for the PC companies to deliver much more relevant offerings, and better value to governments by virtue of free software? Most of the industry processes and pipelines were set up with brittle, fixed, proprietary software in mind. But we’re now in a position to drive change, if there’s a better way to do it, and customers to demand it.

So, for a limited time only, you can reach me at governator@canonical.com (there were just too many cultural references there to resist, and it’s not a mailbox that will be needed again soon ;). If you are in the public service, or focused on the way governments and civic institutions can use open source beyond simply ordering large numbers of machines at a lower cost, drop me a note and let’s strike up a conversation.

Here are a few seed thoughts for exploration and consideration.

Local or national Ubuntu editions, certified and pre-installed by global brands

Lots of governments now buy PCs from the world market with Ubuntu pre-installed. Several Canadian tenders have been won by companies bidding with Ubuntu pre-installed on PCs. The same is true in Brazil and Argentina, in China and India and Spain and Germany. We’re seeing countries or provinces that previously had their own-brand local Linux, which they had to build locally and install manually, shifting towards pre-ordering with Ubuntu.

In part, this is possible because the big PC brands have built up enough experience and confidence working with Canonical and Ubuntu to be able to respond to those tenders. You can call up Dell or Lenovo and order tens of thousands of laptops or desktops with Ubuntu pre-installed, and they will show up on time, certified. The other brands are following. It has been a lot of work to reach that point, but we’ve got the factory processes all working smoothly from Shenzhen to Taipei. If you want tens of thousands of units, it all works well.

But Ubuntu, or free software in general, is not Windows. You shouldn’t have to accept the one-size-fits-all story. We saw all of those local editions, or “national linux”, precisely because of the desire that regions have to build something that really suits them well. And Ubuntu, with its diversity of packages, open culture and remix-friendly licensing, is a very good place to start. Many of the Spanish regional distros, for example, are based on Ubuntu. They have the advantage of being shaped to suit local needs better than we can with vanilla Ubuntu, but the disadvantage of being hard to certify with major ISVs or IHVs.

I’m interested in figuring out how we can formalise that flexibility, so that we can get the best of both worlds: local customizations and preferences expressed in a way that is compatible with the rest of the Ubuntu ecosystem, so they can take advantage of all the software and skills and certifications that the ecosystem and brand bring. And so they can order it pre-installed from any major global PC company, no problem, and upgrade to the next version of Ubuntu without losing all the customization work that they did.

Security certifications by local agencies, with policy frameworks and updates

A European defence force has recently adopted Ubuntu widely as part of an agility-enhancing strategy that gives soldiers and office workers secure desktop capabilities from remote locations like… home, or out in the field. There’s some really quite sexy innovation there, but there’s also Ubuntu as we know and love it. In the process of doing the work, it emerged that their government has certified some specific versions of key apps like OpenVPN, and it would be very useful to them if they could ensure that those versions were the ones in use widely throughout the government.

Of course, today, that means manually installing the right version every time, and tracking updates. But Ubuntu could do that work, if it knew enough about the requirements and the policies, and there was a secure way to keep those policies up to date. Could we make the operating system responsive to such policies, even where it isn’t directly managed by some central infrastructure? If Ubuntu “knows” that it’s supposed to behave in a particular way, can we make it do much of the work itself?
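
As a rough sketch of the idea (the package names and version strings are invented, and a real implementation would live in the packaging and policy machinery rather than in a standalone script), the machine itself could audit what it has installed against a certified policy:

    import subprocess

    # Invented example policy: the versions a local agency has certified.
    CERTIFIED = {
        "openvpn": "2.2.1-1ubuntu1",
        "openssl": "1.0.0e-2ubuntu4",
    }

    def installed_version(package):
        """Return the installed version of a package, or None if it is absent."""
        try:
            out = subprocess.check_output(
                ["dpkg-query", "-W", "-f=${Version}", package])
        except subprocess.CalledProcessError:
            return None
        return out.decode().strip() or None

    def audit(policy):
        """Report any package that has drifted from its certified version."""
        for package, wanted in sorted(policy.items()):
            actual = installed_version(package)
            if actual != wanted:
                print("%s: installed %s, policy requires %s" % (package, actual, wanted))

    audit(CERTIFIED)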

The same idea is useful in an organizational setting, too. And the key question is whether we can do that while still retaining both access to the wider Ubuntu ecosystem and compatibility with factory processes, so these machines could be ordered and arrive pre-installed and ready to go.

Local cultural customization

On a less securocratic note, the idea of Ubuntu being tailored to local culture is very appealing. Every region or community has its news sites, its languages, its preferred apps and protocols and conventions. Can we expand the design and definition of the Ubuntu experience so that it adapts naturally to those norms, in a way much richer and more meaningful than we can achieve with Windows today?

What would the key areas of customisation be? Who would we trust to define them? How would we combine the diversity of our LoCo communities with the leadership of Ubuntu and the formality of government or regional authorities? Would we *want* to do that? It’s a very interesting topic, because the value of having officially recognised platforms is just about on a par with the value of having agile, crowdsourced and community-driven customisation. Nevertheless, could we find a model whereby governments or civil groups could underwrite the creation of recognised editions of Ubuntu that adapt themselves to local cultural norms? Would we get a better experience for human beings if we did that?

Local skills development

Many of the “national linux” efforts focus on building small teams of engineers and designers and translators that are tasked with bringing a local flavour to the technology or content in the distro. We have contributors from almost (perhaps actually?) every country, and we have Canonical members in nearly 40 countries. Could those two threads weave together in an interesting way? I’m often struck, when I meet those teams, by the awkwardness of teams that feel like start-ups working inside government departments – it’s never seemed an ideal fit for either party.

Sometimes the teams are very domain focused; one such local-Linux project is almost entirely staffed by teachers, because the genesis of the initiative was in school computing, and they have done well for that purpose.

But could we bring those two threads together? The Ubuntu-is-distributed-already and the local-teams-hired-to-focus-on-local-work threads seem highly complementary; could we create teams which are skilled in distro development work, managed as part of the broader Ubuntu effort, but tasked with local priorities?

Public investments in sector leadership

Savvy governments are starting to ensure that research and development that they fund is made available under open licenses. Whether that’s open content licensing, or open source licensing, or RAND-Z terms, there’s a sensible view that information or tools paid for with public money should be accessible to that public on terms that let them innovate further or build businesses or do analysis of their own.

Some of that investment turns out to be software. For example, governments might prioritise genomics, or automotive, or aerospace, and along the way they might commission chunks of software that are relevant. How could we make that software instantly available to anybody running the relevant local flavour of Ubuntu? Would we do the same with content? How do we do that without delivering Newspeak to the desktop? Are there existing bodies of software which could be open sourced, but they don’t have a natural home, they’re essentially stuck on people’s hard drives or tapes?

 

There are multiple factors driving the move of public institutions to open source – mainly the recognition, after many years, of the quality and flexibility that an open platform provides. Austerity is another source of motivation to change. But participation, the fact that open source can be steered and shaped to suit the needs of those who use it simply through participating in open projects, hasn’t yet been fully explored. Food for thought.

And there’s much more to explore. If this is interesting to you, and you’re in a position to participate in building something that would actually get used in such a context, then please get in touch. Directly via The Governator, or via my office.

Cloudy prognosis for mainframes

Monday, October 24th, 2011

The death of the mainframe is about as elusive as the year of the Linux desktop. But cloud computing might finally present a terminal opportunity, so to speak, to those stalwarts of big business computing, by providing a compelling answer to the twin stories of reliability and throughput that have always been highlights of the big iron pitch.

Advocates of big iron talk about reliability. But with public clouds, we’re learning how to build services that achieve very high levels of reliability despite having low individual node reliability. It doesn’t matter if a single node in the cloud fails – cloud-style architectures route around that damage and keep the overall service available. Just as we dial storage reliability up or down by designing RAID arrays for the right balance of performance and resilience to failure, you can dial service reliability up or down in the cloud by allowing for redundancy. That comes at a price, of course, but the price of an extra 9 is substantially lower when you tackle it cloud-style than when you try and achieve it on a single piece of hardware.
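
The arithmetic behind that claim is simple. As a back-of-the-envelope sketch (the 99% node availability figure is illustrative, not a vendor number): if each node is up with probability p and the service only needs one of n independent replicas, availability is 1 - (1 - p)^n, so each additional replica buys roughly two more nines at linear cost.

    def service_availability(node_availability, replicas):
        """Availability of a service that needs any one of `replicas` nodes up,
        assuming failures are independent."""
        return 1 - (1 - node_availability) ** replicas

    for n in (1, 2, 3):
        print("%d replica(s): %.6f" % (n, service_availability(0.99, n)))
    # 1 replica(s): 0.990000   (two nines)
    # 2 replica(s): 0.999900   (four nines)
    # 3 replica(s): 0.999999   (six nines)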

The other big strength of big iron was always throughput. Customers will pay for it, so mainframe vendors were always happy to oblige them. But again, it’s hard to beat the throughput of a Hadoop cluster, and even harder to scale the throughput of a mainframe as cost-effectively as one can scale a private cloud infrastructure underneath Hadoop.

I’m not suggesting insurance companies will throw away their mainframes. They’re working, they’re paid for, so they’ll stick around. But the rapid adoption of cloud-based architectures is going to make it very difficult to consolidate future IT onto mainframes (something that happened in every prior generation) and is also going to reduce the incentive for doing so in the first place. After 20 years of imminent irrelevance, there’s finally a real reason to think their time is up.

Innovation and OpenStack: Lessons from HTTP

Thursday, September 8th, 2011

OpenStack is facing an important choice: does it define a new set of APIs, one of many such efforts in cloud infrastructure, or does it build around the existing AWS APIs? So far, OpenStack has had it both ways, with some new API work and also some AWS-based effort. I’m writing to make the case for a tighter definition of mission around the de facto standard infrastructure APIs of EC2, S3 and a few other elements of AWS.

What prompted this blog was my overhearing (or, seeing an email on a list) the statement that cloud infrastructure projects like OpenStack, Eucalyptus and others should “innovate at the level of the API and infrastructure concepts”. I’m of the view that any projects which try to do so will fail and are not worth spending your or my time on. They are going to be about as successful as projects that try to reinvent HTTP to make it better/faster/cleaner/whatever. Which is to say – not successful at all, because no new protocol with the same conceptual goals will match the ecosystem that exists today around HTTP. There will of course be protocol innovation, the last word is never written, but for the web, it’s a done deal. All the proprietary and ad-hoc things that preceded HTTP have died, and good riddance. Similarly, cloud infrastructure will converge around a standard API which will be imperfect but real. Innovation is all about how that API is implemented, not which API it is.

Nobody would say the web server market lacks innovation. There are many, many different companies and communities that make and market web server solutions. And each of those is innovating in some way – focusing on a different audience, or trying a different approach. Yet that entire market is constrained by a public standard: HTTP, which evolves far more slowly than the products that implement it.

There are also a huge number of things that wrap themselves around HTTP, from cache accelerators to 3G content compressors; the standardisation of that thin layer has created a massive ecosystem and driven fantastic innovation, even as many of the core concepts that drove HTTP’s initial design have eroded or softened. For example, HTTP was relentlessly stateless, but we’ve added cookies and caching to address issues caused by that (at the time radical) design constraint.

Today, cloud infrastructure is looking for its HTTP. I think that standard already exists in de facto form today at AWS, with EC2, S3 and some of the credential mechanisms being essentially the core primitives of cloud infrastructure management. There is enormous room for innovation in cloud infrastructure *implementations*, even within the constraints of that minimalist API. The hackers and funders and leaders and advocates of OpenStack, and any number of other cloud infrastructure projects both open source and proprietary, would be better off figuring out how to leverage that standardisation than trying to compete with it, simply because no other API is likely to gain the sort of ecosystem we see around AWS today.

It’s true that those APIs would be better defined in a clean, independent forum analogous to the W3C than inside the boiler-room of development at any single cloud provider, but that’s a secondary issue. And over time, it can be engineered to work that way.

More importantly for the moment, those who make an authentic effort to fit into the AWS protocol standard immediately gain access to chunks of the AWS gene pool, effectively gratis. From services like RightScale to tools like ElasticFox, your cloud is going to be more familiar, more effective and more potent if it can ease the barriers to porting from AWS. No two implementations will magically Just Work, but the rough edges and gotchas can get ironed out much more easily if there is a clear standard and reference implementations. So the cost of “porting” will always be lower between clouds that have commonality by design or heritage.
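
To make that concrete, here is a hedged sketch using the boto library of the day; the keys, endpoint, port and path are placeholders for some particular EC2-compatible private cloud, not universal values. The pattern is what matters: a script written against AWS keeps working when only the endpoint changes.

    import boto
    from boto.ec2.regioninfo import RegionInfo

    # Placeholder credentials and endpoint for an EC2-compatible private cloud.
    region = RegionInfo(name="private", endpoint="cloud.example.com")
    conn = boto.connect_ec2(aws_access_key_id="ACCESS_KEY",
                            aws_secret_access_key="SECRET_KEY",
                            is_secure=False,
                            region=region,
                            port=8773,
                            path="/services/Cloud")

    # The same calls a script written for AWS itself would make:
    for image in conn.get_all_images():
        print("%s %s" % (image.id, image.name))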

For OpenStack itself, until that standard is codified, I would describe the most successful mission statement as “to be the reference public-cloud-provider-scale implementation of cloud infrastructure compatible with the AWS core APIs”. That’s going to give all the public cloud providers who want to compete with Amazon the best result: they’ll be able to compete on service terms, while convincing early adopters that the move to their offering will be relatively painless. All it takes, really, is some humility and the wisdom to recognise the right place to innovate.

There will be many implementations of those core APIs. One or other will be the Apache, the “just start here” option. But it doesn’t matter so much which one that is, frankly. I think OpenStack has the best possible chance to be that, but only if it sticks to this crisp mission and doesn’t allow itself to be drawn into front-end differentiation for the sake of it. Should that happen, OpenStack will be vulnerable to another open source project which credibly aims to achieve the goals outlined here. Very vulnerable. Witness the ways in which Eucalyptus is rightly pointing out its superior AWS compatibility in comparison with OpenStack.

For the public cloud providers that hope to build on OpenStack, API differentiation is poison in a juicy steak. It looks tasty, but it’s going to cost you the race prematurely. There were lots of technical reasons why alternatives to Windows were *better*; they just failed to become de facto standards. As long as Amazon doesn’t package up AWS as an on-premise solution, it’s possible to establish a de facto standard around something else, but that something else (perhaps OpenStack) needs to be AWS-compatible in some meaningful way to get enough momentum to matter. That means there’s a window of opportunity to get this right, which is not going to stay open indefinitely. Either Amazon, or another open source project, could close that window on OpenStack’s fingers. And that would be a pity, since the community around OpenStack has tons of energy and goodwill. In order to succeed, it will need to channel that energy into innovation on the implementation, not on trying to redefine an existing standard.

Of course, all this would be much easier if there were a real HTTP-like standard defining those APIs. The web had the enormous advantage of being founded by Tim Berners-Lee, in an institution like CERN, with the vision to set up the W3C. In the case of today’s cloud infrastructure, there isn’t the same dynamic or set of motivations. Amazon’s position of vagueness on the AWS APIs is tactically perfect for them right now, and I would expect them to maintain that line while knowing full well there is no real proprietary claim in a public network API, and no real advantage to be had from claiming otherwise. What’s needed is simply to start codifying existing practice as a draft standard in a credible forum of experts, with a roadmap and the prospect of support from multiple vendors. I think that would be relatively easy to arrange, if we could get Rackspace, IBM and HP to sit down and commit to doing it. We already have HP and Rackspace at the OpenStack table, so the signs are encouraging.

A good standard would:

* be pragmatic about the fact that Amazon has already made a bunch of decisions we’ll live with for ever.
* have a commitment from folk like OpenStack and Eucalyptus to aim for compliance
* include a real automated functional test suite that becomes the interop benchmark of choice over time
* be open to participation by Amazon, though that will not likely come for some time
* be well documented and well managed, like HTTP and CSS and HTML
* not be run by the ITU or ISO

I’m quite willing to contribute resources to getting such a standard off the ground. Forget big consortiums or working groups or processes or lobbying forums, what’s needed are a few savvy folk who know AWS, Eucalyptus and OpenStack, together with a very few technical writers. Let me know if you’re interested.

Now, I started out by saying that I was writing to make the case for OpenStack to be focused on a particular area. It’s a bit cheeky for me to write anything of the sort, of course, because OpenStack is a well run project that has an excellent steering group, which recently held a poll of contributors to appoint some new members, none of whom was me. I’ve every confidence in the leadership of the project, despite the tremendous pressure they are under to realise the hopes of so many diverse users and companies. I’m optimistic about the potential OpenStack has to accelerate cloud technology, and at Canonical we put a considerable amount of effort into making OpenStack deployment a smooth experience for Ubuntu users and Canonical customers. Ubuntu Cloud Infrastructure now depends on OpenStack. And I have a few old friends who are also leaders in the OpenStack community, so for all those reasons I thought it worth making this perspective public.

The responsibilities of ownership

Friday, July 22nd, 2011

In the open source community we make a very big deal about the rights of ownership. But what about the responsibilities?

Any asset comes with attendant costs, risks and responsibilities. And anybody who doesn’t take those seriously is a poor steward of the asset.

In the physical world, we know this very well. If you own a house, there are taxes to pay every year, there will be some bills for energy and maintenance, and there’s paperwork to fill out. I was rudely reminded of this when I got an SMS at 2am this morning, care of British Gas, helpfully reminding me to settle up the gas bill for a tenant of mine. If we fail to take care of these responsibilities, we’re at risk of having the asset degraded or taken away from our care. An abandoned building will eventually be condemned and demolished rather than staying around as a health hazard. A car which has not been tested and licensed cannot legally be driven on public roads. In short, ownership comes with a certain amount of work, and that work has to be handled well.

In the intellectual and digital world, things are a little different. There isn’t an obvious lawn to trim or wall to paint. But there are still responsibilities. For example, trademarks need to be defended or they are deemed to be lost. Questions need to be answered. Healthy projects grow and adapt over time in a dynamic world; change is inevitable and needs to be accommodated.

Maintaining a piece of free software is a non-trivial effort. The rest of the stack is continuously changing – compilers change, dependencies change, conventions change. The responsibility for maintenance should not be shirked, if you want your project to stay relevant and useful. But maintainership is very often the responsibility of “core” developers, not light contributors. Casual contributors who have scratched their own itch or met a work obligation by writing a patch often give, as a reason for the contribution, their desire to have that maintenance burden carried by the project, and not by themselves.

When a maintainer adds a patch to a work, they are also accepting responsibility for its maintenance, unless they have some special circumstance, like the patch is a plugin and essentially maintained by the contributor. For general cases, adding the patch is like mixing paint – it adds to the general body of maintenance in a way that cannot easily be undone or compartmentalised.

And owning an asset can create real liabilities. For example, in some countries, if you own a house and someone slips on the stairs, you can be held liable. If you own a car and it’s being borrowed, and the brakes fail, you can be held liable. In the case of code, accepting a patch implies, like it or not, accepting some liability for that patch. Whether it turns out to be a real liability, or just a contingent one, is something only time will tell. But ownership requires defence in the case of an attack, and that can be expensive, even if it turns out the attack is baseless.

So, one of the reasons I’m happy to donate (fully and irreversibly) a patch to a maintainer, and why Canonical generally does assign patches to upstreams who ask for it, is that I think the rights and responsibilities of ownership should be matched. If I want someone else to handle the work – the responsibility – of maintenance, then I’m quite happy for them to carry the rights as well. That only seems balanced. In the common case, that maintenance turns out to be as much work as the original crafting of the patch, and frankly, it’s the “boring work” part, while the fun part was solving the problem immediately at hand.

Of course, there are uncommon cases too.

One of the legendary fights over code ownership, between Sun and Novell, revolved around a plugin for OpenOffice that did some very cool stuff. Sun ended up re-creating that work because Novell would not give it to Sun. Frankly, I think Sun was silly. The plugin was a whole work, that served a coherent purpose all by itself. Novell had designed and implemented that component, and was perfectly willing and motivated to maintain it. In that case, it makes sense to me that Sun should have been willing to make space for Novell’s great work, leaving it as Novell’s. Instead, they ended up redoing that work, and lots of people felt hard done by. But that’s an uncommon case. The more usual scenario is that a contribution enhances the core, but is not in itself valuable without the rest of the code in the project being there.

Of course, “value” is relative. A patch that only applies against an existing codebase still has value in its ability to teach others how to do that kind of work. And it has value as art – you can put it on a t-shirt, or a wall.

But contributing – really contributing, actually donating – a patch to a maintainer doesn’t have to reduce those kinds of value to the original creator. I consider it best practice that a donation be matched by a wide license back. In other words, if I give my patch to the maintainer, it’s nice if they grant me a full set of rights back. While it does a bad job with many other things, the Canonical contribution agreement does this: when you make a contribution under it, you get a wide license back. So that way, the creator retains all the useful rights, including the ability to publish, relicense, sell, or make a t-shirt, without also carrying all the responsibilities that go with ownership.

So a well-done contribution agreement can make clear who carries which responsibilities, and not materially diminish the rights of those who contribute. And a well-done policy of contribution would recognise that there are uncommon cases, where a contribution is in fact a whole piece in itself, and not require donation of that whole piece in order to be part of an aggregate whole.

What about cases where there is no clear maintainer or owner?

Well, outside of the world of copyright and code, we do have some models to refer to. For example, companies issue shares to their shareholders, reflecting their relative contribution and therefore their relative shared ownership in the company. Those companies end up with diverse owners, each of whom is going to have their own opinions, preferences, ideals and constraints.

We would never think to require consensus on every decision of the board, or the company, among all shareholders. That would be unworkable – in fact, much of the apparatus of corporate governance exists specifically to give voice to the desires of shareholders while at the same time keeping institutions functional. That’s not to say that shareholders don’t get abused – there are enough case studies of management taking advantage of their position to fill a long and morbidly interesting book. Rules on corporate governance, and especially the protection of minority interests in companies, as well as the state of the art of constructing shareholder agreements to achieve the same goals, are constantly evolving. But at the end of the day, decisions need to be taken which are binding on the company and thus binding on the shareholders. The rights of ownership extend to the right to show up and be represented, and to participate in the discussion, and usually a vote of some sort. Thereafter, the decision is taken and (usually) the will of the majority carries the day.

In our absolutist mentality, we tend to think that a single line of code, or a single small patch, carries the same weight as the rest of a coherent codebase. It’s easy to feel that way: when a small patch is shared, but not donated, the creator retains sole ownership of that patch. So in theory, any change in the state of the whole must require the agreement of every owner. This is more than theory – it’s law in many places.

But in practice, that approach has not withstood any hard tests.

There are multiple cases where huge bodies of work, composed of the aggregate “patches” of many different owners, have been relicensed. Mozilla, the Ubuntu wiki, and I think even Wikipedia have all gone through public processes to figure out how to move the license of an aggregate work to something that the project leadership considered more appropriate.

I’d be willing to bet that, if some fatal legal flaw were discovered in the GPLv2, Linus would lead a process of review and discussion and debate about what to do about the Linux kernel; it would be testy and contentious, but in the end he would take a decision and most would follow to a new and better license. Personally, I’d be an advocate of GPLv3, but it’s well known that I’m not a big shareholder in that particular company, so to speak, so I wouldn’t expect to have any say ;-) Those who did not want to follow would resign themselves to having their contributions replaced, and most would not bother to turn up for the meeting, giving tacit assent.

So our pedantic view that every line of code is sacred just would not hold up to real-world pressure. Projects have GOT to respond to major changes in the world around them. It would be unwise to loan a patch to a project in the belief that the project will never, under any circumstances, take a decision that is different to your personal views. Life’s just not like that. Change is inevitable, and we’re all only going to be thrilled about some subset of that change.

And that’s as it should be. Clinging to something small that’s part of someone else’s life and livelihood just isn’t healthy. It’s better either to commit to a reasonable shared ownership approach, which involves being willing to show up at meetings, contribute to maintenance and accept the will of the majority on major moves that might be unpalatable anyway, or to make a true gift that comes with no strings attached.

Sometimes I see people saying they are happy to make a donation as long as it has some constraints associated with it.

There was a family in South Africa that lived under weird circumstances for generations because a wealthy ancestor specified that they had to do so if they wanted access to their inheritance. It’s called “ruling from the grave”, and it’s really quite antisocial. Either you give someone what you’re giving them, and accept that they will carry those rights and responsibilities wisely and well, or you don’t give it to them at all. You’re not going to be around after your will is executed, and it’s impossible to anticipate everything that might happen. It’s exceedingly uncool, in my view, to leave people stuck.

It’s difficult to predict, in 50 or 100 years time, what the definition of “openness” will be, and who will have the definition that you might most agree with. In the short term we all have favourites, but every five or ten years the world changes and that precipitates a new round of definitions, licenses, concepts. Consider GPLv2 and GPLv3, where there turned out to be a real need to address new challenges in the way free software is being used. Or the Franklin Street Declaration, on web services. Despite having options like AGPL around, there still isn’t any real consensus on how best to handle those scenarios.

One can pick institutions, but institutions change too. Go back and look at the historical positions of any long-term political party, like the UK Whigs, and you’ll be amazed at how a group can shift its positions over a succession of leaders. I have complete trust in the FSF today, but little idea what they’ll be up to in 100 years time. That’s no insult to the FSF, it’s just a lesson I’ve learned from looking at the patterns of behaviour of institutions over the long term. It’s the same for the OSI or Creative Commons or any other political or ideological or corporate group. People move on, people die, times change, priorities shift, economics ebb and flow, affiliations and alliances and competition shift the terrain to the point that today’s liberal group are tomorrow’s conservatives, or the other way around.

So, if one is going to put strings attached to a donation, what does one do? Pick a particular license? No current license will remain perfectly relevant or useful or attractive indefinitely. Pick an institution? No institution is free of human dynamics in the long term.

If there’s a natural place to put the patch, it’s with the code it patches. And usually, that means with the institution that is the anchor tenant, for better or worse. And yes, that creates real rights which can be really abused, or at least used in ways that one would not choose for ones own projects.

And that brings us to the toughest issue. How should we feel about the fact that a company which owns a codebase can create both proprietary and open products from that code?

And the “grave” scenario really is an issue, in the case of copyright too. When people have discussed changes to codebases that have been around for any length of time, it’s a sad but real likelihood that there are contributors who have died, and left no provision for how their work is to be managed. More often than not, the estate in question isn’t sufficiently funded to cover the cost of legal questions concerned.

The first time I was asked to sign a contribution agreement on behalf of Canonical, it was for a competitor, and I declined. That night, it preyed on my conscience. We had the benefit of a substantial amount of work from this competitor, and yet I had refused to give them back *properly* our own modest contribution. I frankly felt terrible, and the next day signed the agreement, and changed it to be our policy that we will do so, regardless of what we think about the company itself. So we’ve now done them for almost all our competitors, and I feel good about it.

That’s the magical thing about creation and ownership. It creates the possibility for generosity. You can’t really give something you don’t own, but if you do, you’ve made a genuine contribution. A gift is different from a loan. It imposes no strings, it empowers the recipient and it frees the giver of the responsibilities of ownership. We tend to think that solving our own problems to produce a patch which is interesting to us and useful for us is the generosity. It isn’t. The opportunity for generosity comes thereafter. And in our ecosystem, generosity is important. It’s at the heart of the Ubuntu ethic, and it’s important even between competitors, because the competitors outside our ecosystem are impossible to beat if we are not supportive of one another.

Fantastic engineering management is…

Tuesday, July 12th, 2011

I’m going to write a series of posts on different career tracks in software engineering and design over the next few months. This is the first of ‘em, I don’t have a timeline for the rest but will get to them all in due course, and am happy to take requests in the comments ;-)

Recently, I wrote up two Canonical engineering management job descriptions – one for those managing a team of software engineers directly, another for folk coordinating the work of groups of teams – software engineering directors. In both cases, the emphasis is on organisation, social coordination, roadmap planning and inter-team connectivity, and not in any way about engineering prowess. Defining some of these roles for one of my teams got me motivated to blog about the things that make for truly great management, as opposed to other kinds of engineering leadership.

The art of software engineering management is so different from software engineering that it should be an entirely separate career track, with equal kudos and remuneration available on either path.

This is because developing, and managing developers, are at opposite ends of the interrupt scale. Great engineering depends on deep, uninterrupted focus. But great management is all about handling interrupts efficiently so that engineers don’t have to. Companies need to recognise that difference, and create career paths on both sides of that scale, rather than expecting folk to leap from the one end to the other. It’s crazy to think that someone who loves deep focused thought should have to become a multithreaded interrupt driven manager to advance their career.

Very occasionally someone is both a fantastic developer and a fantastic manager, but that’s the exception rather than the rule. In recognition of that, we should design our teams to work well without depending on a miracle each and every time we put one together.

Great engineering managers are like coaches – they get their deepest thrill from seeing a team perform at the top of its game, not from performing vicariously. They understand that they are not going to be on the field between the starting and finishing whistle. They understand that there will be decisions to be taken on the field that the players will have to make for themselves, and that their job is to prepare the team physically and mentally for the game, rather than to try and play from the sidelines. A great coach isn’t trying to steer the movement of the ball during the game; she’s making notes about the coaching and team selections needed between this match and the next. A terrible coach is a player who won’t let go of the game, wants to be out there in the thick of it, and loses themselves in the details of the game itself.

An engineering manager is an organiser and a mentor and a coach, not a veteran star player. They need to love winning, and love the sport, and know that they help most by making the team into a winning team. The way they get code written is by making an environment which is conducive to that; the way they create quality is by fostering a passion for quality and making space in the schedule and the team for work which serves only that goal.

When I’m hiring a manager, I look for people who love to keep other people productive. That means handling all the productivity killers in an engineering team: hiring and firing, inter-team meetings, customer presentations, reporting up and out and sideways, planning, travel coordination, conferences, expenses… all those things which we don’t want engineers to spend much time on or have rattling around at the back of their minds. It also means caring about people, and being that gregarious and nosy type of person who knows what everyone is doing, and why, and also what’s going on outside the workplace.

An engineering manager is doing well if every single member of their team can answer these questions, all the time:

* what are my key goals, in order of importance, in this cycle?
* what are the key delivery dates, in this cycle?
* how am I doing generally, and what is the company’s view of my strengths and not-so-strengths?
* how do I fit into the team, who are my counterparts, and how do I complement them?

Also, the manager is doing well if they know, for each member of their team:

* what personal stresses or other circumstances might be a distraction for them,
* what the interpersonal dynamics are between that member, and other counterparts or team members,
* what that member’s best contributions are, and strongest interests outside of the assigned goals

For the team as a whole, the manager should know:

* what the team is good at, and weak at, and what their plan is to bolster what needs bolstering,
* what the cycle looks like, in terms of goals and progress against them
* what the next cycle is shaping up to look like, and how that fits with long term goals

Really great management makes a company a joy to work in, as a developer. It’s something we should celebrate and cultivate, teach and select for, not just treat as the natural upward path for people who have been around a while. If you truly love technology, there are lots of careers that take you to the top of the tech game without having to move into management. And conversely, if you love organising and leading, it’s possible to get started on a management career in software without being the world’s greatest coder first. If you think that’s you, you’ll love being an engineering manager at Canonical.

Note to the impatient: this is a long post and it only gets to free software ecosystem dynamics towards the end. The short version is that we need to empower software companies to participate in the GNU/Linux ecosystem, and not fear them. Rather than undermining their power, we need to balance it through competition.

Church schools in apartheid South Africa needed to find creative ways to teach pupils about the wrongs of that system. They couldn’t actively foment revolt, but they could teach alternative approaches to governance. That’s how, as a kid in South Africa, I spent a lot of time studying the foundations of the United States, a system of governance defined by underdogs who wanted to defend not just against the abuses of the current power, but abuses of power in general.

My favourite insight in that regard comes from James Madison in the Federalist Papers, where he describes the need to understand and harness human nature as a force: to pit ambition against ambition, as it is often described. The relevant text is worth a read if you don’t have time for the whole letter:

But the great security against a gradual concentration of the several powers in the same department, consists in giving to those who administer each department the necessary constitutional means and personal motives to resist encroachments of the others. The provision for defense must in this, as in all other cases, be made commensurate to the danger of attack. Ambition must be made to counteract ambition. The interest of the man must be connected with the constitutional rights of the place. It may be a reflection on human nature, that such devices should be necessary to control the abuses of government. But what is government itself, but the greatest of all reflections on human nature? If men were angels, no government would be necessary. If angels were to govern men, neither external nor internal controls on government would be necessary. In framing a government which is to be administered by men over men, the great difficulty lies in this: you must first enable the government to control the governed; and in the next place oblige it to control itself. A dependence on the people is, no doubt, the primary control on the government; but experience has taught mankind the necessity of auxiliary precautions.

When we debate our goals, principles and practices in the FLOSS community, we devote a great deal of energy to “how things should be”, and to the fact that “men are not angels”. I think the approach of James Madison is highly relevant to those discussions.

The conservation of power

Just as energy, momentum, charge and other physical properties of a system are conserved, so in a sense is power. If your goal is to reduce the power of one agency in government, the most effective strategy is to strengthen the position of another. We know that absolute monarchies are bad: they represent unbalanced power.

Within a system, power will tend to consolidate. We have antitrust agencies specifically to monitor the consolidation of economic power and to do something about it. We set up independent branches of government to ensure that some kinds of power simply cannot be consolidated.

Undermining power in one section of an ecosystem inevitably strengthens the others.

Since we humans tend to think the grass is greener on the other side of the fence, and since power takes a little while to get properly abused, you can often see societies oscillate in the allocation of power. When things seem a little out of control, we give more power to the police and other securocrats. Then, when they become a little thuggish, we squeeze their power through regulation and oversight, and civil liberties gain in power, until the pendulum swings again.

The necessity of concentrated power

Any power can be abused. I had a very wise headmaster at that same school who used to say that the only power worth having was power that was worth abusing. This was not a call to the abuse of power, you understand, merely a reflection on the fact that power comes with the real responsibility of restraint.

So, if power can be abused, why do we tolerate it at all? Why not dissolve authority down to the individual? Because the absence of power leads to chaos, which ironically is an easy place to establish despotic authority. Power isn’t seized – it’s given. We give people power over us. And in a state of chaos, all it takes is a few people to gain some power and they have a big advantage over everyone else. That’s why early leaders in new ecosystems tend to become unbeatable very quickly.

Also, power clears the path for action. In a world with no power, little gets done at all. We are better off with large companies that have the power to organise themselves around a goal than trying to achieve the same goal with a collection of individuals; try building a Boeing with an equivalent group of independent artisans, and you’ll see what I mean. Artisans form guilds and companies to increase their reach and impact. Individual volunteers join professional institutions to become more effective: consider the impact of handing out food yourself, versus helping sustain a network of soup kitchens, even in the purely non-profit world. Having some clout on your side is nothing to sniff at, even if you have purely philanthropic goals.

Power and innovation

If you have all the power already, there’s no spur to innovate. So kingdoms stagnate, eventually.

But power makes space for good things, too. It’s the powerful (and rich) who fund the arts in most societies. Innovation needs breathing space; companies with economic power can incubate new ideas to the point where they become productive.

Too much competition can thus limit innovation: look how difficult it has been for the Windows-based PC manufacturers, who live in a brutally competitive world and have little margin, to innovate. They are trapped between a highly efficient parts supply ecosystem, which feeds them all the same stuff at the same price, and a consumer market that requires them all to provide PCs which run the same stuff the same way. As a result, they have little power, little margin, little innovation.

The trick is not to fear power itself, but instead, to shape, balance and channel it. You don’t want to aim for the absence of power, you want the Goldilocks effect of having “just enough”. And that was James Madison’s genius.

Verticals, competition and the balance of power

Of course, competition between rivals is the balance of power in business. We resent monopolies because they are either abusing their power, or stagnating.

In economics, we talk about “verticals” as the set of supply dependencies needed for a particular good. So, to make an aircraft, you need various things like engines and alloys, and those suppliers all feed the same pool of aircraft manufacturers.

In order to have a healthy ecosystem, you need a balance of power both between suppliers at the same level of the stack, and vertically, between the providers of parts and providers of the finished product. That’s because innovation needs both competition AND margin to stimulate and nurture it.

In the PC case, the low margins in the PC sector helped reinforce the Windows monopoly. Not only was there no competition for Microsoft, there was no ability for a supplier further down the chain to innovate around them. The only player in that ecosystem that had the margin to innovate was Microsoft, and since they faced no competition, there was little incentive to turn their own R&D into shipping products, no matter how much they spent on it.

Power in the FLOSS ecosystem: upstreams and distributions

So, where do we stand in the free software and open source ecosystem?

The lines between upstreams and distributions aren’t perfectly clear, of course. Simplistic versions of that picture are often used to prove points, but in fact all the distributions are also in some sense upstreams, and even derivative distributions end up leading those they derive from in some components or markets. Nevertheless, I think it’s worth looking at the balance of power between upstream projects and distributions, as it is today and as it could be.

Also, I think it’s worth looking at related parties, companies and institutions which work a lot with FLOSS but have orthogonal interests.

If one uses margin, or profit, as an indicator of power, it’s clear that the distributions today are in a far stronger position than most individual projects or upstreams. The vast majority of software-related revenue in the FLOSS ecosystem goes to distributions.

Within that segment, Red Hat claims 80% market share of paid Linux, a number that is probably accurate. Novell, the de facto #2, is in the midst of some transition, but indicators are that it continues to weaken. Oracle’s entry into the RHEL market has had at best marginal impact on RHEL economics (the substantial price rises in RHEL 6 are a fairly clear signal of the degree to which Red Hat believes it faces real competition). The existence of “unpaid RHEL” in the form of CentOS, as well as OEL, essentially strengthens the position of RHEL itself. Ubuntu and Debian have large combined levels of adoption, but low revenue.

So clearly, there is work to do just to balance power in the distribution market. And it will take work – historically, platforms tend towards monopoly, and in the absence of a definitive countervailing force that establishes strength outside the RHEL gravity well, that’s what we’ll have. But that’s not the most interesting piece. What’s more interesting is the dynamic between distributions and upstreams.

Today, most upstreams are weak. They have little institutional strength. It’s generally difficult to negotiate and do business with an upstream. In many cases, that’s by design – the teams behind a project are simply not interested, or they are explicitly non-profit, as in the case of the FSF, which makes them good leaders of specific values, but difficult to engage with commercially.

As a result, those who need to do business with open source go to distributions, even in cases where they really want to be focused on a particular component. This greatly amplifies the power of the distributions: they essentially are the commercial vehicles for ALL of open source. The weakness of individual upstreams turns into greater strength for distributions.

You can imagine that distributions like it that way, and it would be surprising to see a distribution, or company that backs a distribution, arguing for stronger upstreams. But that’s exactly the position I take: FLOSS needs stronger upstreams, and as a consequence, weaker distributions.

Stronger upstreams will result in more innovation in FLOSS than stronger distributions. Essentially, like Microsoft, a distribution receives cash for the whole platform and allocates it to specific areas of R&D. That means the number of good ideas that receive funding in our ecosystem, today, is dependent on the insights of a very few companies. Just as Microsoft invested a lot in R&D and yet seemed to fall behind, upstream innovation will be strangled if it’s totally dependent on cash flow via distributions.

It’s not just innovation that suffers because we don’t have more power, or economic leverage, in the hands of upstreams. It’s also the myriad of things beyond code itself. When you have a company behind a project, they tend to take care of a lot more than just the code: QA, documentation, testing, promotion. It’s easy, as a developer, to undervalue those things, or to see them as competing for resources with the “real work” of code. But that competition is necessary, and they make a great contribution to the dynamism of the final product.

Consider the upstream projects which have been very successful over the long term. Qt and MySQL, for example, both had companies behind them that maintained strong leverage over the product. That leverage was often unpopular, but the result was products available to all of us under a free license that continued to grow in stature, quality and capability despite the ups and downs of the broader market, and without being too dependent on the roving spotlight of “coolness”, which tends to move quickly from project to project.

There are of course successful upstream projects which do not have such companies. The best example is probably the Linux kernel itself. However, those projects fall into a rather unusual category: they are critical to some large number of companies that make money in non-software ways, and those companies are thus forced to engage with the project and contribute. In the case of the kernel, hardware companies directly and indirectly underwrite the vast majority of the boring but critical work that, in other projects, would be covered by the sponsoring institution. And despite that, there are many gaps in the kernel. You don’t have to dig very hard to find comments from key participants bemoaning the lack of testing and documentation. Nevertheless, it gets by quite well under the circumstances.

But most ecosystems will have very few projects that are at such a confluence. Most upstream projects are the work of a few people, the “coolness” spotlight shines on them briefly if at all. They need either long term generosity from core contributors, or an institution to house and care for them, if they want to go the distance. The former rarely works for more than a few years.

Projects which depend on indirect interests, such as those sponsored by hardware companies, have another problem. Their sponsoring institutions are generally not passionate about software. They don’t really need or want to produce GREAT software. And if you look at the projects which get a lot of such contributions, that becomes very obvious. Compare the quality of applications from companies which deeply care about software with those which come from hardware companies, and you’ll see what I mean.

We FLOSS folk like to tell ourselves that the Windows hegemony was purely a result of the manipulations of its sponsor, and that FLOSS as we do it today would be capable of much more if only it had a fair chance. I don’t think, having watched the success of iOS and Android as new ecosystems, that we can justify that position any longer. I think we have to be willing to think hard about what we are willing to change if we want the chance of building an ecosystem as strong, but around GNU/Linux. Since that’s my goal, I’m thinking very hard about that, and creatively. I think it’s possible, but not without challenging some sacred cows and figuring out which values we want to preserve and which we can remould.

Power is worth having in your ecosystem, despite its occasional abuse

There’s no doubt that power created will be abused. That’s true of a lot of important rights and powers. For example, we know that free speech is often abused, but we nevertheless value it highly in many societies that are also big contributors to FLOSS. You probably know the expression, “I disagree with what you are saying entirely, but I will defend to the death your right to say it”.

Similarly, in our ecosystem, power will be abused. But it’s still worth helping institutions acquire it, even those we dislike or distrust, or those we compete with. At Canonical, we’ve directly and indirectly helped lots of institutions that you could describe that way – Oracle, Novell, Red Hat, Intel and many others. The kneejerk reaction is usually “no way”, but upon deeper thought, we figured that it is better to have an ecosystem of stronger players, considering the scale of the battle with the non-FLOSS world.

I often find people saying “I would help an institution if I thought I could trust it”. And I think that’s a red herring, because just as power will be abused, trust will be abused too. If you believe that this is a battle of ecosystems and platforms, you want to have as many powerful competitors in your ecosystem as possible, even though you probably cannot trust any of them in the very long term. It’s the competition between them that really creates long term stability, to come back to the thinking of James Madison. It’s pitting ambition against ambition, not finding angels, which makes that ecosystem a winner. If you care about databases, don’t try to weaken MySQL; you’ll want it to be strong when you need it. Rather, figure out how to strengthen PostgreSQL alongside it.

How Canonical fits in

Canonical is in an interesting position with regard to all of this. As a distribution, we could stay silent on the issue, and reasonably expect to grow in power over time, on the same basis that Red Hat has. And there are many voices in Canonical that say exactly that: don’t rock the boat, essentially.

However, perhaps unlike other Linux distributions, Canonical very much wants to see end users running free software, and not just IT professionals. That raises the bar dramatically in terms of the quality of the individual pieces. It means that it’s not good enough for us to work in an ecosystem which produces prototype or rough cut products, which we then aggregate and polish at the distribution level. Unlike those who have gone before, we don’t want to be the sole guarantor of quality in our ecosystem, because that will not scale.

For that reason, looking at the longer term, it’s very important to me that we figure out how to give more power to upstreams, so that they in turn can invest in producing components or works which have the completeness and quality that end users expect. I enjoy working with strong commercial institutions in the open source ecosystem – while they always represent some competitive tension, they also represent the opportunity to help our ecosystem scale and out-compete the proprietary world. So I’d like to find ways to strengthen the companies that offer products under free software licenses, and to encourage more of those with proprietary products to make them available under free licenses, even if that’s not the only way they publish them.

If you’ve read this far, you probably have a good idea where I’m going with this. But I have a few more steps before actually getting there. More soon.

Till then, I’m interested in how people think we can empower upstream projects to be stronger institutionally.

There are a couple of things that are obvious and yet don’t work. For example, lots of upstreams think they should form a non-profit institution to house their work. The track record of those is poor: they get set up, and they fail as soon as they have to file their annual paperwork, leaving folks like the SFLC to clean up the mess. Not cool. At the end of the day, such new institutions add paperwork without adding funding or other sources of energy. They don’t broaden out the project the way a company writing documentation and selling services usually does. On the other hand, non-profits like the FSF which have critical mass are very important, which is why on occasion we’ve been happy to contribute to them in various ways.

Also, I’m interested in how we can reshape our attitudes to power. Today, the tenor of discussion in most FLOSS debates is simplistic: we fear power, and we attempt to squash it wherever we see it, celebrating the individual. But that misses the point that we are merely strengthening power elsewhere: in distributions, and in other ecosystems. We need a richer language for describing the “Goldilocks” balance of power, and how we can move beyond FUD.

So, what do you think we could do to create more Mozillas, more MySQLs, more Qts and more OpenStacks?

I’ll summarise interesting comments and threads in the next post.