Technical Board 2011

Wednesday, October 5th, 2011

After the recent poll of Ubuntu developers I’m delighted to introduce the Technical Board 2011-2013. I think it’s worth noting that three of the members of this generation of technical leaders are not Canonical employees, though admittedly they are all former members of that team. I think there’s cause for celebration on both fronts: broader institutional and independent representation in the senior governance structures of Ubuntu is valuable, and the fact that personal interest persists regardless of company affiliation is also indicative of the character of the whole community, both full-time and volunteer. We’re in this together, for mutual interests.

Without further ado, here they are, in an order you are welcome to guess ;-)

  • Stéphane Graber
  • Kees Cook
  • Martin Pitt
  • Matt Zimmerman
  • Colin Watson
  • Soren Hansen
Please join me in congratulating each of them, and thanking those who were willing to stand, who were nominated, and those who participated in the poll.
From my perspective, it was a very rich field of nominations. We had several candidates with no historic link to Canonical, which was very encouraging in terms of the diversity of engagement in the project. For the first time, I felt we had too many candidates and so I whittled down the final list of nominations – as it happens, all of the non-Canonical nominees made the shortlist, though that was not a criterion for my support.
Welcome aboard, all!

Building clouds for fun and profit

Monday, September 19th, 2011

So you’d like to spin up an internal cloud for Hadoop or general development, shifting workloads from AWS to your own infrastructure, or prototyping some new cloud services?

Call Canonical’s cloud infrastructure design and consulting team.

There are a couple of scenarios that we’re focused on at the moment, where we can offer standardised engagements:

  • Telcos building out cloud infrastructures for public cloud services. These are aiming for specific markets based on geography or network topology – they have existing customers and existing networks and a competitive advantage in handling outsourced infrastructure for companies that are well connected to them, as well as a jurisdictional advantage over the global public cloud providers.
  • Cloud infrastructure prototypes at a division or department level. These are mostly folk who want the elasticity and dynamic provisioning of AWS in a private environment, often to work on products that will go public on Rackspace or AWS in due course, or to demonstrate and evaluate the benefits of this sort of architecture internally.
  • Cloud-style legacy deployments. These are folk building out HPC-type clusters running dedicated workloads that are horizontally scaled but not elastic. Big Hadoop deployments, or Condor deployments, fall into this category.

Cloud has become something of a unifying theme in many of our enterprise and server-oriented conversations in the past six months. While not everyone is necessarily ready to shift their workloads to a dynamic substrate like Ubuntu Cloud Infrastructure (powered by OpenStack) it seems that most large-scale IT deployments are embracing cloud-style design and service architectures, even when they are deploying on the metal. So we’ve put some work into tools which can be used in both cloud and large-scale-metal environments, for provisioning and coordination.

With 12.04 LTS on the horizon, OpenStack exploding into the wider consciousness of cloud-savvy admins, and projects like Ceph and CloudFoundry growing in stature and capability, it’s proving to be a very dynamic time for IT managers and architects. Much as the early days of the web presented a great deal of hype and complexity and options, only to settle down into a few key standard practices and platforms, cloud infrastructure today presents a wealth of options and a paucity of clarity: from NoSQL choices, through IaaS choices, to PaaS choices. Over the next couple of months I’ll outline how we think the cloud stack will shape up. Our goal is to make that “clean, crisp, obvious” deployment Just Work, bringing simplicity to the cloud much as we strive to bring it on the desktop.

For the moment, though, it’s necessary to roll up sleeves and get hands a little dirty, so the team I mentioned previously has been busy bringing some distilled wisdom to customers embarking on their cloud adventures in a hurry. Most of these engagements started out as custom consulting and contract efforts, but there are now sufficient patterns that the team has identified a set of common practices and templates that help to accelerate the build-out for those typical scenarios, and packaged those up as a range of standard cloud building offerings.

Surveying participation

Friday, September 9th, 2011

Just a brief note to celebrate Jono and team’s recent work on gathering insight into our membership and developer participation processes. Thanks also to those who took the time to respond to the surveys. The results are worth a read if you care about the vibrancy and dynamism of our community. Kudos Jono, and thanks!

Innovation and OpenStack: Lessons from HTTP

Thursday, September 8th, 2011

OpenStack is facing an important choice: does it define a new set of API’s, one of many such efforts in cloud infrastructure, or does it build around the existing AWS API’s? So far, OpenStack has had it both ways, with some new API work and also some AWS-based effort. I’m writing to make the case for a tighter definition of mission around the de facto standard infrastructure API’s of EC2, S3 and a few other elements of AWS.

What prompted this blog was my overhearing (or rather, seeing an email on a list) the statement that cloud infrastructure projects like OpenStack, Eucalyptus and others should “innovate at the level of the API and infrastructure concepts”. I’m of the view that any projects which try to do so will fail and are not worth spending your or my time on. They are going to be about as successful as projects that try to reinvent HTTP to make it better/faster/cleaner/whatever. Which is to say – not successful at all, because no new protocol with the same conceptual goals will match the ecosystem that exists today around HTTP. There will of course be protocol innovation, the last word is never written, but for the web, it’s a done deal. All the proprietary and ad-hoc things that preceded HTTP have died, and good riddance. Similarly, cloud infrastructure will converge around a standard API which will be imperfect but real. Innovation is all about how that API is implemented, not which API it is.

Nobody would say the web server market lacks innovation. There are many, many different companies and communities that make and market web server solutions. And each of those is innovating in some way – focusing on a different audience, or trying a different approach. Yet that entire market is constrained by a public standard: HTTP, which evolves far more slowly than the products that implement it.

There are also a huge number of things that wrap themselves around HTTP, from cache accelerators to 3G content compressors; the standardisation of that thin layer has created a massive ecosystem and driven fantastic innovation, even as many of the core concepts that drove HTTP’s initial design have eroded or softened. For example, HTTP was relentlessly stateless, but we’ve added cookies and caching to address issues caused by that (at the time radical) design constraint.

Today, cloud infrastructure is looking for its HTTP. I think that standard already exists in de facto form today at AWS, with EC2, S3 and some of the credential mechanisms being essentially the core primitives of cloud infrastructure management. There is enormous room for innovation in cloud infrastructure *implementations*, even within the constraints of that minimalist API. The hackers and funders and leaders and advocates of OpenStack, and any number of other cloud infrastructure projects both open source and proprietary, would be better off figuring out how to leverage that standardisation than trying to compete with it, simply because no other API is likely to gain the sort of ecosystem we see around AWS today.

It’s true that those API’s would be better defined in a clean, independent forum analogous to the W3C than inside the boiler-room of development at any single cloud provider, but that’s a secondary issue. And over time, it can be engineered to work that way.

More importantly for the moment, those who make an authentic effort to fit into the AWS protocol standard immediately gain access to chunks of the AWS gene pool, effectively gratis. From services like RightScale to tools like ElasticFox, your cloud is going to be more familiar, more effective and more potent if it can ease the barriers to porting from AWS. No two implementations will magically Just Work, but the rough edges and gotchas can get ironed out much more easily if there is a clear standard and reference implementations. So the cost of “porting” will always be lower between clouds that have commonality by design or heritage.
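To make that porting argument concrete, here’s a minimal sketch using boto, one of the many client libraries written against the AWS API’s. The hostname, port, path, credentials and image id below are placeholders, not a recommendation for any particular cloud: the point is simply that the only thing that changes when a tool moves from EC2 to an EC2-compatible private cloud is where the client is told to connect.

```python
# A sketch of the porting argument: the same boto client code that drives
# EC2 can be pointed at any EC2-compatible endpoint. The hostname, port,
# path, credentials and image id below are placeholders.
from boto.ec2.connection import EC2Connection
from boto.ec2.regioninfo import RegionInfo


def connect(host, access_key, secret_key, port=8773, path="/services/Cloud"):
    """Open an EC2-style connection to a private, AWS-compatible cloud."""
    region = RegionInfo(name="private-cloud", endpoint=host)
    return EC2Connection(
        aws_access_key_id=access_key,
        aws_secret_access_key=secret_key,
        is_secure=False,  # many private endpoints speak plain HTTP
        region=region,
        port=port,
        path=path,
    )


conn = connect("cloud.example.com", "MY_ACCESS_KEY", "MY_SECRET_KEY")

# Exactly the calls a tool written for AWS would make:
for image in conn.get_all_images():
    print("%s  %s" % (image.id, image.location))

reservation = conn.run_instances("emi-12345678", instance_type="m1.small")
print("started: %s" % ", ".join(i.id for i in reservation.instances))
```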

For OpenStack itself, until that standard is codified, I would describe the most successful mission statement as “to be the reference public-cloud-provider-scale implementation of cloud infrastructure compatible with the AWS core API’s”. That’s going to give all the public cloud providers who want to compete with Amazon the best result: they’ll be able to compete on service terms, while convincing early adopters that the move to their offering will be relatively painless. All it takes, really, is some humility and the wisdom to recognise the right place to innovate.

There will be many implementations of those core API’s. One or other will be the Apache, the “just start here” option. But it doesn’t matter so much which one that is, frankly. I think OpenStack has the best possible chance to be that, but only if they stick to this crisp mission and don’t allow themselves to be drawn into front-end differentiation for the sake of it. Should that happen, OpenStack will be vulnerable to another open source project which credibly aims to achieve the goals outlined here. Very vulnerable. Witness the ways in which Eucalyptus is rightly pointing out its superior AWS compatibility in comparison with OpenStack.

For the public cloud providers that hope to build on OpenStack, API differentiation is poison in a juicy steak. It looks tasty, but it’s going to cost you the race prematurely. There were lots of technical reasons why alternatives to Windows were *better*; they just failed to become de facto standards. As long as Amazon doesn’t package up AWS as an on-premise solution, it’s possible to establish a de facto standard around something else, but that something else (perhaps OpenStack) needs to be AWS-compatible in some meaningful way to get enough momentum to matter. That means there’s a window of opportunity to get this right, which is not going to stay open indefinitely. Either Amazon, or another open source project, could close that window on OpenStack’s fingers. And that would be a pity, since the community around OpenStack has tons of energy and goodwill. In order to succeed, it will need to channel that energy into innovation on the implementation, not on trying to redefine an existing standard.

Of course, all this would be much easier if there were a real HTTP-like standard defining those API’s. The web had the enormous advantage of being founded by Tim Berners-Lee, in an institution like CERN, with the vision to set up the W3C. In the case of today’s cloud infrastructure, there isn’t the same dynamic or set of motivations. Amazon’s position of vagueness on the AWS API’s is tactically perfect for them right now, and I would expect them to maintain that line while knowing full well there is no real proprietary claim in a public network API, and no real advantage to be had from claiming otherwise. What’s needed is simply to start codifying existing practice as a draft standard in a credible forum of experts, with a roadmap and the prospect of support from multiple vendors. I think that would be relatively easy to arrange, if we could get Rackspace, IBM and HP to sit down and commit to doing it. We already have HP and Rackspace at the OpenStack table, so the signs are encouraging.

A good standard would:

* be pragmatic about the fact that Amazon has already made a bunch of decisions we’ll live with for ever
* have a commitment from folk like OpenStack and Eucalyptus to aim for compliance
* include a real automated functional test suite that becomes the interop benchmark of choice over time (see the sketch after this list)
* be open to participation by Amazon, though that will not likely come for some time
* be well documented and well managed, like HTTP and CSS and HTML
* not be run by the ITU or ISO
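On the functional test suite point, here’s a sketch of the shape one interop check in such a suite might take. The environment variable names and bucket name are invented for illustration; the idea is that the harness points the very same test at S3, at Walrus, or at any other implementation that claims compatibility with the S3 API, and the results become the compatibility benchmark.

```python
# A sketch of the kind of check an interop suite might contain. The
# environment variable names and bucket name are invented for illustration;
# the harness points the same test at S3 or at any implementation that
# claims compatibility with the S3 API.
import os
import unittest

from boto.s3.connection import S3Connection, OrdinaryCallingFormat


class ObjectStoreRoundTrip(unittest.TestCase):

    def setUp(self):
        self.conn = S3Connection(
            aws_access_key_id=os.environ["INTEROP_ACCESS_KEY"],
            aws_secret_access_key=os.environ["INTEROP_SECRET_KEY"],
            host=os.environ["INTEROP_S3_HOST"],
            port=int(os.environ.get("INTEROP_S3_PORT", "443")),
            is_secure=os.environ.get("INTEROP_S3_SECURE", "1") == "1",
            calling_format=OrdinaryCallingFormat(),  # path-style bucket URLs
        )

    def test_put_then_get_returns_same_bytes(self):
        bucket = self.conn.create_bucket("interop-roundtrip")
        try:
            key = bucket.new_key("hello.txt")
            key.set_contents_from_string("hello, interop")
            self.assertEqual(key.get_contents_as_string(), "hello, interop")
        finally:
            for key in bucket.list():
                key.delete()
            self.conn.delete_bucket(bucket.name)


if __name__ == "__main__":
    unittest.main()
```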

I’m quite willing to contribute resources to getting such a standard off the ground. Forget big consortiums or working groups or processes or lobbying forums: what’s needed are a few savvy folk who know AWS, Eucalyptus and OpenStack, together with a very few technical writers. Let me know if you’re interested.

Now, I started out by saying that I was writing to make the case for OpenStack to be focused on a particular area. It’s a bit cheeky for me to write anything of the sort, of course, because OpenStack is a well-run project that has an excellent steering group, which recently held a poll of contributors to appoint some new members, none of whom was me. I’ve every confidence in the leadership of the project, despite the tremendous pressure they are under to realise the hopes of so many diverse users and companies. I’m optimistic about the potential OpenStack has to accelerate cloud technology, and at Canonical we put a considerable amount of effort into making OpenStack deployment a smooth experience for Ubuntu users and Canonical customers. Ubuntu Cloud Infrastructure now depends on OpenStack. And I have a few old friends who are also leaders in the OpenStack community, so for all those reasons I thought it worth making this perspective public.

Dash takes shape for 11.10 Unity

Tuesday, August 16th, 2011

Our goal with Unity is unprecedented ease of use, visual style and performance on the Linux desktop. With feature freeze behind us, we have a refined target render of the Dash for Oneiric, and here it is:

(Image: target render of the Dash for Oneiric – click through for the full-size version.)

Scopes and Lenses

We’ve moved from the idea of “Places” to a richer set of “Scopes and Lenses”. Scopes are data sources, and can tap into any online or offline data set as long as they can generate categorised results for a search, describe a set of filters and support some standard interfaces. Lenses are various ways to present the data that come from Scopes.

The Scopes have a range of filtering options they can use, such as ratings (“show me all the 5 star apps in the Software Center please”) and categories (“… that are games or media related”). Over time, the sophistication of this search system will grow but the goal is to keep it visual and immediate – something anyone can drive at first attempt.

This delivers on the original goal of creating a device-like experience that was search driven. Collaboration with the always-excellent Zeitgeist crew (quite a few of whom are now full time on the Unity team!) has improved the search experience substantially, kudos to them for the awesome work they’ve put in over the past six months. Since we introduced the Dash as a full screen device-like search experience, the same idea has made its way into several other shells, most notably Mac OS X Lion. While we’re definitely the outsider in this contest, I think we can stay one step ahead in the game given the support of our community.

The existing Places are all in the process of being updated to the Scopes and Lenses model; it’s a bit of a construction site at the moment, so hard hats are advised, but dive in if you have good ideas for some more interesting scopes. I’ve heard all sorts of rumours about cool scopes in the pipeline ;-) and I bet this will be fertile ground for innovation. It’s pretty straightforward to make a scope; I’m sure others will blog and document the precise mechanisms, but for those who want a head start, just use the source, Luke.
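Purely to illustrate the shape of the idea – this is not the libunity API, and all the names below are invented for this sketch – a Scope boils down to something like this: take a search string and the active filters, return categorised results, and leave presentation to the Lens.

```python
# Schematic illustration only -- these classes are invented for this post
# and are not the libunity API. A Scope takes a search string and the
# active filters, and returns categorised results; a Lens decides how to
# present them.
from collections import namedtuple

Result = namedtuple("Result", "uri title category")


class DocumentScope(object):
    """A toy data source: searches a small index of local documents."""

    # The filters this scope understands, as they might appear in the Dash
    # filter bar.
    filters = {"rating": [1, 2, 3, 4, 5], "category": ["text", "spreadsheet"]}

    def __init__(self, index):
        self.index = index  # a list of dicts describing local files

    def search(self, query, active_filters):
        """Yield categorised results matching the query and filters."""
        min_rating = active_filters.get("rating")
        categories = active_filters.get("category")
        for doc in self.index:
            if query.lower() not in doc["title"].lower():
                continue
            if min_rating and doc["rating"] < min_rating:
                continue
            if categories and doc["category"] not in categories:
                continue
            yield Result(doc["uri"], doc["title"], doc["category"])


# A Lens would render these results -- grid, list, map -- without caring
# where they came from.
scope = DocumentScope([
    {"uri": "file:///tmp/notes.txt", "title": "Release notes",
     "rating": 4, "category": "text"},
])
for result in scope.search("notes", {"rating": 3}):
    print(result.title)
```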

Panel evolution

In the panel, you’ll see that the top left corner is now consistently used to close whatever has the focus. Maximising a window keeps the window controls in the same position relative to the window – the top left corner. We have time to refine the behaviour of this based on user testing of the betas; for example, in one view, one should be able to close successive windows from that top left corner even if they are not maximised.

It’s taken several releases of careful iteration to get to this point. Even though we had a good idea where we were headed, each step needed to be taken one release at a time. Perhaps this might make a little clearer the reasons for the move of window controls to the left – it was the only place where we could ultimately keep them consistent all the way up to a maximised window with the title bar integrated into the panel. I’m confident this part will all be settled by 12.04.

As part of this two-step shuffle, the Dash invocation is now integrated in the Launcher. While this is slightly less of a Fitts-fantastic location, we consider it appropriate for a number of reasons. First, it preserves the top left corner for closing windows. Second, the Dash is best invoked with the Super key (sometimes erroneously and anachronistically referred to as the “Windows” key, for some reason ;-)). And finally, observations during user testing showed that people were more inclined to try clicking on items in the Launcher than on the top left icon in the panel, unless that icon was something explicit like a close button for the window. Evidence-based design rules.

Visual refinements

Rather than a flat darkening, we’re introducing a wash based on the desktop colour. The dash thus adjusts to your preferred palette based on your wallpaper. The same principle will drive some of the login experience – choosing a user will shift the login screen towards that user’s wallpaper and palette.

We’ve also integrated the panel and the dash, so indicators are rendered in a more holographic fashion inside the dash. Together with efforts to mute the contrast of Launcher icons, the result is a more striking dash altogether: content is presented more dramatically.

Since we have raw access to the GL pipeline, we’re taking advantage of that with some real-time blur effects to help the readability and presentation of overlay content in the Dash, too. Both Nux in the case of Unity-3D and Qt in the case of Unity-2D have rich GL capabilities, and we’d like to make the most of whatever graphics stack you have on your hardware, while still running smoothly on the low end.

Growing community and ecosystem

A project like this needs diverse perspectives, talents and interests to make it feel rounded and complete. While Canonical is carrying the core load, and we’re happy to do so in order to bring this level of quality to the Ubuntu desktop user experience, what makes me particularly optimistic is the energy of the contributors both to Unity directly and to the integration of many other components and applications with the platform.

On the contribution front, a key goal for the Unity community is to maintain velocity in contributor patch flows. You should expect a rapid review and, all being well, landing, for contributions to Unity that are in line with the design goals. In a few cases we’ve also accepted patches that make it possible to use Unity in ways that are different to the design goals, especially where the evidence doesn’t lean very heavily one way or the other. Such contributions add some complexity but also give us the opportunity to test alternatives in a very rich way; the winning alternative would certainly survive, the other might not.

Contrary to common prognostication, this community is shaping up to be happy and productive. For those who do so for love and personal interest, participating in such a community should be fun and stimulating, an opportunity to exercise skills or pursue interests, give something back that will be appreciated and enjoyed by many, and help raise the bar for Linux experiences. I’d credit Jorge and others for their stewardship of this so far, and my heartfelt thanks to all of those who have helped make Unity better just for the fun of it.

Outside of the core, the growing number of apps that integrate sweetly with the launcher (quicklists), dash (scopes) and indicators (both app-specific and category indicators) is helping to ensure that API’s are useful, refined and well implemented, as well as improving the experience of Ubuntu users everywhere. Now that we’re moving to Unity by default for both 2D and 3D, that’s even more valuable.

Under the hood

In this round, Unity-3D and Unity-2D have grown together and become twin faces on the same underlying model. They now share a good deal of common code and common services and – sigh – common bugs :-). But we’re now at the point where we can be confident that the Unity experience is available on the full range of hardware, from lightweight thin client systems made of ARM or Atom CPU’s to CADstations with oodles of GPU horsepower.

There’s something of a competition under way between proponents of the QML based Unity-2D, who believe that the GL support here is good enough to compete both at the high end and on the low end, and the GL-heads in Unity-3D, who think that the enhanced experiences possible with raw GL access justify the additional complexity of working in C++ and GL on the metal. Time will tell! Since a lot of the design inspiration for Unity came from game interfaces, I lean to the “let’s harness all the GL we can for the full 3D experience” side of the spectrum, but I’m intrigued with what the QML team are doing.

Corporate desktops and Ubuntu

Monday, August 15th, 2011

Good news for people with skills deploying and managing Ubuntu: the corporate desktop is being reinvented, and Ubuntu is a popular option for those leading the change.

In the past year there’s been a notable shift in the way IT shops think about their corporate desktops. Suddenly, Windows is optional, or at least it can be managed and delivered as a service to any other platform, so it no longer has to BE the platform on the client. In part, that’s because so much has moved to the web, and in part it’s because virtualisation has become so good at letting people deliver desktop apps (and whole desktops) over the network. It’s also a function of the success of other non-Windows platforms, like iOS, which make IT think in terms of standards for interoperability rather than standard applications.

All of this is drumming up interest in alternative ways to design large scale desktop infrastructure, from Linux-based thin clients (Ubuntu is at the heart of some recent products from Wyse) on alternative architectures like ARM to straightforward Ubuntu desktops with thin client software giving access to legacy Windows applications. In all these cases, Windows and proprietary software continue to play an important role, but the stranglehold of Windows on the platform itself seems to be coming unstuck. That makes for a much enhanced competitive landscape.

A migration from Windows to Ubuntu is still a project that requires a lot of planning, analysis and hard work. But for most institutions, it’s realistic to be confident that 10-25% of the desktops can migrate smoothly if a professional team has that as their mission over a year or two. For large organisations, that might be 5,000-50,000 seats, and the resulting savings are tremendous given the increase in Windows licensing costs driven by Win 7.

I joined a call recently with the team at Canonical that works with customers to plan and deliver desktop migrations to Ubuntu. They have a standard engagement process that charts a course for the organisation and maps out typical risks and low-hanging fruit. They were talking to a bank, traditionally a very conservative audience, about the stages and milestones in a typical migration of 20,000 seats to Ubuntu. I was struck by the tone of the conversation on both sides – it wasn’t a question of whether to do it, it was a question of how to do so most efficiently. And that’s a huge leap forward from the days when we used to speculate if it was sane to even think about Linux on the desktop for anybody other than a developer.

It’s clear that Windows is no longer the target – personal computing and productivity computing are in the process of being reinvented, and being an effective replacement for Windows is no guarantee of relevance in the future. But for many IT departments, the desktop represents an enormous cost base which will not disappear overnight, and Ubuntu is creating options for them to control that cost.

We often celebrate the way free software transforms the lives of those most in need, but it’s equally energising to see it making a difference to IT teams that in turn help inject resources into the acceleration of the free software platform. Winning 20,000 desktops to Ubuntu helps improve the platform for every school or university deployment, just as much as it helps improve the platform for developers and home users too.

Inclusive membership

Wednesday, August 10th, 2011

We have a strong preference for inclusivity in the way we structure the Ubuntu community. We recognise a very diverse range of contributions, and we go to some lengths to recognise leaders in many areas so that we can delegate the evaluation process to those who know best.

For example, we have governance structures for different social forums (IRC, the Forums) and for various technical fields (development, translation). We quite pointedly see development and packaging as one facet of a multi-faceted project, and I think Ubuntu is much stronger for that approach.

Jonathan Carter raised a very interesting question recently, which was how we recognise those whose best contribution to Ubuntu (and it may well be intended very much as a contribution to Ubuntu) is by working in other projects. His specific question was “upstream” but as we discussed it amongst members of the CC it threw up all sorts of gaps in our membership process, such as recognising those who make collaboration between Debian and Ubuntu easier, and those who work in derivatives like Mint, and do so in a way that makes it easy for Ubuntu to benefit from their work.

The issue was brought to the fore because several contributors to Unity and Launchpad had applied for membership. We had previously seen scattered upstream-based applications for membership, but the fact that there were a few of these and that they were Canonical employees turned it into something which needed wider consideration.

For obvious reasons, I don’t think a Canonical project should be special cased. Ubuntu is a shared effort between Canonical and community contributors, and Canonical already has effective representation. But I think the question is much more interesting when it’s asked as a general proposition: if someone feels they want to help other Ubuntu users by doing work in a *different* project, why should we not recognise that? And if we’re going to try to do so, how do we do it well? Specifically, how do we avoid diluting the cohesive nature of the membership of Ubuntu, who are polled for confidence in appointments to the Community Council, if we say we will consider membership applications from any of the thousands of projects that feed into the platform? How on earth do we even start to evaluate “substantial and sustained” contributions? It’s a very interesting problem.

Matthew East blogged to say that there are some codebases that are closer to Ubuntu than others. That’s probably true, but it’s also been a point of pride that we encourage groups like the LXDE community to express their experience as official remixes, from Kubuntu to Xubuntu and Lubuntu and beyond. I agree with Matthew that, in order to be an unambiguous target for developers, we need to be firm about a narrow set of API’s and installation requirements. That means a project like Unity becomes an “obvious” way to contribute to Ubuntu, though of course other distributions are likely to use Unity and there may be people who contribute there specifically for them.

But more importantly, I think we should start to find ways in which someone could participate in ANY project upstream (or downstream) and still gain membership in Ubuntu if that is valuable to them, or aligned with their values.

For example: someone who correlates upstream bugs with Ubuntu bugs in Launchpad, is doing work that improves Ubuntu for everyone. Someone who fixes issues in an upstream stable release in order to facilitate an SRU, is doing the same. Ensuring that an upstream is going to work with the next kernel/toolchain/X/GL version set has a similar benefit. There are any number of ways in which someone who uses Ubuntu, shares the values of the project and wants to help but is closer to an upstream can express themselves. And we should recognise them all.

I’m not at all certain how we would do so perfectly in practice, because the diversity of projects out there is so great, and the diversity of ways they track their contributors is so great. But I think we should try. We could create a team that reviews such applications and makes a best effort to assess them, erring on the side of caution. Who’s up for the challenge?

The responsibilities of ownership

Friday, July 22nd, 2011

In the open source community we make a very big deal about the rights of ownership. But what about the responsibilities?

Any asset comes with attendant costs, risks and responsibilities. And anybody who doesn’t take those seriously is a poor steward of the asset.

In the physical world, we know this very well. If you own a house, there are taxes to pay every year, there will be some bills for energy and maintenance, and there’s paperwork to fill out. I was rudely reminded of this when I got an SMS at 2am this morning, care of British Gas, helpfully reminding me to settle up the gas bill for a tenant of mine. If we fail to take care of these responsibilities, we’re at risk of having the asset degraded or taken away from our care. An abandoned building will eventually be condemned and demolished rather than staying around as a health hazard. A car which has not been tested and licensed cannot legally be driven on public roads. In short, ownership comes with a certain amount of work, and that work has to be handled well.

In the intellectual and digital world, things are a little different. There isn’t an obvious lawn to trim or wall to paint. But there are still responsibilities. For example, trademarks need to be defended or they are deemed to be lost. Questions need to be answered. Healthy projects grow and adapt over time in a dynamic world; change is inevitable and needs to be accommodated.

Maintaining a piece of free software is a non-trivial effort. The rest of the stack is continuously changing – compilers change, dependencies change, conventions change. The responsibility for maintenance should not be shirked, if you want your project to stay relevant and useful. But maintainership is very often the responsibility of “core” developers, not light contributors. Casual contributors who have scratched their own itch or met a work obligation by writing a patch often give, as a reason for the contribution, their desire to have that maintenance burden carried by the project, and not by themselves.

When a maintainer adds a patch to a work, they are also accepting responsibility for its maintenance, unless they have some special circumstance, like the patch is a plugin and essentially maintained by the contributor. For general cases, adding the patch is like mixing paint – it adds to the general body of maintenance in a way that cannot easily be undone or compartmentalised.

And owning an asset can create real liabilities. For example, in some countries, if you own a house and someone slips on the stairs, you can be held liable. If you own a car and it’s being borrowed, and the brakes fail, you can be held liable. In the case of code, accepting a patch implies, like it or not, accepting some liability for that patch. Whether it turns out to be a real liability, or just a contingent one, is something only time will tell. But ownership requires defence in the case of an attack, and that can be expensive, even if it turns out the attack is baseless.

So, one of the reasons I’m happy to donate (fully and irreversibly) a patch to a maintainer, and why Canonical generally does assign patches to upstreams who ask for it, is that I think the rights and responsibilities of ownership should be matched. If I want someone else to handle the work – the responsibility – of maintenance, then I’m quite happy for them to carry the rights as well. That only seems balanced. In the common case, that maintenance turns out to be as much work as the original crafting of the patch, and frankly, it’s the “boring work” part, while the fun part was solving the problem immediately at hand.

Of course, there are uncommon cases too.

One of the legendary fights over code ownership, between Sun and Novell, revolved around a plugin for OpenOffice that did some very cool stuff. Sun ended up re-creating that work because Novell would not give it to Sun. Frankly, I think Sun was silly. The plugin was a whole work, that served a coherent purpose all by itself. Novell had designed and implemented that component, and was perfectly willing and motivated to maintain it. In that case, it makes sense to me that Sun should have been willing to make space for Novell’s great work, leaving it as Novell’s. Instead, they ended up redoing that work, and lots of people felt hard done by. But that’s an uncommon case. The more usual scenario is that a contribution enhances the core, but is not in itself valuable without the rest of the code in the project being there.

Of course, “value” is relative. A patch that only applies against an existing codebase still has value in its ability to teach others how to do that kind of work. And it has value as art – you can put it on a t-shirt, or a wall.

But contributing – really contributing, actually donating – a patch to a maintainer doesn’t have to reduce those kinds of value to the original creator. I consider it best practice that a donation be matched by a wide license back. In other words, if I give my patch to the maintainer, it’s nice if they grant me a full set of rights back. While it does a bad job with many other things, the Canonical contribution agreement does this: when you make a contribution under it, you get a wide license back. So that way, the creator retains all the useful rights, including the ability to publish, relicense, sell, or make a t-shirt, without also carrying all the responsibilities that go with ownership.

So a well-done contribution agreement can make clear who carries which responsibilities, and not materially diminish the rights of those who contribute. And a well-done policy of contribution would recognise that there are uncommon cases, where a contribution is in fact a whole piece in itself, and not require donation of that whole piece in order to be part of an aggregate whole.

What about cases where there is no clear maintainer or owner?

Well, outside of the world of copyright and code, we do have some models to refer to. For example, companies issue shares to their shareholders, reflecting their relative contribution and therefore their relative shared ownership in the company. Those companies end up with diverse owners, each of whom is going to have their own opinions, preferences, ideals and constraints.

We would never think to require consensus on every decision of the board, or the company, among all shareholders. That would be unworkable – in fact, much of the apparatus of corporate governance exists specifically to give voice to the desires of shareholders while at the same time keeping institutions functional. That’s not to say that shareholders don’t get abused – there are enough case studies of management taking advantage of their position to fill a long and morbidly interesting book. Rules on corporate governance, and especially the protection of minority interests in companies, as well as the state of the art of constructing shareholder agreements to achieve the same goals, are constantly evolving. But at the end of the day, decisions need to be taken which are binding on the company and thus binding on the shareholders. The rights of ownership extend to the right to show up and be represented, and to participate in the discussion, and usually a vote of some sort. Thereafter, the decision is taken and (usually) the majority will carries.

In our absolutist mentality, we tend to think that a single line of code, or a single small patch, carries the same weight as the rest of a coherent codebase. It’s easy to feel that way: when a small patch is shared, but not donated, the creator retains sole ownership of that patch. So in theory, any change in the state of the whole must require the agreement of every owner. This is more than theory – it’s law in many places.

But in practice, that approach has not withstood any hard tests.

There are multiple cases where huge bodies of work, composed of the aggregate “patches” of many different owners, have been relicensed. Mozilla, the Ubuntu wiki, and I think even Wikipedia have all gone through public processes to figure out how to move the license of an aggregate work to something that the project leadership considered more appropriate.

I’d be willing to bet that, if some fatal legal flaw were discovered in the GPLv2, Linus would lead a process of review and discussion and debate about what to do about the Linux kernel; it would be testy and contentious, but in the end he would take a decision and most would follow to a new and better license. Personally, I’d be an advocate of GPLv3, but in the end it’s well known that I’m not a big shareholder in that particular company, so to speak, so I wouldn’t expect to have any say ;-) Those who did not want to follow would resign themselves to having their contributions replaced, and most would not bother to turn up for the meeting, giving tacit assent.

So our pedantic view that every line of code is sacred just would not hold up to real-world pressure. Projects have GOT to respond to major changes in the world around them. It would be unwise to loan a patch to a project in the belief that the project will never, under any circumstances, take a decision that is different to your personal views. Life’s just not like that. Change is inevitable, and we’re all only going to be thrilled about some subset of that change.

And that’s as it should be. Clinging to something small that’s part of someone else’s life and livelihood just isn’t healthy. It’s better either to commit to a reasonable shared ownership approach, which involves being willing to show up at meetings, contribute to maintenance and accept the will of the majority on major moves that might be unpalatable anyway, or to make a true gift that comes with no strings attached.

Sometimes I see people saying they are happy to make a donation as long as it has some constraints associated with it.

There was a family in South Africa that lived under weird circumstances for generations because a wealthy ancestor specified that they had to do that if they wanted access to their inheritance. It’s called “ruling from the grave”, and it’s really quite antisocial. Either you give someone what you’re giving them, and accept that they will carry those rights and responsibilities wisely and well, or you don’t give it to them at all. You’re not going to be around after your will is executed, and it’s impossible to anticipate everything that might happen. It’s exceedingly uncool, in my view, to leave people stuck.

It’s difficult to predict, in 50 or 100 years time, what the definition of “openness” will be, and who will have the definition that you might most agree with. In the short term we all have favourites, but every five or ten years the world changes and that precipitates a new round of definitions, licenses, concepts. Consider GPLv2 and GPLv3, where there turned out to be a real need to address new challenges in the way free software is being used. Or the Franklin Street Declaration, on web services. Despite having options like AGPL around, there still isn’t any real consensus on how best to handle those scenarios.

One can pick institutions, but institutions change too. Go back and look at the historical positions of any long-term political party, like the UK Whigs, and you’ll be amazed at how a group can shift their positions over a succession of leaders. I have complete trust in the FSF today, but little idea what they’ll be up to in 100 years time. That’s no insult to the FSF, it’s just a lesson I’ve learned from looking at the patterns of behaviour of institutions over the long term. It’s the same for the OSI or Creative Commons or any other political or ideological or corporate group. People move on, people die, times change, priorities shift, economics ebb and flow, affiliations and alliances and competition shift the terrain to the point that today’s liberal group are tomorrow’s conservatives or the other way around.

So, if one is going to put strings attached to a donation, what does one do? Pick a particular license? No current license will remain perfectly relevant or useful or attractive indefinitely. Pick an institution? No institution is free of human dynamics in the long term.

If there’s a natural place to put the patch, it’s with the code it patches. And usually, that means with the institution that is the anchor tenant, for better or worse. And yes, that creates real rights which can be really abused, or at least used in ways that one would not choose for one’s own projects.

And that brings us to the toughest issue. How should we feel about the fact that a company which owns a codebase can create both proprietary and open products from that code?

And the “grave” scenario really is an issue, in the case of copyright too. When people have discussed changes to codebases that have been around for any length of time, it’s a sad but real likelihood that there are contributors who have died, and left no provision for how their work is to be managed. More often than not, the estate in question isn’t sufficiently funded to cover the cost of legal questions concerned.

The first time I was asked to sign a contribution agreement on behalf of Canonical, it was for a competitor, and I declined. That night, it preyed on my conscience. We had the benefit of a substantial amount of work from this competitor, and yet I had refused to give them back *properly* our own modest contribution. I frankly felt terrible, and the next day signed the agreement, and changed it to be our policy that we will do so, regardless of what we think about the company itself. So we’ve now done them for almost all our competitors, and I feel good about it.

That’s the magical thing about creation and ownership. It creates the possibility for generosity. You can’t really give something you don’t own, but if you do, you’ve made a genuine contribution. A gift is different from a loan. It imposes no strings, it empowers the recipient and it frees the giver of the responsibilities of ownership. We tend to think that solving our own problems to produce a patch which is interesting to us and useful for us is the generosity. It isn’t. The opportunity for generosity comes thereafter. And in our ecosystem, generosity is important. It’s at the heart of the Ubuntu ethic, and it’s important even between competitors, because the competitors outside our ecosystem are impossible to beat if we are not supportive of one another.

Fantastic engineering management is…

Tuesday, July 12th, 2011

I’m going to write a series of posts on different career tracks in software engineering and design over the next few months. This is the first of ‘em; I don’t have a timeline for the rest, but will get to them all in due course, and am happy to take requests in the comments ;-)

Recently, I wrote up two Canonical engineering management job descriptions – one for those managing a team of software engineers directly, another for folk coordinating the work of groups of teams – software engineering directors. In both cases, the emphasis is on organisation, social coordination, roadmap planning and inter-team connectivity, and not in any way about engineering prowess. Defining some of these roles for one of my teams got me motivated to blog about the things that make for truly great management, as opposed to other kinds of engineering leadership.

The art of software engineering management is so different from software engineering that it should be an entirely separate career track, with equal kudos and remuneration available on either path.

This is because developing, and managing developers, are at opposite ends of the interrupt scale. Great engineering depends on deep, uninterrupted focus. But great management is all about handling interrupts efficiently so that engineers don’t have to. Companies need to recognise that difference, and create career paths on both sides of that scale, rather than expecting folk to leap from the one end to the other. It’s crazy to think that someone who loves deep focused thought should have to become a multithreaded interrupt driven manager to advance their career.

Very occasionally someone is both a fantastic developer and a fantastic manager, but that’s the exception rather than the rule. In recognition of that, we should design our teams to work well without depending on a miracle each and every time we put one together.

Great engineering managers are like coaches – they get their deepest thrill from seeing a team perform at the top of its game, not from performing vicariously. They understand that they are not going to be on the field between the starting and finishing whistle. They understand that there will be decisions to be taken on the field that the players will have to make for themselves, and their job is to prepare the team physically and mentally for the game, rather than to try and play from the sidelines. A great coach isn’t trying to steer the movement of the ball during the game; she’s making notes about the coaching and team selections needed between this match and the next. A terrible coach is a player who won’t let go of the game, wants to be out there in the thick of it, and loses themselves in the details of the game itself.

An engineering manager is an organiser and a mentor and a coach, not a veteran star player. They need to love winning, and love the sport, and know that they help most by making the team into a winning team. The way they get code written is by making an environment which is conducive to that; the way they create quality is by fostering a passion for quality and making space in the schedule and the team for work which serves only that goal.

When I’m hiring a manager, I look for people who love to keep other people productive. That means handling all the productivity killers in an engineering team: hiring and firing, inter-team meetings, customer presentations, reporting up and out and sideways, planning, travel coordination, conferences, expenses… all those things which we don’t want engineers to spend much time on or have rattling around at the back of their minds. It also means caring about people, and being that gregarious and nosy type of person who knows what everyone is doing, and why, and also what’s going on outside the workplace.

An engineering manager is doing well if every single member of their team can answer these questions, all the time:

* what are my key goals, in order of importance, in this cycle?
* what are the key delivery dates, in this cycle?
* how am I doing, generally? and what is the company view of my strengths and not-so-strengths?
* how do I fit into the team, who are my counterparts, and how do I complement them?

Also, the manager is doing well if they know, for each member of their team:

* what personal stresses or other circumstances might be a distraction for them,
* what the interpersonal dynamics are between that member, and other counterparts or team members,
* what that member’s best contributions are, and strongest interests outside of the assigned goals

For the team as a whole, the manager should know:

* what the team is good at, and weak at, and what their plan is to bolster what needs bolstering,
* what the cycle looks like, in terms of goals and progress against them
* what the next cycle is shaping up to look like, and how that fits with long term goals

Really great management makes a company a joy to work in, as a developer. It’s something we should celebrate and cultivate, teach and select for, not just be the natural upward path for people who have been around a while. If you truly love technology, there are lots of careers that take you to the top of the tech game without having to move into management. And conversely, if you love organising and leading, it’s possible to get started on a management career in software without being the world’s greatest coder first. If you think that’s you, you’ll love being an engineering manager at Canonical.

Note to the impatient: this is a long post and it only gets to free software ecosystem dynamics towards the end. The short version is that we need to empower software companies to participate in the GNU/Linux ecosystem, and not fear them. Rather than undermining their power, we need to balance it through competition.

Church schools in apartheid South Africa needed to find creative ways to teach pupils about the wrongs of that system. They couldn’t actively foment revolt, but they could teach alternative approaches to governance. That’s how, as a kid in South Africa, I spent a lot of time studying the foundations of the United States, a system of governance defined by underdogs who wanted to defend not just against the abuses of the current power, but abuses of power in general.

My favourite insight in that regard comes from James Madison in the Federalist Papers, where he describes the need to understand and harness human nature as a force: to pit ambition against ambition, as it is often described. The relevant text is worth a read if you don’t have time for the whole letter:

But the great security against a gradual concentration of the several powers in the same department, consists in giving to those who administer each department the necessary constitutional means and personal motives to resist encroachments of the others. The provision for defense must in this, as in all other cases, be made commensurate to the danger of attack. Ambition must be made to counteract ambition. The interest of the man must be connected with the constitutional rights of the place. It may be a reflection on human nature, that such devices should be necessary to control the abuses of government. But what is government itself, but the greatest of all reflections on human nature? If men were angels, no government would be necessary. If angels were to govern men, neither external nor internal controls on government would be necessary. In framing a government which is to be administered by men over men, the great difficulty lies in this: you must first enable the government to control the governed; and in the next place oblige it to control itself. A dependence on the people is, no doubt, the primary control on the government; but experience has taught mankind the necessity of auxiliary precautions.

When we debate our goals, principles and practices in the FLOSS community, we devote a great deal of energy to “how things should be”, and to the fact that “men are not angels”. I think the approach of James Madison is highly relevant to those discussions.

The conservation of power

Just as energy, momentum, charge and other physical properties of a system are conserved, so in a sense is power. If your goal is to reduce the power of one agency in government, the most effective strategy is to strengthen the position of another. We know that absolute monarchies are bad: they represent unbalanced power.

Within a system, power will tend to consolidate. We have antitrust agencies specifically to monitor the consolidation of economic power and to do something about it. We set up independent branches of government to ensure that some kinds of power simply cannot be consolidated.

Undermining power in one section of an ecosystem inevitably strengthens the others.

Since we humans tend to think the grass is greener on the other side of the fence, and since power takes a little while to get properly abused, you can often see societies oscillate in the allocation of power. When things seem a little out of control, we give more power to the police and other securocrats. Then, when they become a little thuggish, we squeeze their power through regulation and oversight, and civil liberties gain in power, until the pendulum swings again.

The necessity of concentrated power

Any power can be abused. I had a very wise headmaster at that same school who used to say that the only power worth having was power that was worth abusing. This was not a call to the abuse of power, you understand, merely a reflection on the fact that power comes with the real responsibility of restraint.

So, if power can be abused, why do we tolerate it at all? Why not dissolve authority down to the individual? Because the absence of power leads to chaos, which ironically is an easy place to establish despotic authority. Power isn’t seized – it’s given. We give people power over us. And in a state of chaos, all it takes is a few people to gain some power and they have a big advantage over everyone else. That’s why early leaders in new ecosystems tend to become unbeatable very quickly.

Also, power clears the path for action. In a world with no power, little gets done at all. We are better off with large companies that have the power to organise themselves around a goal than trying to achieve the same goal with a collection of individuals; try making a Boeing with an equivalent group of artisans, and you’ll see what I mean. Artisans form guilds and companies to increase their reach and impact. Individual volunteers join professional institutions to get more effective: consider the impact of handing out food yourself, versus helping sustain a network of soup kitchens, even in the purely non-profit world. Having some clout on your side is nothing to sniff at, even if you have purely philanthropic goals.

Power and innovation

If you have all the power already, there’s no spur to innovate. So kingdoms stagnate, eventually.

But power makes space for good things, too. It’s the powerful (and rich) who fund the arts in most societies. Innovation needs breathing space; companies with economic power can incubate new ideas to the point where they become productive.

Too much competition can thus limit innovation: look how difficult it has been for the Windows-based PC manufacturers, who live in a brutally competitive world and have little margin, to innovate. They are trapped between a highly efficient parts supply ecosystem, which feeds them all the same stuff at the same price, and a consumer market that requires them all to provide PC’s which run the same stuff the same way. As a result, they have little power, little margin, little innovation.

The trick is not to fear power itself, but instead, to shape, balance and channel it. You don’t want to aim for the absence of power, you want the Goldilocks effect of having “just enough”. And that was James Madison’s genius.

Verticals, competition and the balance of power

Of course, competition between rivals is the balance of power in business. We resent monopolies because they are either abusing their power, or stagnating.

In economics, we talk about “verticals” as the set of supply dependencies needed for a particular good. So, to make an aircraft, you need various things like engines and alloys, and those suppliers all feed the same pool of aircraft manufacturers.

In order to have a healthy ecosystem, you need a balance of power both between suppliers at the same level of the stack, and vertically, between the providers of parts and providers of the finished product. That’s because innovation needs both competition AND margin to stimulate and nurture it.

In the PC case, the low margins in the PC sector helped reinforce the Windows monopoly. Not only was there no competition for Microsoft, there was no ability for a supplier further down the chain to innovate around them. The only player in that ecosystem that had the margin to innovate was Microsoft, and since they faced no competition, there was little stimulus to turn their own R&D into products, no matter how much they spent on it.

Power in the FLOSS ecosystem: upstreams and distributions

So, where do we stand in the free software and open source ecosystem?

The lines between upstreams and distributions aren’t perfectly clear, of course. Simplistic versions of that picture are often used to prove points, but in fact, all the distributions are also in some sense upstreams, and even derivative distributions end up being leaders of those they derive from in some pieces or markets. Nevertheless, I think it’s worth looking at the balance of power between upstream projects and distributions, as it is today and as it could be.

Also, I think it’s worth looking at related parties, companies and institutions which work a lot with FLOSS but have orthogonal interests.

If one uses margin, or profit, as an indicator of power, it’s clear that the distributions today are in a far stronger position than most individual projects or upstreams. The vast majority of software-related revenue in the FLOSS ecosystem goes to distributions.

Within that segment, Red Hat claims 80% market share of paid Linux, a number that is probably accurate. Novell, the de facto #2, is in the midst of some transition, but indicators are that it continues to weaken. Oracle’s entry into the RHEL market has had at best marginal impact on RHEL economics (the substantial price rises in RHEL 6 are a fairly clear signal of the degree to which Red Hat believes it faces real competition). The existence of “unpaid RHEL” in the form of CentOS, as well as OEL, essentially strengthens the position of RHEL itself. Ubuntu and Debian have large combined levels of adoption, but low revenue.

So clearly, there is work to do just to balance power in the distribution market. And it will take work – historically, platforms tend towards monopoly, and in the absence of a definitive countervailing force that establishes strength outside the RHEL gravity well, that’s what we’ll have. But that’s not the most interesting piece. What’s more interesting is the dynamic between distributions and upstreams.

Today, most upstreams are weak. They have little institutional strength. It’s generally difficult to negotiate and do business with an upstream. In many cases, that’s by design – the teams behind a project are simply not interested, or they are explicitly non-profit, as in the case of the FSF, which makes them good leaders of specific values, but difficult to engage with commercially.

As a result, those who need to do business with open source go to distributions, even in cases where they really want to be focused on a particular component. This greatly amplifies the power of the distributions: they essentially are the commercial vehicles for ALL of open source. The weakness of individual upstreams turns into greater strength for distributions.

You can imagine that distributions like it that way, and it would be surprising to see a distribution, or company that backs a distribution, arguing for stronger upstreams. But that’s exactly the position I take: FLOSS needs stronger upstreams, and as a consequence, weaker distributions.

Stronger upstreams will result in more innovation in FLOSS than stronger distributions. Essentially, like Microsoft, a distribution receives cash for the whole platform and allocates it to specific areas of R&D. That means the number of good ideas that receive funding in our ecosystem, today, is dependent on the insights of a very few companies. Just as Microsoft invested a lot in R&D and yet seemed to fall behind, upstream innovation will be strangled if it’s totally dependent on cash flow via distributions.

It’s not just innovation that suffers when upstreams lack power, or economic leverage. It’s also the myriad things beyond code itself. When you have a company behind a project, they tend to take care of a lot more than just the code: QA, documentation, testing, promotion. It’s easy, as a developer, to undervalue those things, or to see them as competing for resources with the “real work” of code. But that competition is necessary, and they make a great contribution to the dynamism of the final product.

Consider the upstream projects which have been very successful over the long term. Qt and MySQL, for example, both had companies behind them that maintained strong leverage over the product. That leverage was often unpopular, but the result was products available to all of us under a free license that continued to grow in stature, quality and capability despite the ups and downs of the broader market, and without being too dependent on the roving spotlight of “coolness”, which tends to move quickly from project to project.

There are of course successful upstream projects which do not have such companies. The best example is probably the Linux kernel itself. However, those projects fall into a rather unusual category: they are critical to some large number of companies that make money in non-software ways, and those companies are thus forced to engage with the project and contribute. In the case of the kernel, hardware companies directly and indirectly underwrite the vast majority of the boring but critical work that, in other projects, would be covered by the sponsoring institution. And despite that, there are many gaps in the kernel. You don’t have to dig very hard to find comments from key participants bemoaning the lack of testing and documentation. Nevertheless, it gets by quite well under the circumstances.

But most ecosystems will have very few projects that are at such a confluence. Most upstream projects are the work of a few people, and the “coolness” spotlight shines on them only briefly, if at all. They need either long-term generosity from core contributors, or an institution to house and care for them, if they want to go the distance. The former rarely works for more than a few years.

Projects which depend on indirect interests, such as those sponsored by hardware companies, have another problem. Their sponsoring institutions are generally not passionate about software. They don’t really need or want to produce GREAT software. And if you look at the projects which get a lot of such contributions, that becomes very obvious. Compare the quality of apps from companies which deeply care about software with those which come from hardware companies, and you’ll see what I mean.

We FLOSS folk like to tell ourselves that the Windows hegemony was purely a result of the manipulations of its sponsor, and that FLOSS as we do it today is capable of doing much more if it only had a fair chance. I don’t think, having watched the success of iOS and Android as new ecosystems, that we can justify that position any longer. I think we have to be willing to think hard about what we are willing to change if we want to have the chance of building an ecosystem as strong, but around GNU/Linux. Since that’s my goal, I’m thinking very hard about that, and creatively. I think it’s possible, but not without challenging some sacred cows and figuring out which values we want to preserve and which we can remould.

Power is worth having in your ecosystem, despite its occasional abuse

There’s no doubt that power created will be abused. That’s true of a lot of important rights and powers. For example, we know that free speech is often abused, but we nevertheless value it highly in many societies that are also big contributors to FLOSS. You probably know the expression, “I disagree with what you are saying entirely, but I will defend to the death your right to say it”.

Similarly, in our ecosystem, power will be abused. But it’s still worth helping institutions acquire it, even those we dislike or distrust, or those we compete with. At Canonical, we’ve directly and indirectly helped lots of institutions that you could describe that way – Oracle, Novell, Red Hat, Intel and many others. The kneejerk reaction is usually “no way”, but upon deeper thought, we figured that it is better to have an ecosystem of stronger players, considering the scale of the battle with the non-FLOSS world.

I often find people saying “I would help an institution if I thought I could trust it”. And I think that’s a red herring, because just as power will be abused, trust will be abused too. If you believe that this is a battle of ecosystems and platforms, you want to have as many powerful competitors in your ecosystem as possible, even though you probably cannot trust any of them in the very long term. It’s the competition between them that really creates long-term stability, to come back to the thinking of James Madison. It’s pitting ambition against ambition, not finding angels, which makes that ecosystem a winner. If you care about databases, don’t try to weaken MySQL, because you need it to be strong. Rather, figure out how to strengthen PostgreSQL alongside it.

How Canonical fits in

Canonical is in an interesting position with regard to all of this. As a distribution, we could stay silent on the issue, and reasonably expect to grow in power over time, on the same basis that Red Hat has. And there are many voices in Canonical that say exactly that: don’t rock the boat, essentially.

However, perhaps unlike other Linux distributions, Canonical very much wants to see end users running free software, and not just IT professionals. That raises the bar dramatically in terms of the quality of the individual pieces. It means that it’s not good enough for us to work in an ecosystem which produces prototype or rough cut products, which we then aggregate and polish at the distribution level. Unlike those who have gone before, we don’t want to be the sole guarantor of quality in our ecosystem, because that will not scale.

For that reason, looking at the longer term, it’s very important to me that we figure out how to give more power to upstreams, so that they in turn can invest in producing components or works which have the completeness and quality that end-users expect. I enjoy working with strong commercial institutions in the open source ecosystem – while they always represent some competitive tension, they also represent the opportunity to help our ecosystem scale and out-compete the proprietary world. So I’d like to find ways to strengthen the companies that have products under free software, and encourage more that have proprietary projects to make them available under free licenses, even if that’s not the only way they publish them.

If you’ve read this far, you probably have a good idea where I’m going with this. But I have a few more steps before actually getting there. More soon.

Till then, I’m interested in how people think we can empower upstream projects to be stronger institutionally.

There are a couple of things that are obvious and yet don’t work. For example, lots of upstreams think they should form a non-profit institution to house their work. The track record of those is poor: they get set up, and they fail as soon as they have to file their annual paperwork, leaving folks like the SFLC to clean up the mess. Not cool. At the end of the day, such new institutions add paperwork without adding funding or other sources of energy. They don’t broaden out the project the same way a company writing documentation and selling services usually does. On the other hand, non-profits like the FSF which have critical mass are very important, which is why on occasion we’ve been happy to contribute to them in various ways.

Also, I’m interested in how we can reshape our attitudes to power. Today, the tenor of discussion in most FLOSS debates is simplistic: we fear power, and we attempt to squash it always, celebrating the individual. But that misses the point that we are merely strengthening power elsewhere: in distributions, in other ecosystems. We need a richer language for describing the “Goldilocks” balance of power, and how we can move beyond FUD.

So, what do you think we could do to create more Mozillas, more MySQLs, more Qts and more OpenStacks?

I’ll summarise interesting comments and threads in the next post.