Archive for the 'thoughts' Category

Be careful of headlines: they appeal to our sense of the obvious and the familiar, and they entrench rather than challenge established stereotypes and memes. What one doesn’t read about every day is usually more interesting than what’s in the headlines. And in the current round of global unease, what’s not being said – what we’ve failed to admit about our Western selves and our local allies – is central to the problems at hand.

Both Iraq and Ukraine, under Western tutelage, failed to create states which welcome diversity. Both Iraq and the Ukraine aggressively marginalised significant communities, with the full knowledge and in some cases support of their Western benefactors. And in both cases, those disenfranchised communities have rallied their cause into wars of aggression.

Reading the Western media one would think it’s clear who the aggressors are in both cases: Islamic State and Russia are “obvious bad actors” whose behaviour needs to be met with stern action. Russia clearly has no business arming rebels with guns they use irresponsibly to tragic effect, and the Islamic State are clearly “a barbaric, evil force”. If those gross simplifications, reinforced in the Western media, define our debate and discussion on the subject then we are destined to pursue some painful paths with little but frustration to show for the effort, and nasty thorns that fester indefinitely. If that sounds familiar it’s because yes, this is the same thing happening all over again. In a prior generation, only a decade ago, anger and frustration at 9/11 crowded out calm deliberation and a focus on the crimes in favour of shock and awe. Today, out of a lack of insight into the root cause of Ukrainian separatism and Islamic State’s attractiveness to a growing number across the Middle East and North Africa, we are about to compound our problems by slugging our way into a fight we should understand before we join.

This is in no way to say that the behaviour of Islamic State or Russia is acceptable in modern society. It is not. But we must take responsibility for our own behaviour first and foremost; time and history are the best judges of the behaviour of others.

In the case of the Ukraine, it’s important to know how miserable it has become for native Russian speakers born and raised in the Ukraine. People who have spent their entire lives as citizens of the Ukraine who happen to speak in Russian at home, at work, in church and at social events have found themselves discriminated against by official decree from Kiev. Friends of mine with family in Odessa tell me that there have been systematic attempts to undermine and disenfranchise Russian speakers in the Ukraine. “You may not speak in your home language in this school”. “This market can only be conducted in Ukrainian, not Russian”. It’s important to appreciate that being a Russian speaker in Ukraine doesn’t necessarily mean one is not perfectly happy to be a Ukrainian. It just means that the Ukraine is a culturally diverse nation and has been throughout our lifetimes. This is a classic story of discrimination. Friends of mine who grew up in parts of Greece tell a similar story about the Macedonian culture being suppressed – schools being forced to punish the Macedonian language being spoken on the playground.

What we need to recognise is that countries – nations – political structures – which adopt ethnic and cultural purity as a central idea, are dangerous breeding grounds for dissent, revolt and violence. It matters not if the government in question is an ally or a foe. Those lines get drawn and redrawn all the time (witness the dance currently under way to recruit Kurdish and Iranian assistance in dealing with IS, who would have thought!) based on marriages of convenience and hot button issues of the day. Turning a blind eye to thuggery and stupidity on the part of your allies is just as bad as making sure you’re hanging out with the cool kids on the playground even when they happen to be thugs and bullies – stupid and shameful short-sightedness.

In Iraq, the government installed and propped up with US money and materials (and the occasional slap on the back from Britain) took a pointedly sectarian approach to governance. People of particular religious communities were removed from positions of authority, disqualified from leadership, hunted and imprisoned and tortured. The US knew that leading figures in their Iraqi government were behaving in this way, but chose to continue supporting the government which protected these thugs because they were “our people”. That was a terrible mistake, because it is those very communities which have morphed into Islamic State.

The modern nation states we call Iraq and the Ukraine – both with borders drawn in our modern lifetimes – are intrinsically diverse, intrinsically complex, intrinsically multi-cultural parts of the world. We should know that a failure to create governments of that diversity, for that diversity, will result in murderous resentment. And yet, now that the lines for that resentment are drawn, we are quick to choose sides, precisely the wrong position to take.

What makes this so sad is that we know better and demand better for ourselves. The UK and the US are both countries who have diversity as a central tenet of their existence. Freedom of religion, freedom of expression, the right to a career and to leadership on the basis of competence rather than race or creed are major parts of our own identity. And yet we prop up states who take precisely the opposite approach, and wonder why they fail, again and again. We came to these values through blood and pain, we hold on to these values because we know first hand how miserable and how wasteful life becomes if we let human tribalism tear our communities apart. There are doors to universities in the UK on which have hung the bodies of religious dissidents, and we will never allow that to happen again at home, yet we prop up governments for whom that is the norm.

The Irish Troubles were a war nobody could win. They were resolved through dialogue. South African terrorism in the ’80s was a war nobody could win. It was resolved through dialogue and the establishment of a state for everybody. Time and time again, “terrorism” and “barbarism” are words used to describe fractious movements by secure, distant seats of power, and in most of those cases, allowing that language to dominate our thinking leads to wars that nobody can win.

Russia made a very grave error in arming Russian-speaking Ukrainian separatists. But unless the West holds Kiev to account for its governance, unless it demands an open society free of discrimination, the misery there will continue. IS will gain nothing but contempt from its demonstrations of murder – there is no glory in violence against the defenceless and the innocent – but unless the West bends its might to the establishment of societies in Syria and Iraq in which these religious groups are welcome and free to pursue their ambitions, murder will be the only outlet for their frustration. Politicians think they have a new “clean” way to exert force – drones and airstrikes without “boots on the ground”. Believe me, that’s false. Remote control warfare will come home to fester on our streets.

 

ACPI, firmware and your security

Monday, March 17th, 2014

ACPI comes from an era when the operating system was proprietary and couldn’t be changed by the hardware manufacturer.

We don’t live in that era any more.

However, we DO live in an era where any firmware code running on your phone, tablet, PC, TV, wifi router, washing machine, server, or the server running the cloud your SaaS app is running on, is a threat vector against you.

If you read the catalogue of spy tools and digital weaponry provided to us by Edward Snowden, you’ll see that firmware on your device is the NSA’s best friend. Your biggest mistake might be to assume that the NSA is the only institution abusing this position of trust – in fact, it’s reasonable to assume that all firmware is a cesspool of insecurity courtesy of incompetence of the worst degree from manufacturers, and competence of the highest degree from a very wide range of such agencies.

In ye olden days, a manufacturer would ship Windows, which could not be changed, and they wanted to innovate on the motherboard, so they used firmware to present a standard interface for things like power management to a platform that could not be modified to accommodate their innovation.

Today, that same manufacturer can innovate on the hardware and publish a patch for Linux to express that innovation – and Linux is almost certainly the platform that matters. If Windows enters this market then the Windows driver model can evolve to give manufacturers this same ability to innovate in the Windows world, where proprietary unverifiable blobs are the norm.

Arguing for ACPI on your next-generation device is arguing for a trojan horse of monumental proportions to be installed in your living room and in your data centre. I’ve been to Troy, there is not much left.

We’ve spent a good deal of time working towards a world where you can inspect the code that is running on any device you run. In Ubuntu we work hard to make sure that any issues in that code can be fixed and delivered right away to millions of users. Bruce Schneier wisely calls security a process, not a product. But the processes for finding and fixing problems in firmware are non-existent and not improving.

I would very much like to be part of FIXING the security problem we engineers have created in our rush to ship products in the olden days. I’m totally committed to that.

So from my perspective:

  • Upstream kernel is the place to deliver the software portion of the innovation you’re selling. We have great processes now to deliver that innovation to users, and the same processes help us improve security and efficiency too.
  • Declarative firmware that describes hardware linkages and dependencies but doesn’t include executable code is the best chance we have of real bottom-up security. The Linux device tree is a very good starting point. We have work to do to improve it, and we need to recognise the importance of being able to fix declarations over the life of a product, but we must not introduce blobs in order to shortcut that process. There’s an illustrative sketch of the declarative-versus-executable distinction just below this list.
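To make that distinction concrete, here is a purely illustrative sketch in Python (not any real firmware format, and the node and names are made up, loosely modelled on what a device tree node expresses): a declarative description is inert data the operating system can parse, validate and act on with its own reviewable drivers, whereas the ACPI model hands the kernel bytecode it is expected to execute but cannot meaningfully audit or fix in the field.

```python
from dataclasses import dataclass

# Declarative: plain data describing the hardware. The OS inspects it and binds
# its own (open, reviewable) driver; there is nothing here to "run", so nothing to hide in.
@dataclass
class DeviceNode:
    compatible: str   # which driver should bind
    reg: tuple        # (base address, size)
    interrupts: tuple # interrupt lines used

uart0 = DeviceNode(compatible="acme,uart-v2", reg=(0x10000000, 0x1000), interrupts=(5,))

def bind_driver(node: DeviceNode):
    # The kernel's own logic decides what to do with the declared facts.
    print(f"binding driver for {node.compatible} at {hex(node.reg[0])}")

bind_driver(uart0)

# Imperative (the ACPI model, caricatured): the platform ships opaque bytecode
# and the OS is expected to execute it with kernel privileges, sight unseen.
opaque_blob = b"\x5b\x82..."   # placeholder bytes; real AML is not inspectable like data
```

The point of the sketch is simply that the first form can be reviewed, diffed and fixed like any other data or code in the distribution, while the second form cannot.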

Let’s do this right. Each generation gets its turn to define the platforms it wants to pass on – let’s pass on something we can be proud of.

Our mission in Ubuntu is to give the world’s people a free platform they can trust.  I suspect a lot of the Linux community is motivated by the same goal regardless of their distro. That also means finding ways to ensure that those trustworthy platforms can’t be compromised elsewhere. We can help vendors innovate AND ensure that users have a fighting chance of privacy and security in this brave new world. But we can’t do that if we cling to the tools of the past. Don’t cave in to expediency. Design a better future, it really can be much healthier than the present if we care and act accordingly.

 

Mistakes made and addressed

Sunday, November 10th, 2013

Occasionally we make mistakes. When we do it’s appropriate to apologise, address them, and take steps to ensure they don’t happen again.

Last week, someone at Canonical made a mistake: in responding to a trademark issue, they sent the wrong response from the range of responses we usually take. That has been addressed, and steps are being taken to reduce the likelihood of a future repeat.

By way of background, there are a number of trademarks around the Ubuntu name and logo which we are required to “enforce” or risk losing them altogether. In normal companies, the rule is that nobody else gets to use your logo. In Canonical, we have a policy that says that there are lots of cases where people DO get to use our name and logo; this is because our policy takes the internet-friendly view that communities need to have rights to a name if they want to feel like they are part of something; we go even further and explicitly allow the use of our name for elements of satire and mirth around Ubuntu. Every country has different rules about trademarks and free speech; we have a global policy that is more generous by default than most jurisdictions require.

We do have to “enforce” those trademarks, or we lose them. That means:

  • we have an email address, trademarks@ubuntu.com, where people can request permission to use the name and logo
  • we actively monitor, mostly using standard services, use of the name and logo
  • we aim to ensure that every use of the name and logo is supported by a “license” or grant of permission

As you can imagine, that is a lot of work. A lot of what we find out there is fine, fun, harmless or constructive. Sometimes however it’s pretty nasty: we have had OEMs forging Ubuntu certifications to meet requirements for government tenders, for example.

In order to make the amount of correspondence manageable, we have a range of standard templates for correspondence. They range from the “we see you, what you are doing is fine, here is a license to use the name and logo which you need to have, no need for further correspondence”, through “please make sure you state you are speaking for yourself and not on behalf of the company or the product”, to the “please do not use the logo without permission, which we are not granting unless you actually certify those machines”, and “please do not use Ubuntu in that domain to pretend you are part of the project when you are not”.

Last week, the less-than-a-month-at-Canonical new guy sent out the toughest template letter to the folks behind a “sucks” site. Now, that was not a decision based on policy or guidance; as I said, Canonical’s trademark policy is unusually generous relative to corporate norms in explicitly allowing for this sort of usage. It was a mistake, and there is no question that the various people in the line of responsibility know and agree that it was a mistake. It was no different, however, than a bug in a line of code, which I think most developers would agree happens to the best of us. It just happened to be, in that analogy, a zero-day remote root bug.

The internets went wild, Wired picked a headline accusing Canonical of a campaign to suppress critics, Debian started arguing about whether it should remove all references to the distro-that-shall-not-be-named but then decided to argue about whether it should enforce its own trademarks which led to an argument about… oh never mind. The point is, people are judging Canonical over this, which is fine and correct in my view, because I am judging Canonical over this too.

Here’s how I’m judging Canonical. Your framework may vary, but I think this is quite a defensible one.

Judge the policy. In this case Canonical has a trademark policy that enables community members to use the marks (good) and allows for satire and sucks sites even in jurisdictions where the local law does not (great!). Failing to have a policy would not be a bonus point in this review :)

Judge the execution of the policy. Canonical does the work needed to maintain the marks; it monitors and responds to requests and notifications around the marks (good). In this case, the wrong action was taken – a new employee was clearly not properly briefed about policy and sensitivities in a key audience for the company (bad).

Judge the response to the incident. Within hours of the publication of a response to our letter, the CEO, COO and legal team reviewed the decision, corrected the action and addressed the matter publicly. I apologised the moment I was made aware of the incident. And I’m reassured that the team in question is taking steps in training and process to minimise the risk of a recurrence.

For those carrying pitchforks and torches on this issue, ask yourself if that would be appropriate to a bug in a line of code in one of many thousands of changes being made monthly by a large team. No? Think about it.

 

On another, more personal note, I made a mistake myself when I used the label “open source tea party” to refer to the vocal non-technical critics of work that Canonical does. That was unnecessary and quite possibly equally offensive to members of the real Tea Party (hi there!) and the people with vocal non-technical criticism of work that Canonical does (hello there!).

For the record, technical critique of open source software is part of what makes open source software so good. It is welcome and appreciated very much at Canonical; getting reviews and feedback and suggestions for improvement from smart people who care is part of why we enjoy writing open source software. There isn’t anything in what I said to suggest that I don’t welcome such technical feedback, but some assumed I was rejecting all feedback including technical commentary. I was not – I was talking about criticism of software which does not centre on the software itself, but rather on some combination of the motivations of the people who wrote it, or the particular free software license under which it is published, or the policies of the company, or the nationality of the company behind it. Unless critique is focused on improving the software in question it is pretty much a waste of the time of the people who are trying to improve it. That waste of time is what I had in mind with the comment; nevertheless, it was a thoughtless use of an irrelevant label. Please accept my apologies if you have been a vocal non-technical critic of Canonical’s software and felt offended by the label.

Ubuntu in 2013

Wednesday, December 26th, 2012

This is a time of year to ponder what matters most and choose what we’ll focus on in the year to come. Each of us has our own priorities and perspective, so your goals may be very different to mine. Nevertheless, for everyone in the Ubuntu project, here’s what I’ll be working towards in the coming year, and why.

First, what matters most?

It matters that we not exclude people from our audience. From the artist making scenes for the next blockbuster, to the person who needs a safe way to surf the web once a day, it’s important to me, and to the wider Ubuntu community, that people be able to derive some benefit from our efforts. Some of that benefit might be oblique – when someone prefers XFCE to Unity, they are still benefiting from enormous efforts by hundreds of people to make the core Ubuntu platform, as well as the Xubuntu team’s unique flourish. Even in the rare case where the gift is received ungraciously, the joy is in the giving, and it matters that our efforts paid dividends for others.

In this sense, it matters most that we bring the benefits of free software to an audience which would not previously have had the confidence to be different. If you’ve been arguing over software licenses for the best part of 15 years then you would probably be fine with whatever came before Ubuntu. And perhaps the thing you really need is the ability to share your insights and experience with all the people in your life who wouldn’t previously have been able to relate to the things you care about. So we have that interest in common.

It matters that we make a platform which can be USED by anybody. That’s why we’ve invested so much into research and thinking about how people use their software, what kinds of tools they need handy access to, and what the future looks like. We know that there are plenty of smart people whose needs are well served by what existed in the past. We continue to maintain older versions of Ubuntu so that they can enjoy those tools on a stable platform. But we want to shape the future, which means exploring territory that is unfamiliar, uncertain and easy to criticise. And in this regard, we know, scientifically, that Ubuntu with Unity is better than anything else out there. That’s not to diminish the works of others, or the opinions of those that prefer something else; it’s to celebrate that the world of free software now has a face that will be friendly to anybody you care to recommend it to.

It also matters that we be relevant for the kinds of computing that people want to do every day.

That’s why Unity in 2013 will be all about mobile – bringing Ubuntu to phones and tablets. Shaping Unity to provide the things we’ve learned are most important across all form factors, beautifully. Broadening the Ubuntu community to include mobile developers who need new tools and frameworks to create mobile software. Defining new form factors that enable new kinds of work and play altogether. Bringing clearly into focus the driving forces that have shaped our new desktop into one facet of a bigger gem.

It’s also why we’ll push deeper into the cloud, making it even easier, faster and more cost-effective to scale out modern infrastructure on the cloud of your choice, or create clouds for your own consumption and commerce. Whether you’re building out a big data cluster or a super-scaled storage solution, you’ll get it done faster on Ubuntu than any other platform, thanks to the amazing work of our cloud community. Whatever your UI of choice, having the same core tools and libraries from your phone to your desktop to your server and your cloud instances makes life infinitely easier. Consider it a gift from all of us at Ubuntu.

There will always be things that we differ on between ourselves, and those who want to define themselves by their differences to us on particular points. We can’t help them every time, or convince them of our integrity when it doesn’t suit their world view. What we can do is step back and look at that backdrop: the biggest community in free software, totally global, diverse in their needs and interests, but united in a desire to make it possible for anybody to get a high quality computing experience that is first class in every sense. Wow. Thank you. That’s why I’ll devote most of my time and energy to bringing that vision to fruition. Here’s to a great 2013.

Holistic UI is smarter UX

Tuesday, March 27th, 2012

In the open source community, we celebrate having pieces that “do one thing well”, with lots of orthogonal tools compounding to give great flexibility. But that same philosophy leads to shortcomings on the GUI / UX front, where we want all the pieces to be aware of each other in a deeper way.

For example, we consciously place the notifications in the top right of the screen, avoiding space that is particularly precious (like new tab titles, and search boxes). But the indicators are also in the top right, and they make menus, which drop down into the same space a notification might occupy.

Since we know that notifications are queued, no notification is guaranteed to be displayed instantly, so a smarter notification experience would stay out of the way while you were using indicator menus, or get out of the way when you invoke them. The design story of Ayatana, where we balance the need for focus with the need for awareness, would suggest that we should suppress awareness-oriented things in favour of focus things. So when you’re interacting with an indicator menu, we shouldn’t pop up the notification. Since the notification system, and the indicator menu system, are separate parts, the UNIX philosophy sells us short in designing a smart, smooth experience because it says they should each do their thing individually.

Going further, it’s silly that the sound menu next/previous track buttons pop up a notification, because the same menu shows the new track immediately anyway. So the notification, which is purely for background awareness, is distracting from your focus, which is conveying exactly the same information!

But it’s not just the system menus. Apps can play in that space too, and we could be better about shaping the relationship between them. For example, if I’m moving the mouse around in the area of a notification, we should be willing to defer it a few seconds to stay out of the focus. When I stop moving the mouse, or typing in a window in that region, then it’s OK to pop up the notification.
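As a concrete illustration of that deferral idea, here is a toy sketch, not the actual notify-osd or indicator code: queued notifications are simply held back while the pointer is active in the area a bubble would occupy, and shown once that area has been quiet for a couple of seconds. The class names and the timing are invented for the example.

```python
import time

DEFER_SECONDS = 2.0

class NotificationQueue:
    def __init__(self):
        self.pending = []          # notifications waiting to be shown
        self.last_activity = 0.0   # last time the pointer touched the bubble area

    def pointer_in_bubble_area(self, now):
        # Called by the shell whenever the pointer moves over the notification region.
        self.last_activity = now

    def push(self, message):
        self.pending.append(message)

    def maybe_show(self, now):
        # Awareness yields to focus: only surface a bubble once the area is quiet.
        if self.pending and now - self.last_activity >= DEFER_SECONDS:
            return self.pending.pop(0)
        return None

queue = NotificationQueue()
queue.push("New mail from Alice")
queue.pointer_in_bubble_area(time.monotonic())   # the user is working in that corner
print(queue.maybe_show(time.monotonic()))        # None: deferred, stays out of the way
time.sleep(DEFER_SECONDS)
print(queue.maybe_show(time.monotonic()))        # shown once the user has moved on
```

Nothing clever is going on; the point is that the queue needs to know about pointer activity that today belongs to a different component.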

It’s only by looking at the whole that we can design great experiences. And only by building a community of both system and application developers who care about the whole can we make those designs real. So, thank you to all of you who approach things this way; we’ve made huge progress, and hopefully there are some ideas here for low-hanging improvements too :)

Government use of Ubuntu

Thursday, March 8th, 2012

Governments are making increasingly effective use of Ubuntu in large-scale projects, from big data to little schools. There is growing confidence in open source in government quarters, and growing sophistication in how they engage with it.

But adopting open source is not just about replacing one kind of part with another. Open source is not just a substitute for shrink-wrapped proprietary software. It’s much more malleable in the hands of industry and users, and you can engage with it very differently as a result. I’m interested in hearing from thought leaders in the civil service on ways they think governments could get much more value with open source, by embracing that flexibility. For example, rather than one-size-fits-all software, why can’t we deliver custom versions of Ubuntu for different regions or countries or even departments and purposes? Could we enable the city government of Frankfurt to order PCs with the Ubuntu German Edition pre-installed?

Or could we go further, and enable those governments to participate in the definition and production and certification process? So rather than having to certify exactly the same bits which everyone else is using, they could create a flavour which is still “certified Ubuntu” and fully compatible with the whole Ubuntu ecosystem, can still be ordered pre-installed from global providers like Dell and Lenovo, but has the locally-certified collection of software, customizations, and certifications layered on top?

If we expand our thinking beyond “replacing what went before”, how could we make it possible for the PC companies to deliver much more relevant offerings, and better value to governments by virtue of free software? Most of the industry processes and pipelines were set up with brittle, fixed, proprietary software in mind. But we’re now in a position to drive change, if there’s a better way to do it, and customers to demand it.

So, for a limited time only, you can reach me at governator@canonical.com (there were just too many cultural references there to resist, and it’s not a mailbox that will be needed again soon ;). If you are in the public service, or focused on the way governments and civic institutions can use open source beyond simply ordering large numbers of machines at a lower cost, drop me a note and let’s strike up a conversation.

Here are a few seed thoughts for exploration and consideration.

Local or national Ubuntu editions, certified and pre-installed by global brands

Lots of governments now buy PCs from the world market with Ubuntu pre-installed. Several Canadian tenders have been won by companies bidding with Ubuntu pre-installed on PCs. The same is true in Brazil and Argentina, in China and India and Spain and Germany. We’re seeing countries or provinces that previously had their own-brand local Linux, which they had to build locally and install manually, shifting towards pre-order with Ubuntu.

In part, this is possible because the big PC brands have built up enough experience and confidence working with Canonical and Ubuntu to be able to respond to those tenders. You can call up Dell or Lenovo and order tens of thousands of laptops or desktops with Ubuntu pre-installed, and they will show up on time, certified. The other brands are following. It has been a lot of work to reach that point, but we’ve got the factory processes all working smoothly from Shenzhen to Taipei. If you want tens of thousands of units, it all works well.

But Ubuntu, or free software in general, is not Windows. You shouldn’t have to accept the one-size-fits-all story. We saw all of those local editions, or “national linux”, precisely because of the desire that regions have to build something that really suits them well. And Ubuntu, with its diversity of packages, open culture and remix-friendly licensing is a very good place to start. Many of the Spanish regional distros, for example, are based on Ubuntu. They have the advantage of being shaped to suit local needs better than we can with vanilla Ubuntu, but the disadvantage of being hard to certify with major ISVs or IHVs.

I’m interested in figuring out how we can formalise that flexibility, so that we can get the best of both worlds: local customizations and preferences expressed in a compatible way with the rest of the Ubuntu ecosystem, so they can take advantage of all the software and skills and certifications that the ecosystem and brand bring. And so they can order it pre-installed from any major global PC company, no problem, and upgrade to the next version of Ubuntu without losing all the customization work that they did.

Security certifications by local agencies, with policy frameworks and updates

A European defence force has recently adopted Ubuntu widely as part of an agility-enhancing strategy that gives soldiers and office workers secure desktop capabilities from remote locations like… home, or out in the field. There’s some really quite sexy innovation there, but there’s also Ubuntu as we know and love it. In the process of doing the work, it emerged that their government has certified some specific versions of key apps like OpenVPN, and it would be very useful to them if they could ensure that those versions were the ones in use widely throughout the government.

Of course, today, that means manually installing the right version every time, and tracking updates. But Ubuntu could do that work, if it knew enough about the requirements and the policies, and there was a secure way to keep those policies up to date. Could we make the operating system responsive to such policies, even where it isn’t directly managed by some central infrastructure? If Ubuntu “knows” that it’s supposed to behave in a particular way, can we make it do much of the work itself?
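To make that idea slightly more concrete, here is a minimal sketch of what a policy-aware check could look like on a Debian-based system, assuming a hypothetical policy format (the package and version shown are placeholders). It compares the installed version against a certified one and emits an apt pin so that ordinary updates converge on the certified version; a real implementation would fetch signed policy from the certifying agency rather than hard-coding a dict, and would install the pin with appropriate privileges.

```python
import subprocess

# Hypothetical policy: package -> certified version pattern (placeholder values).
POLICY = {"openvpn": "2.3.2*"}

def installed_version(package):
    try:
        out = subprocess.check_output(
            ["dpkg-query", "-W", "-f", "${Version}", package])
        return out.decode().strip()
    except subprocess.CalledProcessError:
        return None   # not installed

def apt_pin(package, version_pattern):
    # Dropped into /etc/apt/preferences.d/ (as root), a pin with priority > 1000
    # makes apt converge on the certified version, downgrading if necessary.
    return (f"Package: {package}\n"
            f"Pin: version {version_pattern}\n"
            f"Pin-Priority: 1001\n")

for package, certified in POLICY.items():
    print(f"{package}: installed={installed_version(package)} certified={certified}")
    print(apt_pin(package, certified))
```

The interesting part is not the pin itself but where the policy comes from and how it is authenticated and refreshed over the life of the deployment.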

The same idea is useful in an organizational setting, too. And the key question is whether we can do that, while still retaining both access to the wider Ubuntu ecosystem, and compatibility with factory processes, so these machines could be ordered and arrive pre-installed and ready to go.

Local cultural customization

On a less securocratic note, the idea of Ubuntu being tailored to local culture is very appealing. Every region or community has its news sites, its languages, its preferred apps and protocols and conventions. Can we expand the design and definition of the Ubuntu experience so that it adapts naturally to those norms in a way much richer and more meaningful than we can with Windows today?

What would the key areas of customisation be? Who would we trust to define them? How would we combine the diversity of our LoCo communities with the leadership of Ubuntu and the formality of government or regional authorities? Would we *want* to do that? It’s a very interesting topic, because the value of having officially recognised platforms is just about on a par with the value of having agile, crowdsourced and community-driven customisation. Nevertheless, could we find a model whereby governments or civil groups could underwrite the creation of recognised editions of Ubuntu that adapt themselves to local cultural norms? Would we get a better experience for human beings if we did that?

Local skills development

Many of the “national linux” efforts focus on building small teams of engineers and designers and translators that are tasked with bringing a local flavour to the technology or content in the distro. We have contributors from almost (perhaps actually?) every country, and we have Canonical members in nearly 40 countries. Could those two threads weave together in an interesting way? I’m often struck, when I meet those teams, by the awkwardness of teams that feel like start-ups, working inside government departments – it’s never seemed an ideal fit for either party.

Sometimes the teams are very domain focused; one such local-Linux project is almost entirely staffed by teachers, because the genesis of the initiative was in school computing, and they have done well for that purpose.

But could we bring those two threads together? The Ubuntu-is-distributed-already and the local-teams-hired-to-focus-on-local-work threads seem highly complementary; could we create teams which are skilled in distro development work, managed as part of the broader Ubuntu effort, but tasked with local priorities?

Public investments in sector leadership

Savvy governments are starting to ensure that research and development that they fund is made available under open licenses. Whether that’s open content licensing, or open source licensing, or RAND-Z terms, there’s a sensible view that information or tools paid for with public money should be accessible to that public on terms that let them innovate further or build businesses or do analysis of their own.

Some of that investment turns out to be software. For example, governments might prioritise genomics, or automotive, or aerospace, and along the way they might commission chunks of software that are relevant. How could we make that software instantly available to anybody running the relevant local flavour of Ubuntu? Would we do the same with content? How do we do that without delivering Newspeak to the desktop? Are there existing bodies of software which could be open sourced, but they don’t have a natural home, they’re essentially stuck on people’s hard drives or tapes?

 

There are multiple factors driving the move of public institutions to open source – mainly the recognition, after many years, of the quality and flexibility that an open platform provides. Austerity is another source of motivation to change. But participation, the fact that open source can be steered and shaped to suit the needs of those who use it simply through participating in open projects, hasn’t yet been fully explored. Food for thought.

And there’s much more to explore. If this is interesting to you, and you’re in a position to participate in building something that would actually get used in such a context, then please get in touch. Directly via The Governator, or via my office.

Cloudy prognosis for mainframes

Monday, October 24th, 2011

The death of the mainframe is about as elusive as the year of the Linux desktop. But cloud computing might finally present a terminal opportunity, so to speak, to those stalwarts of big business computing, by providing a compelling answer to the twin stories of reliability and throughput that have always been highlights of the big iron pitch.

Advocates of big iron talk about reliability. But with public clouds, we’re learning how to build services that achieve very high levels of reliability despite having low individual node reliability. It doesn’t matter if a single node in the cloud fails – cloud-style architectures route around that damage and keep the overall service available. Just as we dial storage reliability up or down by designing RAID arrays for the right balance of performance and resilience to failure, you can dial service reliability up or down in the cloud by allowing for redundancy. That comes at a price, of course, but the price of an extra 9 is substantially lower when you tackle it cloud-style than when you try and achieve it on a single piece of hardware.
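The arithmetic behind that “extra 9” is worth spelling out. Assuming independent node failures and ideal failover (both generous assumptions, so treat this as an upper bound rather than a guarantee), the service is only down when every redundant node is down at once:

```python
def service_availability(node_availability, redundant_nodes):
    # The service fails only if all redundant nodes fail simultaneously.
    return 1 - (1 - node_availability) ** redundant_nodes

for n in (1, 2, 3):
    print(f"{n} node(s): {service_availability(0.99, n):.4%}")
# 1 node(s): 99.0000%
# 2 node(s): 99.9900%
# 3 node(s): 99.9999%
```

Each additional cheap node buys roughly two more nines under these assumptions, and that is the economics a single hardened machine struggles to match.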

The other big strength of big iron was always throughput. Customers will pay for it, so mainframe vendors were always happy to oblige them. But again, it’s hard to beat the throughput of a Hadoop cluster, and even harder to scale the throughput of a mainframe as cost-effectively as one can scale a private cloud infrastructure underneath Hadoop.

I’m not suggesting insurance companies will throw away their mainframes. They’re working, they’re paid for, so they’ll stick around. But the rapid adoption of cloud-based architectures is going to make it very difficult to consolidate future IT onto mainframes (something that happened in every prior generation) and is also going to reduce the incentive for doing so in the first place. After 20 years of imminent irrelevance, there’s finally a real reason to think their time is up.

Innovation and OpenStack: Lessons from HTTP

Thursday, September 8th, 2011

OpenStack is facing an important choice: does it define a new set of APIs, one of many such efforts in cloud infrastructure, or does it build around the existing AWS APIs? So far, OpenStack has had it both ways, with some new API work and also some AWS-based effort. I’m writing to make the case for a tighter definition of mission around the de facto standard infrastructure APIs of EC2, S3 and a few other elements of AWS.

What prompted this blog was my overhearing (or, seeing an email on a list) the statement that cloud infrastructure projects like OpenStack, Eucalyptus and others should “innovate at the level of the API and infrastructure concepts”. I’m of the view that any projects which try to do so will fail and are not worth spending your or my time on. They are going to be about as successful as projects that try to reinvent HTTP to make it better/faster/cleaner/whatever. Which is to say – not successful at all, because no new protocol with the same conceptual goals will match the ecosystem that exists today around HTTP. There will of course be protocol innovation, the last word is never written, but for the web, it’s a done deal. All the proprietary and ad-hoc things that preceded HTTP have died, and good riddance. Similarly, cloud infrastructure will converge around a standard API which will be imperfect but real. Innovation is all about how that API is implemented, not which API it is.

Nobody would say the web server market lacks innovation. There are many, many different companies and communities that make and market web server solutions. And each of those is innovating in some way – focusing on a different audience, or trying a different approach. Yet that entire market is constrained by a public standard: HTTP, which evolves far more slowly than the products that implement it.

There are also a huge number of things that wrap themselves around HTTP, from cache accelerators to 3G content compressors; the standardisation of that thin layer has created a massive ecosystem and driven fantastic innovation, even as many of the core concepts that drove HTTP’s initial design have eroded or softened. For example, HTTP was relentlessly stateless, but we’ve added cookies and caching to address issues caused by that (at the time radical) design constraint.

Today, cloud infrastructure is looking for its HTTP. I think that standard already exists in de facto form today at AWS, with EC2, S3 and some of the credential mechanisms being essentially the core primitives of cloud infrastructure management. There is enormous room for innovation in cloud infrastructure *implementations*, even within the constraints of that minimalist API. The hackers and funders and leaders and advocates of OpenStack, and any number of other cloud infrastructure projects both open source and proprietary, would be better off figuring out how to leverage that standardisation than trying to compete with it, simply because no other API is likely to gain the sort of ecosystem we see around AWS today.

It’s true that those APIs would be better defined in a clean, independent forum analogous to the W3C than inside the boiler-room of development at any single cloud provider, but that’s a secondary issue. And over time, it can be engineered to work that way.

More importantly for the moment, those who make an authentic effort to fit into the AWS protocol standard immediately gain access to chunks of the AWS gene pool, effectively gratis. From services like RightScale to tools like ElasticFox, your cloud is going to be more familiar, more effective and more potent if it can ease the barriers to porting from AWS. No two implementations will magically Just Work, but the rough edges and gotchas can get ironed out much more easily if there is a clear standard and reference implementations. So the cost of “porting” will always be lower between clouds that have commonality by design or heritage.
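As a sketch of what that gene pool looks like in practice, the same EC2-speaking client code can be pointed at AWS or at any EC2-compatible private cloud simply by changing the connection details. This uses the boto library of that era; the endpoint, port, path and credentials below are placeholders rather than a real deployment.

```python
from boto.ec2.connection import EC2Connection
from boto.ec2.regioninfo import RegionInfo

# Point standard EC2 tooling at a private, EC2-compatible cloud instead of AWS.
region = RegionInfo(name="my-private-cloud", endpoint="cloud.example.com")
conn = EC2Connection(
    aws_access_key_id="YOUR-ACCESS-KEY",
    aws_secret_access_key="YOUR-SECRET-KEY",
    is_secure=False,
    region=region,
    port=8773,                 # placeholder: wherever the EC2-compatible API listens
    path="/services/Cloud",
)

# Exactly the same calls a script written against AWS would make.
for reservation in conn.get_all_instances():
    for instance in reservation.instances:
        print(instance.id, instance.state)
```

The less of that script you have to change when you move clouds, the lower the cost of porting, which is the whole point of converging on the de facto standard.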

For OpenStack itself, until that standard is codified, I would describe the most successful mission statement as “to be the reference public-cloud-provider-scale implementation of cloud infrastructure compatible with AWS core APIs”. That’s going to give all the public cloud providers who want to compete with Amazon the best result: they’ll be able to compete on service terms, while convincing early adopters that the move to their offering will be relatively painless. All it takes, really, is some humility and the wisdom to recognise the right place to innovate.

There will be many implementations of those core APIs. One or other will be the Apache, the “just start here” option. But it doesn’t matter so much which one that is, frankly. I think OpenStack has the best possible chance to be that, but only if they stick to this crisp mission and don’t allow themselves to be drawn into front-end differentiation for the sake of it. Should that happen, OpenStack will be vulnerable to another open source project which credibly aims to achieve the goals outlined here. Very vulnerable. Witness the ways in which Eucalyptus is rightly pointing out its superior AWS compatibility in comparison with OpenStack.

For the public cloud providers that hope to build on OpenStack, API differentiation is poison in a juicy steak. It looks tasty, but it’s going to cost you the race prematurely. There were lots of technical reasons why alternatives to Windows were *better*; they just failed to become de facto standards. As long as Amazon doesn’t package up AWS as an on-premise solution, it’s possible to establish a de facto standard around something else, but that something else (perhaps OpenStack) needs to be AWS-compatible in some meaningful way to get enough momentum to matter. That means there’s a window of opportunity to get this right, which is not going to stay open indefinitely. Either Amazon, or another open source project, could close that window on OpenStack’s fingers. And that would be a pity, since the community around OpenStack has tons of energy and goodwill. In order to succeed, it will need to channel that energy into innovation on the implementation, not on trying to redefine an existing standard.

Of course, all this would be much easier if there were a real HTTP-like standard defining those APIs. The web had the enormous advantage of being founded by Tim Berners-Lee, in an institution like CERN, with the vision to set up the W3C. In the case of today’s cloud infrastructure, there isn’t the same dynamic or set of motivations. Amazon’s position of vagueness on the AWS APIs is tactically perfect for them right now, and I would expect them to maintain that line while knowing full well there is no real proprietary claim in a public network API, and no real advantage to be had from claiming otherwise. What’s needed is simply to start codifying existing practice as a draft standard in a credible forum of experts, with a roadmap and the prospect of support from multiple vendors. I think that would be relatively easy to arrange, if we could get Rackspace, IBM and HP to sit down and commit to doing it. We already have HP and Rackspace at the OpenStack table, so the signs are encouraging.

A good standard would:

* be pragmatic about the fact that Amazon has already made a bunch of decisions we’ll live with for ever.
* have a commitment from folk like OpenStack and Eucalyptus to aim for compliance
* include a real automated functional test suite that becomes the interop benchmark of choice over time (a sketch of one such check follows this list)
* be open to participation by Amazon, though that will not likely come for some time
* be well documented and well managed, like HTTP and CSS and HTML
* not be run by the ITU or ISO
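For what it’s worth, such a functional suite could be as unglamorous as a pile of checks like the sketch below, run against each cloud’s endpoint. The image id, credentials and endpoint are placeholders here, and a real benchmark would of course cover far more of the API surface than one smoke test.

```python
import time
from boto.ec2.connection import EC2Connection
from boto.ec2.regioninfo import RegionInfo

ENDPOINT = RegionInfo(name="cloud-under-test", endpoint="cloud.example.com")
IMAGE_ID = "ami-00000001"   # placeholder image registered on the cloud under test

def test_run_and_terminate_instance():
    conn = EC2Connection("ACCESS-KEY", "SECRET-KEY", is_secure=False,
                         region=ENDPOINT, port=8773, path="/services/Cloud")
    reservation = conn.run_instances(IMAGE_ID, instance_type="m1.small")
    instance = reservation.instances[0]
    try:
        # An EC2-compatible cloud should move the instance to "running".
        for _ in range(60):
            instance.update()
            if instance.state == "running":
                break
            time.sleep(5)
        assert instance.state == "running"
    finally:
        conn.terminate_instances([instance.id])
```

The value of publishing something like this is less in the individual assertions than in everyone agreeing to be measured by the same ones.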

I’m quite willing to contribute resources to getting such a standard off the ground. Forget big consortiums or working groups or processes or lobbying forums, what’s needed are a few savvy folk who know AWS, Eucalyptus and OpenStack, together with a very few technical writers. Let me know if you’re interested.

Now, I started out by saying that I was writing to make the case for OpenStack to be focused on a particular area. It’s a bit cheeky for me to write anything of the sort, of course, because OpenStack is a well run project that has an excellent steering group, which recently held a poll of contributors to appoint some new members, none of whom was me. I’ve every confidence in the leadership of the project, despite the tremendous pressure they are under to realise the hopes of so many diverse users and companies. I’m optimistic for the potential OpenStack has to accelerate cloud technology, and in Canonical we put a considerable amount of effort into making OpenStack deployment a smooth experience for Ubuntu users and Canonical customers. Ubuntu Cloud Infrastructure depends now on OpenStack. And I have a few old friends who are also leaders in the OpenStack community, so for all those reasons I thought it worth making this perspective public.

The responsibilities of ownership

Friday, July 22nd, 2011

In the open source community we make a very big deal about the rights of ownership. But what about the responsibilities?

Any asset comes with attendant costs, risks and responsibilities. And anybody who doesn’t take those seriously is a poor steward of the asset.

In the physical world, we know this very well. If you own a house, there are taxes to pay every year, there will be some bills for energy and maintenance, and there’s paperwork to fill out. I was rudely reminded of this when I got an SMS at 2am this morning, care of British Gas, helpfully reminding me to settle up the gas bill for a tenant of mine. If we fail to take care of these responsibilities, we’re at risk of having the asset degraded or taken away from our care. An abandoned building will eventually be condemned and demolished rather than staying around as a health hazard. A car which has not been tested and licensed cannot legally be driven on public roads. In short, ownership comes with a certain amount of work, and that work has to be handled well.

In the intellectual and digital world, things are a little different. There isn’t an obvious lawn to trim or wall to paint. But there are still responsibilities. For example, trademarks need to be defended or they are deemed to be lost. Questions need to be answered. Healthy projects grow and adapt over time in a dynamic world; change is inevitable and needs to be accommodated.

Maintaining a piece of free software is a non-trivial effort. The rest of the stack is continuously changing – compilers change, dependencies change, conventions change. The responsibility for maintenance should not be shirked, if you want your project to stay relevant and useful. But maintainership is very often the responsibility of “core” developers, not light contributors. Casual contributors who have scratched their own itch or met a work obligation by writing a patch often give, as a reason for the contribution, their desire to have that maintenance burden carried by the project, and not by themselves.

When a maintainer adds a patch to a work, they are also accepting responsibility for its maintenance, unless they have some special circumstance, like the patch is a plugin and essentially maintained by the contributor. For general cases, adding the patch is like mixing paint – it adds to the general body of maintenance in a way that cannot easily be undone or compartmentalised.

And owning an asset can create real liabilities. For example, in some countries, if you own a house and someone slips on the stairs, you can be held liable. If you own a car and it’s being borrowed, and the brakes fail, you can be held liable. In the case of code, accepting a patch implies, like it or not, accepting some liability for that patch. Whether it turns out to be a real liability, or just a contingent one, is something only time will tell. But ownership requires defence in the case of an attack, and that can be expensive, even if it turns out the attack is baseless.

So, one of the reasons I’m happy to donate (fully and irreversibly) a patch to a maintainer, and why Canonical generally does assign patches to upstreams who ask for it, is that I think the rights and responsibilities of ownership should be matched. If I want someone else to handle the work – the responsibility – of maintenance, then I’m quite happy for them to carry the rights as well. That only seems balanced. In the common case, that maintenance turns out to be as much work as the original crafting of the patch, and frankly, it’s the “boring work” part, while the fun part was solving the problem immediately at hand.

Of course, there are uncommon cases too.

One of the legendary fights over code ownership, between Sun and Novell, revolved around a plugin for OpenOffice that did some very cool stuff. Sun ended up re-creating that work because Novell would not give it to Sun. Frankly, I think Sun was silly. The plugin was a whole work, that served a coherent purpose all by itself. Novell had designed and implemented that component, and was perfectly willing and motivated to maintain it. In that case, it makes sense to me that Sun should have been willing to make space for Novell’s great work, leaving it as Novell’s. Instead, they ended up redoing that work, and lots of people felt hard done by. But that’s an uncommon case. The more usual scenario is that a contribution enhances the core, but is not in itself valuable without the rest of the code in the project being there.

Of course, “value” is relative. A patch that only applies against an existing codebase still has value in its ability to teach others how to do that kind of work. And it has value as art – you can put it on a t-shirt, or a wall.

But contributing – really contributing, actually donating – a patch to a maintainer doesn’t have to reduce those kinds of value to the original creator. I consider it best practice that a donation be matched by a wide license back. In other words, if I give my patch to the maintainer, it’s nice if they grant me a full set of rights back. While it does a bad job with many other things, the Canonical contribution agreement does this: when you make a contribution under it, you get a wide license back. So that way, the creator retains all the useful rights, including the ability to publish, relicense, sell, or make a t-shirt, without also carrying all the responsibilities that go with ownership.

So a well-done contribution agreement can make clear who carries which responsibilities, and not materially diminish the rights of those who contribute. And a well-done policy of contribution would recognise that there are uncommon cases, where a contribution is in fact a whole piece in itself, and not require donation of that whole piece in order to be part of an aggregate whole.

What about cases where there is no clear maintainer or owner?

Well, outside of the world of copyright and code, we do have some models to refer to. For example, companies issue shares to their shareholders, reflecting their relative contribution and therefore relative shared ownership in the company. Those companies end up with diverse owners, each of whom is going to have their own opinions, preferences, ideals and constraints.

We would never think to require consensus on every decision of the board, or the company, among all shareholders. That would be unworkable – in fact, much of the apparatus of corporate governance exists specifically to give voice to the desires of shareholders while at the same time keeping institutions functional. That’s not to say that shareholders don’t get abused – there are enough case studies of management taking advantage of their position to fill a long and morbidly interesting book. Rules on corporate governance, and especially the protection of minority interests in companies, as well as the state of the art of constructing shareholder agreements to achieve the same goals, are constantly evolving. But at the end of the day, decisions need to be taken which are binding on the company and thus binding on the shareholders. The rights of ownership extend to the right to show up and be represented, and to participate in the discussion, and usually a vote of some sort. Thereafter, the decision is taken and (usually) the majority will carries.

In our absolutist mentality, we tend to think that a single line of code, or a single small patch, carries the same weight as the rest of a coherent codebase. It’s easy to feel that way: when a small patch is shared, but not donated, the creator retains sole ownership of that patch. So in theory, any change in the state of the whole must require the agreement of every owner. This is more than theory – it’s law in many places.

But in practice, that approach has not withstood any hard tests.

There are multiple cases where huge bodies of work, composed of the aggregate “patches” of many different owners, have been relicensed. Mozilla, the Ubuntu wiki, and I think even Wikipedia have all gone through public processes to figure out how to move the license of an aggregate work to something that the project leadership considered more appropriate.

I’d be willing to bet that, if some fatal legal flaw were discovered in the GPLv2, Linus would lead a process of review and discussion and debate about what to do about the Linux kernel, it would be testy and contentious, but in the end he would take a decision and most would follow to a new and better license. Personally, I’d be an advocate of GPLv3, but in the end it’s well known that I’m not a big shareholder in that particular company, so to speak, so I wouldn’t expect to have any say ;-) Those who did not want to follow would resign themselves to having their contributions replaced, and most would not bother to turn up for the meeting, giving tacit assent.

So our pedantic view that every line of code is sacred just would not hold up to real-world pressure. Projects have GOT to respond to major changes in the world around them. It would be unwise to loan a patch to a project in the belief that the project will never, under any circumstances, take a decision that is different to your personal views. Life’s just not like that. Change is inevitable, and we’re all only going to be thrilled about some subset of that change.

And that’s as it should be. Clinging to something small that’s part of someone else’s life and livelihood just isn’t healthy. It’s better either to commit to a reasonable shared ownership approach, which involves being willing to show up at meetings, contribute to maintenance and accept the will of the majority on major moves that might be unpalatable anyway, or to make a true gift that comes with no strings attached.

Sometimes I see people saying they are happy to make a donation as long as it has some constraints associated with it.

There was a family in SA that lived under weird circumstances for generations because a wealthy ancestor specified that they had to do that if they wanted access to their inheritance. It’s called “ruling from the grave”, and it’s really quite antisocial. Either you give someone what you’re giving them, and accept that they will carry those rights and responsibilities wisely and well, or you don’t give it to them at all. You’re not going to be around after your will is executed, and it’s impossible to anticipate everything that might happen. It’s exceedingly uncool, in my view, to leave people stuck.

It’s difficult to predict, in 50 or 100 years time, what the definition of “openness” will be, and who will have the definition that you might most agree with. In the short term we all have favourites, but every five or ten years the world changes and that precipitates a new round of definitions, licenses, concepts. Consider GPLv2 and GPLv3, where there turned out to be a real need to address new challenges in the way free software is being used. Or the Franklin Street Declaration, on web services. Despite having options like AGPL around, there still isn’t any real consensus on how best to handle those scenarios.

One can pick institutions, but institutions change too. Go back and look at the historical positions of any long-term political party, like the UK Whigs, and you’ll be amazed at how a group can shift their positions over a succession of leaders. I have complete trust in the FSF today, but little idea what they’ll be up to in 100 years’ time. That’s no insult to the FSF, it’s just a lesson I’ve learned from looking at the patterns of behaviour of institutions over the long term. It’s the same for the OSI or Creative Commons or any other political or ideological or corporate group. People move on, people die, times change, priorities shift, economics ebb and flow, affiliations and alliances and competition shift the terrain to the point that today’s liberal group are tomorrow’s conservatives or the other way around.

So, if one is going to attach strings to a donation, what does one do? Pick a particular license? No current license will remain perfectly relevant, useful or attractive indefinitely. Pick an institution? No institution is free of human dynamics in the long term.

If there’s a natural place to put the patch, it’s with the code it patches. And usually, that means with the institution that is the anchor tenant, for better or worse. And yes, that creates real rights which can be really abused, or at least used in ways that one would not choose for one’s own projects.

And that brings us to the toughest issue. How should we feel about the fact that a company which owns a codebase can create both proprietary and open products from that code?

And the “grave” scenario really is an issue in the case of copyright, too. When people have discussed changes to codebases that have been around for any length of time, it’s a sad but real likelihood that some contributors have died and left no provision for how their work is to be managed. More often than not, the estate in question isn’t sufficiently well funded to cover the cost of resolving the legal questions involved.

The first time I was asked to sign a contribution agreement on behalf of Canonical, it was for a competitor, and I declined. That night, it preyed on my conscience. We had the benefit of a substantial amount of work from this competitor, and yet I had refused to give our own modest contribution back to them *properly*. I frankly felt terrible. The next day I signed the agreement, and made it our policy that we will sign such agreements regardless of what we think about the company itself. We’ve now done so for almost all our competitors, and I feel good about it.

That’s the magical thing about creation and ownership: it creates the possibility for generosity. You can’t really give something you don’t own; when you do own it and give it away, you’ve made a genuine contribution. A gift is different from a loan. It imposes no strings, it empowers the recipient and it frees the giver of the responsibilities of ownership. We tend to think that solving our own problems to produce a patch which is interesting and useful to us is the generosity. It isn’t. The opportunity for generosity comes thereafter. And in our ecosystem, generosity is important. It’s at the heart of the Ubuntu ethic, and it matters even between competitors, because the competitors outside our ecosystem are impossible to beat if we are not supportive of one another.

Fantastic engineering management is…

Tuesday, July 12th, 2011

I’m going to write a series of posts on different career tracks in software engineering and design over the next few months. This is the first of ‘em; I don’t have a timeline for the rest, but will get to them all in due course, and am happy to take requests in the comments ;-)

Recently, I wrote up two Canonical engineering management job descriptions – one for those managing a team of software engineers directly, another for folk coordinating the work of groups of teams – software engineering directors. In both cases, the emphasis is on organisation, social coordination, roadmap planning and inter-team connectivity, and not in any way on engineering prowess. Defining some of these roles for one of my teams got me motivated to blog about the things that make for truly great management, as opposed to other kinds of engineering leadership.

The art of software engineering management is so different from software engineering that it should be an entirely separate career track, with equal kudos and remuneration available on either path.

This is because developing, and managing developers, are at opposite ends of the interrupt scale. Great engineering depends on deep, uninterrupted focus. But great management is all about handling interrupts efficiently so that engineers don’t have to. Companies need to recognise that difference and create career paths on both sides of that scale, rather than expecting folk to leap from one end to the other. It’s crazy to think that someone who loves deep, focused thought should have to become a multithreaded, interrupt-driven manager to advance their career.

Very occasionally someone is both a fantastic developer and a fantastic manager, but that’s the exception rather than the rule. In recognition of that, we should design our teams to work well without depending on a miracle each and every time we put one together.

Great engineering managers are like coaches – they get their deepest thrill from seeing a team perform at the top of its game, not from performing vicariously. They understand that they are not going to be on the field between the starting and finishing whistles. They understand that there will be decisions to be taken on the field that the players will have to make for themselves, and that their job is to prepare the team physically and mentally for the game, rather than to try to play from the sidelines. A great coach isn’t trying to steer the movement of the ball during the game; she’s making notes about the coaching and team selections needed between this match and the next. A terrible coach is a player who won’t let go of the game, wants to be out there in the thick of it, and loses themselves in the details of the game itself.

An engineering manager is an organiser and a mentor and a coach, not a veteran star player. They need to love winning, and love the sport, and know that they help most by making the team into a winning team. The way they get code written is by making an environment which is conducive to that; the way they create quality is by fostering a passion for quality and making space in the schedule and the team for work which serves only that goal.

When I’m hiring a manager, I look for people who love to keep other people productive. That means handling all the productivity killers in an engineering team: hiring and firing, inter-team meetings, customer presentations, reporting up and out and sideways, planning, travel coordination, conferences, expenses… all those things which we don’t want engineers to spend much time on or have rattling around at the back of their minds. It also means caring about people, and being that gregarious and nosy type of person who knows what everyone is doing, and why, and also what’s going on outside the workplace.

An engineering manager is doing well if every single member of their team can answer these questions, all the time:

* what are my key goals, in order of importance, in this cycle?
* what are the key delivery dates, in this cycle?
* how am I doing, generally? and what is the company view of my strengths and not-so-strengths?
* how do I fit into the team, who are my counterparts, and how do I complement them?

Also, the manager is doing well if they know, for each member of their team:

* what personal stresses or other circumstances might be a distraction for them,
* what the interpersonal dynamics are between that member, and other counterparts or team members,
* what that member’s best contributions are, and strongest interests outside of the assigned goals

For the team as a whole, the manager should know:

* what the team is good at, and weak at, and what their plan is to bolster what needs bolstering,
* what the cycle looks like, in terms of goals and progress against them
* what the next cycle is shaping up to look like, and how that fits with long term goals

Really great management makes a company a joy to work in as a developer. It’s something we should celebrate and cultivate, teach and select for, not just treat as the natural upward path for people who have been around a while. If you truly love technology, there are lots of careers that take you to the top of the tech game without having to move into management. And conversely, if you love organising and leading, it’s possible to get started on a management career in software without being the world’s greatest coder first. If you think that’s you, you’ll love being an engineering manager at Canonical.