What if your cloud instances could be updated with the same certainty and precision as your mobile phone – with carrier-grade assurance that an update applies perfectly or is not applied at all? What if your apps could be isolated from one another completely, so there’s no possibility that installing one app could break another, and with stronger assurance that a compromise of one app won’t compromise the data from another? When we set out to build the Ubuntu Phone we took on the challenge of raising the bar for reliability and security in the mobile market. And today that same technology is coming to the cloud, in the form of a new “snappy” image called Ubuntu Core, which is in beta today on Azure and as a KVM image you can run on any Linux machine.

This is in a sense the biggest break with tradition in 10 years of Ubuntu, because snappy Ubuntu Core doesn’t use debs or apt-get. We call it “snappy” because that’s the new bullet-proof mechanism for app delivery and system updates; it’s completely different to the traditional package-based Ubuntu server and desktop. The snappy system keeps each part of Ubuntu in a separate, read-only file, and does the same for each application. That way, developers can deliver everything they need to be confident their app will work exactly as they intend, and we can take steps to keep the various apps isolated from one another, and ensure that updates are always perfect. Of course, that means that apt-get won’t work, but that’s OK since developers can reuse debs to make their snappy apps, and the core system is exactly the same as any other Ubuntu system – server or desktop.

Whenever we make a fix to packages in Ubuntu, we’ll publish the same fix to Ubuntu Core, and systems can get that fix transactionally. In fact, updates to Ubuntu Core are even smaller than package updates because we only need to send the precise difference between the old and new versions, not the whole package. Of course, Ubuntu Core is in addition to all the current members of the Ubuntu family – desktop, server, and cloud images that use apt-get and debs, and all the many *buntu remixes which bring their particular shine to our community. You still get all the Ubuntu you like, and there’s a new snappy Core image on all the clouds for the sort of deployment where precision, specialism and security are the top priority.

This is the biggest new thing in Ubuntu since we committed to deliver a mobile phone platform, and it’s very delicious that it’s born of exactly the same amazing technology that we’ve been perfecting for these last three years. I love it when two completely different efforts find underlying commonalities, and it’s wonderful to me that the work we’ve done for the phone, where carriers and consumers are the audience, might turn out to be so useful in the cloud, which is all about back-end infrastructure.

Why is this so interesting?

Transactional updates have lots of useful properties: if they are done well, you can know EXACTLY what’s running on a particular system, and you can coordinate updates with very high precision across thousands of instances in the cloud. You can run systems as canaries, getting updates ahead of other identical systems to see if they cause unexpected problems. You can roll updates back, because each version is a complete, independent image. That’s very nice indeed.

There have been interesting developments in the transactional systems field over the past few years. ChromeOS is updated transactionally: when you turn it on, it makes sure it’s running the latest version of the OS. CoreOS brought aspects of ChromeOS and Gentoo to the cloud, Red Hat has a beta of Atomic as a transactional version of RHEL, and of course Docker is a way of delivering apps transactionally too (it combines app and system files very neatly). Ubuntu Core raises the bar for certainty, extensibility and security in the transactional systems game. What I love about Ubuntu Core is the way it embraces transactional updates not just for the base system but for applications on top of the system as well. The system is just one layer that can be updated transactionally, and so is each of the apps on the system. You get an extensible platform that retains the lovely properties of transactionality but lets you choose exactly the capabilities you want for yourself, rather than having someone else force you to use a particular tool.

For example, in CoreOS, things like Fleet are built in; you can’t opt out. In Ubuntu Core, we aim for a much smaller Core, and then enable you to install Docker or any other container system as a framework, with snappy. We’re working with all the different container vendors, app systems and container coordination systems to help them make snappy versions of their tools. That way, you get the transactional semantics you want with the freedom to use whichever tools suit you. And the whole thing is smaller and more secure because we baked fewer assumptions into the core.
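For instance, installing a container framework is meant to be a one-liner, with the result showing up in the system manifest. A sketch – the output shown here is illustrative:

$ sudo snappy install docker
$ snappy info
release: ubuntu-core/devel
frameworks: docker
apps: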

The snappy system is also designed to provide security guarantees across diverse environments. Because there is a single repository of frameworks and packages, and each of them has a digital fingerprint that cannot be faked, two people on opposite ends of the world can compare their systems and know that they are running exactly the same versions of the system and apps. Atomic might allow you to roll back, but it’s virtually impossible to customise the system for your own preferences rather than Red Hat’s, and still know you are running the same secure bits as anybody else.

Developers of snappy apps get much more freedom to bundle the exact versions of libraries that they want to use with their apps. It’s much easier to make a snappy package than a traditional Ubuntu package – just bundle up everything you want in one place, and ship it. We use strong application isolation to keep data confidential between apps. If you install a bad app, it only has access to the data you create with that app, not to data from other applications. This is a key piece of security that comes from our efforts to bring Ubuntu to the mobile market, where malware is a real problem today. And as a result, we can enable developers to go much faster – they can publish their app on whatever schedule suits them, regardless of the Ubuntu release cadence. Want the very latest app? Snappy makes that easy.
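To make “bundle it up and ship it” concrete, here’s roughly what building a snappy package looks like. This is a sketch: the directory layout, manifest fields and generated filename are illustrative assumptions, not a spec:

$ ls myapp/
bin/  lib/  meta/
$ cat myapp/meta/package.yaml
name: myapp
version: 1.0
vendor: Jane Dev <jane@example.com>
$ snappy build myapp/
$ sudo snappy install myapp_1.0_amd64.snap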

This is also why I think snappy will result in much simpler systems management. Instead of having literally thousands of packages on your Ubuntu server, with tons of dependencies, a snappy system just has a single package for each actual app or framework that’s installed. I bet the average system on the cloud ends up with about three packages installed, total! Try this sort of output:

$ snappy info
release: ubuntu-core/devel
frameworks: docker, panamax
apps: owncloud

That’s much easier to manage and reason about at scale. We recently saw how complicated things can get in the old packaging system, when ownCloud upstream wanted to remove the original packages of ownCloud from an old Ubuntu release. With snappy Ubuntu, ownCloud can publish exactly what they want you to use as a snappy package, and can update that for you directly, in a safe transactional manner with full support for rolling back. I think upstream developers are going to love being in complete control of their app on snappy Ubuntu Core.

$ sudo snappy install hello-world

Welcome to a snappy new world!

Things here are really nice and simple:

$ snappy info
$ snappy build .
$ snappy install foo
$ snappy update foo
$ snappy rollback foo
$ snappy remove foo
$ snappy update-versions
$ snappy versions
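Because every version is a complete, independent read-only image, update and rollback are symmetrical operations. A hypothetical session (the comments describe the intended semantics):

$ sudo snappy update foo      # fetch just the delta, apply it atomically
$ snappy versions             # the previous version of foo is still on disk
$ sudo snappy rollback foo    # switch back to the prior version, transactionally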

Just for fun, download the image and have a play. I’m delighted that Ubuntu Core is today’s QEMU Advent Calendar image too! Or launch it on Azure – it’s coming soon to all the other clouds.
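If you grab the KVM image, something along these lines should get you in; the image filename and default user are assumptions, and the port forward just makes ssh reachable from the host:

$ kvm -m 512 -net nic -net user,hostfwd=tcp::8022-:22 ubuntu-core.img
$ ssh -p 8022 ubuntu@localhost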

It’s important for Ubuntu to continue to find new ways to bring free software to a wider audience. The way people think about software is changing, and I think Ubuntu Core becomes a very useful tool for people doing stuff at huge scale in the cloud. If you want crisp, purposeful, tightly locked-down systems that are secure by design, Ubuntu Core and snappy packages are the right tool for the job. Running Docker farms? Running transcode farms? I think you’ll like this very much!

We have the world’s biggest free software community because we find ways to recognise all kinds of contributions and to support people helping one another to bring their ideas to fruition. One of the goals of snappy was to reduce the overhead and bureaucracy of packaging software, to make it incredibly easy for anybody to publish code they care about to other Ubuntu users. We have built a great community of developers using this toolchain for the phone, and I think it’s going to be even better on the cloud, where Ubuntu is already so popular. There is a lot to do in making the most of existing debs in the snappy environment, and I’m excited that there is a load of amazing software on GitHub that can now flow more easily to Ubuntu users on any cloud.

Welcome to the family, Ubuntu Core!

OpenStack on a diet, redux

Saturday, November 8th, 2014

Subhu writes that OpenStack’s blossoming project list comes at a cost to quality. I’d like to follow up with an even leaner approach based on an outline drafted during the OpenStack Core discussions after ODS Hong Kong, a year ago.

The key ideas in that draft are:

Only call services “core” if the user can detect them.

How the cloud is deployed or operated makes no difference to a user. We want app developers to be able to rely on “core” services everywhere, and they can only rely on what they can detect and use from inside the cloud.

Define both “core” and “common” services, but require only “core” services for a cloud that calls itself OpenStack compatible.

Separation of core and common lets us recognise common practice today, while also acknowledging that many ideas we’ve had in the past year or three are just 1.0 iterations; we don’t know which of them will stick any more than one could predict which services on any major public cloud will thrive and which will vanish over time. Signalling that something is “core” means it is something we commit to keeping around a long time. Signalling that something is “common” means it’s widespread practice for it to be available in an OpenStack environment, but not a requirement.

Require that “common” services can be self-deployed.

Just as you can install a library or a binary in your home directory, you can run services for yourself in a cloud. Services do not have to be provided by the cloud infrastructure provider; they can usually be run by a user themselves, under their own account, as a series of VMs providing network services. Making it a requirement that users can self-provide a service before designating it common means that users can build on it; if a particular cloud doesn’t offer it, their users can self-provide it. All this means is that the common service itself builds on core services, though it might also depend on other common services which could be self-deployed in advance of it.

Require that “common” services have a public integration test suite that can be run by any user of a cloud to evaluate conformance of a particular implementation of the service.

For example, a user might point the test suite at HP Cloud to verify that the common service there actually conforms to the service test standard. Alternatively, the user who self-provides a common service in a cloud which does not provide it can verify that their self-deployed common service is functioning correctly. This also serves to expand the test suite for the core: we can self-deploy common services and run their test suites to exercise the core more thoroughly than Tempest could.

Keep the whole set as small as possible.

We know that small is beautiful; small is cleaner, leaner, more comprehensible, more secure, easier to test, likely to be more efficiently implemented, and more likely to attract developer participation. In general, if something can be cut from the core specification it should be. “Common” should reflect common practice and can be arbitrarily large, and also arbitrarily changed.

In the light of those ideas, I would designate the following items from Subhu’s list as core OpenStack services:

  • Keystone (without identity, nothing)
  • Nova (the basis for any other service is the ability to run processes somewhere)
    • Glance (hard to use Nova without it)
  • Neutron (where those services run)
    • Designate (DNS is a core aspect of the network)
  • Cinder (where they persist data)

I would consider these to be common OpenStack services:

  • Swift (widely deployed, can be self-provisioned with Cinder block backends)
  • Ceph RADOS-GW object storage (widely deployed as an implementation choice, common because it could be self-provided on Cinder block)
  • Horizon (widely deployed, but we want to encourage innovation in the dashboard)

And these I would consider neither core nor common, though some of them are clearly on track there:

  • Barbican (not widely implemented)
  • Ceilometer (internal implementation detail, can’t be common because it requires access to other parts)
  • Juju (not widely implemented)
  • Kite (not widely implemented)
  • Heat (on track to become common if it can be self-deployed; besides, I eat controversy for breakfast)
  • MAAS (who cares how the cloud was built?)
  • Manila (not widely implemented, possibly core once solid, otherwise common once, err, common)
  • Sahara (not widely implemented, weird that we would want to hardcode one way of doing this in the project)
  • Triple-O (user doesn’t care how the cloud was deployed)
  • Trove (not widely implemented, might make it to “common” if widely deployed)
  • Tuskar (see Ironic)
  • Zaqar (not widely implemented)

In the current DefCore discussions, the “layer” idea has been introduced. My concern is simple: how many layers make sense? End users don’t want to have to figure out what lots of layers mean. If we had “OpenStack HPC” and “OpenStack Scientific” and “OpenStack Genomics” layers, that would just be confusing. Let’s keep it simple – use “common” as a layer, but be explicit that it will change to reflect common practice (of course, anything in common is self-reinforcing in that new players will defer to norms and implement common services, thereby entrenching common unless new ideas make services obsolete).

V is for Vivid

Monday, October 20th, 2014

Release week! Already! I wouldn’t call Trusty ‘vintage’ just yet, but Utopic is poised to leap into the torrent stream. We’ve all managed to land our final touches to *buntu and are excited to bring the next wave of newness to users around the world. Glad to see the unicorn theme went down well, judging from the various desktops I see on G+.

And so it’s time to open the vatic floodgates and invite your thoughts and contributions to our soon-to-be-opened iteration next. Our ventrous quest to put GNU as you love it on phones is bearing fruit, with final touches to the first image in a new era of convergence in computing. From tiny devices to personal computers of all shapes and sizes to the ventose vistas of cloud computing, our goal is to make a platform that is useful, versal and widely used.

Who would have thought – a phone! Each year in Ubuntu brings something new. It is a privilege to celebrate our tenth anniversary milestone with such vernal efforts. New ecosystems are born all the time, and it’s vital that we refresh and renew our thinking and our product in vibrant ways. That we have the chance to do so is testament to the role Linux at large is playing in modern computing, and the breadth of vision in our virtual team.

To our fledgling phone developer community, for all your votive contributions and vocal participation, thank you! Let’s not be vaunty: we have a lot to do yet, but my oh my what we’ve made together feels fantastic. You are the vigorous vanguard, the verecund visionaries and our venerable mates in this adventure. Thank you again.

This verbose tract is a venial vanity, a chance to vector verbal vibes, a map of verdant hills to be climbed in months ahead. Amongst those peaks I expect we’ll find new ways to bring secure, free and fabulous opportunities for both developers and users. This is a time when every electronic thing can be an Internet thing, and that’s a chance for us to bring our platform, with its security and its long term support, to a vast and important field. In a world where almost any device can be smart, and also subverted, our shared efforts to make trusted and trustworthy systems might find fertile ground. So our goal this next cycle is to show the way past a simple Internet of things, to a world of Internet things-you-can-trust.

In my favourite places, the smartest thing around is a particular kind of monkey. Vexatious at times, volant and vogie at others, a vervet gets in anywhere and delights in teasing cats and dogs alike. As the upstart monkey in this business I can think of no better mascot. And so let’s launch our vicenary cycle, our verist varlet, the Vivid Vervet!

The South African Supreme Court of Appeal today found in my favour in a case about exchange controls. I will put the returned funds of R250m plus interest into a trust, to underwrite constitutional court cases on behalf of those whose circumstances deny them the ability to be heard where the counterparty is the State. Here is a statement in full:

Exchange controls may appear to be targeted at a very small number of South Africans but their consequences are significant for all of us: especially those who are building relationships across Southern Africa such as migrant workers and small businesses seeking to participate in the growth of our continent. It is more expensive to work across South African borders than almost anywhere else on Earth, purely because the framework of exchange controls creates a cartel of banks authorized to act as the agents of the Reserve Bank in currency matters.

We all pay a very high price for that cartel, and derive no real benefit in currency stability or security for that cost.

Banks profit from exchange controls, but our economy is stifled, and the most vulnerable suffer most of all. Everything you buy is more expensive, South Africans are less globally competitive, and cross-border labourers, already vulnerable, pay the highest price of all – a shame we should work to address. The IMF found that “A study in South Africa found that the comparative cost of an international transfer of 250 rand was the lowest when it went through a friend or a taxi driver and the highest when it went through a bank.” The World Bank found that “remittance fees punish poor Africans”. South Africa scores worst of all, and according to the Payments Association of South Africa and the Reserve Bank, this is “…mostly related to the regulations that South African financial institutions needed to comply with, such as the Financial Intelligence Centre Act (Fica) and exchange-control regulations.”

Today’s ruling by the Supreme Court of Appeal found administrative and procedural fault with the Reserve Bank’s actions in regards to me, and returned the fees levied, for which I am grateful. This case, however, was not filed solely in pursuit of relief for me personally. We are now considering the continuation of the case in the Constitutional Court, to challenge exchange control on constitutional grounds and ensure that the benefits of today’s ruling accrue to all South Africans.

This is a time in our history when it will be increasingly important to defend constitutional rights. Historically, these are largely questions related to the balance of power between the state and the individual. For all the eloquence of our Constitution, it will be of little benefit to us all if it cannot be made binding on our government. It is expensive to litigate at the constitutional level, which means that such cases are imbalanced – the State has the resources to make its argument, but the individual often does not.

For that reason, I will commit the funds returned to me today by the SCA to a trust run by veteran and retired constitutional scholars, judges and lawyers, that will selectively fund cases on behalf of those unable to do so themselves, where the counterparty is the state. The mandate of this trust will extend beyond South African borders, to address constitutional rights for African citizens at large, on the grounds that our future in South Africa is in every way part of that great continent.

This case is largely thanks to the team of constitutional lawyers who framed their arguments long before meeting me; I have been happy to play the role of model plaintiff and to underwrite the work, but it is their determination to correct this glaring flaw in South African government policy which inspired me to support them.

For that reason I will ask them to lead the establishment of this new trust and would like to thank them for their commitment to the principles on which our democracy is founded.

This case also has a very strong personal element for me, because it is exchange controls which make it impossible for me to pursue the work I am most interested in from within South Africa and which thus forced me to emigrate years ago. I pursue this case in the hope that the next generation of South Africans who want to build small but global operations will be able to do so without leaving the country. In our modern, connected world, and our modern connected country, that is the right outcome for all South Africans.

Mark

“The Internet sees censorship as damage and routes around it” was a very motivating tagline during my early forays into the internet. Having grown up in Apartheid-era South Africa, where government control suppressed the free flow of ideas and information, I was inspired by the idea of connecting with people all over the world to explore the cutting edge of science and technology. Today, people connect with peers and fellow explorers all over the world not just for science but also for arts, culture, friendship, relationships and more. The Internet is the glue that is turning us into a super-organism, for better or worse. And yes, there are dark sides to that easy exchange – internet comments alone will make you cry. But we should remember that the brain is smart even if individual brain cells are dumb, and negative, nasty elements on the Internet are just part of a healthy whole. There’s no Department of Morals I would trust to weed ‘em out or protect me or mine from them.

Today, the pendulum is swinging back to government control of speech, most notably on the net. First, it became clear that total surveillance is the norm even amongst Western democratic governments (“Total Information Awareness” reborn). Now we hear the UK government wants to be able to ban organisations without any evidence of involvement in illegal activities because they might “poison young minds”. Well, nonsense. Frustrated young minds will go off to Syria precisely BECAUSE they feel their avenues for discourse and debate are being shut down by an unfair and unrepresentative government – you couldn’t ask for a more compelling motivation for the next generation of home-grown anti-Western jihadists than to clamp down on discussion without recourse to due process. And yet, at the same time this is happening in the UK, protesters in Hong Kong are moving to peer-to-peer mechanisms to organise their protests precisely because of central control of the flow of information.

One of the reasons I picked the certificate and security business back in the 1990s was because I wanted to be part of letting people communicate privately and securely, for business and pleasure. I’m saddened now at the extent to which the promise of that security has been undermined by state pressure and bad actors in the business of trust.

So I think it’s time that those of us who invest time, effort and money in the underpinnings of technology focus attention on the defensibility of the core freedoms at the heart of the internet.

There are many efforts to fix this under way. The IETF is slowly becoming more conscious of the ways in which ideals can be undermined, and of the central role it can play in setting standards which are robust in the face of such inevitable pressure. But we can do more, and I’m writing now to invite applications for Fellowships at the Shuttleworth Foundation by leaders that are focused on these problems. TSF already has Fellows working on privacy in personal communications; we are interested in generalising that to the foundations of all communications. We already have a range of applications in this regard, and I would welcome more. And I’d like to call attention to the Edgenet effort (distributing network capabilities, based on ZeroMQ), which is holding a sprint in Brussels October 30–31.

20 years ago, “Clipper” (a proposed mandatory US government back door, supported by the NSA) died on the vine thanks to a concerted effort by industry to show the risks inherent to such schemes. For two decades we’ve had the tide on the side of those who believe it’s more important for individuals and companies to be able to protect information than it is for security agencies to be able to monitor it. I’m glad that today, you are more likely to get into trouble if you don’t encrypt sensitive information in transit on your laptop than if you do. I believe that’s the right side to fight for and the right side for all of our security in the long term, too. But with mandatory back doors back on the table we can take nothing for granted – regulatory regimes can and do change, as often for the worse as for the better. If you care about these issues, please take action of one form or another.

Law enforcement is important. There are huge dividends to a society in which people can make long-term plans, which depends on their confidence in security and safety as much as their confidence in economic fairness and opportunity. But the agencies in whom we place this authority are human and tend over time, like any institution, to be more forceful in defending their own existence and privileges than they are in providing for the needs of others. There has never been an institution in history which has managed to avoid this cycle. For that reason, it’s important to ensure that law enforcement is done by due process; there are no short cuts which will not be abused sooner rather than later. Checks and balances are more important than knee-jerk responses to the last attack. Every society, even today’s modern Western society, is prone to abusive governance. We should fear our own darknesses more than we fear others.

A fair society is one where laws are clear and crimes are punished in a way that is deemed fair. It is not one where thinking about crime is criminal, or one where talking about things that are unpalatable is criminal, or one where everybody is notionally protected from the arbitrary and the capricious. Over the past 20 years life has become safer, not more risky, for people living in an Internet-connected West. That’s no thanks to the listeners; it’s thanks to living in a period when the youth (the source of most trouble in the world) feel they have access to opportunity and ideas on a world-wide basis. We are pretty much certain to have hard challenges ahead in that regard. So for all the scaremongering about Chinese cyber-espionage and Russian cyber-warfare and criminal activity in darknets, we are better off keeping the Internet as a free-flowing and confidential medium than we are entrusting an agency with the job of monitoring us for inappropriate and dangerous ideas. And that’s something we’ll have to work for.

Cloud Foundry for the Ubuntu community?

Monday, September 29th, 2014

Quick question – we have Cloud Foundry in private beta now, is there anyone in the Ubuntu community who would like to use a Cloud Foundry instance if we were to operate that for Ubuntu members?

Be careful of headlines: they appeal to our sense of the obvious and the familiar, and they entrench rather than challenge established stereotypes and memes. What one doesn’t read about every day is usually more interesting than what’s in the headlines. And in the current round of global unease, what’s not being said – what we’ve failed to admit about our Western selves and our local allies – is central to the problems at hand.

Both Iraq and Ukraine, under Western tutelage, failed to create states which welcome diversity. Both aggressively marginalised significant communities, with the full knowledge and in some cases support of their Western benefactors. And in both cases, those disenfranchised communities have rallied their cause into wars of aggression.

Reading the Western media, one would think it’s clear who the aggressors are in both cases: Islamic State and Russia are “obvious bad actors” whose behaviour needs to be met with stern action. Russia clearly has no business arming rebels with guns they use irresponsibly to tragic effect, and the Islamic State are clearly “a barbaric, evil force”. If those gross simplifications, reinforced in the Western media, define our debate and discussion on the subject, then we are destined to pursue some painful paths with little but frustration to show for the effort, and nasty thorns that fester indefinitely. If that sounds familiar, it’s because yes, this is the same thing happening all over again. In a prior generation, only a decade ago, anger and frustration at 9/11 crowded out calm deliberation and a focus on the crimes in favour of shock and awe. Today, out of a lack of insight into the root cause of Ukrainian separatism and Islamic State’s attractiveness to a growing number across the Middle East and North Africa, we are about to compound our problems by slugging our way into a fight we should understand before we join.

This is in no way to say that the behaviour of Islamic State or Russia are acceptable in modern society. They are not. But we must take responsibility for our own behaviour first and foremost; time and history are the best judges of the behaviour of others.

In the case of the Ukraine, it’s important to know how miserable it has become for native Russian speakers born and raised in the Ukraine. People who have spent their entire lives as citizens of the Ukraine, who happen to speak Russian at home, at work, in church and at social events, have found themselves discriminated against by official decree from Kiev. Friends of mine with family in Odessa tell me that there have been systematic attempts to undermine and disenfranchise Russian speakers in the Ukraine. “You may not speak in your home language in this school.” “This market can only be conducted in Ukrainian, not Russian.” It’s important to appreciate that being a Russian speaker in Ukraine doesn’t necessarily mean one is not perfectly happy to be a Ukrainian. It just means that the Ukraine is a diverse cultural nation and has been throughout our lifetimes. This is a classic story of discrimination. Friends of mine who grew up in parts of Greece tell a similar story about the Macedonian culture being suppressed – schools being forced to punish Macedonian language spoken on the playground.

What we need to recognise is that countries – nations – political structures – which adopt ethnic and cultural purity as a central idea are dangerous breeding grounds for dissent, revolt and violence. It matters not if the government in question is an ally or a foe. Those lines get drawn and redrawn all the time (witness the dance currently under way to recruit Kurdish and Iranian assistance in dealing with IS, who would have thought!) based on marriages of convenience and hot-button issues of the day. Turning a blind eye to thuggery and stupidity on the part of your allies is just as bad as making sure you’re hanging with the cool kids on the playground even if it happens that they are thugs and bullies – stupid and shameful short-sightedness.

In Iraq, the government installed and propped up with US money and materials (and the occasional slap on the back from Britain) took a pointedly sectarian approach to governance. People of particular religious communities were removed from positions of authority, disqualified from leadership, hunted and imprisoned and tortured. The US knew that leading figures in their Iraqi government were behaving in this way, but chose to continue supporting the government which protected these thugs because they were “our people”. That was a terrible mistake, because it is those very communities which have morphed into Islamic State.

The modern nation states we call Iraq and the Ukraine – both with borders drawn in our modern lifetimes – are intrinsically diverse, intrinsically complex, intrinsically multi-cultural parts of the world. We should know that a failure to create governments of that diversity, for that diversity, will result in murderous resentment. And yet, now that the lines for that resentment are drawn, we are quick to choose sides, precisely the wrong position to take.

What makes this so sad is that we know better and demand better for ourselves. The UK and the US are both countries who have diversity as a central tenet of their existence. Freedom of religion, freedom of expression, the right to a career and to leadership on the basis of competence rather than race or creed are major parts of our own identity. And yet we prop up states who take precisely the opposite approach, and wonder why they fail, again and again. We came to these values through blood and pain, we hold on to these values because we know first hand how miserable and how wasteful life becomes if we let human tribalism tear our communities apart. There are doors to universities in the UK on which have hung the bodies of religious dissidents, and we will never allow that to happen again at home, yet we prop up governments for whom that is the norm.

The Irish Troubles were a war nobody could win. They were resolved through dialogue. South African terrorism in the ’80s was a war nobody could win. It was resolved through dialogue and the establishment of a state for everybody. Time and time again, “terrorism” and “barbarism” are words used to describe fractious movements by secure, distant seats of power, and in most of those cases, allowing that language to dominate our thinking leads to wars that nobody can win.

Russia made a very grave error in arming Russian-speaking Ukrainian separatists. But unless the West holds Kiev to account for its governance, unless it demands an open society free of discrimination, the misery there will continue. IS will gain nothing but contempt from its demonstrations of murder – there is no glory in violence on the defenceless and the innocent – but unless the West bends its might to the establishment of societies in Syria and Iraq in which these religious groups are welcome and free to pursue their ambitions, murder will be the only outlet for their frustration. Politicians think they have a new “clean” way to exert force – drones and airstrikes without “boots on the ground”. Believe me, that’s false. Remote-control warfare will come home to fester on our streets.


This is a series of posts on reasons to choose Ubuntu for your public or private cloud work & play. When you see Ubuntu on a cloud it means that Canonical has a working relationship with that cloud vendor, and the Ubuntu images there come with a set of guarantees:

  1. Those images are up to date and secure.
  2. They have also been optimised on that cloud, both for performance and cost.
  3. The images provide a standard experience for app compatibility.

That turns out to be a lot of work for us to achieve, but it makes your life really easy.

Fresh, secure and tasty images

We update the cloud images across all clouds on a regular basis. Updating the image means that you have more of the latest updates pre-installed, so launching a new machine is much faster – fewer updates to install on boot for a fully secured and patched machine. We refresh the images:

  1. At least every two weeks, typically, if there are just a few small updates across the board to roll into the freshest image.
  2. Immediately if there is a significant security issue, so starting a fresh image guarantees you to have no known security gotchas.
  3. Sooner than usual if there are a lot of updates which would make launching and updating a machine slow.

Updates might include fixes to the kernel, or any of the packages we install by default in the “core” cloud images. We also make sure that these updated images are used by default in any “quick launch” UI that the cloud provides, so you don’t have to go hunt for the right image identity. And there are automated tools that will tell you the ID for the current image of Ubuntu on your cloud of choice. So you can script “give me a fresh Ubuntu machine” for any cloud, trivially. It’s all very nice.
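On EC2, for example, lookup-and-launch can be a one-liner. A sketch using the ubuntu-cloudimg-query tool from the cloud-utils package; the AMI shown is a placeholder for whatever your region’s current image happens to be:

$ ubuntu-cloudimg-query trusty released amd64
ami-xxxxxxxx
$ ec2-run-instances $(ubuntu-cloudimg-query trusty released amd64) -t m3.medium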

Optimised for your pocket and your workload

Every cloud behaves differently – both in terms of their architecture, and their economics. When we engage with the cloud operator we figure out how to ensure that Ubuntu is “optimal” on that cloud. Usually that means we figure out things like storage mechanisms (the classic example is S3, but we have to look at each cloud to see what they provide and how to take advantage of it) and ensure that data-heavy operations like system updates draw on those resources in the most cost-efficient manner. This way we try to ensure that using Ubuntu is a guarantee of the most cost-effective base OS experience on any given cloud.

In the case of more sophisticated clouds, we are digging in to kernel parameters and drivers to ensure that performance is first class. On Azure there is a LOT of deep engineering between Canonical and Microsoft to ensure that Ubuntu gets the best possible performance out of the Hyper-V substrate, and we are similarly engaged with other cloud operators and solution providers that use highly-specialised hypervisors, such as Joyent and VMware. Even the network can be tweaked for efficiency in a particular cloud environment once we know exactly how that cloud works under the covers. And we do that tweaking in the standard images so EVERYBODY benefits and you can take it for granted – if you’re using Ubuntu, it’s optimal.

The results of this work can be pretty astonishing. In the case of one cloud we reduced the Ubuntu startup time by 23x from what their team had done internally; not that they were ineffective, it’s just that we see things through the eyes of a large-scale cloud user and care about things that a single developer might not care about as much. When you’re doing something at scale, even small efficiencies add up to big numbers.

Standard, yummy

Before we had this program in place, every cloud vendor hacked their own Ubuntu images, and they were all slightly different in unpredictable ways. We all have our own favourite way of doing things, so if every cloud has a lead engineer who rigged the default Ubuntu the way they like it, end users have to figure out the differences the hard way, stubbing their toes on them. In some cases they had default user accounts with different behaviour, in others they had different default packages installed. EMACS, Vi, nginx, the usual tweaks. In a couple of cases there were problems with updates or security, and we realised that Ubuntu users would be much better off if we took responsibility for this and ensured that the name is an assurance of standard behaviour and quality across all clouds.

So now we have that, and if you see Ubuntu on a public cloud you can be sure it’s done to that standard, and we’re responsible. If it isn’t, please let us know and we’ll fix it for you. That means that you can try out a new cloud really easily – your stuff should work exactly the same way with those images, and differences between the clouds will have been considered and abstracted in the base OS. We’ll have tweaked the network, kernel, storage, update mechanisms and a host of other details so that you don’t have to, we’ll have installed appropriate tools for that specific cloud, and we’ll have lined things up so that to the best of our ability none of those changes will break your apps, or updates. If you haven’t recently tried a new cloud, go ahead and kick the tires on the base Ubuntu images in two or three of them. They should all Just Work™.

It’s frankly a lot of fun for us to work with the cloud operators – this is the frontline of large-scale systems engineering, and the guys driving architecture at public cloud providers are innovating like crazy, but doing so in a highly competitive and operationally demanding environment. Our job in this case is to make sure that end-users don’t have to worry about how the base OS is tuned – it’s already tuned for them. We’re taking that to the next level in many cases by optimising workloads as well, in the form of Juju charms, so you can get whole clusters or scaled-out services that are tuned for each cloud as well. The goal is that you can create a cloud account and have complex scale-out infrastructure up and running in a few minutes. Devops, distilled.

Enjoying G+

Thursday, May 29th, 2014

Plussers, I’m now at https://plus.google.com/+MarkShuttleworthCanonical/ and quite enjoying the service. That G+ persona is mostly software and cloud related (so short on beekeeping, kitesurfing or botanical pursuits).

This is probably the best way to keep in touch. I’m not on Facebook and every year Claire and I clear out the oddball fake-me’s that seem to crop up regularly there. To their credit, FB have been really good about sorting those out. I’m told I’m missing out but, srsly, TIME!

G+ seems really clean and well organised, the chatter is mostly thoughtful and good-natured.

This is a series of posts on reasons to choose Ubuntu for your public or private cloud work & play.

We run an extensive program to identify issues and features that make a difference to cloud users. One result of that program is that we pioneered dynamic image customisation and wrote cloud-init. I’ll tell the story of cloud-init as an illustration of the focus the Ubuntu team has on making your devops experience fantastic on any given cloud.


Ever struggled to find the “right” image to use on your favourite cloud? Ever wondered how you can tell if an image is safe to use, what keyloggers or other nasties might be installed? We set out to solve that problem a few years ago and the resulting code, cloud-init, is one of the more visible pieces Canonical designed and built, and very widely adopted.

Traditionally, people used image snapshots to build a portfolio of useful base images. You’d start with a bare OS, add some software and configuration, then snapshot the filesystem. You could use those snapshots to power up fresh images any time you need more machines “like this one”. And that process works pretty amazingly well. There are hundreds of thousands, perhaps millions, of such image snapshots scattered around the clouds today. It’s fantastic. Images for every possible occasion! It’s a disaster. Images with every possible type of problem.

The core issue is that an image is a giant binary blob that is virtually impossible to audit. Since it’s a snapshot of an image that was running, and to which anything might have been done, you will need to look in every nook and cranny to see if there is a potential problem. Can you afford to verify that every binary is unmodified? That every configuration file and every startup script is safe? No, you can’t. And for that reason, that whole catalogue of potential is a catalogue of potential risk. If you wanted to gather useful data sneakily, all you’d have to do is put up an image that advertises itself as being good for a particular purpose and convince people to run it.

There are other issues, even if you create the images yourself. Each image slowly gets out of date with regard to security updates. When you fire it up, you need to apply all the updates since the image was created if you want a secure machine. Eventually, you’ll want to re-snapshot for a more up-to-date image. That requires administration overhead and coordination, so most people don’t do it.

That’s why we created cloud-init. When your virtual machine boots, cloud-init is run very early. It looks out for some information you send to the cloud along with the instruction to start a new machine, and it customises your machine at boot time. When you combine cloud-init with the regular fresh Ubuntu images we publish (roughly every two weeks for regular updates, and whenever a security update is published), you have a very clean and elegant way to get fresh images that do whatever you want. You design your image as a script which customises the vanilla base image. And then you use cloud-init to run that script against a pristine, known-good standard image of Ubuntu. Et voilà! You now have purpose-designed images of your own on demand, always built on a fresh, secure, trusted base image.
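A minimal sketch of that recipe on an EC2-style cloud. Cloud-init treats user-data beginning with #! as a script and runs it once on first boot; the AMI placeholder and the package choice here are illustrative:

$ cat customise.sh
#!/bin/sh
# run by cloud-init, as root, on the instance's first boot
apt-get update
apt-get install -y nginx
echo "built from a pristine, known-good base image" > /etc/motd
$ euca-run-instances ami-xxxxxxxx --user-data-file customise.sh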

Auditing your cloud infrastructure is now straightforward, because you have the DNA of that image in your script. This is devops thinking, turning repetitive manual processes (hacking and snapshotting) into code that can be shared and audited and improved. Your infrastructure DNA should live in a version control system that requires signed commits, so you know everything that has been done to get you where you are today. And all of that is enabled by cloud-init. And if you want to go one level deeper, check out Juju, which provides you with off-the-shelf scripts to customise and optimise that base image for hundreds of common workloads.