This is a series of posts on reasons to choose Ubuntu for your public or private cloud work & play.

When you see Ubuntu on a cloud it means that Canonical has a working relationship with that cloud vendor, and the Ubuntu images there come with a set of guarantees:

  1. Those images are up to date and secure.
  2. They have also been optimised on that cloud, both for performance and cost.
  3. The images provide a standard experience for app compatibility.

That turns out to be a lot of work for us to achieve, but it makes your life really easy.

Fresh, secure and tasty images

We update the cloud images across all clouds on a regular basis. Updating the image means that more of the latest updates come pre-installed, so launching a new machine is much faster – there are fewer updates to install on boot to get a fully secured and patched machine. We refresh the images:

  1. At least every two weeks, typically, if there are just a few small updates across the board to roll into the freshest image.
  2. Immediately if there is a significant security issue, so starting from a fresh image guarantees you have no known security gotchas.
  3. Sooner than usual if there are a lot of updates which would make launching and updating a machine slow.

Updates might include fixes to the kernel, or any of the packages we install by default in the “core” cloud images.

We also make sure that these updated images are used by default in any “quick launch” UI that the cloud provides, so you don’t have to go hunt for the right image identity. And there are automated tools that will tell you the ID for the current image of Ubuntu on your cloud of choice. So you can script “give me a fresh Ubuntu machine” for any cloud, trivially. It’s all very nice.
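
For example, on AWS a short script can ask EC2 directly for the newest official Ubuntu image published by Canonical. The sketch below uses Python and boto3; the region, the name filter and the commented-out launch call are illustrative assumptions rather than the one true way, and every cloud offers an equivalent lookup through its own API or CLI.

    # A minimal sketch, assuming AWS, the boto3 SDK and configured credentials.
    # 099720109477 is Canonical's publishing account; the filter pattern is illustrative.
    import boto3

    ec2 = boto3.client("ec2", region_name="us-east-1")

    # Ask EC2 for Ubuntu images published by Canonical and keep the newest one.
    images = ec2.describe_images(
        Owners=["099720109477"],
        Filters=[
            {"Name": "name", "Values": ["ubuntu/images/hvm-ssd/ubuntu-*-amd64-server-*"]},
            {"Name": "state", "Values": ["available"]},
        ],
    )["Images"]
    latest = max(images, key=lambda image: image["CreationDate"])
    print(latest["ImageId"], latest["Name"])

    # "Give me a fresh Ubuntu machine" is then a one-liner:
    # ec2.run_instances(ImageId=latest["ImageId"], InstanceType="t3.micro",
    #                   MinCount=1, MaxCount=1)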

Optimised for your pocket and your workload

Every cloud behaves differently – both in its architecture and its economics. When we engage with the cloud operator we figure out how to ensure that Ubuntu is “optimal” on that cloud. Usually that means working out things like storage mechanisms (the classic example is S3, but we have to look at each cloud to see what it provides and how to take advantage of it) and ensuring that data-heavy operations like system updates draw on those resources in the most cost-efficient manner. This way we try to ensure that using Ubuntu is a guarantee of the most cost-effective base OS experience on any given cloud.
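
One small, concrete example of what that means in practice (EC2 is the assumption here): the default Ubuntu images point apt at an archive mirror hosted inside the instance's own region, so routine update traffic stays local and cheap. The rough Python sketch below, run from inside an instance, compares the region-local mirror you would expect against what apt is actually configured to use; the metadata endpoint and hostname pattern are EC2-specific.

    # A rough sketch, assuming an EC2 instance with IMDSv1-style metadata access
    # and a classic /etc/apt/sources.list (newer releases use deb822 files instead).
    from urllib.request import urlopen

    METADATA = "http://169.254.169.254/latest/meta-data/placement/availability-zone"

    az = urlopen(METADATA, timeout=2).read().decode()        # e.g. "us-east-1a"
    region = az[:-1]                                         # -> "us-east-1"
    print("Expected mirror:", f"http://{region}.ec2.archive.ubuntu.com/ubuntu/")

    with open("/etc/apt/sources.list") as sources:
        for line in sources:
            if line.startswith("deb "):
                print("Configured mirror:", line.split()[1])  # first archive entry
                break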

In the case of more sophisticated clouds, we are digging into kernel parameters and drivers to ensure that performance is first class. On Azure there is a LOT of deep engineering between Canonical and Microsoft to ensure that Ubuntu gets the best possible performance out of the Hyper-V substrate, and we are similarly engaged with other cloud operators and solution providers that use highly specialised hypervisors, such as Joyent and VMware. Even the network can be tweaked for efficiency in a particular cloud environment once we know exactly how that cloud works under the covers. And we do that tweaking in the standard images so EVERYBODY benefits and you can take it for granted – if you’re using Ubuntu, it’s optimal.

The results of this work can be pretty astonishing. In the case of one cloud we made Ubuntu start 23 times faster than what their team had achieved internally; not that they were ineffective, it’s just that we see things through the eyes of a large-scale cloud user and care about things that a single developer might not care about as much. When you’re doing something at scale, even small efficiencies add up to big numbers.

Standard, yummy

Before we had this program in place, every cloud vendor hacked their own Ubuntu images, and they were all slightly different in unpredictable ways. We all have our own favourite way of doing things, so if every cloud has a lead engineer who rigged the default Ubuntu the way they like it, end users have to figure out the differences the hard way, stubbing their toes on them. In some cases they had default user accounts with different behaviour, in others they had different default packages installed. EMACS, Vi, nginx, the usual tweaks. In a couple of cases there were problems with updates or security, and we realised that Ubuntu users would be much better off if we took responsibility for this and ensured that the name is an assurance of standard behaviour and quality across all clouds.

So now we have that, and if you see Ubuntu on a public cloud you can be sure it’s done to that standard, and we’re responsible. If it isn’t, please let us know and we’ll fix it for you.

That means that you can try out a new cloud really easily – your stuff should work exactly the same way with those images, and differences between the clouds will have been considered and abstracted in the base OS. We’ll have tweaked the network, kernel, storage, update mechanisms and a host of other details so that you don’t have to, we’ll have installed appropriate tools for that specific cloud, and we’ll have lined things up so that to the best of our ability none of those changes will break your apps or your updates. If you haven’t recently tried a new cloud, go ahead and kick the tires on the base Ubuntu images in two or three of them. They should all Just Work™.

 

It’s frankly a lot of fun for us to work with the cloud operators – this is the frontline of large-scale systems engineering, and the guys driving architecture at public cloud providers are innovating like crazy but doing so in a highly competitive and operationally demanding environment. Our job in this case is to make sure that end-users don’t have to worry about how the base OS is tuned – it’s already tuned for them. We’re taking that to the next level in many cases by optimising workloads as well, in the form of Juju charms, so you can get whole clusters or scaled-out services that are tuned for each cloud as well. The goal is that you can create a cloud account and have complex scale-out infrastructure up and running in a few minutes. Devops, distilled.
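
To make “complex scale-out infrastructure up and running in a few minutes” a little more concrete, here is a hedged sketch of that workflow driven from Python through the juju command line. The charm names are purely illustrative, a configured cloud credential is assumed, and exact subcommand names vary between juju releases.

    # A sketch only: assumes the juju client is installed and a cloud and
    # credential are already configured; the charm names are illustrative.
    import subprocess

    def juju(*args):
        """Run one juju command, failing loudly if it errors."""
        subprocess.run(["juju", *args], check=True)

    juju("bootstrap")                            # stand up a controller on the cloud
    juju("deploy", "wordpress")                  # a charmed workload...
    juju("deploy", "mysql")                      # ...and its database
    juju("add-relation", "wordpress", "mysql")   # wire the two services together
    juju("expose", "wordpress")                  # make it reachable from outside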

This is a series of posts on reasons to choose Ubuntu for your public or private cloud work & play.

We run an extensive program to identify issues and features that make a difference to cloud users. One result of that program is that we pioneered dynamic image customisation and wrote cloud-init. I’ll tell the story of cloud-init as an illustration of the focus the Ubuntu team has on making your devops experience fantastic on any given cloud.

 

Ever struggled to find the “right” image to use on your favourite cloud? Ever wondered how you can tell whether an image is safe to use, or what keyloggers or other nasties might be installed? We set out to solve that problem a few years ago, and the resulting code, cloud-init, is one of the more visible pieces of software Canonical has designed and built, and it is now very widely adopted.

Traditionally, people used image snapshots to build a portfolio of useful base images. You’d start with a bare OS, add some software and configuration, then snapshot the filesystem. You could use those snapshots to power up fresh instances any time you need more machines “like this one”. And that process works pretty amazingly well. There are hundreds of thousands, perhaps millions, of such image snapshots scattered around the clouds today. It’s fantastic. Images for every possible occasion! It’s a disaster. Images with every possible type of problem.

The core issue is that an image is a giant binary blob that is virtually impossible to audit. Since it’s a snapshot of a machine that was running, and to which anything might have been done, you will need to look in every nook and cranny to see if there is a potential problem. Can you afford to verify that every binary is unmodified? That every configuration file and every startup script is safe? No, you can’t. And for that reason, that whole catalogue of images is also a catalogue of potential risk. If you wanted to gather useful data sneakily, all you’d have to do is put up an image that advertises itself as being good for a particular purpose and convince people to run it.

There are other issues, even if you create the images yourself. Each image slowly gets out of date with regard to security updates. When you fire it up, you need to apply all the updates since the image was created, if you want a secure machine. Eventually, you’ll want to re-snapshot for a more up-to-date image. That requires administration overhead and coordination, so most people don’t do it.

That’s why we created cloud-init. When your virtual machine boots, cloud-init is run very early. It looks for information you send to the cloud along with the instruction to start a new machine, and it customises your machine at boot time. When you combine cloud-init with the regular fresh Ubuntu images we publish (roughly every two weeks for regular updates, and whenever a security update is published), you have a very clean and elegant way to get fresh images that do whatever you want. You design your image as a script which customises the vanilla base image. And then you use cloud-init to run that script against a pristine, known-good standard image of Ubuntu. Et voilà! You now have purpose-designed images of your own on demand, always built on a fresh, secure, trusted base image.
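
A hedged sketch of what that looks like in practice, with AWS and boto3 standing in for whichever cloud API you prefer: the image ID, instance type and the customisation script below are placeholders, but the shape is always the same: hand your script to the cloud as user data, and cloud-init runs it on first boot against the pristine image.

    # A minimal sketch assuming AWS via boto3; the AMI ID, instance type and
    # the customisation script itself are placeholders for illustration.
    import boto3

    # The "DNA" of this machine: a plain script that cloud-init runs on first boot.
    USER_DATA = """#!/bin/bash
    apt-get update
    apt-get install -y nginx
    echo "built from a pristine Ubuntu base on $(date)" > /var/www/html/index.html
    """

    ec2 = boto3.client("ec2", region_name="us-east-1")
    ec2.run_instances(
        ImageId="ami-xxxxxxxx",    # the current official Ubuntu image ID (placeholder)
        InstanceType="t3.micro",
        MinCount=1,
        MaxCount=1,
        UserData=USER_DATA,        # cloud-init picks this up very early in the boot
    )

Change the script and relaunch, and you have a new purpose-built machine without ever snapshotting anything.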

Auditing your cloud infrastructure is now straightforward, because you have the DNA of that image in your script. This is devops thinking, turning repetitive manual processes (hacking and snapshotting) into code that can be shared and audited and improved. Your infrastructure DNA should live in a version control system that requires signed commits, so you know everything that has been done to get you where you are today. And all of that is enabled by cloud-init. And if you want to go one level deeper, check out Juju, which provides you with off-the-shelf scripts to customise and optimise that base image for hundreds of common workloads.

US visa-waiver program

Monday, May 29th, 2006

Joi Ito has had a few stern looks from the US INS regarding visa waiver forms.

I can relate.

I have a UK passport by virtue of the fact that my father was born in the UK (mostly by accident – another fun story). So I also know about the visa waiver program – it used to cover me too. Until one day I flew into the US briefly, on my own plane, to visit friends in DC as part of a long trip. When we arrived at Dulles, the immigration officer said there was a small problem. The operator of my plane had never signed the visa-waiver treaty, and so despite the fact that I had entered the US 27 times previously on that same passport, without a visa, they would now have to decline me entry.

But before doing that they would:

  • take me in for questioning
  • search me (I objected to the strip search, they relented)
  • fingerprint me and send those fingerprints off around the world (no, Mossad is not looking for me, yet)
  • examine me for obvious tattoos and other distinguishing features
  • ask me to sign a statement of wrongdoing (I declined)
  • terminate my visa waiver access – from then on I would need a visa

A complication was that, because they did not have records of all the times I left the USA, they believed I had previously stayed longer than the 90 days allowed. Fortunately I was able to get copies of all my inbound and outbound tickets faxed to them, so I think they eventually came to believe that I had never actually overstayed under the visa waiver program.

Then they let me back on the plane, we flew to Ottawa, the US embassy kindly gave me a visa, and we returned to the USA.

Now, flying into the USA I am ALWAYS sent off for extra questions and paperwork. And on applying for a new visa, I have to fill out the form for “people with a criminal record” (cross out the criminal record part, write in “visa waiver declined”, I kid you not). It’s a joyless process.

Hello, land of the free, knock knock.

I fell in love with the USA once. It was built on beautiful principles. Alas, it appears to have forsaken those in the name of security and expediency. As a result, I think the world is looking for a new source of inspiration – a new country where the most interesting people of the world can arrive, feel welcome, and feel free. Joi, best you be sure to hand that little green form back, every time.