This is a series of posts on reasons to choose Ubuntu for your public or private cloud work & play. When you see Ubuntu on a cloud it means that Canonical has a working relationship with that cloud vendor, and the Ubuntu images there come with a set of guarantees:

  1. Those images are up to date and secure.
  2. They have also been optimised on that cloud, both for performance and cost.
  3. The images provide a standard experience for app compatibility.

That turns out to be a lot of work for us to achieve, but it makes your life really easy.

Fresh, secure and tasty images

We update the cloud images across all clouds on a regular basis. Updating the image means that more of the latest updates come pre-installed, so launching a new machine is much faster – there are fewer updates to install on boot to get a fully secured and patched machine.

  1. Typically at least every two weeks, if there are just a few small updates across the board to roll into the freshest image.
  2. Immediately if there is a significant security issue, so starting from a fresh image guarantees you have no known security gotchas.
  3. Sooner than usual if a lot of updates have accumulated, which would otherwise make launching and then updating a machine slow.

Updates might include fixes to the kernel, or any of the packages we install by default in the “core” cloud images. We also make sure that these updated images are used by default in any “quick launch” UI that the cloud provides, so you don’t have to go hunt for the right image identity. And there are automated tools that will tell you the ID for the current image of Ubuntu on your cloud of choice. So you can script “give me a fresh Ubuntu machine” for any cloud, trivially. It’s all very nice.
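
As an illustration, here is a minimal Python sketch of that kind of scripting. It assumes the tab-separated released.current.txt listing published under cloud-images.ubuntu.com, and it deliberately scans each row for the region and an "ami-" identifier rather than relying on a fixed column order, since the exact layout is an assumption here rather than something this post spells out:

    # Sketch: find the current released Ubuntu cloud image (AMI) for a given
    # release and EC2 region using the cloud-images.ubuntu.com query listing.
    # The URL layout and the tab-separated format are assumptions made for
    # illustration, not a documented contract.
    from urllib.request import urlopen

    QUERY_URL = "http://cloud-images.ubuntu.com/query/{release}/server/released.current.txt"

    def current_ami(release="precise", region="us-east-1", arch="amd64"):
        """Return the first AMI id listed for the given release/region/arch."""
        with urlopen(QUERY_URL.format(release=release)) as resp:
            for row in resp.read().decode("utf-8").splitlines():
                fields = row.split("\t")
                # Scan the row instead of trusting exact column positions.
                if region in fields and arch in fields:
                    for field in fields:
                        if field.startswith("ami-"):
                            return field
        return None

    if __name__ == "__main__":
        print(current_ami())  # e.g. "ami-xxxxxxxx" for the freshest image

The same idea carries over to other clouds; only the listing you query and the shape of the image identifier change.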

Optimised for your pocket and your workload

Every cloud behaves differently, both in terms of its architecture and its economics. When we engage with the cloud operator we figure out how to ensure that Ubuntu is “optimal” on that cloud. Usually that means working out things like storage mechanisms (the classic example is S3, but we have to look at each cloud to see what it provides and how to take advantage of it) and ensuring that data-heavy operations like system updates draw on those resources in the most cost-efficient manner. This way we try to ensure that using Ubuntu gives you the most cost-effective base OS experience on any given cloud.

In the case of more sophisticated clouds, we dig into kernel parameters and drivers to ensure that performance is first class. On Azure there is a LOT of deep engineering between Canonical and Microsoft to ensure that Ubuntu gets the best possible performance out of the Hyper-V substrate, and we are similarly engaged with other cloud operators and solution providers that use highly specialised hypervisors, such as Joyent and VMware. Even the network can be tweaked for efficiency in a particular cloud environment once we know exactly how that cloud works under the covers. And we do that tweaking in the standard images so EVERYBODY benefits and you can take it for granted – if you’re using Ubuntu, it’s optimal.

The results of this work can be pretty astonishing. In the case of one cloud we reduced Ubuntu startup time by a factor of 23 compared with what their team had done internally; not that they were ineffective, it’s just that we see things through the eyes of a large-scale cloud user and care about things that a single developer might not care about as much. When you’re doing something at scale, even small efficiencies add up to big numbers.
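
To make the kind of tweak involved concrete, here is a hypothetical Python sketch (not Canonical’s actual build tooling) of the sort of thing the EC2 images do: derive a region-local archive mirror from instance metadata so that apt traffic stays on fast, in-region bandwidth. The metadata endpoint and the region-based mirror naming pattern are assumptions used purely for illustration:

    # Hypothetical sketch: point apt at an archive mirror in the instance's
    # own EC2 region so that data-heavy operations like updates stay
    # in-region. The metadata URL and mirror naming pattern are assumptions,
    # for illustration only.
    from urllib.request import urlopen

    METADATA_AZ = "http://169.254.169.254/latest/meta-data/placement/availability-zone"

    def regional_mirror():
        """Return an archive mirror URL local to this instance's region."""
        az = urlopen(METADATA_AZ, timeout=2).read().decode("ascii")  # e.g. "us-east-1a"
        region = az[:-1]  # drop the trailing availability-zone letter
        return "http://{}.ec2.archive.ubuntu.com/ubuntu/".format(region)

    def sources_list(release="precise"):
        """Render a minimal sources.list that uses the in-region mirror."""
        mirror = regional_mirror()
        pockets = (release, release + "-updates", release + "-security")
        return "\n".join("deb {} {} main universe".format(mirror, p) for p in pockets)

    if __name__ == "__main__":
        print(sources_list())

Kernel parameters, drivers and network settings get the same sort of treatment, just lower down the stack.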

Standard, yummy

Before we had this program in place, every cloud vendor hacked their own Ubuntu images, and they were all slightly different in unpredictable ways. We all have our own favourite way of doing things, so if every cloud has a lead engineer who rigged the default Ubuntu the way they like it, end users have to figure out the differences the hard way, stubbing their toes on them. In some cases they had default user accounts with different behaviour, in others they had different default packages installed: Emacs, vi, nginx, the usual tweaks. In a couple of cases there were problems with updates or security, and we realised that Ubuntu users would be much better off if we took responsibility for this and ensured that the name is an assurance of standard behaviour and quality across all clouds.

So now we have that, and if you see Ubuntu on a public cloud you can be sure it’s done to that standard, and we’re responsible. If it isn’t, please let us know and we’ll fix it for you. That means you can try out a new cloud really easily – your stuff should work exactly the same way with those images, and differences between the clouds will have been considered and abstracted in the base OS. We’ll have tweaked the network, kernel, storage, update mechanisms and a host of other details so that you don’t have to, we’ll have installed appropriate tools for that specific cloud, and we’ll have lined things up so that, to the best of our ability, none of those changes will break your apps or updates. If you haven’t recently tried a new cloud, go ahead and kick the tires on the base Ubuntu images in two or three of them. They should all Just Work™.

It’s frankly a lot of fun for us to work with the cloud operators – this is the frontline of large-scale systems engineering, and the guys driving architecture at public cloud providers are innovating like crazy, but doing so in a highly competitive and operationally demanding environment. Our job in this case is to make sure that end users don’t have to worry about how the base OS is tuned – it’s already tuned for them.

We’re taking that to the next level in many cases by optimising workloads as well, in the form of Juju charms, so you can get whole clusters or scaled-out services that are tuned for each cloud too. The goal is that you can create a cloud account and have complex scale-out infrastructure up and running in a few minutes. Devops, distilled.

Two weeks with Mir

Tuesday, July 9th, 2013

Mir has been running smoothly on my laptop for two weeks now. It’s an all-Intel Dell XPS, so the driver stack on Ubuntu is very clean, but I’m nonetheless surprised that the system feels *smoother* than it did pre-Mir. It might be coincidence; Saucy is changing pretty fast, and new versions of X and Compiz have both landed while I’ve had Mir running. But watching top suggests that both Xorg and Compiz are using less memory and fewer CPU cycles under Mir than they were with X handling the hardware directly.

Talking with the Mir team, they say others have seen the same thing, and they attribute it to more efficient buffering of requests on the way to the hardware. YMMV, but it’s definitely worth trying. I have one glitch which catches me out – Chromium triggers an issue in the graphics stack which freezes the display. Pressing Alt-F1 unfreezes it (it causes Compiz to invoke something which twiddles the right bits to bring the GPU back from its daze). I’m told that will get sorted trivially in a coming update to the PPA.

The overall impression I have is that Mir has delivered what we hoped. Perhaps it had the advantage of being able to study what went before – SurfaceFlinger, Wayland, X – and perhaps also the advantage of looking at things through a mobile lens, where performance and efficiency are a primary concern. But regardless, it’s lean, efficient, high quality and brings benefits even when running a legacy X stack.

We take a lot of flak for every decision we make in Ubuntu, because so many people are affected. But I remind the team – failure to act when action is needed is as much a failure as taking the wrong kind of action might be. We have a responsibility to our users to explore difficult territory. Many of the difficult choices made in the past are the bedrock of our usefulness to a very wide audience today.

Building a graphics stack is not a decision made lightly – it’s not an afternoon’s hacking. The decision was taken based on a careful consideration of technical factors. We need a graphics stack that works reliably across a very wide range of hardware, that performs predictably, that provides a consistent quality of user experience on many different desktop environments.

Of course, there is competition out there, which we think is healthy. I believe Mir will be able to evolve faster than the competition, in part because of the key differences and choices made now. For example, rather than a rigid protocol that can only be extended, Mir provides an API. The implementation of that API can evolve over time for better performance, while it’s difficult to do the same if you are speaking a fixed protocol. We saw with X how awkward life becomes when you have a fixed legacy protocol and negotiate over extensions which themselves might be versioned. Others have articulated the technical rationale for the Mir approach better than I can; read what they have to say if you’re interested in the ways in which Mir is different, the lessons learned from other stacks, and the benefits we see from the architecture of Mir.
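
A deliberately generic sketch (the names below are invented and have nothing to do with Mir’s real interfaces) of why this matters: code written against an API keeps calling the same functions while the implementation behind them improves, whereas code speaking a fixed protocol has to discover and negotiate versioned extensions before it can use anything new:

    # Generic illustration only; these names are invented, not Mir APIs.

    class DisplayServerV1:
        def submit_buffer(self, buf):
            # Early implementation: copy the buffer (slow path).
            return "copied {} bytes".format(len(buf))

    class DisplayServerV2(DisplayServerV1):
        def submit_buffer(self, buf):
            # Later implementation: zero-copy behind the very same call.
            return "shared {} bytes without copying".format(len(buf))

    def api_client(server):
        # Written once against the API; unchanged as the server evolves.
        return server.submit_buffer(b"frame")

    def protocol_client(server_extensions):
        # A fixed-protocol client must check which extensions exist first.
        if "zero_copy_v2" in server_extensions:
            return "use the zero-copy extension"
        return "fall back to the core protocol"

    print(api_client(DisplayServerV1()))        # copied 5 bytes
    print(api_client(DisplayServerV2()))        # shared 5 bytes without copying
    print(protocol_client({"zero_copy_v2"}))    # use the zero-copy extension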

Providing Mir as an option is easy. Mir is a very focused part of the stack; it has far fewer tentacles and knock-on consequences for app developers than, say, the init system, which means we should be able to work with a very tight group of communities to get great performance. It’s much easier for a distro to engage with Mir than to move to systemd, for example: instead of an impact on every package, there is a need to coordinate in just a few packages for great results. We’ve had a very positive experience working with the Qt and WebKit communities, for example, so we know those apps will absolutely fly and talk Mir natively. Good upstreams want their code to be widely useful, so I’ve no doubt that the relevant toolkits will take patches that provide enhanced capabilities on Mir when available. And we also know that we can deliver a high-performance X stack on Mir, which means any application or desktop environment that talks X will perform just as well with Mir, and have smoother transitions in and out thanks to the system compositor capabilities that Mir provides.

On Ubuntu, we’re committed to every desktop environment performing well with Mir, either under X or directly. We didn’t press the ‘GO’ button on Mir until we were satisfied that the whole Ubuntu community, and other distributions, could easily benefit from the advantages of a leaner, cleaner graphics stack. We’re busy optimising performance for X now so that every app and every desktop environment will work really well in 13.10 under Mir, without having to make any changes. And we’re taking patches from people who want Mir to support capabilities they need for native, super-fast Mir access. Distributions should be able to provide Mir as an option for their users to experiment with very easily – the patch to X is very small (less than 500 lines). For now, if you want to try it, the easiest way to do so is via the Ubuntu PPA. It will land in 13.10 just as soon as our QA and release teams are happy that it’s ready for very widespread testing.