In a series of 12 posts, I’ll make the case for Ubuntu as the platform of choice for public clouds, enterprise clouds and related scale-out initiatives.

Cloud computing is largely being defined on public clouds today. There are a range of initiatives for private cloud computing – some proprietary, some open – but for sheer scale and traction, the game today is all about public cloud services. Azure, AWS, a range of offerings from telcos and service providers, together with innovative takes on the concept from hardware OEMs, have been the leading edge of the cloud market for the past five years. We do expect private clouds to flourish around OpenStack, but we expect the gene pool of innovation to stay on the public clouds for some time.

And what do people run on public clouds? By a substantial majority, most of that innovation, most of that practical experience and most of the insights being generated are on Ubuntu.

Digital Ocean, the fastest growing new challenger in the US public cloud market, published definitive statistics on the share of operating systems that customers choose on their cloud:

Ubuntu has 67% share of the Digital Ocean public cloud

Ubuntu is the most popular OS on public clouds, by far.

AWS hasn’t spoken publicly on the topic, but there are a number of measurements by third parties that provide some insight. For example, SCALR offers a management service used by enterprises looking for more institutional control of the way their teams use Amazon. One might think that an enterprise management perspective would be skewed away from Ubuntu towards traditional, legacy enterprise Linux, but in fact they find that Ubuntu accounts for more than 70% of all the images they see, three times as popular as CentOS.

There is no true safety in numbers, but there is certainly reassurance. Using a platform that is being used by most other people means that the majority of the content you find about how to get things done efficiently is immediately relevant to you. Version skew – subtle differences in the versions of components that are available by default on your platform of choice – is much less of an issue if the guidebook you are reading assumes you’re on the same platform they used.

There is also the question of talent – finding people to get amazing things done on the cloud is a lot easier if you let them use the platforms they have already grown comfortable with. They can be more productive, and there are many more of them around to hire. Talking to companies about cloud computing today, it’s clear their biggest constraint is knowledge acquisition: the time it takes to grow their own internal skills or to hire in the necessary skills to get the job done. Building on Ubuntu gives you a much broader talent and knowledge base to work with. Training your own team to use Ubuntu, if they are familiar with another Linux, is a relatively minor switch compared to the fundamental challenge of adopting an IaaS-based architecture. Switching to Ubuntu is the fastest way to tame that dragon, and the economics are great, too.

That’s why we see many companies that have been doing Linux one way for a decade switching to Ubuntu when they switch to the cloud. Even if what they are doing on the cloud is essentially the same as something they already do on another platform, it’s “easier with Ubuntu on the cloud”, so they switch.

Automated deployment of Ubuntu with Orchestra

Thursday, October 27th, 2011


Orchestra is one of the most exciting new capabilities in 11.10. It provides automated installation of Ubuntu across sets of machines. Typically, it’s used by people bringing up a cluster or farm of servers, but the way it’s designed makes it very easy to bring up rich services, where there may be a variety of different kinds of nodes that all need to be installed together.

There’s a long history of tools that have been popular at one time or another for automated installation. FAI is the one I knew best before Orchestra came along and I was interested in the rationale for a new tool, and the ways in which it would enhance the experience of people building clusters, clouds and other services at scale. Dustin provided some of that in his introduction to Orchestra, but the short answer is that Orchestra is savvy to the service orchestration model of Juju, which means that the intelligence distilled in Juju charms can easily be harnessed in any deployment that uses Orchestra on bare metal.

What’s particularly cool about THAT is that it unifies the new world of private cloud with the old approach of Linux deployment in a cluster. So, for example, Orchestra can be used to deploy Hadoop across 3,000 servers on bare metal, and that same Juju charm can also deploy Hadoop on AWS or an OpenStack cloud. And soon it should be possible to deploy Hadoop across n physical machines with automatic bursting to your private or favourite public cloud, all automatically built in. Brilliant. Kudos to the conductor 🙂
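To make that symmetry concrete, here is a rough sketch of what the workflow looks like from Juju’s command line. The environment names (`orchestra`, `ec2`) are assumptions – they would be whatever you define in your own `environments.yaml` – and the point is simply that the same charm deploys in both places:

```shell
# Bootstrap Juju against an Orchestra-managed pool of physical machines
# (assumes an "orchestra" environment is configured in environments.yaml)
juju bootstrap -e orchestra

# Deploy Hadoop from its charm onto bare metal, then scale out
# by adding units (repeat add-unit for each additional node)
juju deploy hadoop -e orchestra
juju add-unit hadoop -e orchestra

# The same charm, unchanged, deploys to a public cloud environment
juju bootstrap -e ec2
juju deploy hadoop -e ec2
```

The deployment logic lives in the charm, not in the environment, which is what makes the bare-metal-to-cloud bursting scenario described above plausible.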

Private cloud is very exciting – and with Ubuntu 11.10 it’s really easy to set up a small cloud to kick the tires, then scale that up as needed for production. But there are still lots of reasons why you might want to deploy a service onto bare metal, and Orchestra is a neat way to do that while at the same time preparing for a cloud-oriented future, because the work done to codify policies or practices in the physical environment should be useful immediately in the cloud, too.

For 12.04 LTS, where supporting larger-scale deployments will be a key goal, Orchestra becomes a tool that every Ubuntu administrator will find useful. I bet it will be the focus of a lot of discussion at UDS next week, and a lot of work in this cycle.