Thank you CC

Tuesday, May 17th, 2016

Just to state publicly my gratitude that the Ubuntu Community Council has taken on their responsibilities very thoughtfully, and has demonstrated a proactive interest in keeping the community happy, healthy and unblocked. Their role is a critical one in the Ubuntu project, because we are at our best when we are constantly improving, and we are at our best when we are actively exploring ways to have completely different communities find common cause, common interest and common solutions. They say that it’s tough at the top because the easy problems don’t get escalated, and that is particularly true of the CC. So far, they are doing us proud.

 

Y is for…

Thursday, April 21st, 2016

Yakkety yakkety yakkety yakkety yakkety yakkety yakkety yakkety yak. Naturally 🙂

With the release of LXC 2.0 and LXD, we now have a pure-container hypervisor that delivers bare-metal performance with a standard Linux guest OS experience. Very low latency, very high density, and very high control of specific in-guest application processes compared to KVM and ESX make it worth checking out for large-scale Linux virtualisation operations.

Even better, the drivers to enable LXD as a hypervisor in OpenStack are maturing upstream.

That means you get bare-metal performance on OpenStack for Linux workloads, without actually giving people the whole physical server. LXD supports live migration, so you can move those users to a different physical server with no downtime, which is great for maintenance. And you can have all the nice OpenStack semantics for virtual networks and so on without having to try very hard.
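
At the LXD level, that no-downtime move is a one-liner; here is a hedged sketch (the remote and container names are placeholders, and live migration of a running container needs CRIU on both hosts):

lxc remote add host2 host2.example.com   # register the destination LXD host
lxc move c1 host2:c1                     # move container c1 across with no downtime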

By contrast, Ironic has the problem that the user can now modify any aspect of the machine as if you gave them physical access to it. In most cases, that’s not desirable, and in public clouds it’s a fun way to let the NSA (and other agencies) install firmware for your users to enjoy later.

NSA-as-a-Service does have a certain ring to it though.

Nominations to the 2015 Ubuntu Community Council

Wednesday, November 11th, 2015

I am delighted to nominate these long-standing members of the Ubuntu community for your consideration in the upcoming Community Council election.

* Phillip Ballew https://launchpad.net/~philipballew
* Walter Lapchynski https://launchpad.net/~wxl
* Marco Ceppi https://launchpad.net/~marcoceppi
* Jose Antonio Rey https://launchpad.net/~jose
* Laura Czajkowski https://launchpad.net/~czajkowski
* Svetlana Belkin https://launchpad.net/~belkinsa
* Chris Crisafulli https://launchpad.net/~itnet7
* Michael Hall https://launchpad.net/~mhall119
* Scarlett Clark https://launchpad.net/~sgclark
* C de-Avillez https://launchpad.net/~hggdh2
* Daniel Holbach https://launchpad.net/~dholbach

The Community Council is our most thoughtful body, who carry the responsibility of finding common ground between our widely diverse interests. They oversee all membership in the project, recognising those who make substantial and sustained contributions through any number of forums and mechanisms with membership and a voice in the governance of Ubuntu. They delegate in many cases responsibility for governance of pieces of the project to teams who are best qualified to lead in those areas, but they maintain overall responsibility for our discourse and our standards of behaviour.

We have been the great beneficiaries of the work of the outgoing CC, who I would like to thank once again for their tasteful leadership. I was often reminded of the importance of having a team which continues to inspire and lead and build bridges, even under great pressure, and the CC team who conclude their term shortly have set the highest bar for that in my experience. I’m immensely grateful to them and excited to continue working with whomever the community chooses from this list of nominations.

I would encourage you to meet and chat with all of the candidates and choose those who you think are best able to bring teams together; Ubuntu is a locus of collaboration between groups with intensely different opinions, and it is our ability to find a way to share and collaborate with one another that sets us apart. When it gets particularly tricky, the CC are at their most valuable to the project.

Voting details have gone out to all voting members of Ubuntu. Thank you for participating in the election!

X marks the spot

Wednesday, October 21st, 2015

LXD, the lightervisor: the pure-container hypervisor

What a great Wily it’s been, and for those of you who live on the latest release and haven’t already updated, the bits are baked and looking great. You can jump the queue if you know where to look while we spin up the extra servers needed for IMG and ISO downloads 🙂

Utopic, Vivid and Wily have been three intense releases, packed with innovation, and now we intend to bring all of those threads together for our Long Term Support release due out in April 2016.

LXD, led by Canonical, is the world’s fastest hypervisor: a pure-container way to run Linux guests on Linux hosts. If you haven’t yet played with LXD (a.k.a. LXC 2.0-b1), it will blow you away. It will certainly transform your expectations of virtualisation, from slow-and-hard to amazingly light and fast. Imagine getting a full machine running any Linux you like, as a container on your laptop, in less than a second. For me, personally, it has become a fun way to clean up my build processes, spinning up a container on demand to make sure I always build in a fresh filesystem.
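
If you want to try that spin-up-a-clean-build-box workflow yourself, a minimal sketch (image alias and container name are just examples) looks like this:

sudo apt-get install lxd               # available from wily onwards; re-login if the lxd group is new
lxc launch ubuntu:wily builder         # a full Ubuntu guest, running in about a second
lxc exec builder -- /bin/bash          # do your build in a fresh filesystem
lxc delete --force builder             # throw the container away afterwards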

The snappy packaging system: transactional updates with rollback

Snappy is the world’s most secure packaging system, delivering crisp and transactional updates with rollback for both applications and the system, from phone to appliance. We’re using snappy on high-end switches and flying wonder-machines, on Raspberry Pis and massive clouds. Ubuntu Core is the all-snappy minimal server, and Ubuntu Personal will be the all-snappy phone / tablet / PC. With a snap you get to publish exactly the software you want to your device, and update it instantly over the air, just like we do with the Ubuntu Phone. Snappy packages are automatically confined to ensure that a bug in one app doesn’t put your data elsewhere at risk. Amazing work, amazing team, amazing community!

MAAS: Metal as a Service

MAAS is your physical cloud, with bare-metal machines on demand, supporting Ubuntu, CentOS and Windows. Drive your data centre from a single dashboard, bond network interfaces, raid your disks and rock the cloud generation. Led by Canonical, loved by the world leaders of big, and really big, deployments. MAAS gives you high availability DNS, DHCP, PXE and other critical infrastructure, for huge and dynamic data centres. Also pretty fun to run at home.

Juju is… model-driven application orchestration that lets communities define how big topological apps like Hadoop and OpenStack map onto the cloud of your choice. The fastest way to find the fastest way to spin those applications into the cloud you prefer. With traditional configuration managers like Puppet now also saying that model-driven approaches are the way to the future, I’m very excited to see the kinds of problems that huge enterprises are starting to solve with Juju, and equally excited to see start-ups using Juju to speed their path to adoption. Here’s the Hadoop, Spark, IPython Notebook coolness I deployed live on stage at Apache Hadoopcon this month:

Apache Hadoop, Spark and IPython Notebook modelled with Juju
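
The commands behind that sort of on-stage demo are short; here is a hedged sketch (the bundle file name is illustrative, and exact bundle-deployment syntax varies between Juju releases):

juju bootstrap                              # stand up a controller on your configured cloud
juju deploy ./hadoop-spark-notebook.yaml    # deploy the modelled topology as a bundle
juju status                                 # watch machines and units come up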

All of these are coming together beautifully, making Ubuntu the fastest path to magic of all sorts. And that magic will go by the codename… xenial xerus!

What fortunate timing that our next LTS should be X, because “xenial” means “friendly relations between hosts and guests”, and given all the amazing work going into LXD and KVM for Ubuntu OpenStack, and beyond that the interoperability of Ubuntu OpenStack with hypervisors of all sorts, it seems like a perfect fit.

And Xerus, the African ground squirrels, are among the most social animals in my home country. They thrive in the desert, they live in small, agile, social groups that get along unusually well with their neighbours (for most mammals, neighbours are a source of bloody competition, for Xerus, hey, collaboration is cool). They are fast, feisty, friendly and known for their enormous… courage. That sounds just about right. With great… courage… comes great opportunity!

Canonical just announced a new, free, and very cool way to provide thousands of IP addresses to each of your VMs on AWS. Check out the fan networking on Ubuntu wiki page to get started, or read Dustin’s excellent fan walkthrough. Carry on here for a simple description of this happy little dose of awesome.

Containers are transforming the way people think about virtual machines (LXD) and apps (Docker). They give us much better performance and much better density for virtualisation in LXD, and with Docker, they enable new ways to move applications between dev, test and production. These two aspects of containers – the whole-machine container and the process container – are perfectly complementary. You can launch Docker process containers inside LXD machine containers very easily. LXD feels like KVM, only faster; Docker feels like the core unit of a PaaS.
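
For example, nesting a Docker process container inside an LXD machine container takes only a few commands; a sketch (image aliases are examples, and security.nesting is the LXD setting that allows nested containers):

lxc launch ubuntu:wily dockerhost -c security.nesting=true    # machine container allowed to nest
lxc exec dockerhost -- apt-get install -y docker.io           # install Docker inside it
lxc exec dockerhost -- docker run -it ubuntu:latest /bin/bash # process container inside the machine container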

The density numbers are pretty staggering. It’s *normal* to run hundreds of containers on a laptop.

And that is what creates one of the real frustrations of the container generation, which is a shortage of easily accessible IP addresses.

It seems weird that in this era of virtual everything, a number is hard to come by. The restrictions are real, however, because AWS artificially restricts the number of IP addresses you can bind to an interface on your VM. You have to buy a bigger VM to get more IP addresses, even if you don’t need extra compute. Also, IPv6 is nowhere to be seen on the clouds, so addresses are more scarce than they need to be in the first place.

So the key problem is that you want to find a way to get tens or hundreds of IP addresses allocated to each VM.

Most workarounds to date have involved “overlay networking”. You make a database in the cloud to track which IP address is attached to which container on each host VM. You then create tunnels between all the hosts so that everything can talk to everything. This works, kinda. It results in a mess of tunnels and much more complex routing than you would otherwise need. It also ruins performance for things like multicast and broadcast, because those are now exploding off through a myriad of twisty tunnels, all looking the same.

The Fan is Canonical’s answer to the container networking challenge.

We recognised that container networking is unusual, and quite unlike true software-defined networking, in that the number of containers you want on each host is probably roughly the same from host to host. You want to run a couple of hundred containers on each VM. You also don’t (in the Docker case) want to live migrate them around; you just kill them and start them again elsewhere. Essentially, what you need is an address multiplier – anywhere you have one interface, it would be handy to have 250 of them instead.

So we came up with the “fan”. It’s called that because you can picture it as a fan behind each of your existing IP addresses, with another 250 IP addresses available. Anywhere you have an IP you can make a fan, and every fan gives you 250x the IP addresses. More than that, you can run multiple fans, so each IP address could stand in front of thousands of container IP addresses.

We use standard IPv4 addresses, just like overlays. What we do that’s new is allocate those addresses mathematically, with an algorithmic projection from your existing subnet / network range to the expanded range. That results in a very flat address structure – you get exactly the same number of overlay addresses for each IP address on your network, perfect for a dense container setup.

Because we’re mapping addresses algorithmically, we avoid any need for a database of overlay addresses per host. We can calculate instantly, with no database lookup, the host address for any given container address.
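
To illustrate the arithmetic (this is just the idea, not the fanctl implementation): with a 241.0.0.0/8 overlay on a 172.16.0.0/16 underlay, the two host-specific octets of the underlay address simply slot into the overlay prefix.

host=172.16.3.4
IFS=. read -r o1 o2 o3 o4 <<< "$host"
echo "fan subnet for $host: 241.$o3.$o4.0/24"   # containers on this host get 241.3.4.x
# ...and any container address such as 241.3.4.7 maps straight back to host 172.16.3.4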

More importantly, we can route to these addresses much more simply, with a single route to the “fan” network on each host, instead of the maze of twisty network tunnels you might have seen with other overlays.

You can expand any network range with any other network range. The main idea, though, is that people will expand a class B range in their VPC with a class A range. Who has a class A range lying about? You do! It turns out that there are a couple of class A networks that are allocated and which publish no routes on the Internet.

We also plan to submit an IETF RFC for the fan, for address expansion. It turns out that “Class E” networking was reserved but never defined, and we’d like to think of that as a new “Expansion” class. There are several class A network addresses reserved for Class E, which won’t work on the Internet itself. While you can use the fan with unused class A addresses (and there are several good candidates for use!) it would be much nicer to do this as part of a standard.

The fan is available on Ubuntu on AWS and soon on other clouds, for your testing and container experiments! Feedback is most welcome while we refine the user experience.

Configuration on Ubuntu is super-simple. Here’s an example:

In /etc/network/fan:

# fan 241
241.0.0.0/8 172.16.3.0/16 dhcp

In /etc/network/interfaces:

iface eth0 inet static
address 172.16.3.4
netmask 255.255.0.0
up fanctl up 241.0.0.0/8 172.16.3.4/16
down fanctl down 241.0.0.0/8 172.16.3.4/16

This will map 250 addresses on 241.0.0.0/8 to your 172.16.0.0/16 hosts.

Docker, LXD and Juju integration is just as easy. For docker, edit /etc/default/docker.io, adding:

DOCKER_OPTS="-d -b fan-10-3-4 --mtu=1480 --iptables=false"

You must then restart docker.io:

sudo service docker.io restart

At this point, a Docker instance started via, e.g.,

docker run -it ubuntu:latest

will be run within the specified fan overlay network.
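
To confirm that the container really did pick up an address from the fan overlay, you can inspect it (the container name is a placeholder):

docker inspect -f '{{ .NetworkSettings.IPAddress }}' my-container   # should print an address in the fan range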

Enjoy!

Announcing the “wily werewolf”

Monday, May 4th, 2015

Watchful observers will have wondered why “W” is yet unnamed! Without wallowing in the wizzo details, let’s just say it’s been a wild and worthy week, and as it happens I had the well-timed opportunity of a widely watched keynote today and thought, perhaps wonkily, that it would be fun to announce it there.

But first, thank you to all who have made such witty suggestions in webby forums. Alas, the “wacky wabbit” and “watery walrus”, while weird enough and wisely whimsical, won’t win the race. The “warty wombat”, while wistfully wonderful, will break all sorts of systems with its wepetition. And the “witchy whippet”, in all its wiry weeness, didn’t make the cut.

Instead, my waggish friends, the winsome W on which we wish will be… the “wily werewolf”.

Enjoy!

W is for…

Monday, May 4th, 2015

… waiting till the Ubuntu Summit online opening keynote today, at 1400 UTC. See you there 😉

“Smart, connected things” are redefining our home, work and play, with brilliant innovation built on standard processors that have shrunk in power and price to the point where it makes sense to turn almost every “thing” into a smart thing. I’m inspired by the inventors and innovators who are creating incredible machines – from robots that might clean or move things around the house, to drones that follow us at play, to smarter homes which use energy more efficiently, to more insightful security systems. Proving the power of open source to unleash innovation, most of this stuff runs on Linux – but it’s a hugely fragmented and insecure kind of Linux. Every device has custom “firmware” that lumps together the OS and drivers and device-specific software, and that firmware is almost never updated. So let’s fix that!

Ubuntu is right at the heart of the “internet thing” revolution, and so we are in a good position to raise the bar for security and consistency across the whole ecosystem. Ubuntu is already pervasive on devices – you’ve probably seen lots of “Ubuntu in the wild” stories, from self-driving cars to space programs and robots and the occasional airport display. I’m excited that we can help underpin the next wave of innovation while also being thoughtful about the responsibility that entails. So today we’re launching snappy Ubuntu Core on a wide range of boards, chips and chipsets, because the snappy system and Ubuntu Core are perfect for distributed, connected devices that need security updates for the OS and applications but also need to be completely reliable and self-healing. Snappy is much better than package dependencies for robust, distributed devices.

Transactional updates. App store. A huge range of hardware. Branding for device manufacturers.

In this release of Ubuntu Core we’ve added a hardware abstraction layer where platform-specific kernels live. We’re working commercially with the major silicon providers to guarantee free updates to every device built on their chips and boards. We’ve added a web device manager (“webdm”) that handles first-boot and app store access through the web consistently on every device. And we’ve preserved perfect compatibility with the snappy images of Ubuntu Core available on every major cloud today. So you can start your kickstarter project with a VM on your favourite cloud and pick your processor when you’re ready to finalise the device.

If you are an inventor or a developer of apps that might run on devices, then Ubuntu Core is for you. We’re launching it with a wide range of partners on a huge range of devices. From the pervasive BeagleBone Black to the $35 Odroid-C1 (1GHz processor, 1 GB RAM), all the way up to the biggest Xeon servers, snappy Ubuntu Core gives you a crisp, ultra-reliable base platform, with all the goodness of Ubuntu at your fingertips and total control over the way you deliver your app to your users and devices. With an app store (well, a “snapp” store) built in and access to the amazing work of thousands of communities collaborating on Github and other forums, with code for robotics and autopilots and a million other things instantly accessible, I can’t wait to see what people build.

I for one welcome the ability to install AI on my next camera-toting drone, and am glad to be able to do it in a way that will get patched automatically with fixes for future heartbleeds!

What if your cloud instances could be updated with the same certainty and precision as your mobile phone – with carrier grade assurance that an update applies perfectly or is not applied at all? What if your apps could be isolated from one another completely, so there’s no possibility that installing one app could break another, and stronger assurance that a compromise of one app won’t compromise the data from another? When we set out to build the Ubuntu Phone we took on the challenge of raising the bar for reliability and security in the mobile market. And today that same technology is coming to the cloud, in the form of a new “snappy” image called Ubuntu Core, which is in beta today on Azure and as a KVM image you can run on any Linux machine.

This is in a sense the biggest break with tradition in 10 years of Ubuntu, because snappy Ubuntu Core doesn’t use debs or apt-get. We call it “snappy” because that’s the new bullet-proof mechanism for app delivery and system updates; it’s completely different to the traditional package-based Ubuntu server and desktop. The snappy system keeps each part of Ubuntu in a separate, read-only file, and does the same for each application. That way, developers can deliver everything they need to be confident their app will work exactly as they intend, and we can take steps to keep the various apps isolated from one another, and ensure that updates are always perfect. Of course, that means that apt-get won’t work, but that’s OK since developers can reuse debs to make their snappy apps, and the core system is exactly the same as any other Ubuntu system – server or desktop.

Whenever we make a fix to packages in Ubuntu, we’ll publish the same fix to Ubuntu Core, and systems can get that fix transactionally. In fact, updates to Ubuntu Core are even smaller than package updates because we only need to send the precise difference between the old and new versions, not the whole package. Of course, Ubuntu Core is in addition to all the current members of the Ubuntu family – desktop, server, and cloud images that use apt-get and debs, and all the many *buntu remixes which bring their particular shine to our community. You still get all the Ubuntu you like, and there’s a new snappy Core image on all the clouds for the sort of deployment where precision, specialism and security are the top priority.

This is the biggest new thing in Ubuntu since we committed to deliver a mobile phone platform, and it’s very delicious that it’s born of exactly the same amazing technology that we’ve been perfecting for these last three years. I love it when two completely different efforts find underlying commonalities, and it’s wonderful to me that the work we’ve done for the phone, where carriers and consumers are the audience, might turn out to be so useful in the cloud, which is all about back-end infrastructure.

Why is this so interesting?

Transactional updates have lots of useful properties: if they are done well, you can know EXACTLY what’s running on a particular system, and you can coordinate updates with very high precision across thousands of instances in the cloud. You can run systems as canaries, getting updates ahead of other identical systems to see if they cause unexpected problems. You can roll updates back, because each version is a complete, independent image. That’s very nice indeed.

There have been interesting developments in the transactional systems field over the past few years. ChromeOS is updated transactionally: when you turn it on, it makes sure it’s running the latest version of the OS. CoreOS brought aspects of ChromeOS and Gentoo to the cloud, Red Hat has a beta of Atomic as a transactional version of RHEL, and of course Docker is a way of delivering apps transactionally too (it combines app and system files very neatly). Ubuntu Core raises the bar for certainty, extensibility and security in the transactional systems game. What I love about Ubuntu Core is the way it embraces transactional updates not just for the base system but for applications on top of the system as well. The system is just one layer that can be updated transactionally, and so is each of the apps on the system. You get an extensible platform that retains the lovely properties of transactionality but lets you choose exactly the capabilities you want for yourself, rather than having someone else force you to use a particular tool.

For example, in CoreOS, things like Fleet are built in; you can’t opt out. In Ubuntu Core, we aim for a much smaller Core, and then enable you to install Docker or any other container system as a framework, with snappy. We’re working with all the different container vendors, and app systems, and container coordination systems, to help them make snappy versions of their tools. That way, you get the transactional semantics you want with the freedom to use whichever tools suit you. And the whole thing is smaller and more secure because we baked fewer assumptions into the core.

The snappy system is also designed to provide security guarantees across diverse environments. Because there is a single repository of frameworks and packages, and each of them has a digital fingerprint that cannot be faked, two people on opposite ends of the world can compare their systems and know that they are running exactly the same versions of the system and apps. Atomic might allow you to roll back, but it’s virtually impossible to customise the system for your own preferences rather than Red Hat’s, and still know you are running the same secure bits as anybody else.

Developers of snappy apps get much more freedom to bundle the exact versions of libraries that they want to use with their apps. It’s much easier to make a snappy package than a traditional Ubuntu package – just bundle up everything you want in one place, and ship it. We use strong application isolation to keep data confidential between apps. If you install a bad app, it only has access to the data you create with that app, not to data from other applications. This is a key piece of security that comes from our efforts to bring Ubuntu to the mobile market, where malware is a real problem today. And as a result, we can enable developers to go much faster – they can publish their app on whatever schedule suits them, regardless of the Ubuntu release cadence. Want the very latest app? Snappy makes that easiest.
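
The developer flow, sketched with the 15.04-era snappy tools (the directory layout, file names and version string here are assumptions):

$ ls my-app/
bin/  meta/package.yaml                      # app payload plus a small snappy metadata file
$ snappy build .                             # produces something like my-app_0.1_amd64.snap
$ sudo snappy install my-app_0.1_amd64.snap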

This is also why I think snappy will result in much simpler systems management. Instead of having literally thousands of packages on your Ubuntu server, with tons of dependencies, a snappy system just has a single package for each actual app or framework that’s installed. I bet the average system on the cloud ends up with about three packages installed, total! Try this sort of output:

$ snappy info
release: ubuntu-core/devel
frameworks: docker, panamax
apps: owncloud

That’s much easier to manage and reason about at scale. We recently saw how complicated things can get in the old packaging system, when Owncloud upstream wanted to remove the original packages of Owncloud from an old Ubuntu release. With snappy Ubuntu, Owncloud can publish exactly what they want you to use as a snappy package, and can update that for you directly, in a safe transactional manner with full support for rolling back. I think upstream developers are going to love being in complete control of their app on snappy Ubuntu Core.

$ sudo snappy install hello-world

Welcome to a snappy new world!

Things here are really nice and simple:

$ snappy info
$ snappy build .
$ snappy install foo
$ snappy update foo
$ snappy rollback foo
$ snappy remove foo
$ snappy update-versions
$ snappy versions

Just for fun, download the image and have a play. I’m delighted that Ubuntu Core is today’s Qemu Advent Calendar image too! Or launch it on Azure, coming soon to all the clouds.

It’s important for Ubuntu to continue to find new ways to bring free software to a wider audience. The way people think about software is changing, and I think Ubuntu Core becomes a very useful tool for people doing stuff at huge scale in the cloud. If you want crisp, purposeful, tightly locked down systems that are secure by design, Ubuntu Core and snappy packages are the right tool for the job. Running docker farms? Running transcode farms? I think you’ll like this very much!

We have the world’s biggest free software community because we find ways to recognise all kinds of contributions and to support people helping one another to bring their ideas to fruition. One of the goals of snappy was to reduce the overhead and bureaucracy of packaging software to make it incredibly easy for anybody to publish code they care about to other Ubuntu users. We have built a great community of developers using this toolchain for the phone, and I think it’s going to be even better on the cloud, where Ubuntu is already so popular. There is a lot to do in making the most of existing debs in the snappy environment, and I’m excited that there is a load of amazing software on github that can now flow more easily to Ubuntu users on any cloud.

Welcome to the family, Ubuntu Core!