Archive for the 'free software' Category

Cue the Cosmic Cuttlefish

Tuesday, May 8th, 2018

With our castor now out for all to enjoy, and the Twitterverse delighted with the new minimal desktop and smooth snap integration, it’s time to turn our attention to the road ahead to 20.04 LTS, and I’m delighted to say that we’ll kick off that journey with the Cosmic Cuttlefish, soon to be known as Ubuntu 18.10.

Each of us has our own ideas of how the free stack will evolve in the next two years. And the great thing about Ubuntu is that it doesn’t reflect just one set of priorities, it’s an aggregation of all the things our community cares about. Nevertheless I thought I’d take the opportunity early in this LTS cycle to talk a little about the thing I’m starting to care more about than any one feature, and that’s security.

If I had one big thing that I could feel great about doing, systematically, for everyone who uses Ubuntu, it would be improving their confidence in the security of their systems and their data. It’s one of the very few truly unifying themes that crosses every use case.

It’s extraordinary how diverse the uses are to which the world puts Ubuntu these days, from the heart of the mainframe operation in a major financial firm, to the Raspberry Pi duct-taped to the back of a prototype something in the middle of nowhere. From desktops to clouds to connected things, we are the platform for ambitions great and small. We are stewards of a shared platform, and one of the ways we respond to that diversity is by opening up to let people push forward their ideas, making sure only that they are excellent to each other in the pushing.

But security is the one thing that every community wants – and it’s something that, on reflection, we can raise the bar even higher on.

So without further ado: thank you to everyone who helped bring about Bionic, and may you all enjoy working towards your own goals both in and out of Ubuntu in the next two years.

Congratulations to Team *Buntu on the release of our Artful Aardvark 17.10, featuring all your favourite desktop environments, Kubernetes 1.8, the latest OpenStack, and security updates for 9 months, which takes us all the way to our next enterprise release, Ubuntu 18.04 LTS.

A brumous development cycle always makes for cool-headed work and brisk progress on the back of breem debate.

As always, 18.04 LTS will represent the sum of all our interests.

For those of you with bimodal inclinations, there’s the official upstream Kubernetes-on-Ubuntu spell for ‘conjure-up kubernetes’ with bijou multi-cloud goodness. We also have spells for OpenStack on Ubuntu and Hadoop on Ubuntu, so conjure-up is your one-stop magic shop for at-scale boffo big data, cloud and containers. Working with upstreams to enable fast deployment and operations of their stuff on all the clouds is a beamish way to spend the day.
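
If you’d like to try that spell yourself, here is a minimal sketch of the two steps involved, wrapped in Python for convenience. It assumes conjure-up is available as a classic snap and that the ‘kubernetes’ spell named above exists on your system; adjust if your setup differs.

```python
#!/usr/bin/env python3
"""Minimal sketch: install conjure-up and launch the Kubernetes spell.

Assumptions: conjure-up is published as a classic snap and takes the spell
name as its argument, as described in the post; sudo and snapd are available.
"""
import subprocess

def run(cmd):
    # Echo each command before running it, and stop on the first failure.
    print("+", " ".join(cmd))
    subprocess.run(cmd, check=True)

if __name__ == "__main__":
    run(["sudo", "snap", "install", "conjure-up", "--classic"])
    # conjure-up is interactive: it asks which cloud (public cloud, MAAS,
    # or localhost) to target and then deploys Kubernetes there.
    run(["conjure-up", "kubernetes"])
```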

If your thing is bling, pick a desktop! We’ve defaulted to GNOME, but we’re the space where KDE and GNOME and MATE and many others come together to give users real and easy choice of desktops. And if you’re feeling boned by the lack of Unity in open source, you might want to hop onto the channel and join those who are updating Unity7 for the newest X and kernel graphics in 18.04.

And of course, if your thing is actually a thing with internet smarts, then it’s Ubuntu Core that will get you flying (or driving or gatewaying or routing or, well, anything your thing desires) in a snap.

It takes a booky brilliance to shine, and we celebrate brilliance in all its forms in our community. Thanks to the artists and the advocates, the brains and the documenters, the councils and yes, the crazies who find entirely new ways to contribute, Ubuntu grows and reflects the depth and breadth of free software. For many upstream projects, Ubuntu represents the way most users will enjoy their contribution to society. That’s a big responsibility, and one we take seriously. Leave the bolshy, blithe and branky BS aside, and let’s appeal to all that’s brave and bonzer as we shape the platform on which others will build.

It’s builders that we celebrate – the people that build our upstream applications and packages, the people who build Ubuntu, and the people who build on Ubuntu. In honour of that tireless toil, our mascot this cycle is a mammal known for its energetic attitude, industrious nature and engineering prowess. We give it a neatly nerdy 21st century twist in honour of the relentless robots running Ubuntu Core. Ladies and gentlemen, I give you 18.04 LTS, the Bionic Beaver.

“Smart, connected things” are redefining our home, work and play, with brilliant innovation built on standard processors that have shrunk in power and price to the point where it makes sense to turn almost every “thing” into a smart thing. I’m inspired by the inventors and innovators who are creating incredible machines – from robots that might clean or move things around the house, to drones that follow us at play, to smarter homes which use energy more efficiently or more insightful security systems. Proving the power of open source to unleash innovation, most of this stuff runs on Linux – but it’s a hugely fragmented and insecure kind of Linux. Every device has custom “firmware” that lumps together the OS and drivers and device-specific software, and that firmware is almost never updated. So let’s fix that!

Ubuntu is right at the heart of the “internet thing” revolution, and so we are in a good position to raise the bar for security and consistency across the whole ecosystem. Ubuntu is already pervasive on devices – you’ve probably seen lots of “Ubuntu in the wild” stories, from self-driving cars to space programs and robots and the occasional airport display. I’m excited that we can help underpin the next wave of innovation while also being thoughtful about the responsibility that entails. So today we’re launching snappy Ubuntu Core on a wide range of boards, chips and chipsets, because the snappy system and Ubuntu Core are perfect for distributed, connected devices that need security updates for the OS and applications but also need to be completely reliable and self-healing. Snappy is much better than package dependencies for robust, distributed devices.

Transactional updates. App store. A huge range of hardware. Branding for device manufacturers.
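
To make “transactional updates” concrete, here is a toy sketch of the A/B idea in plain Python. None of these names are snappy’s real interfaces; the point is simply why an interrupted or broken update can’t leave a device dead: the new image lands whole in the spare slot, and the device only commits to it once it boots healthily.

```python
"""Toy model of an A/B transactional update scheme (illustrative only)."""
from dataclasses import dataclass, field

@dataclass
class Device:
    # Two system image slots; exactly one is active at any time.
    slots: dict = field(default_factory=lambda: {"A": "core-1.0", "B": None})
    active: str = "A"

    def inactive(self) -> str:
        return "B" if self.active == "A" else "A"

    def apply_update(self, image: str) -> None:
        # The running system is never modified in place: the update lands
        # whole in the spare slot, so a power cut mid-update is harmless.
        self.slots[self.inactive()] = image

    def reboot(self, healthy: bool) -> str:
        candidate = self.inactive()
        if self.slots[candidate] and healthy:
            self.active = candidate   # commit the new image
        # otherwise keep booting the known-good slot: automatic rollback
        return self.slots[self.active]

if __name__ == "__main__":
    d = Device()
    d.apply_update("core-2.0")
    print(d.reboot(healthy=False))  # core-1.0: the broken update rolled back
    d.apply_update("core-2.0")
    print(d.reboot(healthy=True))   # core-2.0: the good update committed
```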

In this release of Ubuntu Core we’ve added a hardware abstraction layer where platform-specific kernels live. We’re working commercially with the major silicon providers to guarantee free updates to every device built on their chips and boards. We’ve added a web device manager (“webdm”) that handles first-boot and app store access through the web consistently on every device. And we’ve preserved perfect compatibility with the snappy images of Ubuntu Core available on every major cloud today. So you can start your kickstarter project with a VM on your favourite cloud and pick your processor when you’re ready to finalise the device.

If you are an inventor or a developer of apps that might run on devices, then Ubuntu Core is for you. We’re launching it with a wide range of partners on a huge range of devices. From the pervasive Beaglebone Black to the $35 Odroid-C1 (1 GHz processor, 1 GB RAM), all the way up to the biggest Xeon servers, snappy Ubuntu Core gives you a crisp, ultra-reliable base platform, with all the goodness of Ubuntu at your fingertips and total control over the way you deliver your app to your users and devices. With an app store (well, a “snapp” store) built in and access to the amazing work of thousands of communities collaborating on GitHub and other forums, with code for robotics and autopilots and a million other things instantly accessible, I can’t wait to see what people build.

I for one welcome the ability to install AI on my next camera-toting drone, and am glad to be able to do it in a way that will get patched automatically with fixes for future heartbleeds!

OpenStack on a diet, redux

Saturday, November 8th, 2014

Subhu writes that OpenStack’s blossoming project list comes at a cost to quality. I’d like to follow up with an even leaner approach based on an outline drafted during the OpenStack Core discussions after ODS Hong Kong, a year ago.

The key ideas in that draft are:

Only call services “core” if the user can detect them.

How the cloud is deployed or operated makes no difference to a user. We want app developers to be able to count on the core services being there in anything that calls itself OpenStack.

Define both “core” and “common” services, but require only “core” services for a cloud that calls itself OpenStack compatible.

Separation of core and common lets us recognise common practice today, while also acknowledging that many ideas we’ve had in the past year or three are just 1.0 iterations; we don’t know which of them will stick any more than one could predict which services on any major public cloud will thrive and which will vanish over time. Signalling that something is “core” means it is something we commit to keeping around a long time. Signalling something is “common” means it’s widespread practice for it to be available in an OpenStack environment, but not a requirement.

Require that “common” services can be self-deployed.

Just as you can install a library or a binary in your home directory, you can run services for yourself in a cloud. Services do not have to be provided by the cloud infrastructure provider, they can usually be run by a user themselves, under their own account, as a series of VMs providing network services. Making it a requirement that users can self-provide a service before designating it common means that users can build on it; if a particular cloud doesn’t offer it, their users can self-provide it. All this means is that the common service itself builds on core services, though it might also depend on other common services which could be self-deployed in advance of it.

Require that “common” services have a public integration test suite that can be run by any user of a cloud to evaluate conformance of a particular implementation of the service.

For example, a user might point the test suite at HP Cloud to verify that the common service there actually conforms to the service test standard. Alternatively, the user who self-provides a common service in a cloud which does not provide it can verify that their self-deployed common service is functioning correctly. This also serves to expand the test suite for the core: we can self-deploy common services and run their test suites to exercise the core more thoroughly than Tempest could.
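
As a sketch of what such a user-runnable check could look like, here is a minimal round trip against a Swift-compatible object store, in Python with the requests library. The endpoint, token and assertions are illustrative stand-ins rather than an official OpenStack suite (a real one would be Tempest-scale), but the point stands: any tenant can run it against any cloud, or against a service they have self-deployed.

```python
"""Illustrative conformance check for a 'common' object storage service."""
import requests

def object_store_round_trip(storage_url: str, token: str) -> bool:
    """Do what any tenant could: create a container, write an object,
    read it back, then clean up. Returns True if the service behaved."""
    headers = {"X-Auth-Token": token}
    container = f"{storage_url}/conformance-check"
    obj = f"{container}/hello.txt"
    try:
        if requests.put(container, headers=headers).status_code not in (201, 202):
            return False
        requests.put(obj, headers=headers, data=b"hello").raise_for_status()
        return requests.get(obj, headers=headers).content == b"hello"
    finally:
        # Best-effort cleanup; a real suite would assert on these as well.
        requests.delete(obj, headers=headers)
        requests.delete(container, headers=headers)

# Point it at a public cloud's storage URL, or at the Swift you deployed
# yourself on top of core services (both values here are placeholders):
# object_store_round_trip("https://objects.example.com/v1/AUTH_tenant", "TOKEN")
```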

Keep the whole set as small as possible.

We know that small is beautiful; small is cleaner, leaner, more comprehensible, more secure, easier to test, likely to be more efficiently implemented, and more likely to attract developer participation. In general, if something can be cut from the core specification it should. “Common” should reflect common practice and can be arbitrarily large, and also arbitrarily changed.

In the light of those ideas, I would designate the following items from Subhu’s list as core OpenStack services:

  • Keystone (without identity, nothing)
  • Nova (the basis for any other service is the ability to run processes somewhere)
    • Glance (hard to use Nova without it)
  • Neutron (where those services run)
    • Designate (DNS is a core aspect of the network)
  • Cinder (where they persist data)

I would consider these to be common OpenStack services:

  • SWIFT (widely deployed, can be self-provisioned with Cinder block backends)
  • Ceph RADOS-GW object storage (widely deployed as an implementation choice, common because it could be self-provided on Cinder block)
  • Horizon (widely deployed, but we want to encourage innovation in the dashboard)

And these I would consider neither core nor common, though some of them are clearly on track there:

  • Barbican (not widely implemented)
  • Ceilometer (internal implementation detail, can’t be common because it requires access to other parts)
  • Juju (not widely implemented)
  • Kite (not widely implemented)
  • HEAT (on track to become common if it can be self-deployed; besides, I eat controversy for breakfast)
  • MAAS (who cares how the cloud was built?)
  • Manila (not widely implemented, possibly core once solid, otherwise common once, err, common)
  • Sahara (not widely implemented, weird that we would want to hardcode one way of doing this in the project)
  • Triple-O (user doesn’t care how the cloud was deployed)
  • Trove (not widely implemented, might make it to “common” if widely deployed)
  • Tuskar (see Ironic)
  • Zaqar (not widely implemented)

In the current DefCore discussions, the “layer” idea has been introduced. My concern is simple: how many layers make sense? End users don’t want to have to figure out what lots of layers mean. If we had “OpenStack HPC” and “OpenStack Scientific” and “OpenStack Genomics” layers, that would just be confusing. Let’s keep it simple – use “common” as a layer, but be explicit that it will change to reflect common practice (of course, anything in common is self-reinforcing in that new players will defer to norms and implement common services, thereby entrenching common unless new ideas make services obsolete).

U talking to me?

Wednesday, April 23rd, 2014

This upstirring undertaking Ubuntu is, as my colleague MPT explains, performance art. Not only must it be art, it must also perform, and that on a deadline. So many thanks and much credit to the teams and individuals who made our most recent release, the Trusty Tahr, into the gem of 14.04 LTS. And after the uproarious ululation and post-release respite, it’s time to open the floodgates to umpteen pent-up changes and begin shaping our next show.

The discipline of an LTS constrains our creativity – our users appreciate the results of a focused effort on performance and stability and maintainability, and we appreciate the spring cleaning that comes with a focus on technical debt. But the point of spring cleaning is to make room for fresh ideas and new art, and our next release has to raise the roof in that regard. And what a spectacular time to be unleashing creativity in Ubuntu. We have the foundations of convergence so beautifully demonstrated by our core apps teams – with examples that shine on phone and tablet and PC. And we have equally interesting innovation landed in the foundational LXC 1.0, the fastest, lightest virtual machines on the planet, born and raised on Ubuntu. With an LTS hot off the press, now is the time to refresh the foundations of the next generation of Linux: faster, smaller, better scaled and better maintained. We’re in a unique position to bring useful change to the ubiquitary Ubuntu developer, that hardy and precise pioneer of frontiers new and potent.

That future Ubuntu developer wants to deliver app updates instantly to users everywhere; we can make that possible. They want to deploy distributed brilliance instantly on all the clouds and all the hardware. We’ll make that possible. They want PAAS and SAAS and an Internet of Things that Don’t Bite; let’s make that possible. If free software is to fulfil its true promise it needs to be useful for people putting precious parts into production, and we’ll stand by our commitment that Ubuntu be the most useful platform for free software developers who carry the responsibilities of Dev and Ops.

It’s a good time to shine a light on umbrageous if understandably imminent undulations in the landscape we love – time to bring systemd to the centre of Ubuntu, time to untwist ourselves from Python 2.x and time to walk a little uphill and, thereby, upstream. Time to purge the ugsome and prune the unusable. We’ve all got our ucky code, and now’s a good time to stand united in favour of the useful over the uncolike and the utile over the uncous. It’s not a time to become unhinged or ultrafidian, just a time for careful review and consideration of business as usual.

So bring your upstanding best to the table – or the forum – or the mailing list – and let’s make something amazing. Something unified and upright, something about which we can be universally proud. And since we’re getting that once-every-two-years chance to make fresh starts and dream unconstrained dreams about what the future should look like, we may as well go all out and give it a dreamlike name. Let’s get going on the Utopic Unicorn. Give it stick. See you at vUDS.

ACPI, firmware and your security

Monday, March 17th, 2014

ACPI comes from an era when the operating system was proprietary and couldn’t be changed by the hardware manufacturer.

We don’t live in that era any more.

However, we DO live in an era where any firmware code running on your phone, tablet, PC, TV, wifi router, washing machine, server, or the server running the cloud your SAAS app is running on, is a threat vector against you.

If you read the catalogue of spy tools and digital weaponry provided to us by Edward Snowden, you’ll see that firmware on your device is the NSA’s best friend. Your biggest mistake might be to assume that the NSA is the only institution abusing this position of trust – in fact, it’s reasonable to assume that all firmware is a cesspool of insecurity courtesy of incompetence of the worst degree from manufacturers, and competence of the highest degree from a very wide range of such agencies.

In ye olden days, a manufacturer would ship Windows, which could not be changed, and they wanted to innovate on the motherboard, so they used firmware to present a standard interface for things like power management to a platform that could not be modified to accommodate their innovation.

Today, that same manufacturer can innovate on the hardware and publish a patch for Linux to express that innovation – and Linux is almost certainly the platform that matters. If Windows enters this market then the Windows driver model can evolve to give manufacturers this same ability to innovate in the Windows world, where proprietary unverifiable blobs are the norm.

Arguing for ACPI on your next-generation device is arguing for a trojan horse of monumental proportions to be installed in your living room and in your data centre. I’ve been to Troy, there is not much left.

We’ve spent a good deal of time working towards a world where you can inspect the code that is running on any device you run. In Ubuntu we work hard to make sure that any issues in that code can be fixed and delivered right away to millions of users. Bruce Schneier wisely calls security a process, not a product. But the processes for finding and fixing problems in firmware are non-existent and not improving.

I would very much like to be part of FIXING the security problem we engineers have created in our rush to ship products in the olden days. I’m totally committed to that.

So from my perspective:

  • Upstream kernel is the place to deliver the software portion of the innovation you’re selling. We have great processes now to deliver that innovation to users, and the same processes help us improve security and efficiency too.
  • Declarative firmware that describes hardware linkages and dependencies but doesn’t include executable code is the best chance we have of real bottom-up security. The Linux device tree is a very good starting point. We have work to do to improve it, and we need to recognise the importance of being able to fix declarations over the life of a product, but we must not introduce blobs in order to short cut that process.
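
As a small illustration of the declarative idea, here is a Python sketch that reads a couple of standard device tree properties on a running Linux system. On device-tree based platforms the kernel exposes the parsed tree under /proc/device-tree, so inspecting the hardware description involves no vendor code execution at all; on an ACPI/x86 machine that path simply won’t exist.

```python
"""Read standard device tree properties exposed by the Linux kernel."""
from pathlib import Path

DT = Path("/proc/device-tree")

def read_strings(prop: str) -> str:
    # Device-tree string properties are NUL-terminated, and 'compatible'
    # is a NUL-separated list, so split on NUL and drop the empty pieces.
    raw = (DT / prop).read_bytes()
    return ", ".join(p.decode(errors="replace") for p in raw.split(b"\x00") if p)

if __name__ == "__main__":
    if DT.exists():
        print("model:     ", read_strings("model"))
        print("compatible:", read_strings("compatible"))
    else:
        print("No device tree here; probably an ACPI/x86 platform.")
```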

Let’s do this right. Each generation gets its turn to define the platforms it wants to pass on – let’s pass on something we can be proud of.

Our mission in Ubuntu is to give the world’s people a free platform they can trust.  I suspect a lot of the Linux community is motivated by the same goal regardless of their distro. That also means finding ways to ensure that those trustworthy platforms can’t be compromised elsewhere. We can help vendors innovate AND ensure that users have a fighting chance of privacy and security in this brave new world. But we can’t do that if we cling to the tools of the past. Don’t cave in to expediency. Design a better future, it really can be much healthier than the present if we care and act accordingly.

 

The very best edge of all

Saturday, March 8th, 2014

Check out “loving the bottom edge” for the most important bit of design guidance for your Ubuntu mobile app.

This work has been a LOT of fun. It started when we were trying to find the zen of each edge of the screen, a long time back. We quickly figured out that the bottom edge is by far the most fun, by far the most accessible. You can always get to it easily, it feels great. I suspect that’s why Apple has used the bottom edge for their quick control access on iOS.


We started in the same place as Apple, thinking that the bottom edge was so nice we wanted it for ourselves, in the system. But as we discussed it, we started to think that the app developer was the one who deserved to do something really distinctive in their app with it instead. It’s always tempting to grab the tastiest bit for oneself, but the mark of civility is restraint in the use of power and this felt like an appropriate time to exercise that restraint.

Importantly you can use it equally well if we split the screen into left and right stages. That made it a really important edge for us because it meant it could be used equally well on the Ubuntu phone, with a single app visible on the screen, and on the Ubuntu tablet, where we have the side stage as a uniquely cool way to put phone apps on tablet screens alongside a bigger, tablet app.

The net result is that you, the developer, and you, the user, have complete creative freedom with that bottom edge. There are of course ways to judge how well you’ve exercised that freedom, and the design guidance tries to leave you all the freedom in the world while still providing a framework for evaluating how good the result will feel to your users. If you want, there are some archetypes and patterns to choose from, but what I’d really like to see is NEW patterns and archetypes coming from diverse designs in the app developer community.

Here’s the key thing – that bottom edge is the one thing you are guaranteed to want to do more innovatively on Ubuntu than on any other mobile platform. So if you are creating a portable app, targeting a few different environments, that’s the thing to take extra time over for your Ubuntu version. That’s the place to brainstorm, try out ideas on your friends, make a few mockups. It’s the place you really express the single most important aspects of your application, because it’s the fastest, grooviest gesture in the book, and it’s all yours on Ubuntu.

Have fun!

Raring community skunkworks

Thursday, October 18th, 2012

Mapping out the road to 13.04, there are a few items with high “tada!” value that would be great candidates for folk who want to work on something that will get attention when unveiled. While we won’t talk about them until we think they are ready to celebrate, we’re happy to engage with contributing community members that have established credibility (membership, or close to it) in Ubuntu, who want to be part of the action.

This would provide early community input and review, without spoiling the surprise when we think the piece is ready. It would allow community members to work on something that will be widely covered at release (at least, on OMG ;-))

The skunkworks approach has its detractors. We’ve tried it both ways, and in the end, figured out that critics will be critics whether you discuss an idea with them in advance or not. Working on something in a way that lets you refine it till it feels ready to go has advantages: you can take time to craft something, you can be judged when you’re ready, you get a lot more punch when you tell your story, and you get your name in lights (though not every headline is one you necessarily want ;)).

So, we thought we would extend the invitation to people who trust us and in whom we have reason to trust, to work together on some sexy 13.04 surprises. The projects range from webby (JavaScript, CSS, HTML5) to artistic (do you obsess about kerning and banding?) to scientific (are you a framerate addict?) to glitzy (pixel shader sherpas wanted) to privacy-enhancing (how is your crypto?) to analytical (big daddy, big brother, pick your pejorative). But they all make the Ubuntu experience better for millions of users, they are all groundbreaking in free software, they will all result in code under the GPL (or an existing upstream license if they are extensions to existing projects). No NDAs needed but we will need to trust you not to talk in your sleep ;). We’ll also need to trust you to write code that is thorough and tested, stuff you’ll be as proud of as we are of the rest of the Ubuntu experience. Of course.

There’s also plenty going on that doesn’t warrant the magician’s reveal. But if you are game for a bit of the spotlight, bring some teflon and ping Michael Hall at mhall119 on Freenode.

Microsoft has built an impressive new entrant to the Infrastructure-as-a-Service market, and Ubuntu is there for customers who want to run workloads on Azure that are best suited to Linux. Windows Azure was built for the enterprise market, an audience which is increasingly comfortable with Ubuntu as a workhorse for scale-out workloads; in short, it’s a good fit for both of us, and it’s been interesting to do the work to bring Ubuntu to the platform.

Given that it’s normal for us to spin up 2,000-node Hadoop clusters with Juju, it will be very valuable to have a new enterprise-oriented cloud with which to evaluate performance, latency, reliability, scalability and many other key metrics for production deployment scenarios.

As IAAS grows in recognition as a standard part of the enterprise toolkit, it will be important to have a wide range of infrastructures that are addressable, with diverse strengths. In the case of Windows Azure, there is clearly a deep connection between Windows-based IT and the new IAAS. But I think Microsoft has set their sights on a bigger story, which is high-quality enterprise-oriented infrastructure that is generally useful. That’s why Ubuntu is important to them, and why it was worthwhile for us to work together despite our differences. Just as we need to ensure that customers can run Ubuntu and Windows together inside their data centre and on the LAN, we want to ensure that cloud workloads play nicely.

The team leading Azure has a sophisticated understanding of Ubuntu and Linux in general. They are taking a pragmatic approach that will raise eyebrows around the Redmond campus, but is exactly what customers want to see. We have taken a similar view. I know there will be members of the free software community that will leap at the chance to berate Microsoft for its very existence, but it’s not very Ubuntu to do so: let’s argue our perspective, work towards our goals, be open to those who are open to us, and build great stuff. There is nothing proprietary in Ubuntu-for-Azure, and no about-turn from us on long-held values. This is us making sure our audience, and especially the enterprise audience, can benefit from the work our community and Canonical do no matter where they want to do it.

Windows Azure IAAS is in beta. If you are using the cloud today, or interested in it, I highly recommend you try it out. There’s no better way to make yourself heard over there.

Unsung heroes

Monday, March 26th, 2012

The new privacy features in Ubuntu 12.04 are a lovely example of collaboration and contribution. I’d like to thank Manish Sinha and Stefano Candori, who contributed significantly to that effort and hadn’t received a shout-out despite being central to the success. The body of contributors to Ubuntu and Unity continues to grow, and I know the team finds it immensely rewarding to help folk land patches or changes that bring the experience closer to the designed goal. Manish, Stefano, thank you!