This quirky scheme of adjectives and animals presents a pretty puzzle every six months. What mix of characteristics do we want to celebrate in the next release? Here we are, busily finalizing the precise pangolin (which was a rather perfect product placement for a scaly anteater, all things considered) and before one realises it’s time to talk turkey, so to speak, about Q! Our code names may raise a quizzical eyebrow here and there, but they capture the zeitgeist of a cycle and shape our discussions in surprising ways. The quest for a name has no quick answer unless, of course, you jump to the last paragraph 😉
12.04 being an LTS we’ve been minding our P’s and Q’s, but many of our quality-oriented practices from 12.04 LTS will continue into Q territory. We’ll keep the platform usable throughout the cycle, because that helped hugely to encourage daily use of the release, which in turn gives us much better feedback on questions of quality. And we’ll ratchet up the continuous integration, smoke testing and automated benchmarking of the release, since we can do it all in the cloud. We have, so to speak, stacks and stacks of cloud to use. So quality is quotidian rather than quarterly. And it is both qualitative and quantitative, with user research and testing continuing to shape our design decisions. The effort we put into polishing Unity and the rest of the platform in 12.04 seems to have paid off handsomely, with many quondam quarrelsome critics suddenly quiescent in the face of a surge in support for the work.
But the finest quality is that without a name, so support for “quality” as a codename would at best be qualified. Every release has quality first these days – they all get used, on the server, on devices, and while the term of maintenance might vary, our commitment to interim releases is just as important as that to an LTS.
Our focus on quality permeates from the platform up to the code we write upstream, and our choices of upstream components too. We require tests and gated trunks for all Canonical codebases, and prefer upstreams that share the same values. Quality starts at the source, it’s not something that can be patched in after the fact. And I’m delighted that we have many upstreams using our tools to improve their quality too! We have awesome tools for daily builds from branches, continuous integration support in Launchpad, and the ability to provide a gated trunk with tests run in the cloud for projects that really care about quality. Rumours and allegations of a move from Upstart to systemd are unfounded: Upstart has a huge battery of tests, the competition has virtually none. Upstart knows everything it wants to be, the competition wants to be everything. Quality comes from focus and clarity of purpose, it comes from careful design and rigorous practices. After a review by the Ubuntu Foundations team our course is clear: we’re committed to Upstart, it’s the better choice for a modern init, innit. For our future on cloud and client, Upstart is crisp, clean and correct. It will be a pleasure to share all the Upstart-enablement patches we carry with other family friends as soon as their release is ready and they can take a breath, so to speak.
From a styling point of view, we think in terms of quadruples: this next release starts a cycle of four, which will culminate in 14.04 LTS. So there’s an opportunity to refresh the look. That will kick off with a project on typography to make sure we are expressing ourselves with crystal clarity – making the most of Ubuntu’s Light and Medium font weights for a start. And a project on iconography, with the University of Reading, to refine the look of apps and interfaces throughout the platform. It’s amazing how quaint the early releases of Ubuntu look compared to the current style. And we’re only just getting started! In our artistic explorations we want to embrace tessellation as an expression of the part-digital, part-organic nature of Ubuntu. We love the way tessellated art expresses both the precision and reliability of our foundations, and the freedom and collaboration of a project driven by people making stuff for people. There’s nothing quixotic in our desire to make Ubuntu the easiest, steadiest, and most beautiful way to live digitally.
On the fauna front, the quotable campaign for the Queer Quokka is quorate but, it must sometimes be said, this is not a democracy. One man’s favourite furball is another’s mangy marsupial. No, the quintessential stories of Q will be all about style on the client, with a refresh of our theme and typography, a start on new iconography and perhaps even a new form factor taking flight. So brown is out and something colourful and light is called for. On the cloud front, the new virtualized network madness called Quantum will make its appearance. Being a first cut, it’s more likely to be Folsom than wholesome, but it’s going to be worth calling out, and the name is reminiscent of our package-oriented practices, where goodness is delivered one piece at a time. And so the stage is set for a decision: I give you the Quantal Quetzal, soon to be dressed in tessellated technicolour, now open for toolchains, kernels and other pressing preparatory packages.
As we move from “tens” to “hundreds” to “thousands” of nodes in a typical data centre we need new tools and practices. This hyperscale story – of hyper-dense racks with wimpy nodes – is the big shift in the physical world which matches the equally big shift to cloud computing in the virtualised world. Ubuntu’s popularity in the cloud comes in part from being leaner, faster, more agile. And MAAS – Metal as a Service – is bringing that agility back to the physical world for hyperscale deployments.
Servers used to aspire to being expensive. Powerful. Big. We gave them names like “Hercules” or “Atlas”. The bigger your business, or the bigger your data problem, the bigger the servers you bought. It was all about being beefy – with brands designed to impress, like POWER and Itanium.
Things are changing.
Today, server capacity can be bought as a commodity, based on the total cost of compute: the cost per teraflop, factoring in space, time, electricity. We can get more power by adding more nodes to our clusters, rather than buying beefier nodes. We can increase reliability by doubling up, so services keep running when individual nodes fail. Much as RAID changed the storage game, this scale-out philosophy, pioneered by Google, is changing the server landscape.
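To make the “total cost of compute” framing concrete, here is a toy calculation comparing one beefy node with a rack of wimpy nodes; every price, wattage and throughput figure below is invented purely for illustration:

```python
# Illustrative only: all hardware prices, power draws and TFLOP figures
# below are made up to show the shape of the calculation.
def cost_per_teraflop(hw_cost, watts, tflops, years=3, usd_per_kwh=0.10):
    """Total cost of compute: hardware plus electricity over its lifetime,
    divided by sustained throughput."""
    hours = years * 365 * 24
    energy_cost = watts / 1000 * hours * usd_per_kwh  # kWh * price
    return (hw_cost + energy_cost) / tflops

# One "beefy" node vs 40 wimpy nodes with the same total throughput.
beefy = cost_per_teraflop(hw_cost=25000, watts=800, tflops=2.0)
wimpy = cost_per_teraflop(hw_cost=40 * 400, watts=40 * 30, tflops=40 * 0.05)
print(f"beefy: ${beefy:,.0f}/TFLOP, wimpy rack: ${wimpy:,.0f}/TFLOP")
```

With these invented numbers the scale-out rack wins on cost per teraflop even before factoring in the redundancy it buys; the point is the accounting, not the specific figures.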
In this hyperscale era, each individual node is cheap, wimpy and, by historical standards for critical computing, unreliable. But together, they’re unstoppable. The horsepower now resides in the cluster, not the node. Likewise, the reliability of the infrastructure now depends on redundancy, rather than heroic performances from specific machines. There is, as they say, safety in numbers.
We don’t even give hyperscale nodes proper names any more – ask “node-0025904ce794”. Of course, you can still go big with the cluster name. I’m considering “Mark’s Magnificent Mountain of Metal” – significantly more impressive than “Mark’s Noisy Collection of Fans in the Garage”, which is what Claire will probably call it. And that’s not the knicker-throwing kind of fan, either.
The catch to this massive multiplication in node density, however, is in the cost of provisioning. Hyperscale won’t work economically if every server has to be provisioned, configured and managed as if it were a Hercules or an Atlas. To reap the benefits, we need leaner provisioning processes. We need deployment tools to match the scale of the new physical reality.
That’s where Metal as a Service (MAAS) comes in. MAAS makes it easy to set up the hardware on which to deploy any service that needs to scale up and down dynamically – a cloud being just one example. It lets you provision your servers dynamically, just like cloud instances – only in this case, they’re whole physical nodes. “Add another node to the Hadoop cluster, and make sure it has at least 16GB RAM” is as easy as asking for it.
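The request-driven flavour of that idea can be sketched in a few lines of Python; the class and method names here are purely illustrative, not MAAS’s actual API:

```python
# Hypothetical sketch of "ask for a node with constraints", the idea behind
# MAAS. Names are illustrative; this is not MAAS's real interface.
class MetalPool:
    def __init__(self, nodes):
        # nodes: dicts like {"name": ..., "ram_gb": ..., "allocated": False}
        self.nodes = nodes

    def allocate(self, min_ram_gb=0):
        """Hand out the first free node meeting the constraint, like a cloud
        instance request -- except the result is a whole physical machine."""
        for node in self.nodes:
            if not node["allocated"] and node["ram_gb"] >= min_ram_gb:
                node["allocated"] = True
                return node["name"]
        raise RuntimeError("no free node meets the constraints")

pool = MetalPool([
    {"name": "node-0025904ce794", "ram_gb": 8, "allocated": False},
    {"name": "node-0025904ce795", "ram_gb": 16, "allocated": False},
])
print(pool.allocate(min_ram_gb=16))  # -> node-0025904ce795
```

The real service adds the hard parts (PXE boot, commissioning, power control), but the contract is the same: describe what you need, get a physical node back.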
With a simple web interface, you can add, commission, update and recycle your servers at will. As your needs change, you can respond rapidly, by adding new nodes and dynamically re-deploying them between services. When the time comes, nodes can be retired for use outside the MAAS.
As we enter an era in which Atom is as important in the data centre as Xeon, an operating system like Ubuntu makes even more sense. Its freedom from licensing restrictions, together with the labour-saving power of tools like MAAS, makes it cost-effective, finally, to deploy and manage hundreds of nodes at a time.
Here’s another way to look at it: Ubuntu is bringing cloud semantics to the bare metal world. What a great foundation for your IaaS.
In the open source community, we celebrate having pieces that “do one thing well”, with lots of orthogonal tools compounding to give great flexibility. But that same philosophy leads to shortcomings on the GUI / UX front, where we want all the pieces to be aware of each other in a deeper way.
For example, we consciously place the notifications in the top right of the screen, avoiding space that is particularly precious (like new tab titles, and search boxes). But the indicators are also in the top right, and they make menus, which drop down into the same space a notification might occupy.
Since we know that notifications are queued, no notification is guaranteed to be displayed instantly, so a smarter notification experience would stay out of the way while you were using indicator menus, or get out of the way when you invoke them. The design story of focusayatana, where we balance the need for focus with the need for awareness, would suggest that we should suppress awareness-oriented things in favour of focus things. So when you’re interacting with an indicator menu, we shouldn’t pop up the notification. Since the notification system, and the indicator menu system, are separate parts, the UNIX philosophy sells us short in designing a smart, smooth experience because it says they should each do their thing individually.
Going further, it’s silly that the sound menu next/previous track buttons pop up a notification, because the same menu shows the new track immediately anyway. So the notification, which is purely for background awareness, is distracting from your focus, which is conveying exactly the same information!
But it’s not just the system menus. Apps can play in that space too, and we could be better about shaping the relationship between them. For example, if I’m moving the mouse around in the area of a notification, we should be willing to defer it a few seconds to stay out of the focus. When I stop moving the mouse, or typing in a window in that region, then it’s OK to pop up the notification.
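The deferral behaviour described above can be sketched as a tiny state machine; the names and the three-second threshold are illustrative, not the real notify-osd implementation:

```python
# Toy sketch of the deferral heuristic: hold back a queued notification
# while the pointer is active in the region the bubble would occupy.
# Names and thresholds are illustrative, not actual notify-osd code.
class NotificationQueue:
    DEFER_SECONDS = 3.0  # invented grace period

    def __init__(self):
        self.queue = []
        self.last_pointer_activity = float("-inf")

    def pointer_moved_in_region(self, now):
        # Called when the mouse moves (or typing happens) near the bubble area.
        self.last_pointer_activity = now

    def push(self, message):
        # Notifications are queued, so none is guaranteed instant display.
        self.queue.append(message)

    def pop_displayable(self, now):
        """Return the next notification to show, or None to stay out
        of the user's way for a little longer."""
        if not self.queue:
            return None
        if now - self.last_pointer_activity < self.DEFER_SECONDS:
            return None  # user is focused there; defer, don't drop
        return self.queue.pop(0)
```

Because the queue never drops anything, deferring costs nothing: the bubble simply appears once the user’s attention has moved on.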
It’s only by looking at the whole that we can design great experiences. And only by building a community of both system and application developers who care about the whole can we make those designs real. So, thank you to all of you who approach things this way; we’ve made huge progress, and hopefully there are some ideas here for low-hanging improvements too.
A remarkable thing happened this year: companies started adopting Ubuntu over RHEL for large-scale enterprise workloads, in droves:
The trend is even starker if you look at what we know of new-style services, like clouds and big data, but since most of that happens behind the firewall it’s all anecdata, while web services are a public affair.
The key driver of this has been that we added quality as a top-level goal across the teams that build Ubuntu – both Canonical’s and the community’s. We also have retained the focus on keeping up-to-date tools available on Ubuntu for developers, and on delivering a great experience in the cloud, where computing is headed.
The headlines for Ubuntu have all been about the desktop and consumer-focused design efforts, with the introduction of Unity and the expansion of our goals to span the phone, the tablet, the TV as well as the PC. But underpinning those goals has been a raising of the quality game: OEMs and consumers demand a very high level of quality, and so we now have large-scale automated testing, improved upload processes, faster responses to issues that crop up inevitably during the development cycle, a broader base of users and contributors in the development release, and better engagements with the vendors who pre-install Ubuntu. So 12.04 LTS is a coming of age release for Ubuntu in the data centre as much as it’s the first LTS to sport the interface which was designed to span the full range of personal computing needs.
We’re also seeing the wider community respond to the goal of cadence. OpenStack’s Essex release is lined up to be a perfect fit for 12.04 LTS. That is not a coincidence, it’s a value to which both projects are committed. Upstream projects that care about their users and about being adopted quickly want an effective conduit of their goodness straight to users. By adopting the 6-month / 2-year cadence of step and LTS releases, and aligning those with Ubuntu’s release cycle, OpenStack ensures that a very large audience of system administrators, developers and enterprise decision makers can plan for their OpenStack deployment, and know they will have a robust and very widely deployed LTS platform together with a very widely supported release of OpenStack. Every dependency that Essex needs is exactly provided in 12.04 LTS, the way that all of the major public clouds based on OpenStack are using it. By adopting a common message on releases, we make both OpenStack and Ubuntu stronger, and do so in a way which is entirely transparent and accessible to other distributions.
Quality. Design. Cadence. You can count on them in Ubuntu, and OpenStack.
Governments are making increasingly effective use of Ubuntu in large-scale projects, from big data to little schools. There is growing confidence in open source in government quarters, and growing sophistication in how they engage with it.
But adopting open source is not just about replacing one kind of part with another. Open source is not just a substitute for shrink-wrapped proprietary software. It’s much more malleable in the hands of industry and users, and you can engage with it very differently as a result. I’m interested in hearing from thought leaders in the civil service on ways they think governments could get much more value with open source, by embracing that flexibility. For example, rather than one-size-fits-all software, why can’t we deliver custom versions of Ubuntu for different regions or countries or even departments and purposes? Could we enable the city government of Frankfurt to order PCs with the Ubuntu German Edition pre-installed?
Or could we go further, and enable those governments to participate in the definition and production and certification process? So rather than having to certify exactly the same bits which everyone else is using, they could create a flavour which is still “certified Ubuntu” and fully compatible with the whole Ubuntu ecosystem, can still be ordered pre-installed from global providers like Dell and Lenovo, but has the locally-certified collection of software, customizations, and certifications layered on top?
If we expand our thinking beyond “replacing what went before”, how could we make it possible for the PC companies to deliver much more relevant offerings, and better value to governments by virtue of free software? Most of the industry processes and pipelines were set up with brittle, fixed, proprietary software in mind. But we’re now in a position to drive change, if there’s a better way to do it, and customers to demand it.
So, for a limited time only, you can reach me at firstname.lastname@example.org (there were just too many cultural references there to resist, and it’s not a mailbox that will be needed again soon ;). If you are in the public service, or focused on the way governments and civic institutions can use open source beyond simply ordering large numbers of machines at a lower cost, drop me a note and let’s strike up a conversation.
Here are a few seed thoughts for exploration and consideration.
Local or national Ubuntu editions, certified and pre-installed by global brands
Lots of governments now buy PCs from the world market with Ubuntu pre-installed. Several Canadian tenders have been won by companies bidding with Ubuntu pre-installed on PCs. The same is true in Brazil and Argentina, in China and India and Spain and Germany. We’re seeing countries or provinces that previously had their own-brand local Linux, which they had to build locally and install manually, shifting towards pre-order with Ubuntu.
In part, this is possible because the big PC brands have built up enough experience and confidence working with Canonical and Ubuntu to be able to respond to those tenders. You can call up Dell or Lenovo and order tens of thousands of laptops or desktops with Ubuntu pre-installed, and they will show up on time, certified. The other brands are following. It has been a lot of work to reach that point, but we’ve got the factory processes all working smoothly from Shenzen to Taipei. If you want tens of thousands of units, it all works well.
But Ubuntu, or free software in general, is not Windows. You shouldn’t have to accept the one-size-fits-all story. We saw all of those local editions, or “national Linux” efforts, precisely because of the desire that regions have to build something that really suits them well. And Ubuntu, with its diversity of packages, open culture and remix-friendly licensing, is a very good place to start. Many of the Spanish regional distros, for example, are based on Ubuntu. They have the advantage of being shaped to suit local needs better than we can with vanilla Ubuntu, but the disadvantage of being hard to certify with major ISVs or IHVs.
I’m interested in figuring out how we can formalise that flexibility, so that we can get the best of both worlds: local customizations and preferences expressed in a way compatible with the rest of the Ubuntu ecosystem, so they can take advantage of all the software and skills and certifications that the ecosystem and brand bring. And so they can order it pre-installed from any major global PC company, no problem, and upgrade to the next version of Ubuntu without losing all the customization work that they did.
Security certifications by local agencies, with policy frameworks and updates
A European defence force has recently adopted Ubuntu widely as part of an agility-enhancing strategy that gives soldiers and office workers secure desktop capabilities from remote locations like… home, or out in the field. There’s some really quite sexy innovation there, but there’s also Ubuntu as we know and love it. In the process of doing the work, it emerged that their government has certified some specific versions of key apps like OpenVPN, and it would be very useful to them if they could ensure that those versions were the ones in use widely throughout the government.
Of course, today, that means manually installing the right version every time, and tracking updates. But Ubuntu could do that work, if it knew enough about the requirements and the policies, and there was a secure way to keep those policies up to date. Could we make the operating system responsive to such policies, even where it isn’t directly managed by some central infrastructure? If Ubuntu “knows” that it’s supposed to behave in a particular way, can we make it do much of the work itself?
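One building block for this already exists in Ubuntu today: APT pinning can hold a package at a certified version across routine upgrades. A sketch of such a policy file, with an invented file name and version number:

```
# /etc/apt/preferences.d/certified-openvpn  (file name and version illustrative)
# Keep OpenVPN at the government-certified version across routine upgrades.
Package: openvpn
Pin: version 2.2.1*
Pin-Priority: 1001
```

A priority above 1000 tells APT to prefer that version even if it would otherwise mean a downgrade; a policy-aware OS could keep such files up to date from a trusted, signed feed rather than relying on manual installs.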
The same idea is useful in an organizational setting, too. And the key question is whether we can do that, while still retaining both access to the wider Ubuntu ecosystem, and compatibility with factory processes, so these machines could be ordered and arrive pre-installed and ready to go.
Local cultural customization
On a less securocratic note, the idea of Ubuntu being tailored to local culture is very appealing. Every region or community has its news sites, its languages, its preferred apps and protocols and conventions. Can we expand the design and definition of the Ubuntu experience so that it adapts naturally to those norms in a way much richer and more meaningful than we can with Windows today?
What would the key areas of customisation be? Who would we trust to define them? How would we combine the diversity of our LoCo communities with the leadership of Ubuntu and the formality of government or regional authorities? Would we *want* to do that? It’s a very interesting topic, because the value of having officially recognised platforms is just about on a par with the value of having agile, crowdsourced and community-driven customisation. Nevertheless, could we find a model whereby governments or civil groups could underwrite the creation of recognised editions of Ubuntu that adapt themselves to local cultural norms? Would we get a better experience for human beings if we did that?
Local skills development
Many of the “national linux” efforts focus on building small teams of engineers and designers and translators that are tasked with bringing a local flavour to the technology or content in the distro. We have contributors from almost (perhaps actually?) every country, and we have Canonical members in nearly 40 countries. Could those two threads weave together in an interesting way? I’m often struck, when I meet those teams, by the awkwardness of teams that feel like start-ups working inside government departments – it’s never seemed an ideal fit for either party.
Sometimes the teams are very domain focused; one such local-Linux project is almost entirely staffed by teachers, because the genesis of the initiative was in school computing, and they have done well for that purpose.
But could we bring those two threads together? The Ubuntu-is-distributed-already and the local-teams-hired-to-focus-on-local-work threads seem highly complementary; could we create teams which are skilled in distro development work, managed as part of the broader Ubuntu effort, but tasked with local priorities?
Public investments in sector leadership
Savvy governments are starting to ensure that research and development that they fund is made available under open licenses. Whether that’s open content licensing, or open source licensing, or RAND-Z terms, there’s a sensible view that information or tools paid for with public money should be accessible to that public on terms that let them innovate further or build businesses or do analysis of their own.
Some of that investment turns out to be software. For example, governments might prioritise genomics, or automotive, or aerospace, and along the way they might commission chunks of software that are relevant. How could we make that software instantly available to anybody running the relevant local flavour of Ubuntu? Would we do the same with content? How do we do that without delivering Newspeak to the desktop? Are there existing bodies of software which could be open sourced, but they don’t have a natural home, they’re essentially stuck on people’s hard drives or tapes?
There are multiple factors driving the move of public institutions to open source – mainly the recognition, after many years, of the quality and flexibility that an open platform provides. Austerity is another source of motivation to change. But participation, the fact that open source can be steered and shaped to suit the needs of those who use it simply through participating in open projects, hasn’t yet been fully explored. Food for thought.
And there’s much more to explore. If this is interesting to you, and you’re in a position to participate in building something that would actually get used in such a context, then please get in touch. Directly via The Governator, or via my office.
Our mission with Ubuntu is to deliver, in the cleanest, most economical and most reliable form, all the goodness that engineers love about free software to the widest possible audience (including engineers :)). We’ve known for a long time that free software is beautiful on the inside – efficient, accurate, flexible, modifiable. For the past three years, we’ve been leading the push to make free software beautiful on the outside too – easy to use, visually pleasing and exciting. That started with the Ubuntu Netbook Remix, and is coming to fruition in 12.04 LTS, now in beta.
For the first time with Ubuntu 12.04 LTS, real desktop user experience innovation is available on a full production-ready enterprise-certified free software platform, free of charge, well before it shows up in Windows or MacOS. It’s not ‘job done’ by any means, but it’s a milestone. Achieving that milestone has tested the courage and commitment of the Ubuntu community – we had to move from being followers and integrators, to being designers and shapers of the platform, together with upstreams who are excited to be part of that shift and passionate about bringing goodness to a wide audience. It’s right for us to design experiences and help upstreams get those experiences to be amazing, because we are closest to the user; we are the last mile, the last to touch the code, and the first to get the bug report or feedback from most users.
Thank you, to those who stood by Ubuntu, Canonical and me as we set out on this adventure. This was a big change, and in the face of change, many wilt, many panic, and some simply find that their interests lie elsewhere. That’s OK, but it brings home to me the wonderful fellowship that we have amongst those who share our values and interests – their affiliation, advocacy and support is based on something much deeper than a fad or an individualistic need, it’s based on a desire to see all of this intellectual wikipedia-for-code value unleashed to support humanity at large, from developers to data centre devops to web designers to golden-years-ganderers, serving equally the poorest and the bankers who refuse to serve them, because that’s what free software and open content and open access and level playing fields are all about.
To those of you who rolled up your sleeves and filed bugs and wrote the documentation and made the posters or the cupcakes, thank you.
I’m very serious about loving the recent changes. I think I’m a fair representative of the elderly community ………. someone who doesn’t particularly care to learn new things, but just wants things to make sense. I think we’re there! Lance
You’ll be as delighted as I was with the coverage of Ubuntu for Android at MWC in Barcelona last week:
“one of the more eye-catching concepts being showcased” – v3
“sleeker, faster, potentially more disruptive” – IT Pro Portal
“you can also use all the features of Android” – The Inquirer
“I can easily see the time when I will be carrying only my smartphone” – UnwiredView
“everything it’s been claimed to be” – Engadget
“Efficiency, for the win!” – TechCrunch
“phones that become traditional desktops have the potential to benefit from the extra processing power” – GigaOM
“This, ladies and gentlemen, is the future of computing” – IntoMobile
Free software distils the smarts of those of us who care about computing, much like Wikipedia does. Today’s free software draws on the knowledge and expertise of hundreds of thousands of individuals, all over the world, all of whom helped to make this possible, just like Wikipedia. It’s only right that the benefits of that shared wisdom should accrue to everyone without charge, which is why contributing to Ubuntu is the best way to add leverage to the contributions made everywhere else, to ensure they have the biggest possible impact. It wouldn’t be right to have to pay to have a copy of Wikipedia on your desk at the office, and the same is true of the free software platform. The bits should be free, and the excellent commercial services optional. That’s what we do at Canonical and in the Ubuntu community, and that’s why we do it.
Engineers are human beings too!
We set out to refine the experience for people who use the desktop professionally, and at the same time, make it easier for the first-time user. That’s a very hard challenge. We’re not making Bob, we’re making a beautiful, easy to use LCARS ;-). We measured the state of the art in 2008 and it stank on both fronts. When we measure Ubuntu today, based on how long it takes heavy users to do things, and a first-timer to get (a different set of) things done, 12.04 LTS blows 10.04 LTS right out of the water and compares favourably with both MacOS and Windows 7. Unity today is better for both hard-core developers and first-time users than 10.04 LTS was. Hugely better.
For software developers:
A richer set of keyboard bindings for rapid launching, switching and window management
Pervasive search results in faster launching for occasional apps
Far less chrome in the shell than any other desktop; it gets out of your way
Much more subtle heuristics to tell whether you want the launcher to reveal, and to hint it’s about to
Integrated search presents a faster path to find any given piece of content
Magic window borders and the resizing scrollbar make for easier window sizing despite razor-thin visual border
Full screen apps can include just the window title and indicators – full screen terminal with all the shell benefits
… and many more. In 12.04 LTS, multi-monitor use cases got a first round of treatment, and we will continue to refine and improve that every six months now that the core is stable and effective. But the general commentary from professionals, and software developers in particular, is “wow”. In this last round we have focused testing on more advanced users and use cases, with user journeys that include many terminal windows, and there is a measurable step up in the effectiveness of Unity in those cases. There are still rough edges to be sure, even in this 12.04 release (we are not going to be able to land locally-integrated menus in time, given the freeze dates and the need for focus on bug fixes), but we will SRU key items and of course continue to polish it in 12.10 onwards. We are all developers, and we all use it all the time, so this is in our interests too.
We care about efficiency, performance, quality, reliability. So do developers and engineers. We care about beauty and ease of use – turns out most engineers and developers care about that too. I’ve had lots of hard-core engineers tell me that they “love the challenges the design team sets”, because it’s hard to make easy software, and harder to make it pixel-perfect. And lots that have switched back to Ubuntu from the MacOS because devops on Ubuntu… rock.
The hard core Linux engineers can use… anything, really. Linus is probably equally comfortable with Linux-from-scratch as with Ubuntu. But his daughter Daniela needs something that works for human beings of all shapes, sizes, colours and interests. She’s in our audience. I hope she’d love Ubuntu if she tries it. She could certainly install it for herself while Dad isn’t watching 😉 Linus and other kernel hackers are our audience too, of course, but they can help themselves if things get stuck. We have to shoulder the responsibility for the other 99%. That’s a really, really hard challenge – for engineers and artists alike. But we’ve made huge progress. And doing so brings much deeper meaning to the contributions of all the brilliant people that make free software, everywhere.
Again, thanks to the Ubuntu community, 500 amazing people at Canonical, the contributors to all of the free software that makes it possible, and our users.
We’ll show Ubuntu neatly integrated into Android at Mobile World Congress next week. Carry just the phone, and connect it to any monitor to get a full Ubuntu desktop with all the native apps you want, running on the same device at the same time as Android. Magic. Everything important is shared across the desktop and the phone in real time.
It’s a lightweight way to be – everything seamlessly available with the right interface for the right form factor, with no hassles syncing. It just works, the way Ubuntu should. Lots of work behind the scenes to make both systems share what they need to share, but the desktop is a no-compromise desktop.
This isn’t the “Ubuntu Phone”. The phone experience here is pure Android. This announcement is playing to a different story, which is the convergence of multiple different form factors into one most-personal device. Naturally, the most personal device is the phone, so we want to get all of these different personalities – phone, tablet and desktop – into the phone. When you need a desktop, you connect up to a screen and a keyboard. When you need a tablet, you dock to some very elegant glass.
Just for fun, we’ve integrated the Ubuntu TV experience too – so this isn’t just a desktop in your pocket, it’s a media centre too.
Come and say hello in Barcelona next week, and I’ll be glad to hear what you think of it in person. Everyone we’ve shown it to has had a “wow!” moment. For network operators who have long believed that the phone was the PC of the future for the next billion connected consumers, and for handset manufacturers who want to offer companies a single device for corporate computing, this is a delicious prospect. For those of us who love our desktops free, focused and mobile, it’s nirvana.
We’re publishing an initial version of the Ubuntu Business Desktop Remix today, based on Ubuntu 11.10.
Deployment teams have long been modifying their Ubuntu installs to remove features like music players or games and add components that are a standard part of their business workflow.
This remix takes the most common changes we’ve observed among institutional users and bundles them into one CD which can be installed directly or used as a basis for further customization. Before anyone gets all worked up and conspiratorial: everything in the remix is available from the standard Software Centre. Packages out, packages in. No secret sauce for customers only; we’re not creating a RHEL, we already have an enterprise-quality release cadence called LTS and we like it just the way it is. This is a convenience for anyone who wants it. Having a common starting point, or booting straight into a business-oriented image makes it easier for institutional users to evaluate Ubuntu Desktop for their specific needs.
This work was first discussed at the Ubuntu Developer Summit in October. We consulted with the Ubuntu Technical Board and Ubuntu Release Team, to ensure that the finished product met the standards of the Ubuntu project. Doing so resulted in a commitment to enable community participation in the packaging of some of the pieces that are important to enterprise users.
Ubuntu makes a point of openness to heterogeneous environments. We celebrate the fact that the Ubuntu desktop can be highly useful, beautiful, functional and complete without any proprietary applications at all, while recognising that some people need to work with proprietary software on occasion; for them, we make sure that software is available and certified for Ubuntu, and easy to install. Remixes can include non-free software and still retain the Ubuntu name, as long as they can be brought back to the standard Ubuntu experience with straightforward package management tools and no risk of divergence on the hardware and security front.
Since we established the system of remixes, the Technical Board has defined guidelines for additional package archives which are exposed to Ubuntu users through the Software Centre. We’ve clarified with the Technical Board that remixes can draw from any such archives.
<blink>Registration required</blink> 😉 Some applications like VMware View are included in this release under a proprietary license, so download is covered by an EULA, and this image can’t be mirrored unless you make prior arrangements with the relevant ISVs. Boring, but better to do it once than for every individual app. We will ask users who download it to provide feedback on how we might improve the product, and provide them with details of Canonical’s deployment services and management solutions.
The desktop remains central to our everyday work and play, despite all the excitement around tablets, TV’s and phones. So it’s exciting for us to innovate in the desktop too, especially when we find ways to enhance the experience of both heavy “power” users and casual users at the same time. The desktop will be with us for a long time, and for those of us who spend hours every day using a wide diversity of applications, here is some very good news: 12.04 LTS will include the first step in a major new approach to application interfaces.
This work grows out of observations of new and established / sophisticated users making extensive use of the broader set of capabilities in their applications. We noticed that both groups of users spent a lot of time, relatively speaking, navigating the menus of their applications, either to learn about the capabilities of the app, or to take a specific action. We were also conscious of the broader theme in Unity design of leading from user intent. And that set us on a course which led to today’s first public milestone on what we expect will be a long, fruitful and exciting journey.
The menu has been a central part of the GUI since Xerox PARC invented ’em in the 70’s. It’s the M in WIMP and has been there, essentially unchanged, for 30 years.
The original Macintosh desktop, circa 1984, courtesy of Wikipedia
We can do much better!
Say hello to the Head-Up Display, or HUD, which will ultimately replace menus in Unity applications. Here’s what we hope you’ll see in 12.04 when you invoke the HUD from any standard Ubuntu app that supports the global menu:
Snapshot of the HUD in Ubuntu 12.04
The intenterface – it maps your intent to the interface
This is the HUD. It’s a way for you to express your intent and have the application respond appropriately. We think of it as “beyond interface”, it’s the “intenterface”. This concept of “intent-driven interface” has been a primary theme of our work in the Unity shell, with dash search as a first class experience pioneered in Unity. Now we are bringing the same vision to the application, in a way which is completely compatible with existing applications and menus.
The HUD concept has been the driver for all the work we’ve done in unifying menu systems across Gtk, Qt and other toolkit apps in the past two years. So far, that’s shown up as the global menu. In 12.04, it also gives us the first cut of the HUD.
Menus serve two purposes. They act as a standard way to invoke commands which are too infrequently used to warrant a dedicated piece of UI real-estate, like a toolbar button, and they serve as a map of the app’s functionality, almost like a table of contents that one can scan to get a feel for ‘what the app does’. It’s command invocation that we think can be improved upon, and that’s where we are focusing our design exploration.
As a means of invoking commands, menus have some advantages. They are always in the same place (top of the window or screen). They are organised in a way that’s quite easy to describe over the phone, or in a textbook (“click the Edit->Preferences menu”), and they are pretty fast to read since they are generally arranged in tight vertical columns. They also have some disadvantages: when they get nested, navigating the tree can become fragile. They require you to read a lot when you probably already know what you want. They are more difficult to use from the keyboard than they should be, since they generally require you to remember something special (hotkeys) or use a very limited subset of the keyboard (arrow navigation). They force developers to make often arbitrary choices about the menu tree (“should Preferences be in Edit or in Tools or in Options?”), and then they force users to make equally arbitrary effort to memorise and navigate that tree.
The HUD solves many of these issues, by connecting users directly to what they want. Check out the video, based on a current prototype. It’s a “vocabulary UI”, or VUI, and closer to the way users think. “I told the application to…” is common user paraphrasing for “I clicked the menu to…”. The tree is no longer important, what’s important is the efficiency of the match between what the user says, and the commands we offer up for invocation.
In 12.04 LTS, the HUD is a smart look-ahead search through the app and system (indicator) menus. The image is showing Inkscape, but of course it works everywhere the global menu works. No app modifications are needed to get this level of experience. And you don’t have to adopt the HUD immediately, it’s there if you want it, supplementing the existing menu mechanism.
It’s smart, because it can do things like fuzzy matching, and it can learn what you usually do so it can prioritise the things you use often. It covers the focused app (because that’s where you probably want to act) as well as system functionality; you can change IM state, or go offline in Skype, all through the HUD, without changing focus, because those apps all talk to the indicator system. When you’ve been using it for a little while it seems like it’s reading your mind, in a good way.
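To make the idea concrete, here is a toy sketch (emphatically not the real HUD code) of fuzzy, usage-weighted matching over flattened menu entries. The menu items, the `hud_search` helper and the usage counts are all invented for illustration:

```python
# A toy sketch of HUD-style matching: a fuzzy subsequence match over
# flattened menu entries, ranked by how often each entry has been used.
MENU = [
    "File > New Document",
    "File > Save As",
    "Edit > Preferences",
    "Filters > Blur > Gaussian Blur",
]

def fuzzy_match(query, entry):
    """True if every character of the query appears, in order, in the entry."""
    chars = iter(entry.lower())
    return all(ch in chars for ch in query.lower())  # 'in' consumes the iterator

def hud_search(query, entries, usage_counts):
    """Return matching entries, most frequently used first."""
    hits = [e for e in entries if fuzzy_match(query, e)]
    return sorted(hits, key=lambda e: usage_counts.get(e, 0), reverse=True)

# Typing a few characters is enough to reach a deeply nested entry:
print(hud_search("gblur", MENU, {}))
# With learned usage, an ambiguous query prefers what you actually use:
print(hud_search("sa", MENU, {"Filters > Blur > Gaussian Blur": 7}))
```

The real thing is much smarter, of course, but even this crude version shows why the tree stops mattering: the match between intent and command is what gets ranked.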
We’ll resurrect the (boring) old ways of displaying the menu in 12.04, in the app and in the panel. In the past few releases of Ubuntu, we’ve actively diminished the visual presence of menus in anticipation of this landing. That proved controversial. In our defence, in user testing, every user finds the menu in the panel, every time, and it’s obviously a cleaner presentation of the interface. But hiding the menu before we had the replacement was overly aggressive. If the HUD lands in 12.04 LTS, we hope you’ll find yourself using the menu less and less, and be glad to have it hidden when you are not using it. You’ll definitely have that option, alongside more traditional menu styles.
Voice is the natural next step
Searching is fast and familiar, especially once we integrate voice recognition, gesture and touch. We want to make it easy to talk to any application, and for any application to respond to your voice. The full integration of voice into applications will take some time. We can start by mapping voice onto the existing menu structures of your apps. And it will only get better from there.
But even without voice input, the HUD is faster than mousing through a menu, and easier to use than hotkeys since you just have to know what you want, not remember a specific key combination. We can search through everything we know about the menu, including descriptive help text, so pretty soon you will be able to find a menu entry using only vaguely related text (imagine finding an entry called Preferences when you search for “settings”).
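The help-text idea can be sketched the same way, with a hypothetical `hud_lookup` that matches against both the label and its description (the entries and help strings below are invented):

```python
# Hypothetical sketch: searching menu metadata, not just labels, so that
# vaguely related words still find the right entry.
ENTRIES = {
    "Edit > Preferences": "Configure application settings and options",
    "File > Export Bitmap": "Save a copy of the drawing in another format",
}

def hud_lookup(query, entries):
    """Match the query against both the menu label and its help text."""
    q = query.lower()
    return [label for label, help_text in entries.items()
            if q in label.lower() or q in help_text.lower()]

# "settings" appears in no label, but the descriptive help text finds it:
print(hud_lookup("settings", ENTRIES))
```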
There is lots to discover, refine and implement. I have a feeling this will be a lot of fun in the next two years
Even better for the power user
The results so far are rather interesting: power users say things like “every GUI app now feels as powerful as VIM”. EMACS users just grunt and… nevermind ;-). Another comment was “it works so well that the rare occasions when it can’t read my mind are annoying!”. We’re doing a lot of user testing on heavy multitaskers, developers and all-day-at-the-workstation personas for Unity in 12.04, polishing off loose ends in the experience that frustrated some in this audience in 11.04 and 11.10. If that describes you, the results should be delightful. And the HUD should be particularly empowering.
Even casual users find typing faster than mousing. So while there are modes of interaction where it’s nice to sit back and drive around with the mouse, we observe people staying more engaged and more focused on their task when they can keep their hands on the keyboard all the time. Hotkeys are a sort of mental gymnastics, the HUD is a continuation of mental flow.
Ahead of the competition
There are other teams interested in a similar problem space. Perhaps the best-known new alternative to the traditional menu is Microsoft’s Ribbon. Introduced first as part of a series of changes called Fluent UX in Office, the ribbon is now making its way to a wider set of Windows components and applications. It looks like this:
You can read about the ribbon from a supporter (like any UX change, it has its supporters and detractors ;-)) and if you’ve used it yourself, you will have your own opinion about it. The ribbon is highly visual, making options and commands very visible. It is however also a hog of space (I’m told it can be minimised). Our goal in much of the Unity design has been to return screen real estate to the content with which the user is working; the HUD meets that goal by appearing only when invoked.
Instead of cluttering up the interface ALL the time, let’s clear out the chrome, and show users just what they want, when they want it.
Time will tell whether users prefer the ribbon, or the HUD, but we think it’s exciting enough to pursue and invest in, both in R&D and in supporting developers who want to take advantage of it.
Other relevant efforts include Enso and Ubiquity from the original Humanized team (hi Aza &co), then at Mozilla.
Our thinking is inspired by many works of science, art and entertainment; from Minority Report to Modern Warfare and Jef Raskin’s Humane Interface. We hope others will join us and accelerate the shift from pointy-clicky interfaces to natural and efficient ones.
Roadmap for the HUD
There’s still a lot of design and code to do. For a start, we haven’t addressed the secondary aspect of the menu, as a visible map of the functionality in an app. That discoverability is of course entirely absent from the HUD; the old menu is still there for now, but we’d like to replace it altogether, not just supplement it. And all the other patterns of interaction we expect in the HUD remain to be explored. Regardless, there is a great team working on this, including folk who understand Gtk and Qt such as Ted Gould, Ryan Lortie, Gord Allott and Aurelien Gateau, as well as designers Xi Zhu, Otto Greenslade, Oren Horev and John Lea. Thanks to all of them for getting this initial work to the point where we are confident it’s worthwhile for others to invest time in.
We’ll make sure it’s easy for developers working in any toolkit to take advantage of this and give their users a better experience. And we’ll promote the apps which do it best – it makes apps easier to use, it saves time and screen real-estate for users, and it creates a better impression of the free software platform when it’s done well.
From a code quality and testing perspective, even though we consider this first cut a prototype-grown-up, folk will be glad to see this:
Overall coverage rate:
lines......: 87.1% (948 of 1089 lines)
functions..: 97.7% (84 of 86 functions)
branches...: 63.0% (407 of 646 branches)
Landing in 12.04 LTS is gated on more widespread testing. You can of course try this out from a PPA or branch the code in Launchpad (you will need these two branches). Or dig deeper with blogs on the topic from Ted Gould, Olli Ries and Gord Allott. Welcome to 2012 everybody!
I upgraded my primary laptop to Precise yesterday. Very smoooooth! Kudos to the Ubuntu team for the way they are running this cycle; their commitment to keeping the Precise Pangolin usable from opening to release as 12.04 LTS is very evident.
The three legs of our engineering practice are cadence, quality and design. For those teams which maintain their own codebases (unity, juju, bzr, lp and many more) the quality position is easier to define, because we can make test coverage and continuous tested integration standard practices. It’s more challenging for the platform team and Ubuntu community, who integrate thousands of packages from all sorts of places into one product: Ubuntu. We’ve traditionally focused on items like security, where participation in a global security process helps us ensure Ubuntu gets world-class security support and has established a world-leading track record of security patches and proactive security.
Nevertheless, the last year has seen some amazing leaps forward in our ability to manage quality across the entire platform. In large part, that’s thanks to the leadership of Rick Spencer and Pete Graner, who made smoke-testing and benchmarking a rigorous part of the process for every change to the platform, and led the work to make that commitment sane in practice across all the hundreds of people, inside and outside Canonical, who needed to be on board with it. And it’s thanks to tools like Jenkins and LAVA which automate the testing and reporting across a vast array of problem spaces, architectures and packages.
So we have a daily weather report for Precise, which gives you a feeling for where things stand right now, as well as tighter integration of the test suites being run by Canonical upstreams on code destined for Precise with the test harness used by the platform team integrating that work into the distribution. I’ll take the liberty of repeating some of Rick’s core points here:
For upstreams, it boils down to “treat your trunk as sacred”. Practically, it requires:
There is a trunk of code bound for Ubuntu.
This trunk always builds automatically.
This trunk has tests that are always passing automatically.
All branches are properly reviewed for having both good tests and good implementation before merged into trunk.
Any branch that breaks trunk, by causing automated tests to fail or causing trunk to stop building, is immediately reverted.
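Those rules amount to a simple gate, which can be sketched abstractly. The `gated_merge` function and the callables it takes are invented names for the sake of illustration, not Canonical’s actual tooling; the point is the policy, not the plumbing:

```python
# A schematic of the gated-trunk policy: the build steps are injected as
# callables so the gate itself stays tool-agnostic.
def gated_merge(merge, build, test, commit, revert):
    """Land a branch only if the merged trunk still builds and passes its tests."""
    if not merge():
        return False
    if build() and test():
        return commit()   # trunk stays sacred: land only on green...
    revert()              # ...and back the change out immediately otherwise
    return False

# A branch whose tests fail never reaches trunk:
landed = gated_merge(
    merge=lambda: True,
    build=lambda: True,
    test=lambda: False,   # automated tests fail on the merged result
    commit=lambda: True,
    revert=lambda: True,
)
print(landed)
```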
For Ubuntu Engineering, the responsibilities include:
Every maintainer in Ubuntu must have a test plan for upstream trunks that are run before uploading to the development release.
Tests in the test plan that are automated can be run with the help of the QA team.
Tests in the test plan that are manual can be run with the help of Nicholas, the new community QA Lead.
Refrain from uploading a trunk into Ubuntu if there are serious bugs found in testing that will slow down people using the development release for testing and development.
Revert uploads that break Ubuntu, as there is no point in having the latest of a trunk in Ubuntu if it’s broken and just slowing everyone down.
Add tests to upstream projects for the Ubuntu test plan if serious bugs do get through that cause a revert.
Now that the harnesses are in place, we’re going to crank up the sensitivity of the test suite, by adding more tests and flagging more of them as critical issues for immediate resolution when they break. Key items to add next are daily tests on software center changes, and tests of the multi-monitor work that is under way for 12.04 in Unity (using some pretty magical hardware setups).
There are a variety of additional practices and processes in place too, such as testing of the daily ISOs, reversion of changes that cross specific thresholds of stability for specific types of users, pro-active smoke testing of archive sanity throughout the cycle, and a dedicated vanguard quality team that aims to keep velocity high for everyone despite these additional gates and checks.
This isn’t limited to Canonical team members; didrocks and the French Musketeers have built a Unity SRU testing process which should let us crowdsource perspectives on the quality improvements or regressions of changes in Unity. Ara’s ongoing work around component and system testing is giving us a very useful database of known issues at the hardware level. Work on Checkbox and related tools continues to ensure that people can contribute data and help prioritise the issues which will have the widest benefit for millions of community adopters.
Where upstreams have test suites, we’re integrating those into the automated QA framework. In an ideal world, whenever a package is changed, we’d have an upstream test suite to run for that package AND for every package which depends on it. That way, we’d catch breakage in the package itself, but more importantly, we’d catch consequential damage elsewhere, which is much harder for upstreams to catch themselves.
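That fan-out can be sketched as a reverse-dependency walk; the graph and package names below are invented data, just to show the shape of the computation:

```python
# A toy sketch: given a dependency graph, a change to one package triggers
# the test suites of that package and of everything that depends on it,
# directly or transitively.
DEPENDS_ON = {  # package -> the packages it depends on (invented data)
    "nautilus": ["glib", "gtk"],
    "gtk": ["glib"],
    "unity": ["gtk", "glib"],
}

def suites_to_run(changed, graph):
    """Fixed-point walk: keep adding packages that depend on an affected one."""
    affected = {changed}
    grew = True
    while grew:
        grew = False
        for pkg, deps in graph.items():
            if pkg not in affected and affected.intersection(deps):
                affected.add(pkg)
                grew = True
    return affected

# Changing glib means retesting everything built on top of it:
print(sorted(suites_to_run("glib", DEPENDS_ON)))
```

It is exactly that transitive closure which makes consequential damage visible to us even when it is invisible to the upstream that caused it.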
We’re already running that program, and as upstreams start to take testing more seriously, coverage across the whole platform will improve significantly. It’s been Canonical practice to have test suites for several years, and it’s very encouraging to see other upstreams adopting TDD, or at least rigorous unit and functional testing, one at a time. Open source projects love to talk about quality – but it’s important to back that with measurable practices and data. As an example in a complex case, we run the LTP against every kernel SRU, in addition to our own kernel and hardware cert tests.
In future, it should be possible to link this to the existing daily builds of tip (we have over 500 upstreams running daily builds on Launchpad, which is fantastic). THAT would give upstreams the ability to know when commits to their tip break tests in dependent packages. It would suck a large amount of compute, but it would provide a fantastic early warning system of collisions between independent changes in diverse but related projects.
There’s a lot more we will do. By integrating Apport for crash data collection, and routing those reports through a big data sieve, we should be able to identify the issues which are having the biggest impact on the most users. But that’s a blog for another day. For now, well done, team Ubuntu!