Cloudy prognosis for mainframes

Monday, October 24th, 2011

The death of the mainframe is about as elusive as the year of the Linux desktop. But cloud computing might finally present a terminal opportunity, so to speak, to those stalwarts of big business computing, by providing a compelling answer to the twin claims of reliability and throughput that have always anchored the big iron pitch.

Advocates of big iron talk about reliability. But with public clouds, we’re learning how to build services that achieve very high levels of reliability despite having low individual node reliability. It doesn’t matter if a single node in the cloud fails – cloud-style architectures route around that damage and keep the overall service available. Just as we dial storage reliability up or down by designing RAID arrays for the right balance of performance and resilience to failure, we can dial service reliability up or down in the cloud by allowing for redundancy. That comes at a price, of course, but the price of an extra 9 is substantially lower when you tackle it cloud-style than when you try to achieve it on a single piece of hardware.
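
To make the arithmetic concrete, here is a back-of-envelope sketch, assuming independent node failures and a service that survives while at least one replica is up (a simplification, but it shows the trend):

```python
# Back-of-envelope: service availability from redundant, unreliable nodes.
# Assumes failures are independent and the service stays up as long as at
# least one replica is up.

def service_availability(node_availability: float, replicas: int) -> float:
    """Probability that at least one of `replicas` nodes is up."""
    return 1 - (1 - node_availability) ** replicas

# Nodes at 99% availability each:
for n in range(1, 5):
    print(n, f"{service_availability(0.99, n):.6f}")
# 1 0.990000  -> two nines
# 2 0.999900  -> roughly four nines
# 3 0.999999  -> roughly six nines
# 4 1.000000  (to six decimal places)
```

Each additional replica buys roughly two more nines here, which is the sense in which an extra 9 is cheaper to buy with redundancy than with a single, ever-more-exotic machine.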

The other big strength of big iron was always throughput. Customers will pay for it, so mainframe vendors were always happy to oblige them. But again, it’s hard to beat the throughput of a Hadoop cluster, and even harder to scale the throughput of a mainframe as cost-effectively as one can scale a private cloud infrastructure underneath Hadoop.

I’m not suggesting insurance companies will throw away their mainframes. They’re working, they’re paid for, so they’ll stick around. But the rapid adoption of cloud-based architectures is going to make it very difficult to consolidate future IT onto mainframes (something that happened in every prior generation) and is also going to reduce the incentive for doing so in the first place. After 20 years of imminent irrelevance, there’s finally a real reason to think their time is up.

Precision Planning; Prepping for 12.04 LTS

Thursday, October 20th, 2011

In just over a week, quite a large cross-section of the Ubuntu community and representatives from many free software projects and companies will gather in Orlando to map out the Precise Pangolin. Now’s the time to prepare for the event, with 11.10 out (well done everybody!) and the key infrastructure slotting into place.

Figuring out the optimal balance of goals is the work of the summit, but we can lay out some over-arching themes that have been in progress during this meta-cycle and come to their full fruition in the LTS release. We can also remind ourselves of the ways in which an LTS is different, and the impact that will have on our choices in Orlando.

Being an LTS

As Dustin pointed out, this is the fourth Ubuntu LTS release, and as such it needs to carry on, and entrench, the reputation of the LTS as a carrier-grade platform for mission-critical server deployments and large scale desktop deployments. That means:

  • Adjusting the cycle to allocate more time for resolving issues
  • Introducing minimal new infrastructure or platform-visible change
  • Goal-driven and continuously benchmarked programs of action around performance
  • First-class accessibility for those with special interaction needs
  • Enablement and certification of the sorts of hardware people will deploy at scale and in the datacenter

Rick Spencer and his team have put some thought into one of the critical challenges that LTS releases face, which is the need to support newer hardware over a longer period of time. Traditionally, Linux distributions have tried to prioritize items to backport, but that puts the stability of known-good configurations very much at risk. Rick will outline the strategy we’ll adopt for this at UDS, which I think makes the most out of the work done for every release of Ubuntu.

Carrier-grade Cloud Infrastructure and Guest

Ubuntu is the #1 OS for cloud computing, whether you measure it by the number of instances running on all the major public clouds, the number of Ubuntu-based cloud appliances, or the number of public and private clouds running on an Ubuntu host OS. The extraordinary diversity of the Ubuntu community, the calibre of collaboration between Ubuntu and OpenStack, and the focused efforts of Canonical to make Ubuntu useful in the cloud have all contributed to that position. In 12.04 LTS we must deliver:

  • world’s best cloud infrastructure powered by OpenStack’s corresponding major release
  • perfect support for cloud-oriented hardware from Canonical’s partner IHVs
  • a great hybrid-cloud story, for those using a mixture of private and public clouds
  • world’s best guest OS on AWS, Rackspace and other public cloud infrastructures
A key focus is making it easy to bootstrap and manage services across public, private and hybrid clouds, and Juju charms are the magic by which we’re flattening all those cloud substrates and bringing devops practices into the Ubuntu administrator toolbox. Those who attended the recent OpenStack Summit will have caught the buzz around Juju, which brings APT-like semantics to cloud service deployments. There’s a rapidly growing collection of Juju charms which define common services and allow you to get started immediately on all the major public and private cloud infrastructures; I keep hearing how clean and easy it is to charm a new piece of software for cloud deployment, so I’m sure both the number of charms and the number of charmers will grow exponentially.

Right now, Juju charms can be deployed on bare-metal farms of hardware with no virtualisation (such as Hadoop or Condor compute clusters), on Amazon’s public cloud infrastructure, on Ubuntu’s OpenStack-based cloud infrastructure, and on the developer workstation using LXC containers, so developers can use charms locally which are then re-used by administrators deploying to the cloud. I think there are Juju contributors working on support for a few other cloud infrastructures too; it will be interesting to see what lands by 12.04.
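
For a flavour of those APT-like semantics, here is a minimal sketch of the basic workflow. The juju verbs (bootstrap, deploy, add-relation, expose) are the real ones; wrapping them in Python is purely for illustration (in practice you’d type them at a shell), and the charm names are just examples:

```python
# Drive the basic Juju workflow from Python; each call shells out to the
# juju CLI, so this assumes juju is installed and an environment is configured.
import subprocess

def juju(*args):
    subprocess.check_call(["juju", *args])

juju("bootstrap")                           # stand up the environment (EC2, OpenStack, LXC...)
juju("deploy", "wordpress")                 # deploy a service from its charm
juju("deploy", "mysql")
juju("add-relation", "wordpress", "mysql")  # wire the services together
juju("expose", "wordpress")                 # make it reachable from outside
```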

Pangolin-worthy Server Release

We have a proud heritage from Debian, which 12.04 LTS needs to celebrate and maintain. Although Ubuntu gives enterprises some key advantages over Debian (additional security features enabled in the Linux kernel and toolchain, plus support, certification and assurance), the lean-mean-green-machine nature of the Ubuntu Server experience owes much to Debian’s focus on quality and precision.

12.04 will be the first LTS to support the ARM architecture on selected ARM SoC parts. In a world where computational density is increasingly prioritized over single-thread performance, the entry of ARM to the server market is a very interesting shift. Ubuntu has established a very strong competence in ARM and I think the 12.04 LTS release will power a new generation of power-focused hardware for the data centre.

Pixel-perfect desktop

The nail-biting transitions to Unity and Gnome 3 are behind us, so this cycle is an opportunity to put perfection front and center. We have a gorgeous typeface that was designed for readability, which is now available in Light and Medium as well as Regular and Bold, and has a Mono variant too. That’s an opportunity to work through the whole desktop interface and make sure we’re using exactly the right weight in each place, bringing the work we’ve been doing for several cycles fully into focus.

We also need to do justice to the fact that 12.04 LTS will be the preferred desktop for many of the world’s biggest Linux desktop deployments, in some cases exceeding half a million desktops in a single institution. So 12.04 is also an opportunity to ensure that our desktop is manageable at scale, that it can be locked down in the ways institutions need, and that it can be upgraded from 10.04 LTS smoothly as promised. Support for multiple monitors will improve, since that’s a common workplace requirement.

During UDS we’ll build out the list of areas for refinement, polish and ‘precisioneering’, but the theme for all of this work is one of continuous improvement; no new major infrastructure, no work on pieces which are not design-complete at the conclusion of the summit.

While there are some remaining areas in which we’d like to tweak the user experience, those changes will probably be put on hold so we can focus on polish, performance and predictability. I’d like to improve the user experience around Workspaces for power users, and we’ll publish our design work for that, but I think it would be wisest for us to defer it unless we get an early and effective contribution of that code.

It’s going to be a blast in Orlando, as UDS always manages to bring together a fantastic crowd. And it’s going to be a beautiful, memorable release of Ubuntu in April 2012!

Welcoming the new Community Council

Sunday, October 16th, 2011

Congratulations to those elected to the 2011-2013 CC, and thanks both to those who were willing to serve and to all who participated in the poll. We’ll use the results of the poll should we need to fill in for any members who cannot, for any reason, complete their two-year term.

This is an important CC, as I think there is an opportunity to develop a response to the challenge thrown down recently, which is to give *purpose* to community leadership in the project.

Every role has purpose in its own context; those who set out to achieve a goal, like producing complete server documentation, or moderating a difficult mailing list (you know who you are ;-)) or translating a work into a new language, have no trouble identifying their purpose. And there are essentially no limits on the goals one can set for oneself in the project; we have community members engaged in pretty much everything we do.

Nevertheless, there has been a shift in the nature of the project, and that shift is not yet fully reflected in community leadership. Specifically, our mission has shifted from being defined by integration-and-delivery, to one that includes design and development as well as integration and delivery.

When we started, we said we would deliver the world’s free software, on a tightly integrated and free basis, on a cadence. We made some choices about defaults, but broadly left it up to others to define what ‘the software’ would do.

After doing that for several years, it became clear to me that limiting ourselves to that pattern meant we were leaving it to others to decide if we could really deliver an alternative to proprietary platforms for modern computing. We were doing a lot of work, which was not recognised by some of the projects we were supporting heavily, and still treading water when it came to the real fight for hearts and minds, against Windows, against MacOS, and against Android. So, even though it was clearly going to be a difficult choice, we set out to grow the contribution Canonical makes directly to the body of open source. We said we’d be design-led, and we’d focus on the areas that matter most to pioneer adopters: the free software desktop, mobile computing, and the cloud.

The result is work like Unity, uTouch, and Juju. I’m proud of all three, I think they are worthy bannermen in our effort to bring free software to a much wider audience, and I think without them we would have no chance of fixing bug #1.

At the same time, we’ve now created a whole new dimension to Ubuntu: the design and definition of products, essentially. And that raises the question: what’s the community role in defining and designing those products?

We haven’t taken a step backwards. It’s not as though there are responsibilities that have been taken away from anybody. It’s just that we’ve taken on some bolder, bigger challenges, and community folk rightly say “how can we be part of that?” And that’s an interesting question, which the new CC will be in a good position to discuss with me and Jono.

It’s not healthy to offer the ability to vote for money. Nobody should feel they have a right to decide how someone else spends their time or money. But I do think the relationship between Canonical and community is as important now as ever, and there is an opportunity to break new ground. Ubuntu represents the best chance GNU/Linux has to bring free software to the foreground of everyday computing. I have no doubt of that. After us, it’s Android, and that’s not quite the same. So our interests are all very aligned; there is a huge opportunity, and a once-in-a-lifetime chance to use what we know and love in a way that changes millions of lives for the better.

Community Council nominations and poll

Friday, October 7th, 2011

It’s governance season here at Ubuntu. Next up, we’re polling all Ubuntu project members for a view on preferred candidates for the Community Council, our most senior board responsible for all community governance. The CC delegates their authority on membership and leadership to a whole range of boards, so electing a team which understands the diversity of the project is very important, and electing a team which can in turn pick good leaders for key aspects of the project is vital to our long term health.

The following folk have expressed a willingness to serve on the Council, and are nominated by me to do so. Daniel Holbach has kindly set up a CIVS poll and all Ubuntu members should have received an invitation to cast their ballot. For interest, the candidates are:

The poll will run for only a week, so please do head over there and make your preferences known!

P is for…

Wednesday, October 5th, 2011

It’s a perennial pleasure to pick pertinent and/or pithy placeholder names for Ubuntu releases. At least, I like to think of them as pertinent and/or pithy. I’ve had diverse feedback, shall we say. Nevertheless, it’s now a tradition, and it’s a pressing priority as we approach the release of Oneiric.

So, what will be our mascot for 12.04 LTS?

The letter P is pretty perfect. It’s also plentiful – my inbox has been rather full of suggestions – and we have options ranging from pacific to purposeful, via puckish and prudent. We’ll steer clear of the posh and the poncey; much as some would revel in the Portentous Palomino or the Principled Paca, those aren’t the winning names. Having spent the last six months elucidating the meaning of “oneiric”, I think it might also be worth skipping the parenthetical or paralogical options, so sadly I had to exclude the Perspicacious Panda and Porangi Packhorse (though being an LTS, that Packhorse was a near thing).

Being generally of a cheerful nature, I thought we’d avoid the Predatory Panther and Primeval Possum. Neither sounds like great company for a seven-year journey, really. Same goes for the Peccable Peccary, Pawky Python and Perfidious Puku. So many bullets to dodge round here!

We’re looking for something phonetic, something plausible and something peaceful too. We’ll avoid the petulant, the pestilent, the phlegmy (phooey!), the parochial, the palliative and the psychotic. We’re aiming for mildly prophetic, and somewhat potent, without wanting to be all pedantic and particular. Phew.

So, what might work?

There are lots of lovely candidates. I have a fondness for phat. The Phat Platypus has a can-do kind of ring to it, but I don’t think it’ll fly.

I also like punchy and perky (the Perky Penguin is a nice nostalgic option) and persistent (better than permanent, peerless or penultimate) and playful and plucky and poised. Others like prescient and peaceable and pervasive (!) and pivotal. Pukka rings a nice old-world bell, but it’s possibly pejorative.

As you can see, it’s been something of a challenge to get this right.

Let’s ask the question differently – what are we trying to convey? 12.04 is an LTS. So we want it to be tough and long-lasting, reliable, solid as a rock and well defended. It’s also going to be the face of Ubuntu for large deployments for a long time, so we want it to have no loose ends, we want it to be coherent, neat.

We’ve told the story of the cloud in previous releases, and that comes to fruition in 12.04 with the first LTS that supports both the cloud guest, and cloud infrastructure, across ARM and x86 architectures. We’ve also told the story of Unity in previous releases, and that comes to fruition in a fast, lean interface that works well across clients both thick and thin. 12.04 is going to be a lot more than all that, but for the full reveal, you’ll need to wait till UDS! Nevertheless, we can take reliability, precision, and polish as a given.

Balancing all of those options, I think we have just the right mix in our designated mascot for 12.04 LTS. Ladies and gentlemen, I give you the Precise Pangolin.

Now, I’ve recently spent a few hours tracking a pangolin through the Kalahari. I can vouch for their precision – there wasn’t an ant hill in the valley that he missed. Their scales are a wonder of detail and quite the fashion statement. I can also vouch for their toughness; pangolins regularly survive encounters with lions. All in all, a perfect fit. There’s no sassier character, and no more cheerful digger, anywhere in those desert plains. If you want a plucky partner, the pangolin’s your match. Let’s pack light for a wonderful adventure together. See you in Orlando!

Technical Board 2011

Wednesday, October 5th, 2011

After the recent poll of Ubuntu developers I’m delighted to introduce the Technical Board 2011-2013. I think it’s worth noting that three of the members of this generation of technical leaders are not Canonical employees, though admittedly they are all former members of that team. I think there’s cause for celebration on both fronts: broader institutional and independent representation in the senior governance structures of Ubuntu is valuable, and the fact that personal interest persists regardless of company affiliation is also indicative of the character of the whole community, both full-time and volunteer. We’re in this together, for mutual interests.

Without further ado, here they are, in an order you are welcome to guess ;-)

  • Stéphane Graber
  • Kees Cook
  • Martin Pitt
  • Matt Zimmerman
  • Colin Watson
  • Soren Hansen
Please join me in congratulating each of them, and thanking those who were willing to stand, those who were nominated, and those who participated in the poll.

From my perspective, it was a very rich field of nominations. We had several candidates with no historic link to Canonical, which was very encouraging in terms of the diversity of engagement in the project. For the first time, I felt we had too many candidates, and so I whittled down the final list of nominations; as it happens, all of the non-Canonical nominees made the shortlist, though that was not a criterion for my support.

Welcome aboard, all!

Building clouds for fun and profit

Monday, September 19th, 2011

So you’d like to spin up an internal cloud for Hadoop or general development, shifting workloads from AWS to your own infrastructure, or prototyping some new cloud services?

Call Canonical’s cloud infrastructure design and consulting team.

There are a couple of scenarios that we’re focused on at the moment, where we can offer standardised engagements:

  • Telcos building out cloud infrastructures for public cloud services. These are aiming for specific markets based on geography or network topology – they have existing customers and existing networks and a competitive advantage in handling outsourced infrastructure for companies that are well connected to them, as well as a jurisdictional advantage over the global public cloud providers.
  • Cloud infrastructure prototypes at a division or department level. These are mostly folk who want the elasticity and dynamic provisioning of AWS in a private environment, often to work on products that will go public on Rackspace or AWS in due course, or to demonstrate and evaluate the benefits of this sort of architecture internally.
  • Cloud-style legacy deployments. These are folk building out HPC-type clusters running dedicated workloads that are horizontally scaled but not elastic. Big Hadoop deployments, or Condor deployments, fall into this category.

Cloud has become something of a unifying theme in many of our enterprise and server-oriented conversations in the past six months. While not everyone is necessarily ready to shift their workloads to a dynamic substrate like Ubuntu Cloud Infrastructure (powered by OpenStack) it seems that most large-scale IT deployments are embracing cloud-style design and service architectures, even when they are deploying on the metal. So we’ve put some work into tools which can be used in both cloud and large-scale-metal environments, for provisioning and coordination.

With 12.04 LTS on the horizon, OpenStack exploding into the wider consciousness of cloud-savvy admins, and projects like Ceph and CloudFoundry growing in stature and capability, it’s proving to be a very dynamic time for IT managers and architects. Much as the early days of the web presented a great deal of hype and complexity and options, only to settle down into a few key standard practices and platforms, cloud infrastructure today presents a wealth of options and a paucity of clarity: from NoSQL choices, through IaaS choices, to PaaS choices. Over the next couple of months I’ll outline how we think the cloud stack will shape up. Our goal is to make that “clean, crisp, obvious” deployment Just Work, bringing simplicity to the cloud much as we strive to bring it on the desktop.

For the moment, though, it’s necessary to roll up sleeves and get hands a little dirty, so the team I mentioned previously has been busy bringing some distilled wisdom to customers embarking on their cloud adventures in a hurry. Most of these engagements started out as custom consulting and contract efforts, but there are now sufficient patterns that the team has identified a set of common practices and templates that help to accelerate the build-out for those typical scenarios, and packaged those up as a range of standard cloud building offerings.


Surveying participation

Friday, September 9th, 2011

Just a brief note to celebrate Jono and team’s recent work on gathering insight into our membership and developer participation processes. Thanks also to those who took time to comment for the surveys. The results are worth a read if you care about the vibrancy and dynamism of our community. Kudos Jono, and thanks!

Innovation and OpenStack: Lessons from HTTP

Thursday, September 8th, 2011

OpenStack is facing an important choice: does it define a new set of APIs, one of many such efforts in cloud infrastructure, or does it build around the existing AWS APIs? So far, OpenStack has had it both ways, with some new API work and also some AWS-based effort. I’m writing to make the case for a tighter definition of mission around the de facto standard infrastructure APIs of EC2, S3 and a few other elements of AWS.

What prompted this post was seeing the statement, on a mailing list, that cloud infrastructure projects like OpenStack, Eucalyptus and others should “innovate at the level of the API and infrastructure concepts”. I’m of the view that any project which tries to do so will fail and is not worth spending your or my time on. They are going to be about as successful as projects that try to reinvent HTTP to make it better/faster/cleaner/whatever. Which is to say: not successful at all, because no new protocol with the same conceptual goals will match the ecosystem that exists today around HTTP. There will of course be protocol innovation; the last word is never written. But for the web, it’s a done deal. All the proprietary and ad-hoc things that preceded HTTP have died, and good riddance. Similarly, cloud infrastructure will converge around a standard API which will be imperfect but real. Innovation is all about how that API is implemented, not which API it is.

Nobody would say the web server market lacks innovation. There are many, many different companies and communities that make and market web server solutions. And each of those is innovating in some way – focusing on a different audience, or trying a different approach. Yet that entire market is constrained by a public standard: HTTP, which evolves far more slowly than the products that implement it.

There are also a huge number of things that wrap themselves around HTTP, from cache accelerators to 3G content compressors; the standardisation of that thin layer has created a massive ecosystem and driven fantastic innovation, even as many of the core concepts that drove HTTP’s initial design have eroded or softened. For example, HTTP was relentlessly stateless, but we’ve added cookies and caching to address issues caused by that (at the time radical) design constraint.

Today, cloud infrastructure is looking for its HTTP. I think that standard already exists in de facto form at AWS, with EC2, S3 and some of the credential mechanisms being essentially the core primitives of cloud infrastructure management. There is enormous room for innovation in cloud infrastructure *implementations*, even within the constraints of that minimalist API. The hackers and funders and leaders and advocates of OpenStack, and of any number of other cloud infrastructure projects both open source and proprietary, would be better off figuring out how to leverage that standardisation than trying to compete with it, simply because no other API is likely to gain the sort of ecosystem we see around AWS today.

It’s true that those APIs would be better defined in a clean, independent forum analogous to the W3C than inside the boiler-room of development at any single cloud provider, but that’s a secondary issue. And over time, it can be engineered to work that way.

More importantly for the moment, those who make an authentic effort to fit into the AWS protocol standard immediately gain access to chunks of the AWS gene pool, effectively gratis. From services like RightScale to tools like ElasticFox, your cloud is going to be more familiar, more effective and more potent if it can ease the barriers to porting from AWS. No two implementations will magically Just Work, but the rough edges and gotchas can get ironed out much more easily if there is a clear standard and reference implementations. So the cost of “porting” will always be lower between clouds that have commonality by design or heritage.
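
As a hedged illustration of that porting story: with boto, the common Python AWS library, the same client code can talk to Amazon or to an AWS-compatible private cloud, with only the endpoint details changing. The credentials, host, port and path below are placeholders that vary by installation; treat this as a sketch rather than a recipe:

```python
# Same EC2 client code, different endpoint: the essence of the porting
# argument. Credentials and endpoint values here are placeholders.
import boto
from boto.ec2.regioninfo import RegionInfo

def connect(host=None):
    if host is None:
        return boto.connect_ec2()  # real AWS; credentials from the environment
    return boto.connect_ec2(
        aws_access_key_id="YOUR-KEY",          # placeholder
        aws_secret_access_key="YOUR-SECRET",   # placeholder
        region=RegionInfo(name="private", endpoint=host),
        port=8773, path="/services/Cloud",     # typical values for an EC2-compatible endpoint
        is_secure=False,
    )

conn = connect("cloud.example.internal")       # or connect() for AWS itself
for reservation in conn.get_all_instances():
    print(reservation.instances)
```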

For OpenStack itself, until that standard is codified, I would describe the most successful mission statement as “to be the reference public-cloud-provider-scale implementation of cloud infrastructure compatible with the AWS core APIs”. That’s going to give all the public cloud providers who want to compete with Amazon the best result: they’ll be able to compete on service terms, while convincing early adopters that the move to their offering will be relatively painless. All it takes, really, is some humility and the wisdom to recognise the right place to innovate.

There will be many implementations of those core API’s. One or other will be the Apache, the “just start here” option. But it doesn’t matter so much which one that is, frankly. I think OpenStack has the best possible chance to be that, but only if they stick to this crisp mission and don’t allow themselves to be drawn into front-end differentiation for the sake of it. Should that happen, OpenStack will be vulnerable to another open source project which credibly aims to achieve the goals outlined here. Very vulnerable. Witness the ways in which Eucalyptus is rightly pointing out its superior AWS compatibility in comparison with OpenStack.

For the public cloud providers that hope to build on OpenStack, API differentiation is poison in a juicy steak. It looks tasty, but it’s going to cost you the race prematurely. There were lots of technical reasons why alternatives to Windows were *better*; they just failed to become de facto standards. As long as Amazon doesn’t package up AWS as an on-premise solution, it’s possible to establish a de facto standard around something else, but that something else (perhaps OpenStack) needs to be AWS-compatible in some meaningful way to get enough momentum to matter. That means there’s a window of opportunity to get this right, which is not going to stay open indefinitely. Either Amazon, or another open source project, could close that window on OpenStack’s fingers. And that would be a pity, since the community around OpenStack has tons of energy and goodwill. In order to succeed, it will need to channel that energy into innovation on the implementation, not into trying to redefine an existing standard.

Of course, all this would be much easier if there were a real HTTP-like standard defining those APIs. The web had the enormous advantage of being founded by Tim Berners-Lee, in an institution like CERN, with the vision to set up the W3C. In the case of today’s cloud infrastructure, there isn’t the same dynamic or set of motivations. Amazon’s position of vagueness on the AWS APIs is tactically perfect for them right now, and I would expect them to maintain that line while knowing full well there is no real proprietary claim in a public network API, and no real advantage to be had from claiming otherwise. What’s needed is simply to start codifying existing practice as a draft standard in a credible forum of experts, with a roadmap and the prospect of support from multiple vendors. I think that would be relatively easy to arrange if we could get Rackspace, IBM and HP to sit down and commit to doing it. We already have HP and Rackspace at the OpenStack table, so the signs are encouraging.

A good standard would:

* be pragmatic about the fact that Amazon has already made a bunch of decisions we’ll live with for ever
* have a commitment from folk like OpenStack and Eucalyptus to aim for compliance
* include a real automated functional test suite that becomes the interop benchmark of choice over time (a sketch of one such check follows this list)
* be open to participation by Amazon, though that will not likely come for some time
* be well documented and well managed, like HTTP and CSS and HTML
* not be run by the ITU or ISO
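
On that functional test suite point, here is a minimal sketch of the kind of check such a suite might contain, building on the boto sketch above. The image id and timing values are placeholders:

```python
# Sketch of one interop check: boot an instance through the EC2 API and
# assert that it reaches "running". `conn` is an EC2-compatible connection
# (see the earlier boto sketch); the image id is a placeholder.
import time

def check_run_instance(conn, image_id="ami-00000000"):
    reservation = conn.run_instances(image_id, instance_type="m1.small")
    instance = reservation.instances[0]
    for _ in range(60):               # poll for up to ~5 minutes
        instance.update()
        if instance.state == "running":
            break
        time.sleep(5)
    assert instance.state == "running", instance.state
    instance.terminate()
```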

I’m quite willing to contribute resources to getting such a standard off the ground. Forget big consortiums or working groups or processes or lobbying forums: what’s needed are a few savvy folk who know AWS, Eucalyptus and OpenStack, together with a very few technical writers. Let me know if you’re interested.

Now, I started out by saying that I was writing to make the case for OpenStack to be focused on a particular area. It’s a bit cheeky for me to write anything of the sort, of course, because OpenStack is a well-run project that has an excellent steering group, which recently held a poll of contributors to appoint some new members, none of whom was me. I’ve every confidence in the leadership of the project, despite the tremendous pressure they are under to realise the hopes of so many diverse users and companies. I’m optimistic about the potential OpenStack has to accelerate cloud technology, and at Canonical we put a considerable amount of effort into making OpenStack deployment a smooth experience for Ubuntu users and Canonical customers. Ubuntu Cloud Infrastructure now depends on OpenStack. And I have a few old friends who are also leaders in the OpenStack community, so for all those reasons I thought it worth making this perspective public.

Dash takes shape for 11.10 Unity

Tuesday, August 16th, 2011

Our goal with Unity is unprecedented ease of use, visual style and performance on the Linux desktop. With feature freeze behind us, we have a refined target render of the Dash for Oneiric, and here it is:

[Image: full-size render of the Dash for Oneiric]

Scopes and Lenses

We’ve moved from the idea of “Places” to a richer set of “Scopes and Lenses”. Scopes are data sources, and can tap into any online or offline data set as long as they can generate categorised results for a search, describe a set of filters and support some standard interfaces. Lenses are various ways to present the data that come from Scopes.

The Scopes have a range of filtering options they can use, such as ratings (“show me all the 5 star apps in the Software Center please”) and categories (“… that are games or media related”). Over time, the sophistication of this search system will grow but the goal is to keep it visual and immediate – something anyone can drive at first attempt.
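
To make the Scope contract concrete, here is a toy sketch in plain Python. This is not the real libunity interface; every name is hypothetical, and it exists only to show the shape of the thing: a search that yields categorised results, constrained by declared filters:

```python
# Hypothetical illustration of a Scope: a data source that declares its
# filters and answers a search with categorised results. Not the real API.
class AppScope:
    filters = {"rating": [1, 2, 3, 4, 5],
               "category": ["games", "media", "office"]}

    def search(self, query, rating=None, category=None):
        """Yield (category, title) pairs matching the query and filters."""
        for app in self._apps():
            if (query.lower() in app["title"].lower()
                    and (rating is None or app["rating"] >= rating)
                    and (category is None or app["category"] == category)):
                yield app["category"], app["title"]

    def _apps(self):
        # Stand-in for a real index such as the Software Center's.
        return [{"title": "SuperTux", "category": "games", "rating": 5},
                {"title": "Rhythmbox", "category": "media", "rating": 4}]

# "show me all the 5 star apps ... that are games":
print(list(AppScope().search("", rating=5, category="games")))
```

A Lens, in these terms, would take those (category, title) results and decide how to present them.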

This delivers on the original goal of creating a device-like experience that was search-driven. Collaboration with the always-excellent Zeitgeist crew (quite a few of whom are now full time on the Unity team!) has improved the search experience substantially; kudos to them for the awesome work they’ve put in over the past six months. Since we introduced the Dash as a full-screen, device-like search experience, the same idea has made its way into several other shells, most notably Mac OS X Lion. While we’re definitely the outsider in this contest, I think we can stay one step ahead in the game given the support of our community.

The existing Places are all in the process of being updated to the Scopes and Lenses model; it’s a bit of a construction site at the moment, so hard hats are advised, but dive in if you have good ideas for some more interesting scopes. I’ve heard all sorts of rumours about cool scopes in the pipeline ;-) and I bet this will be fertile ground for innovation. It’s pretty straightforward to make a scope; I’m sure others will blog and document the precise mechanisms, but for those who want a head start, just use the source, Luke.

Panel evolution

In the panel, you’ll see that the top left corner is now consistently used to close whatever has the focus. Maximising a window keeps the window controls in the same position relative to the window – the top left corner. We have time to refine the behaviour of this based on user testing of the betas; for example, in one view, one should be able to close successive windows from that top left corner even if they are not maximised.

It’s taken several releases of careful iteration to get to this point. Even though we had a good idea where we were headed, each step needed to be taken one release at a time. Perhaps this might make a little clearer the reasons for the move of window controls to the left – it was the only place where we could ultimately keep them consistent all the way up to a maximised window with the title bar integrated into the panel. I’m confident this part will all be settled by 12.04.

As part of this two-step shuffle, the Dash invocation is now integrated in the Launcher. While this is slightly less of a Fitts-fantastic location, we consider it appropriate for a number of reasons. First, it preserves the top left corner for closing windows. Second, the Dash is best invoked with the Super key (sometimes erroneously and anachronistically referred to as the “Windows” key, for some reason ;-)). And finally, observations during user testing showed people were more inclined to try clicking on items in the Launcher than on the top left icon in the panel, unless that icon was something explicit like a close button for the window. Evidence-based design rules.

Visual refinements

Rather than a flat darkening, we’re introducing a wash based on the desktop colour. The dash thus adjusts to your preferred palette based on your wallpaper. The same principle will drive some of the login experience – choosing a user will shift the login screen towards that user’s wallpaper and palette.
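
As a toy illustration of the wash idea (not how Unity implements it – the compositor does that work – and the wallpaper path below is just an example), one could derive a tint from the wallpaper’s average colour and darken it for use behind the dash:

```python
# Derive a dash "wash" colour from a wallpaper: average the pixels, then
# darken the result. Purely illustrative.
from PIL import Image

def dash_wash(wallpaper_path, darken=0.6):
    img = Image.open(wallpaper_path).convert("RGB").resize((64, 64))
    pixels = list(img.getdata())
    avg = [sum(channel) // len(pixels) for channel in zip(*pixels)]
    return tuple(int(c * (1 - darken)) for c in avg)

print(dash_wash("/usr/share/backgrounds/warty-final-ubuntu.png"))  # example path
```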

We’ve also integrated the panel and the dash, so indicators are rendered in a more holographic fashion inside the dash. Together with efforts to mute the contrast of Launcher icons the result is a more striking dash altogether: content is presented more dramatically.

Since we have raw access to the GL pipeline, we’re taking advantage of that with some real-time blur effects to help the readability and presentation of overlay content in the Dash, too. Both Nux in the case of Unity-3D and Qt in the case of Unity-2D have rich GL capabilities, and we’d like to make the most of whatever graphics stack you have on your hardware, while still running smoothly on the low end.

Growing community and ecosystem

A project like this needs diverse perspectives, talents and interests to make it feel rounded and complete. While Canonical is carrying the core load, and we’re happy to do so in order to bring this level of quality to the Ubuntu desktop user experience, what makes me particularly optimistic is the energy of the contributors both to Unity directly and to the integration of many other components and applications with the platform.

On the contribution front, a key goal for the Unity community is to maintain velocity in contributor patch flows. You should expect a rapid review and, all being well, landing, for contributions to Unity that are in line with the design goals. In a few cases we’ve also accepted patches that make it possible to use Unity in ways that are different to the design goals, especially where the evidence doesn’t lean very heavily one way or the other. Such contributions add some complexity, but they also give us the opportunity to test alternatives in a very rich way; the winning alternative would certainly survive, while the other might not.

Contrary to common prognostication, this community is shaping up to be happy and productive. For those who do so for love and personal interest, participating in such a community should be fun and stimulating, an opportunity to exercise skills or pursue interests, give something back that will be appreciated and enjoyed by many, and help raise the bar for Linux experiences. I’d credit Jorge and others for their stewardship of this so far, and my heartfelt thanks to all of those who have helped make Unity better just for the fun of it.

Outside of the core, the growing number of apps that integrate sweetly with the launcher (quicklists), the dash (scopes) and the indicators (both app-specific and category indicators) is helping to ensure that APIs are useful, refined and well implemented, as well as improving the experience of Ubuntu users everywhere. Now that we’re moving to Unity by default for both 2D and 3D, that’s even more valuable.

Under the hood

In this round, Unity-3D and Unity-2D have grown together and become twin faces on the same underlying model. They now share a good deal of common code and common services and – sigh – common bugs :-). But we’re now at the point where we can be confident that the Unity experience is available on the full range of hardware, from lightweight thin-client systems made of ARM or Atom CPUs to CADstations with oodles of GPU horsepower.

There’s something of a competition under way between proponents of the QML based Unity-2D, who believe that the GL support here is good enough to compete both at the high end and on the low end, and the GL-heads in Unity-3D, who think that the enhanced experiences possible with raw GL access justify the additional complexity of working in C++ and GL on the metal. Time will tell! Since a lot of the design inspiration for Unity came from game interfaces, I lean to the “let’s harness all the GL we can for the full 3D experience” side of the spectrum, but I’m intrigued with what the QML team are doing.