Unity on Wayland

Thursday, November 4th, 2010

The next major transition for Unity will be to deliver it on Wayland, the OpenGL-based display management system. We’d like to embrace Wayland early, as much of the work we’re doing on uTouch and other input systems will be relevant for Wayland, and it’s an area where we can make a useful contribution to the project.

We’re confident we’ll be able to retain the ability to run X applications in a compatibility mode, so this is not a transition that needs to reset the world of desktop free software. Nor is it a transition everyone needs to make at the same time: for the same reason, we’ll keep investing in the 2D experience on Ubuntu, despite also believing that Unity, with all its GL dependencies, is the best interface for the desktop. We’ll help GNOME and KDE with the transition; there’s no reason for them not to be there on day one either.

Timeframes are difficult. I’m sure we could deliver *something* in six months, but I think a year is more realistic for the first images that will be widely useful in our community. I’d love to be proven conservative on that 🙂 but I suspect it’s more likely to err the other way. It might take four or more years to really move the ecosystem. Progress on Wayland itself is sufficient for me to be confident that no other initiative could outrun it, especially if we deliver things like Unity and uTouch with it. And also if we make an early public statement in support of the project. Which this is!

In coming to this view, several scenarios were considered.

One is the continued improvement of X, which is a more vibrant project these days than it once was. X will be around for a long time, hence the importance of our confidence in a compatibility environment. But we don’t believe X is set up to deliver the user experience we want, with super-smooth graphics and effects. I understand that it’s *possible* to get amazing results with X, but it’s extremely hard, and it isn’t going to get easier. Some of the core goals of X make it harder to achieve these user experiences on X than on native GL; we’re choosing to prioritize the quality of experience over those original values, like network transparency.

We considered the Android compositing environment. It’s great for Android, but we felt it would be more difficult to bring the whole free software stack along with us if we pursued that direction.

We considered several proprietary options, and spoke with their vendors on the basis that they might be persuaded to open source their work for a new push, and we evaluated the cost of building a new display manager, informed by the lessons learned in Wayland. We came to the conclusion that any such effort would only create a hard split in the world, and that wouldn’t be worth the cost of having done it. There are issues with Wayland, but they seem to be solvable; we’d rather be part of solving them than chase a better alternative. So Wayland it is.

In general, this will all be fine – actually *great* – for folks who have good open source drivers for their graphics hardware. Wayland depends on things they are all moving to support: kernel modesetting, GEM buffers and so on. The requirement of EGL is new, but consistent with industry standards from Khronos – both GLES and GL will be supported. We’d like to hear from vendors for whom this would be problematic, but we hope it provides yet another (and perhaps definitive) motive to move to open source drivers for all Linux work.
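
For the curious, here’s roughly what that driver baseline means in practice – a minimal sketch (not production code) that opens a KMS device, wraps it in GBM for GEM-backed buffer allocation, and initialises EGL on top: the foundation a Wayland compositor builds on instead of an X server. The device node and the direct eglGetDisplay-on-GBM cast are assumptions for illustration; link with -lgbm -lEGL.

```c
/* Minimal sketch: bring up EGL over a DRM/GBM device.
 * Assumes an open-source driver exposing kernel modesetting (KMS)
 * and GEM buffers via GBM; /dev/dri/card0 is an assumption. */
#include <fcntl.h>
#include <stdio.h>
#include <unistd.h>
#include <gbm.h>
#include <EGL/egl.h>

int main(void)
{
    int fd = open("/dev/dri/card0", O_RDWR);   /* KMS-capable GPU node */
    if (fd < 0) { perror("open"); return 1; }

    struct gbm_device *gbm = gbm_create_device(fd);   /* GEM buffers */
    EGLDisplay dpy = eglGetDisplay((EGLNativeDisplayType)gbm);
    if (dpy == EGL_NO_DISPLAY) { fprintf(stderr, "no EGL display\n"); return 1; }

    EGLint major, minor;
    if (!eglInitialize(dpy, &major, &minor)) {
        fprintf(stderr, "eglInitialize failed\n");
        return 1;
    }
    printf("EGL %d.%d on GBM\n", major, minor);

    /* Either client API can sit on top, as the post notes. */
    eglBindAPI(EGL_OPENGL_ES_API);   /* or EGL_OPENGL_API for full GL */

    eglTerminate(dpy);
    gbm_device_destroy(gbm);
    close(fd);
    return 0;
}
```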

Two weeks with Mir

Tuesday, July 9th, 2013

Mir has been running smoothly on my laptop for two weeks now. It’s an all-Intel Dell XPS, so the driver stack on Ubuntu is very clean, but I’m nonetheless surprised that the system feels *smoother* than it did pre-Mir. It might be coincidence: Saucy is changing pretty fast, and new versions of X and Compiz have both landed while I’ve had Mir running. But watching top suggests that both Xorg and Compiz are using less memory and fewer CPU cycles under Mir than they were with X handling the hardware directly.

Talking with the Mir team, they say others have seen the same thing, and they attribute it to more efficient buffering of requests on the way to the hardware. YMMV, but it’s definitely worth trying. I have one glitch which catches me out – Chromium triggers an issue in the graphics stack which freezes the display. Pressing Alt-F1 unfreezes it (it causes Compiz to invoke something which twiddles the right bits to bring the GPU back from its daze). I’m told that will get sorted trivially in a coming update to the PPA.

The overall impression I have is that Mir has delivered what we hoped. Perhaps it had the advantage of being able to study what went before – SurfaceFlinger, Wayland, X – and perhaps also the advantage of looking at things through a mobile lens, where performance and efficiency are primary concerns. But regardless, it’s lean, efficient, high quality, and brings benefits even when running a legacy X stack.

We take a lot of flak for every decision we make in Ubuntu, because so many people are affected. But I remind the team – failure to act when action is needed is as much a failure as taking the wrong kind of action might be. We have a responsibility to our users to explore difficult territory. Many difficult choices in the past are the bedrock of our usefulness to a very wide audience today.

Building a graphics stack is not a decision made lightly – it’s not an afternoon’s hacking. The decision was taken based on careful consideration of technical factors: we need a graphics stack that works reliably across a very wide range of hardware, that performs predictably, and that provides a consistent quality of user experience across many different desktop environments.

Of course, there is competition out there, which we think is healthy. I believe Mir will be able to evolve faster than the competition, in part because of the key differences and choices made now. For example, rather than a rigid protocol that can only be extended, Mir provides an API. The implementation of that API can evolve over time for better performance, while it’s difficult to do the same if you are speaking a fixed protocol. We saw with X how awkward life becomes when you have a fixed legacy protocol and negotiate over extensions which themselves might be versioned. Others have articulated the technical rationale for the Mir approach better than I can; read what they have to say if you’re interested in the ways in which Mir is different, the lessons learned from other stacks, and the benefits we see in Mir’s architecture.
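
To make that distinction concrete, here is a purely hypothetical sketch – none of these names are Mir’s real client API – of why an API boundary can evolve where a wire protocol cannot: the implementation behind a stable entry point is free to change its transport and buffering strategy, while a protocol freezes those details into every client.

```c
/* Hypothetical sketch (not Mir's actual API): an API boundary hides
 * transport details that an on-the-wire protocol would freeze. */
#include <stdio.h>
#include <stdlib.h>

/* --- the stable contract applications compile against --- */
typedef struct dc_connection dc_connection;   /* opaque handle */

dc_connection *dc_connect(const char *server);
void dc_submit_buffer(dc_connection *c, int width, int height);
void dc_disconnect(dc_connection *c);

/* --- one possible implementation, free to change in the next
 * release: today it just logs; tomorrow it could batch requests or
 * switch from sockets to shared memory without touching the
 * declarations above. --- */
struct dc_connection { const char *server; };

dc_connection *dc_connect(const char *server)
{
    dc_connection *c = malloc(sizeof *c);
    c->server = server;
    return c;
}

void dc_submit_buffer(dc_connection *c, int width, int height)
{
    printf("submitting %dx%d buffer to %s\n", width, height, c->server);
}

void dc_disconnect(dc_connection *c) { free(c); }

/* A fixed wire protocol, by contrast, bakes layout into every client:
 *   struct wire_submit_msg { uint32_t opcode, width, height; };
 * once clients serialise that themselves, it can only grow through
 * negotiated, versioned extensions -- the awkwardness the post
 * describes with X. */

int main(void)
{
    dc_connection *c = dc_connect("display-server-0");
    dc_submit_buffer(c, 640, 480);
    dc_disconnect(c);
    return 0;
}
```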

Providing Mir as an option is easy. Mir is a very focused part of the stack; it has far fewer tentacles and knock-on consequences for app developers than, say, the init system, which means we should be able to work with a very tight group of communities to get great performance. It’s much easier for a distro to engage with Mir than to move to systemd: instead of an impact on every package, there is a need to coordinate in just a few packages for great results. We’ve had a very positive experience working with the Qt and WebKit communities, for example, so we know those apps will absolutely fly and talk Mir natively. Good upstreams want their code to be widely useful, so I’ve no doubt that the relevant toolkits will take patches that provide enhanced capabilities on Mir when available. And we also know that we can deliver a high-performance X stack on Mir, which means any application or desktop environment that talks X will perform just as well with Mir, and have smoother transitions in and out thanks to the system compositor capabilities that Mir provides.

On Ubuntu, we’re committed to having every desktop environment perform well with Mir, either under X or directly. We didn’t press the ‘GO’ button on Mir until we were satisfied that the whole Ubuntu community, and other distributions, could easily benefit from the advantages of a leaner, cleaner graphics stack. We’re busy optimising performance for X now so that every app and every desktop environment will work really well in 13.10 under Mir, without having to make any changes. And we’re taking patches from people who want Mir to support capabilities they need for native, super-fast Mir access. Distributions should be able to provide Mir as an option for their users to experiment with very easily – the patch to X is very small (less than 500 lines). For now, if you want to try it, the easiest way is via the Ubuntu PPA. It will land in 13.10 just as soon as our QA and release teams are happy that it’s ready for very widespread testing.

All the faces of Ubuntu

Thursday, March 7th, 2013

Harald,

Of course what Kubuntu and Xubuntu and Ubuntu GNOME Remix et al do matters. If it didn’t, we wouldn’t invest a ton of time and energy in finding ways to share the archives effectively. And I consider it one of the lovely things about Ubuntu that there is room for all of us here. As long as there are people willing to make it happen, there’s room for a new face.

You all make the broad Ubuntu family more diverse and more interesting. For which I’m grateful.

In return, you get the benefit of an enormous and concentrated investment in making a core platform that can be widely consumed (on top of the already enormous efforts of the open source community, Debian, and any number of other groups). That investment brings with it a pace of change, and a willingness to focus on specific outcomes. Mir – a fantastic piece of engineering by a very talented team that has looked hard at the problem and is motivated to do something that will work well – is just one example. Every week, we’re figuring out how to coordinate changes. Why blow a gasket over this one? I’ve absolutely no doubt that KWin will work just fine on top of Mir. And I’m pretty confident Mir will be on a lot more devices than Wayland. Which would be good for KDE and Kubuntu and Plasma Active.

So, before you storm off, have a cup of tea and think about the gives and gets of our relationship. Seriously.

Mark

Linaro at work: porting, testing, and Android

Thursday, November 11th, 2010

Congratulations to Team Linaro on their first full release yesterday. For those not yet in the know, Linaro is a collaborative forum with dedicated engineers making sure that Linux rocks on ARM (and potentially other architectures). Staffed by a combination of Canonical and new Linaro engineers, together with secondees from the major ARM silicon vendors, it’s solving the problems of fragmentation in Linux across that ecosystem and reducing the time to market for ARM devices.

Linaro uses the same cadence as Ubuntu, so we’re able to collaborate on the selection, integration and debugging of key components like the kernel, toolchain, X.org (still ;-)), and hundreds of small-but-important libraries and tools in between. Team Linaro was at UDS, and it was very cool to see the extent to which their sessions drew attendance from the wider Ubuntu community – I think there’s a growing interest in efficient computing across the Ubuntu landscape.

The Linaro team is pleased to announce the release of Linaro 10.11. 10.11 is the first public release that brings together the huge amount of engineering effort that has occurred within Linaro over the past 6 months. In addition to officially supporting the TI OMAP3 (Beagle Board and Beagle Board XM) and ARM Versatile Express platforms, the images have been tested and verified on a total of 7 different platforms, including the TI OMAP4 Panda Board, IGEPv2, Freescale iMX51 and ST-E U8500.

The advances that have happened in this cycle are numerous, but include a completely rebuilt archive using GCC 4.4.4 and the latest ARM optimised toolchain, the Linux kernel version 2.6.35, support for cross-compiling, a new ‘hardware pack’ way of building images, 3D acceleration improvements, u-boot enhancements and initial device tree support, a new QA tracking structure – the list goes on.
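
As a tiny illustration of the cross-compiling support the announcement mentions, here’s the kind of smoke test a device developer can run entirely from an x86 host. The toolchain triplet and package name are assumptions based on Ubuntu packaging of the era; your board and toolchain may differ.

```c
/* hello-arm.c -- trivial sanity check for an ARM cross toolchain.
 *
 * With a cross toolchain installed on an x86 Ubuntu host, e.g.
 *   sudo apt-get install gcc-arm-linux-gnueabi   (package name assumed)
 * build an ARM binary without leaving your desk:
 *   arm-linux-gnueabi-gcc -o hello-arm hello-arm.c
 * then copy it to a Beagle Board or Panda Board and run it there. */
#include <stdio.h>

int main(void)
{
    printf("Hello from a cross-compiled ARM binary!\n");
    return 0;
}
```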

Android in the house

The road ahead looks even more interesting. For the next cycle, the Linaro team is going to build an Android environment on the same kernel and toolchain that we collaborate on with Ubuntu. For folks building devices, picking a board that’s part of the Linaro process means you’ll be able to get either an Ubuntu-style or an Android-style core environment up and running on day one, which should reduce time to market for everyone.

If the Linaro team pulls this off, it will mean that Linaro provides an intersection point for the majority of the consumer electronics ecosystem, x86 and ARM alike, regardless of the end OS. I’m sure over time we’ll find more groups that are interested in joining the process, and I see no reason why they couldn’t be accommodated in this cadence-driven model.

More players, more diversity in services

It was also good to see folks from MontaVista and Mentor at Linaro@UDS this year. Whether the Linaro kernel and toolchain plug into their own distros, or they start to offer their services around the Linaro/Ubuntu/Android BSPs, the result is a healthier ecosystem with fewer snags and gotchas for device makers.

One group asked me explicitly if Linaro was a Canonical show, and I was glad to say it isn’t. Canonical can’t possibly do everything that embedded Linux needs done, but our competence in cadence and release management makes us good custodians of a public project, which is what we do with Ubuntu itself. Participation and collaboration are at the heart of Linaro, and they benefit from being partnered with a commitment to delivery and deadlines. We can’t do everything in a single cycle, but we can provide a roadmap for things like kernel defragmentation, the device-tree work, enablement of an ever-increasing cross-section of the ARM ecosystem, and transitions between versions of GCC or Python or X or even Wayland. So Canonical makes a good anchor, but Linaro has room for lots of other service providers. Having multiple companies participate in Linaro means that the products we’re all shipping get better, faster.

Testing

The Linaro team is also going to focus on repeatable, rigorous testing of the core platform in the next cycle. That harmonises nicely with our growing focus on quality in Ubuntu, and the need for better quality and testing in open source in general. I’m interested to see what tools and results the Linaro team can produce in the next six months. Open source *can* be bulletproof, but it can also degrade in quality if we don’t put the right processes in place upstream and downstream, so this is a very welcome initiative.