Archive for the 'free software' Category

Unity on Wayland

Thursday, November 4th, 2010

The next major transition for Unity will be to deliver it on Wayland, the OpenGL-based display management system. We’d like to embrace Wayland early: much of the work we’re doing on uTouch and other input systems will be relevant to Wayland, and it’s an area where we can make a useful contribution to the project.

We’re confident we’ll be able to retain the ability to run X applications in a compatibility mode, so this is not a transition that needs to reset the world of desktop free software. Nor is it a transition everyone needs to make at the same time: for the same reason, we’ll keep investing in the 2D experience on Ubuntu, even though we believe that Unity, with all its GL dependencies, is the best interface for the desktop. We’ll help GNOME and KDE with the transition; there’s no reason for them not to be there on day one either.

Timeframes are difficult. I’m sure we could deliver *something* in six months, but I think a year is more realistic for the first images that will be widely useful in our community. I’d love to be proven conservative on that :-) but I suspect it’s more likely to err the other way. It might take four or more years to really move the ecosystem. Progress on Wayland itself is sufficient for me to be confident that no other initiative could outrun it, especially if we deliver things like Unity and uTouch with it. And also if we make an early public statement in support of the project. Which this is!

In coming to this view, several scenarios were considered.

One is the continued improvement of X, which is a more vibrant project these days than it once was. X will be around a long time, hence the importance of our confidence in a compatibility environment. But we don’t believe X is set up to deliver the user experience we want, with super-smooth graphics and effects. I understand that it’s *possible* to get amazing results with X, but it’s extremely hard, and isn’t going to get easier. Some of the core goals of X make it harder to achieve these user experiences on X than on native GL; we’re choosing to prioritize the quality of experience over those original values, like network transparency.

We considered the Android compositing environment. It’s great for Android, but we felt it would be more difficult to bring the whole free software stack along with us if we pursued that direction.

We considered several proprietary options, and spoke with their developers, on the basis that they might be persuaded to open source their work for a new push. We also evaluated the cost of building a new display manager, informed by the lessons learned in Wayland. We came to the conclusion that any such effort would only create a hard split in the world, which wasn’t worth the cost of having done it. There are issues with Wayland, but they seem to be solvable; we’d rather be part of solving them than chasing a better alternative. So Wayland it is.

In general, this will all be fine – actually *great* – for folks who have good open source drivers for their graphics hardware. Wayland depends on things they are all moving to support: kernel modesetting, GEM buffers and so on. The requirement of EGL is new, but consistent with industry standards from Khronos – both GLES and GL will be supported. We’d like to hear from vendors for whom this would be problematic, but we hope it provides yet another (and perhaps definitive) motive to move to open source drivers for all Linux work.

Marcus and Ivanka in the Canonical Design team sat me down for some words of wisdom a few months ago. “You think you need a logo, but what you really need is a new font. One that sets the standard for both professional design, and embracing the values of Ubuntu in the way it’s produced.”

And how right they were.

Figuring that we wanted to do this once, properly, we said we’d build a complete family: various weights, variable-width and mono, across some of the key language groups of our community. We knew we couldn’t do everything but we figured we could establish a rigorous core upon which everything could be done. We’d fully hint and kern the work too, so it’s good enough to be a default interface font for something we all use all day long. A huge project, one that will take some time to finish. But today we’re publishing the first source for Ubuntu, the font, a milestone worth celebrating.

Marcus introduced Bruno Maag of Dalton Maag, who expressed a willingness to engage around an open font, and we agreed to buy the rights to the work completely, so that it could be licensed freely.

Bruno pulled together a very energetic team of typographers: Lukaz, Amelie, Shiraaz, Malcolm and more, all folks who live and breathe type and typography, and who were keen to explore this rather crazy idea of inviting crowds into the inner sanctum of type design.

We knew at the start we were bringing together two very different worlds. We wanted a process which would ensure participation without drowning out the clear leadership needed for a coherent result. Bruno steered Marcus, Ivanka, me and others through a core initial process where we defined the range and scope of what we wanted to take on, and the values we wanted reflected in the result. I learned that a font is grounded in real values, and fortunately we have a strong expression of the six attributes that we value in Ubuntu and Canonical: collaboration, freedom, precision, reliability, adroitness, accessibility. That small team was best positioned to distill those into the typeface, and shape the broad strokes of the work.

Ubuntu is a global phenomenon, and we knew at the start we didn’t have the breadth of eyeballs close at hand to keep the font on track as it expanded. So we planned a process of expanding consultation. First within Canonical, which has folks from nearly 30 countries, and then within the Ubuntu community. We published the font to Ubuntu Members, because we wanted folks who participate and contribute directly to Ubuntu to have the strongest say in the public process of designing the font. We heard from Greek, Turkish, Hebrew, Arabic, Indian, Chinese and many other cultures. Not everyone has glyphs in this first round, but everyone has had a hand in bringing us to this milestone.

The design team needed help with this outreach program, and it turned out that a longstanding member of the community, Paul Sladen, has a personal interest in typography. We noticed a marked uptick in the pace of bug triage when Paul got involved, and it was going so well we asked him to tackle it semi-professionally. The result has been really fast feedback to people making comments. I’d like to thank Paul for bringing that crucial stewardship to bear on the community engagement process; we would not have made it to the deadline without him.

We also had the benefit of a tool produced by Richard Lee and others in the design team, which lets people identify specific issues in the font, particularly as rendered in various web browsers on various platforms. fonttest.design.canonical.com is very cool: it lets you pick the characters, weight and size, takes a screenshot for you in most browsers, or helps you capture the essential details for the bug report. Fonts are software, but they are not software as we know it, Jim. So the tool helps us keep track of all the tricky details that might help debug a problem someone’s having.

A key open question, of course, was licensing. There are two obvious candidates, among quite a large field: the OFL, from SIL, and the GPLv3 with a font-specific clause added. Digging into this in more detail turned up a tricky situation: both approaches have issues which precluded us from adopting them immediately. We started speaking in some detail with Nicolas Spalinger of SIL, and Dave Crossland, who has done extensive analysis on the libre font process and dynamics. We offered to underwrite an SFLC review of the OFL, and SIL has expressed a willingness to participate in that, with a view to finding common ground that would bring Dave, ourselves, and many others under one common font licence, but we were running out of time. So we came to the compromise of an interim licence, which you can find at:

bzr branch lp:ubuntu-font-licence

While licence proliferation sucks, I’m optimistic we’ll converge in due course. James Vasile from the SFLC will help ensure the final result is wiser for all the experience the SFLC gained in stewarding the GPLv3, and SIL and Dave will bring deep typographic industry insight.

Dalton Maag have started talking more widely about their experiences in the process so far. I was worried that they might be put off by the rowdy nature of open commentary, but I would credit them with a sterling constitution, and thank them for the way they stepped up once the bug tracker really started to hum. Few escalated issues fail to get a rapid response and framing. Of course, there are differences of opinion, but in many cases genuine issues have been identified and handled. The team at DM have gotten into a great cadence of weekly iterations, and Paul has been ensuring that work makes it into the hands of Ubuntu users. As of today, *all* Maverick users have it installed by default (I believe this is true for Kubuntu as well; at least, I answered questions in support of that goal).

What’s really interesting is that DM have said there is world-wide interest in the project. Many professional typographers are starting to think about open fonts. Now is the time to set a very high standard for what is achievable. There are hard questions to be answered about how the business of typography will evolve in the face of open and free type, but historically, those questions have best been answered by the bold: those who get involved, those who put themselves in the front line.

Going forward?

In due course, we’d like the Ubuntu font to reflect the full, extraordinary diversity of the Ubuntu community. We can’t do it all at once, and so we’re proposing a process for communities and cultures that feel part of the Ubuntu family to participate. If you want the Ubuntu font to speak your language, you need to do a few things to prepare for it. The hard, hard part is that you’ll need to find a qualified, local typographer who is interested in participating and in leading the design of your glyphs. You may need to find several, as we won’t necessarily embrace the first candidate. This is a serious matter: we welcome the crowdsourcing of bugs, glitches, rendering problems, hinting and kerning issues, but we want coherent, professional contributions on the core design. If that sounds exclusive: yes it is. Quality takes time, quality takes precedence. There are other fonts with lots of coverage; we have only one shot to get your glyphs done really beautifully and then freeze them, metrically, for all time in the Ubuntu font.

The broader process looks like this.

First, you need to create a wiki page for your language / culture / glyphset (could be Klingon! Phoenician! Elvish ;-)) on wiki.ubuntu.com/UbuntuFont/Coverage. There, you need to document the glyph set you think is required, and any historical quirks that are peculiar to doing it well, such as OpenType features or alternative approaches.

Second, you need to file a bug on launchpad.net/ubuntu-font-family called “Ubuntu Font should support [Klingon]”. If you want, you can invite members of your community to note that they are affected by the bug. We’ll be looking for ways to prioritise communities for attention.

Third, you need to contact local typographers, and tell them about Ubuntu, open content, open typography. If they are still listening, you have just opened the door on the future for them and given them a big head start :-). They will need to be willing to contribute to the font. They will know how much work that will be. They won’t be paid to do it, unless the local community can find a way to raise the funds, but since there is a genuine sense of excitement in the air about open typography and this project in particular, we think you’ll find bold and insightful typographers who are keen to be part of it. Add their details to the wiki page, especially details of their typographic portfolio. Update the bug with that information.

The tools used for open font design are in a state of flux. There are some exceptional technical pieces, and some dark swampy bits too. Dalton Maag will be leading sessions at UDS with folks from the open typography community, with a view to producing what Dave Crossland described as a “lovely long list” (I’m paraphrasing) of bugs and suggestions. Be there if you want to get a professional typographer’s insight on the toolchain today and what might be possible in the future. All of the Ubuntu font sources are published, though the license does not require source to be published.

Nevertheless, the process for designing your community glyphs will likely involve a mix of free and proprietary tools, at least for the next months. We’ll ask DM to review the portfolios of candidate typographers, and make recommendations for who should be given the go-ahead to lead the work, language by language. Once core glyphs are designed, we’ll facilitate LoCo-based community feedback, much as we did for the main font. We want local Ubuntu members to have the strongest public voice in feedback to their typographer. And Canonical, with DM, will provide feedback aimed at keeping the whole consistent.

Once the glyph design process is wrapped, the typographer will lead hinting and kerning. That’s the tough, detailed part of the job, but essential for an interface font that will be used on screen, everywhere on screen, all the time. And at that point we’ll start automating feedback, using fonttest, as well as starting to integrate those glyphs into the main Ubuntu font. We’ll publish point releases to the main Ubuntu font, with major releases designating points where we update the set of “fixed and metrically frozen” glyphs, point releases denoting occasions where we add or update beta glyphs in the public test font.
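The release discipline described above – major releases freezing glyph sets metrically, point releases adding or updating beta glyphs – can be sketched as a simple rule. This is a hypothetical illustration of the numbering scheme, not published policy:

```python
# Hypothetical sketch of the Ubuntu font release numbering described
# above: a major version bump designates newly frozen, metrically
# fixed glyph sets; a point release adds or updates beta glyphs in
# the public test font.

def release_kind(old, new):
    """Classify a forward version bump, e.g. (0, 68) -> (0, 69)."""
    old_major, old_point = old
    new_major, new_point = new
    if new_major > old_major:
        return "major: newly frozen, metrically fixed glyph sets"
    if new_major == old_major and new_point > old_point:
        return "point: new or updated beta glyphs"
    raise ValueError("not a forward release")

print(release_kind((0, 68), (0, 69)))  # a point release
print(release_kind((0, 69), (1, 0)))   # a major release
```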

In each point release, we’ll include perhaps one or two new glyph sets for beta testing. We’ll prioritize those communities who have followed the process, and have the most substantial community interest in testing.

Phew. If you got this far, you’re interested :-). This is going to be one of those things that lives a very long time. It will take a long time to get everybody represented. But we’re going to do it, together.

A kind invitation

Thursday, September 23rd, 2010

Delighted to receive this today, and to proxy it through to Planets U and G:

Dear Ubuntu Community Council members,

on behalf of the openSUSE Board, I would like to extend this
invitation to you and your community to join us at the
openSUSE Conference in Nuremberg, Germany October 20-23, 2010.

This year more than seventy talks and workshops explore the theme of
‘Collaboration Across Borders’ in Free and Open Source software
communities, administration and development. We believe that the
program, which includes tracks about distributions, the free desktop and
community, reaches across the borders between our projects and we
would like to ask you to encourage your community to visit the
conference so we get the chance to meet face to face, talk to and
inspire each other.

More information including the program and details about the event you
can find in our announcement at http://bit.ly/oconf2010

Thank you in advance and see you in October! :-)

Henne Vogelsang
openSUSE Board Member

We’ll gladly sponsor a member of the Ubuntu Community Council to go, and are busy finding out if anyone can make it. I can’t, but I appreciate the sentiment and the action, and think it would be great if members of the Ubuntu community can take up the invitation.

Regardless, best wishes for the conference!

Daily dose of Scribus trunk

Friday, September 10th, 2010

We’ll be using Scribus for much of the DTP internal to Canonical. Our templates etc will be published in Scribus, so folks who need to knock up a flyer or brochure have the pieces they need ready to hand. However, there’s a problem, in that the stable Scribus package is really quite old.

The Scribus team is making good progress on the next version of Scribus, but I couldn’t find an easy way to test their trunk. So I thought to make a PPA with a daily build. Whenever I’m testing or evaluating a new app I like to check out trunk, just to get a feel for the pace of activity and quality of the work. A crisp, clean, stable trunk is a sign of good quality work, which will likely mean good quality elsewhere like documentation and project governance. Chaos on trunk means… chaos generally, as a rule.

I wrote to Oleksandr Moskalenko, one of the upstream developers and the Debian maintainer of the Scribus packages, which include a complete set of Ubuntu packages and pretty awesome documentation on how to get newer versions for testing. He kindly offered to review the package and made some suggestions for things to look out for. And then I got lucky when I mentioned on #kubuntu-devel that I wanted to do this, because Philip Muskovac turns out to be in the middle of a quest to do daily-build PPAs of most of KDE!

We already had a Bzr import of Scribus trunk for some reason, so tip is easily accessible via LP and bzr.

Philip knocked up a package recipe combining trunk with a clean packaging branch based on Oleksandr’s scribus-ng package. Et voila, LP is now doing all the work to deliver us a nice dose of Scribus goodness every day. So here’s an invitation to DTP-heads everywhere: if you’d like to see the very latest work of the Scribus team, just add that PPA to your sources and grab scribus-trunk:

sudo add-apt-repository ppa:scribus/ppa
sudo apt-get update
sudo apt-get install scribus-trunk

Generally, if the packaging branch is clean, a daily build is pretty stable; it might need a tweak now and then, but that work is useful to the packager as an early warning of packaging changes needed for the next version anyway. And it’s usually easier to fix something if you know exactly what changed to break it :-)
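A package recipe of this kind is just a short text file telling Launchpad which branches to combine for the daily build. A minimal sketch of what such a bzr-builder recipe might look like (the packaging branch path here is hypothetical, not the exact one Philip used):

```
# bzr-builder format 0.3 deb-version {debupstream}+{revno}
lp:scribus
merge packaging lp:~hypothetical/scribus/packaging-daily
```

Launchpad re-runs the recipe daily: it branches trunk, merges the packaging branch on top, and builds the result in the PPA.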

I’d like to thank Philip and Oleksandr for rocking the park, and the Scribus folks for a wonderful tool that will get wider use now within Canonical and, hopefully, elsewhere too.

The Scribus trunk packages seem to work very well on Unity; the Qt/dbusmenu integration is tight in this newer version, so it’s very usable with the panel menu, and launching it full-screen feels right on my laptop. I’m enjoying the extra-detailed control over fonts that Scribus gives compared with apps like OO.o and AbiWord, since I’m becoming a font nerd these days with all the work on Ubuntu.

There is a flag day transition to be aware of, though, as newer Scribus files are not compatible with those of the stable Scribus. Nevertheless, both this trunk build and the scribus-ng packages Oleksandr maintains seem pretty stable to me, so we’ll be using the newer format and holding our breath till the actual release. No pressure, team Scribus ;-)

Update: Philip’s published Lucid packages as well!

Gestures with multitouch in Ubuntu 10.10

Monday, August 16th, 2010

Multitouch is just as useful on a desktop as it is on a phone or tablet, so I’m delighted that the first cut of Canonical’s UTouch framework has landed in Maverick and will be there for its release on 10.10.10.

You’ll need 4-finger touch or better to get the most out of it, and we’re currently targeting the Dell XT2 as a development environment so the lucky folks with that machine will get the best results today. By release, we expect you’ll be able to use it with a range of devices from major manufacturers, and with addons like Apple’s Magic Trackpad.

The design team has led the way, developing a “touch language” which goes beyond the work that we’ve seen elsewhere. Rather than single, magic gestures, we’re making it possible for basic gestures to be chained, or composed, into more sophisticated “sentences”. The basic gestures, or primitives, are like individual verbs, and stringing them together allows for richer interactions. It’s not quite the difference between banging rocks together and conducting a symphony orchestra, but it feels like a good step in the right direction ;-)
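To make the idea of composing primitives into “sentences” concrete, here is a small Python sketch. The primitive names and the grammar are invented for illustration; the real uTouch API is different:

```python
# Hypothetical illustration of composing gesture primitives ("verbs")
# into richer "sentences", in the spirit of the uTouch touch language.
# None of these names come from the actual uTouch API.

PRIMITIVES = {"tap", "drag", "pinch", "rotate"}

def parse_sentence(events):
    """Group a stream of (primitive, finger_count) events into one
    composed gesture 'sentence', rejecting unknown primitives."""
    sentence = []
    for primitive, fingers in events:
        if primitive not in PRIMITIVES:
            raise ValueError("unknown gesture primitive: %s" % primitive)
        sentence.append("%d-finger %s" % (fingers, primitive))
    return " then ".join(sentence)

# A three-finger drag followed by a two-finger pinch reads as one
# composed gesture rather than two unrelated events:
print(parse_sentence([("drag", 3), ("pinch", 2)]))
# → 3-finger drag then 2-finger pinch
```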

The new underlying code is published on Launchpad under the GPLv3 and LGPLv3, and of course there are quite a lot of modules for things like X and Gtk which may be under licenses preferred by those projects. There’s a PPA if you’re interested in tracking the cutting edge, or just branch / push / merge on LP if you want to make it better. Details in the official developer announcement. The bits depend on Peter Hutterer’s recently published update to the X input protocols related to multi-touch, and add gesture processing and gesture event delivery. I’d like to thank Duncan McGreggor for his leadership of the team which implemented this design, and of course all the folks who have worked on it so far: Henrik Rydberg, Rafi Rubin, Chase Douglas, Stephen Webb at the heart of it, and many others who have expanded on their efforts.

In Maverick, quite a few Gtk applications will support gesture-based scrolling. We’ll enhance Evince to show some of the richer interactions that developers might want to add to their apps. Window management will be gesture-enabled in Unity, so 10.10 Netbook Edition users with touch screens or multi-touch pads will have sophisticated window management at their fingertips. Install Unity on your desktop for a taste of it, just apt-get install ubuntu-netbook and choose the appropriate session at login.

The roadmap beyond 10.10 will flesh out the app developer API and provide system services related to gesture processing and touch. It would be awesome to have touch-aware versions of all the major apps – browser, email, file management, chat, photo management and media playback – for 11.04, but that depends on you! So if you are interested in this, let’s work up some branches :-) Here’s the official Canonical blog post, too.

Healing old wounds

Monday, August 2nd, 2010

Greg, thank you for your sincere and gracious apology.

When one cares deeply about something, criticism hurts so much more. And the free software world is loaded with caring, which is why our differences can so easily become vitriolic.

All of us that work on free software share the belief that our work has meaning far beyond the actual technology we produce. We are working to achieve goals that transcend the merits of the specific products we build: putting software freedom on a firm economic footing means that it can realistically become the de facto standard way that the software world works, carried forward by powerful forces of investment and return and less dependent on what feels like the heroic efforts of relatively few software outsiders swimming against the tide.

Red Hat’s success in proving a viable business model around a distribution was a very significant milestone in that quest, for all of us. I don’t mean to diminish that achievement when I point out that it’s come at the cost of dividing the world into those that buy RHEL, and those that can’t or won’t. Red Hat’s success is well deserved, and our work at Canonical is not in any sense motivated by desire to take that away. Red Hat is here to stay, there will always be a market for the product, and as a result, we all have the reassurance that our contributions can find a sustainable path into the hands of at least part of the world’s population.

Canonical’s mission is to expand the options, to find out if it’s possible to have a sustainable platform without that dividing line. We know that our quest would not be possible without your pioneering, but we don’t feel that’s riding on anybody’s coat-tails. We feel we have to break new ground, do new things, add new ingredients, and all of that is a substantial contribution in turn. But we don’t do it because we think Red Hat is “wrong”, and we don’t expect it to take anything away from Red Hat at all. We do it to add to the options, not to replace them.

We should start every discussion in free software with a mutual reminder of the fact that we have far more in common than we have differences, that individual successes enrich all of us far more in our open commons-based economy than they would in a traditional proprietary one, that it’s better for us to find a way to encourage others to continue to participate even if they aren’t necessarily chasing exactly the same bugs that we are, than to chastise them for thinking differently.

On that note, let’s shake hands.

Mark

Unity, and Ubuntu Light

Monday, May 10th, 2010

A few months ago we took on the challenge of building a version of Ubuntu for the dual-boot, instant-on market. We wanted to be surfing the web in under 10 seconds, and give people a fantastic web experience. We also wanted it to be possible to upgrade from that limited usage model to a full desktop.

The fruit of that R&D is both a new desktop experience codebase, called Unity, and a range of Light versions of Ubuntu, both netbook and desktop, that are optimised for dual-boot scenarios.

The dual-boot, web-focused use case is sufficiently different from general-purpose desktop usage to warrant a fresh look at the way the desktop is configured. We spent quite a bit of time analyzing screenshots of a couple of hundred different desktop configurations from the current Ubuntu and Kubuntu user base, to see what people used most. We also identified the things that are NOT needed in lightweight dual-boot instant-on offerings. That provided us both with a list of things to focus on and make rich, and a list of things we could leave out.

Instant-on products are generally used in a stateless fashion. These are “get me to the web asap” environments, with no need of heavy local file management. If there is content there, it would be best to think of it as “cloud like” and synchronize it with the local Windows environment, with cloud services and other devices. They are also not environments where people would naturally expect to use a wide range of applications: the web is the key, and there may be a few complementary capabilities like media playback, messaging, games, and the ability to connect to local devices like printers and cameras and pluggable media.

We also learned something interesting from users. It’s not about how fast you appear to boot. It’s about how fast you actually deliver a working web browser and Internet connection. It’s about how fast you have a running system that is responsive to the needs of the user.

Unity: a lightweight netbook interface

There are several driving forces behind the result.

The desktop screenshots we studied showed that people typically have between 3 and 10 launchers on their panels, for rapid access to key applications. We want to preserve that sense of having a few favorite applications that are instantly accessible. Rather than making it equally easy to access any installed application, we assume that almost everybody will run one of a few apps, and they need to switch between those apps and any others which might be running, very easily.

We focused on maximising screen real estate for content. In particular, we focused on maximising the available vertical pixels for web browsing. Netbooks have screens which are wide, but shallow. Notebooks in general are moving to wide screen formats. So vertical space is more precious than horizontal space.

We also want to embrace touch as a first class input. We want people to be able to launch and switch between applications using touch, so the launcher must be finger friendly.

Those constraints and values led us to a new shape for the desktop, which we will adopt in Ubuntu’s Netbook Edition for 10.10 and beyond.

First, we want to move the bottom panel to the left of the screen, and devote that to launching and switching between applications. That frees up vertical space for web content, at the cost of horizontal space, which is cheaper in a widescreen world. In Ubuntu today the bottom panel also presents the Trash and Show Desktop options, neither of which is relevant in a stateless instant-on environment.

Second, we’ll expand that left-hand launcher panel so that it is touch-friendly. With relatively few applications required for instant-on environments, we can afford to be more generous with the icon size there. The Unity launcher will show what’s running, and support fast switching and drag-and-drop between applications.

Third, we will make the top panel smarter. We’ve already talked about adopting a single global menu, which would be rendered by the panel in this case. If we can also manage to fit the window title and controls into that panel, we will have achieved very significant space saving for the case where someone is focused on a single application at a time, and especially for a web browser.

We end up with a configuration like this:

Mockup of Unity

Mockup of Unity Launcher and Panel with maximised application

The launcher and panel that we developed in response to this challenge are components of Unity. They are now in a state where they can be tested widely, and where we can use that testing to shape their evolution going forward. A development milestone of Unity is available today in a PPA, with development branches on Launchpad, and I’d very much like to get feedback from people trying it out on a netbook, or even a laptop with a wide screen. Unity is aimed at full screen applications and, as I described above, doesn’t really support traditional file management. But it’s worth a spin, and it’s very easy to try out if you have Ubuntu 10.04 LTS installed already.

Ubuntu Light

Instant-on, dual-boot installations are a new frontier for us. Over the past two years we have made great leaps forward as a first-class option for PC OEMs, who today ship millions of PCs around the world with Ubuntu pre-installed. But traditionally, it’s been an “either/or” proposition – either Windows in markets that prefer it, or Ubuntu in markets that don’t. The dual-boot opportunity gives us the chance to put a free software foot forward even in markets where people use Windows as a matter of course.

And it looks beautiful:

Ubuntu Light

Ubuntu Light, showing the Unity launcher and panel

In those cases, Ubuntu Netbook Light, or Ubuntu Desktop Light, will give OEMs the ability to differentiate themselves with fast-booting Linux offerings that are familiar to Ubuntu users and easy to use for new users, safe for web browsing in unprotected environments like airports and hotels, focused on doing that job very well, but upgradeable with a huge list of applications, on demand. The Light versions will also benefit from the huge amount of work done on every Ubuntu release to keep it maintained – instant-on environments need just as much protection as everyday desktops, and Ubuntu has a deep commitment to getting that right.

The Ubuntu Light range is available to OEMs today. Each image is hand-crafted to boot fastest on its specific hardware, with the application load reduced to the minimum, and comes with tools for Windows which assist in the management of the dual-boot experience. Initially, the focus is on the Netbook Light version based on Unity, but in future we expect to do a Light version of the desktop, too.

Given the requirement to customise the Light versions for specific hardware, there won’t be a general-purpose downloadable image of Ubuntu Light on ubuntu.com.

Evolving Unity for Ubuntu Netbook Edition 10.10

Unity exists today, and is great for the minimalist, stateless configurations that suit a dual-boot environment. But in order to embrace it for our Netbook UI, we’ll need to design some new capabilities, and implement them during this cycle.

Those design conversations are taking place this week at UDS, just outside Brussels in Belgium. If you can’t be there in person, and are interested in the design challenges Unity presents for the netbook form factor, check out the conference schedule and participate in the discussion virtually.

The two primary pieces we need to put in place are:

  • Support for many more applications, and adding / removing applications. Instant-on environments are locked down, while netbook environments should support anybody’s applications, not just those favored in the Launcher.
  • Support for file management, necessary for an environment that will be the primary working space for the user rather than an occasional web-focused stopover.

We have an initial starting point for the design, called the Dash, which presents files and applications as an overlay. The inspiration for the Dash comes from consoles and devices, which use full-screen, media-rich presentation. We want the Dash to feel device-like, and use the capabilities of modern hardware.

Unity Dash

The Unity Dash, showing the Applications Place

The instant-on requirements and constraints proved very useful in shaping our thinking, but the canvas is still blank for the more general, netbook use case. Unity gives us the chance to do something profoundly new and more useful, taking advantage of ideas that have emerged in computing from the console to the handheld.

Relationship to Gnome Shell

Unity and Gnome Shell are complementary for the Gnome Project. While Gnome Shell presents an expansive view of how people work in complex environments with multiple simultaneous activities, Unity is designed to address the other end of the spectrum, where people are focused on doing one thing at any given time.

Unity embraces the key technologies of Gnome 3: Mutter for window management, and Zeitgeist as an anchor component of our file management approach. The interface itself is built in Clutter.

The design seed of Unity was in place before Gnome Shell, and we decided to build on that for the instant-on work rather than adopt Gnome Shell because most of the devices we expect to ship Ubuntu Light on are netbooks. In any event, Unity represents the next step for the Ubuntu Netbook UI, optimised for small screens.

The Ubuntu Netbook interface is popular with Gnome users and we’re fortunate to be working inside an open ecosystem that encourages that level of diversity. As a result, Gnome has offerings for mobile, netbook and desktop form factors. Gnome is in the lucky position of having multiple vendors participating and solving different challenges independently. That makes Gnome stronger.

Relationship to FreeDesktop and KDE

Unity complies with freedesktop.org standards, and is helping to shape them, too. We would like KDE applications to feel welcome on a Unity-based netbook. We’re using the Ayatana indicators in the panel, so KDE applications which use AppIndicators will Just Work. And to the extent that those applications take advantage of the Messaging Menu, Sound Indicator and Me Menu, they will be fully integrated into the Unity environment. We often get asked by OEMs how they can integrate KDE applications into their custom builds of Ubuntu, and the common frameworks of freedesktop.org make that integration smooth.

Looking forward to the Maverick Meerkat

It will be an intense cycle, if we want to get all of these pieces in line. But we think it’s achievable: the new launcher, the new panel, the new implementation of the global menu and an array of indicators. Things have accelerated greatly during Lucid so if we continue at this pace, it should all come together. Here’s to a great summer of code.

Even though the idea of formal alignment between the freezes of Debian and Ubuntu didn’t hold, there has been some good practical collaboration between the maintainers of key subsystems. There are real benefits to this, because maintainers have a much more fruitful basis for sharing patches when they are looking at the same underlying version.

Harmonization for Ubuntu 10.04 LTS and Debian Squeeze

I think this is where we stand now:

                 Ubuntu            Debian            RHEL     SLES
  Kernel         2.6.32 + drm-33   2.6.32 + drm-33   2.6.32   2.6.32
  GCC            4.4               4.4
  Python         2.6               2.6
  OpenOffice.org 3.2               3.2
  Perl           5.10.1            5.10.1
  Boost          1.40              1.40
  X Server       1.7               1.7
  Mesa           7.7               7.7
  Mono           2.4-snapshot      2.4-snapshot      (thanks Jo Shields)

I’m sure there are inaccuracies, please help me keep this up to date, sabdfl on freenode is the best way to reach me. The RHEL and SLES numbers are third-hand, so up-to-date information would be appreciated.

The actual release dates of Ubuntu LTS and Debian will vary of course, because of different priorities. And there’s no requirement that the same base version be used for every major component – there may well be differences allowing for different approaches. But where we do have it, we’ll be able to collaborate much more effectively on bug fixes for key upstream pieces. If a lot of distributions pick the same base upstream version, it greatly increases the value of extended shared maintenance and point releases of that upstream.

Why every two years?

Two years is a compromise between those who want 1 year releases for better support of cutting-edge hardware and those who want 7 year releases so their software stack doesn’t change before their job description does ;-).

A whole-year multiple has several advantages. It means we can schedule the processes that are needed for collaboration at the same time of year whenever we need them – unlike 1.5 or 2.5 year cycles. Three years was felt to be too long for hardware support. Two years is perceived to be the Goldilocks Cadence – just right.

What are the criteria for choosing a common base version?

In both the Ubuntu and Debian cases, we’ll be making a release that we support for many years, so we’ll be looking for versions of key upstreams that will pass the test of time. Sometimes that means they can’t be too old, or they’ll be completely obsolete or unmaintainable over the life of the release. And sometimes it means they can’t be too young. In general, it would be better to be reviewing code that is already out there. But there are also lots of upstreams that do a credible job of release management, so we could commit to shipping a version that is not yet released, based on the reputation of the community it’s coming from.

What if there’s no agreement on a particular kernel, or X or component-foo?

We will almost certainly diverge on some components, and that’s quite OK. This is about finding opportunities to do a better job for upstreams and for users, not about forcing any distro to make a particular choice. If anyone feels it’s more important to them to use one version rather than another, they’ll do that.

Open invitations

It’s really helpful to have upstreams and other distributions participate in this process.

If you’re an upstream, kick off a thread in your mailing list or forums about this. Upstreams don’t need to do anything different if they don’t want to, we’ll still just make the best choices we can. But embracing a two year cadence is the best way you have to be sure which versions of your software are going to be in millions of hands in the future – it’s a great opportunity to influence how your users will experience your work.

Of course, we’d also like to have more distributions at the table. There’s no binding commitment needed – collaboration is opportunistic. But without participating in the conversation one can’t spot those opportunities! If you represent a distribution and are interested, then please feel free to contact me, or Matt Zimmerman, or anyone on the Debian release management team about it.

I think this is a big win for the free software community. Many upstreams have said “we’d really like to help deliver a great stable release, but which distro should we arrange that around?” Upstreams should not have to play favourites with distributions, and it should be no more work to support 10 distributions than to support one. If we can grow the number of distributions that embrace this cadence, the question becomes moot – upstreams can plan around that cycle knowing that many distributions will deliver their work straight to users.

Six-month cycles are great. Now let’s talk about meta-cycles: broader release cycles for major work. I’m very interested in a cross-community conversation about this, so will sketch out some ideas and then encourage people from as many different free software communities as possible to comment here. I’ll summarise those comments in a follow-up post, which will no doubt be a lot wiser and more insightful than this one :-)

Background: building on the best practice of cadence

The practice of regular releases, and now time-based releases, is becoming widespread within the free software community. From the kernel, to GNOME and KDE, to X, and distributions like Ubuntu and Fedora, the idea of a regular, predictable cycle is now better understood and widely embraced. Many smarter folks than me have articulated the benefits of such a cadence: energising the whole community, REALLY releasing early and often, shaking out good and bad code, rapid course correction.

There has been some experimentation with different cycles. I’m involved in projects that have 1 month, 3 month and 6 month cycles, for different reasons. They all work well.

…but addressing the needs of the longer term

But there are also weaknesses to the six-month cycle:

  • It’s hard to communicate to your users that you have made some definitive, significant change.
  • It’s hard to know what to support for how long; you obviously can’t support every release indefinitely.

I think there is growing insight into this, on both sides of the original “cadence” debate.

A tale of two philosophies, perhaps with a unifying theory

A few years back, at AKademy in Glasgow, I was in the middle of a great discussion about six month cycles. I was a passionate advocate of the six month cycle, and interested in the arguments against it. The strongest one was the challenge of making “big bold moves”.

“You just can’t do some things in six months” was the common refrain. “You need to be able to take a longer view, and you need a plan for the big change.” There was a lot of criticism of GNOME for having “stagnated” due to the inability to make tough choices inside a six month cycle (and with perpetual backward compatibility guarantees). Such discussions often become ideological, with folks on one side saying “you can evolve anything incrementally” and others saying “you need to make a clean break”.

At the time of course, KDE was gearing up for KDE 4.0, a significant and bold move indeed. And GNOME was quite happily making its regular releases. When the KDE release arrived, it was beautiful, but it had real issues. Somewhat predictably, the regular-release crowd said “see, we told you, BIG releases don’t work”. But since then KDE has knuckled down with regular, well managed, incremental improvements, and KDE is looking fantastic. Suddenly, the big bold move comes into focus, and the benefits become clear. Well done KDE :-)

On the other side of the fence, GNOME is now more aware of the limitations of indefinite regular releases. I’m very excited by the zest and spirit with which the “user experience MATTERS” campaign is being taken up in Gnome, there’s a real desire to deliver breakthrough changes. This kicked off at the excellent Gnome usability summit last year, which I enjoyed and which quite a few of the Canonical usability and design folks participated in, and the fruits of that are shaping up in things like the new Activities shell.

But it’s become clear that a change like this represents a definitive break with the past, and might take more than a single six month release to achieve. And most important of all, that this is an opportunity to make other, significant, distinctive changes. A break with the past. A big bold move. And so there’s been a series of conversations about how to “do a 3.0”, in effect, how to break with the tradition of incremental change, in order to make this vision possible.

It strikes me that both projects are converging on a common set of ideas:

  • Rapid, predictable releases are super for keeping energy high and code evolving cleanly and efficiently, they keep people out of a deathmarch scenario, they tighten things up and they allow for a shakeout of good and bad ideas in a coordinated, managed fashion.
  • Big releases are energising too. They are motivational, they make people feel like it’s possible to change anything, they release a lot of creative energy and generate a lot of healthy discussion. But they can be a bit messy, things can break on the way, and that’s a healthy thing.

Anecdotally, there are other interesting stories that feed into this.

Recently, the Python community decided that Python 3.0 will be a shorter cycle than the usual Python release. The 3.0 release is serving to shake out the ideas and code for 3.x, but it won’t be heavily adopted itself so it doesn’t really make sense to put a lot of effort into maintaining it – get it out there, have a short cycle, and then invest in quality for the next cycle because 3.x will be much more heavily used than 3.0. This reminds me a lot of KDE 4.0.

So, I’m interested in gathering opinions, challenges, ideas, commitments, hypotheses etc about the idea of meta-cycles and how we could organise ourselves to make the most of this. I suspect that we can define a best practice, which includes regular releases for continuous improvement on a predictable schedule, and ALSO defines a good practice for how MAJOR releases fit into that cadence, in a well structured and manageable fashion. I think we can draw on the experiences in both GNOME and KDE, and other projects, to shape that thinking.

This is important for distributions, too

The major distributions tend to have big releases, as well as more frequent releases. RHEL has Fedora, Ubuntu makes LTS releases, Debian takes cadence to its logical continuous integration extreme with Sid and Testing :-).

When we did Ubuntu 6.06 LTS we said we’d do another LTS in “2 to 3 years”. When we did 8.04 LTS we said that the benefits of predictability for LTS’s are such that it would be good to say in advance when the next LTS would be. I said I would like that to be 10.04 LTS, a major cycle of 2 years, unless the opportunity came up to coordinate major releases with one or two other major distributions – Debian, Suse or Red Hat.

I’ve spoken with folks at Novell, and it doesn’t look like there’s an opportunity to coordinate for the moment. In conversations with Steve McIntyre, the current Debian Project Leader, we’ve identified an interesting opportunity to collaborate. Debian is aiming for an 18 month cycle, which would put their next release around October 2010 – the same time as the Ubuntu 10.10 release. Potentially, then, we could defer the Ubuntu LTS till 10.10, coordinating and collaborating with the Debian project for a release with very similar choices of core infrastructure. That would make sharing patches a lot easier, a benefit both ways. Since there will be a lot of folks from Ubuntu at Debconf, and hopefully a number of Debian developers at UDS in Barcelona in May, we will have good opportunities to examine the idea in detail. If there is goodwill, excitement and broad commitment to such an idea from Debian, I would be willing to promote the idea of deferring the LTS from 10.04 to 10.10.

Questions and options

So, what would the “best practices” of a meta-cycle be? What sorts of things should be considered in planning for these meta-cycles? What problems do they cause, and how are those best addressed? How do short term (3 month, 6 month) cycles fit into a broader meta-cycle? Asking these questions across multiple communities will help test the ideas and generate better ones.

What’s a good name for such a meta-cycle? Meta-cycle seems… very meta.

Is it true that the “first release of the major cycle” (KDE 4.0, Python 3.0) is best done as a short cycle that does not get long term attention? Are there counter-examples, or better examples, of this?

Which release in the major cycle is best for long term support? Is it the last of the releases before major new changes begin (Python 2.6? GNOME 2.28?) or is it the result of a couple of quick iterations on the X.0 release (KDE 4.2? GNOME 3.2?) Does it matter? I do believe that it’s worthwhile for upstreams to support an occasional release for a longer time than usual, because that’s what large organisations want.

Is a whole-year cycle beneficial? For example, is 2.5 years a good idea? Personally, I think not. Conferences and holidays tend to happen at the same time of the year every year, and it’s much, much easier to think in terms of whole numbers of years. But in informal conversations about this, some people have said 18 months, others have said 30 months (2.5 years) might suit them. I think they’re craaaazy, what do you think?

If it’s 2 years or 3 years, which is better for you? Hardware guys tend to say “2 years!” to get the benefit of new hardware, sooner. Software guys say “3 years!” so that they have less change to deal with. Personally, I am in the 2 years camp, but I think it’s more important to be aligned with the pulse of the community, and if GNOME / KDE / Kernel wanted 3 years, I’d be happy to go with it.

How do the meta-cycles of different projects come together? Does it make sense to have low-level, hardware-related things on a different cycle to high-level, user visible things? Or does it make more sense to have a rhythm of life that’s shared from the top to the bottom of the stack?

Would it make more sense to stagger long term releases based on how they depend on one another, like GCC then X then OpenOffice? Or would it make more sense to have them all follow the same meta-cycle, so that we get big breakage across the stack at times, and big stability across the stack at others?

Are any projects out there already doing this?

Is there any established theory or practice for this?

A cross-community conversation

If you’ve read this far, thank you! Please do comment, and if you are interested then please do take up these questions in the communities that you care about, and bring the results of those discussions back here as comments. I’m pretty sure that we can take the art of software to a whole new level if we take advantage of the fact that we are NOT proprietary, and this is one of the key ways we can do it.

New notification work lands in Jaunty

Saturday, February 21st, 2009

Thanks to the concerted efforts of Martin Pitt, Sebastien Bacher and several others, notify-osd and several related components landed in Jaunty last week. Thanks very much to all involved! And thanks to David Barth, Mirco Muller and Ted Gould, who led the development of notify-osd and the related messaging indicator.

Notify-OSD handles both application notifications and keyboard special keys like brightness and volume

MPT has posted an overview of the conceptual framework for “attention management” at https://wiki.ubuntu.com/NotificationDesignGuidelines, which puts ephemeral notification into context as just one of several distinct tools that applications can use when they don’t have the focus but need to make users aware of something. That’s a draft, and when it’s at 1.0 we’ll move it to a new site which will host design patterns on Canonical.com.

There is also a detailed specification for our implementation of the notification display agent, notify-osd, which can be found at https://wiki.ubuntu.com/NotifyOSD and which defines not only the expected behaviour of notify-osd but also all of the consequential updates we need to make across the packages in main and universe to ensure that those applications use notification and other techniques consistently.

There are at least 35 apps that need tweaking, and there may well be others! If you find an app that isn’t using notifications elegantly, please add it to the notification design guidelines page, and if you file a bug on the package, please tag it “notifications” so we can track these issues in a single consistent way.

Together with notify-osd, we’ve uploaded a new panel indicator which provides a way to respond to messaging events, such as email and IRC pings. If someone IMs you, you should see an ephemeral notification, and the messaging indicator will give you a way to respond immediately. The same goes for email. Pidgin and Evolution are the initial focus of the work; over time we’ll broaden that to the full complement of IM and email apps in the archive – patches welcome :-)

There will be rough patches. Apps which don’t comply with the FreeDesktop.org spec, and send actions on notifications even when the display agent says it does not support them, will have their notifications translated into alerts. The primary focus of the effort now is to find and fix those apps. Also, we know there are several cases where a persistent response framework is required. The messaging indicator covers most of them, and we will have additional persistent tools in place for Karmic in October.
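The compliant pattern is straightforward: before attaching actions, an application should ask the display agent what it supports via GetCapabilities and check for the “actions” capability, which notify-osd deliberately omits. Here is a minimal sketch of that decision logic; the function name and the dictionary shape are illustrative, not part of any real API, though “actions” is the capability string defined by the FreeDesktop Notifications spec.

```python
def build_notification(summary, body, actions, server_caps):
    """Attach actions only when the display agent advertises the
    'actions' capability (as reported by the spec's GetCapabilities
    call). notify-osd omits 'actions', so a compliant app falls back
    to a plain ephemeral bubble instead of forcing an alert box."""
    note = {"summary": summary, "body": body, "actions": []}
    if "actions" in server_caps:
        note["actions"] = list(actions)
    return note

# Against notify-osd, which advertises capabilities like 'body':
plain = build_notification("New mail", "1 unread", ["reply"],
                           ["body", "icon-static"])
# plain["actions"] stays empty, so the bubble remains ephemeral.
```

An app that skips this check and sends actions unconditionally is exactly the kind we need to find and fix.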