Archive for the 'free software' Category

ACPI, firmware and your security

Monday, March 17th, 2014

ACPI comes from an era when the operating system was proprietary and couldn’t be changed by the hardware manufacturer.

We don’t live in that era any more.

However, we DO live in an era where any firmware code running on your phone, tablet, PC, TV, wifi router, washing machine, server, or the server behind the cloud your SaaS app runs on, is a threat vector against you.

If you read the catalogue of spy tools and digital weaponry provided to us by Edward Snowden, you’ll see that firmware on your device is the NSA’s best friend. Your biggest mistake might be to assume that the NSA is the only institution abusing this position of trust – in fact, it’s reasonable to assume that all firmware is a cesspool of insecurity courtesy of incompetence of the worst degree from manufacturers, and competence of the highest degree from a very wide range of such agencies.

In ye olden days, a manufacturer would ship Windows, which could not be changed, and they wanted to innovate on the motherboard, so they used firmware to present a standard interface for things like power management to a platform that could not be modified to accommodate their innovation.

Today, that same manufacturer can innovate on the hardware and publish a patch for Linux to express that innovation – and Linux is almost certainly the platform that matters. If Windows enters this market then the Windows driver model can evolve to give manufacturers this same ability to innovate in the Windows world, where proprietary unverifiable blobs are the norm.

Arguing for ACPI on your next-generation device is arguing for a trojan horse of monumental proportions to be installed in your living room and in your data centre. I’ve been to Troy, there is not much left.

We’ve spent a good deal of time working towards a world where you can inspect the code running on any device you own. In Ubuntu we work hard to make sure that any issues in that code can be fixed and delivered right away to millions of users. Bruce Schneier wisely calls security a process, not a product. But the processes for finding and fixing problems in firmware are non-existent and not improving.

I would very much like to be part of FIXING the security problem we engineers have created in our rush to ship products in the olden days. I’m totally committed to that.

So from my perspective:

  • The upstream kernel is the place to deliver the software portion of the innovation you’re selling. We have great processes now to deliver that innovation to users, and the same processes help us improve security and efficiency too.
  • Declarative firmware that describes hardware linkages and dependencies but doesn’t include executable code is the best chance we have of real bottom-up security. The Linux device tree (see the fragment after this list) is a very good starting point. We have work to do to improve it, and we need to recognise the importance of being able to fix declarations over the life of a product, but we must not introduce blobs in order to shortcut that process.
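
To make “declarative” concrete, here is a minimal, purely illustrative device tree fragment – the vendor strings, addresses and node names are invented for this sketch – describing an I2C controller with a temperature sensor attached to it. It declares what the hardware is and how it is wired, with not a byte of executable code in sight:

    /* Illustrative device tree source (DTS); names and addresses are made up */
    / {
        soc {
            i2c0: i2c@40005400 {
                compatible = "acme,example-i2c";   /* which driver binds here */
                reg = <0x40005400 0x400>;          /* register base and size */
                #address-cells = <1>;
                #size-cells = <0>;

                temperature-sensor@48 {
                    compatible = "acme,example-temp";
                    reg = <0x48>;                  /* I2C address of the sensor */
                };
            };
        };
    };

The kernel’s drivers interpret those declarations; if a declaration is wrong, it can be corrected and reviewed in the open, which is exactly the property you lose once a blob of executable firmware sits in the middle.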

Let’s do this right. Each generation gets its turn to define the platforms it wants to pass on – let’s pass on something we can be proud of.

Our mission in Ubuntu is to give the world’s people a free platform they can trust. I suspect a lot of the Linux community is motivated by the same goal regardless of their distro. That also means finding ways to ensure that those trustworthy platforms can’t be compromised elsewhere. We can help vendors innovate AND ensure that users have a fighting chance of privacy and security in this brave new world. But we can’t do that if we cling to the tools of the past. Don’t cave in to expediency. Design a better future; it really can be much healthier than the present if we care and act accordingly.


The very best edge of all

Saturday, March 8th, 2014

Check out “loving the bottom edge” for the most important bit of design guidance for your Ubuntu mobile app.

This work has been a LOT of fun. It started when we were trying to find the zen of each edge of the screen, a long time back. We quickly figured out that the bottom edge is by far the most fun, by far the most accessible. You can always get to it easily, and it feels great. I suspect that’s why Apple has used the bottom edge for quick control access on iOS.


We started in the same place as Apple, thinking that the bottom edge was so nice we wanted it for ourselves, in the system. But as we discussed it, we started to think that the app developer was the one who deserved to do something really distinctive in their app with it instead. It’s always tempting to grab the tastiest bit for oneself, but the mark of civility is restraint in the use of power and this felt like an appropriate time to exercise that restraint.

Importantly, the bottom edge works equally well when we split the screen into left and right stages. That made it a really important edge for us: it could be used just as well on the Ubuntu phone, with a single app visible on the screen, and on the Ubuntu tablet, where we have the side stage as a uniquely cool way to put phone apps on tablet screens alongside a bigger tablet app.

The net result is that you, the developer, and you, the user, have complete creative freedom with that bottom edge. There are of course ways to judge how well you’ve exercised that freedom, and the design guidance tries to leave you all the freedom in the world while still providing a framework for evaluating how good the result will feel to your users. If you want, there are some archetypes and patterns to choose from, but what I’d really like to see is NEW patterns and archetypes coming from diverse designs in the app developer community.

Here’s the key thing – that bottom edge is the one thing you are guaranteed to want to do more innovatively on Ubuntu than on any other mobile platform. So if you are creating a portable app, targeting a few different environments, that’s the thing to take extra time over for your Ubuntu version. That’s the place to brainstorm, try out ideas on your friends, make a few mockups. It’s the place you really express the single most important aspects of your application, because it’s the fastest, grooviest gesture in the book, and it’s all yours on Ubuntu.

Have fun!

Raring community skunkworks

Thursday, October 18th, 2012

Mapping out the road to 13.04, there are a few items with high “tada!” value that would be great candidates for folk who want to work on something that will get attention when unveiled. While we won’t talk about them until we think they are ready to celebrate, we’re happy to engage with contributing community members that have established credibility (membership, or close to it) in Ubuntu, who want to be part of the action.

This would provide early community input and review, without spoiling the surprise when we think the piece is ready. It would allow community members to work on something that will be widely covered at release (at least, on OMG ;-)).

The skunkworks approach has its detractors. We’ve tried it both ways, and in the end, figured out that critics will be critics whether you discuss an idea with them in advance or not. Working on something in a way that lets you refine it till it feels ready to go has advantages: you can take time to craft something, you can be judged when you’re ready, you get a lot more punch when you tell your story, and you get your name in lights (though not every headline is one you necessarily want ;)).

So, we thought we would extend the invitation to people who trust us and in whom we have reason to trust, to work together on some sexy 13.04 surprises. The projects range from webby (JavaScript, CSS, HTML5) to artistic (do you obsess about kerning and banding?) to scientific (are you a framerate addict?) to glitzy (pixel shader sherpas wanted) to privacy-enhancing (how is your crypto?) to analytical (big daddy, big brother, pick your pejorative). But they all make the Ubuntu experience better for millions of users, they are all groundbreaking in free software, and they will all result in code under the GPL (or an existing upstream license if they are extensions to existing projects). No NDAs needed, but we will need to trust you not to talk in your sleep ;). We’ll also need to trust you to write code that is thorough and tested, stuff you’ll be as proud of as we are of the rest of the Ubuntu experience. Of course.

There’s also plenty going on that doesn’t warrant the magician’s reveal. But if you are game for a bit of the spotlight, bring some teflon and ping Michael Hall at mhall119 on Freenode.

Microsoft has built an impressive new entrant to the Infrastructure-as-a-Service market, and Ubuntu is there for customers who want to run workloads on Azure that are best suited to Linux. Windows Azure was built for the enterprise market, an audience which is increasingly comfortable with Ubuntu as a workhorse for scale-out workloads; in short, it’s a good fit for both of us, and it’s been interesting to do the work to bring Ubuntu to the platform.

Given that it’s normal for us to spin up 2,000-node Hadoop clusters with Juju, it will be very valuable to have a new enterprise-oriented cloud with which to evaluate performance, latency, reliability, scalability and many other key metrics for production deployment scenarios.
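
For a sense of what that looks like in practice, here is a sketch of the Juju workflow – the charm and service names are illustrative, and the exact charm interfaces vary:

    # Sketch only: charm and service names are illustrative
    juju bootstrap                          # stand up the environment in the cloud
    juju deploy hadoop hadoop-master        # deploy the hadoop charm as the master
    juju deploy hadoop hadoop-slave
    juju add-relation hadoop-master hadoop-slave
    juju add-unit -n 1998 hadoop-slave      # scale out towards ~2,000 nodes

The point is that the same handful of commands works whichever cloud sits underneath, which is what makes a new provider like Azure straightforward to evaluate.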

As IaaS grows in recognition as a standard part of the enterprise toolkit, it will be important to have a wide range of addressable infrastructures with diverse strengths. In the case of Windows Azure, there is clearly a deep connection between Windows-based IT and the new IaaS. But I think Microsoft has set their sights on a bigger story, which is high-quality enterprise-oriented infrastructure that is generally useful. That’s why Ubuntu is important to them, and why it was worthwhile for us to work together despite our differences. Just as we need to ensure that customers can run Ubuntu and Windows together inside their data centre and on the LAN, we want to ensure that cloud workloads play nicely.

The team leading Azure has a sophisticated understanding of Ubuntu and Linux in general. They are taking a pragmatic approach that will raise eyebrows around the Redmond campus, but is exactly what customers want to see. We have taken a similar view. I know there will be members of the free software community that will leap at the chance to berate Microsoft for its very existence, but it’s not very Ubuntu to do so: let’s argue our perspective, work towards our goals, be open to those who are open to us, and build great stuff. There is nothing proprietary in Ubuntu-for-Azure, and no about-turn from us on long-held values. This is us making sure our audience, and especially the enterprise audience, can benefit from the work our community and Canonical do no matter where they want to do it.

Windows Azure IaaS is in beta. If you are using the cloud today, or interested in it, I highly recommend you try it out. There’s no better way to make yourself heard over there.

Unsung heroes

Monday, March 26th, 2012

The new privacy features in Ubuntu 12.04 are a lovely example of collaboration and contribution. I’d like to thank Manish Sinha and Stefano Candori, who contributed significantly to that effort and hadn’t received a shout-out despite being central to its success. The body of contributors to Ubuntu and Unity continues to grow, and I know the team finds it immensely rewarding to help folk land patches or changes that bring the experience closer to the designed goal. Manish, Stefano, thank you!

Ubuntu vs RHEL in enterprise computing

Wednesday, March 14th, 2012

A remarkable thing happened this year: companies started adopting Ubuntu over RHEL for large-scale enterprise workloads, in droves:

W3Techs historical analysis of web server operating systems

The trend is even starker if you look at what we know of new-style services, like clouds and big data, but since most of that happens behind the firewall it’s all anecdata, while web services are a public affair.

The key driver of this has been that we added quality as a top-level goal across the teams that build Ubuntu – both Canonical’s and the community’s. We have also retained our focus on keeping up-to-date tools available on Ubuntu for developers, and on delivering a great experience in the cloud, where computing is headed.

The headlines for Ubuntu have all been about the desktop and consumer-focused design efforts, with the introduction of Unity and the expansion of our goals to span the phone, the tablet and the TV as well as the PC. But underpinning those goals has been a raising of the quality game: OEMs and consumers demand a very high level of quality, and so we now have large-scale automated testing, improved upload processes, faster responses to issues that crop up inevitably during the development cycle, a broader base of users and contributors in the development release, and better engagements with the vendors who pre-install Ubuntu. So 12.04 LTS is a coming of age release for Ubuntu in the data centre as much as it’s the first LTS to sport the interface which was designed to span the full range of personal computing needs.

We’re also seeing the wider community respond to the goal of cadence. OpenStack’s Essex release is lined up to be a perfect fit for 12.04 LTS. That is not a coincidence, it’s a value to which both projects are committed. Upstream projects that care about their users and about being adopted quickly want an effective conduit of their goodness straight to those users. By adopting a 6-month / 2-year cadence of interim and LTS releases, aligned with Ubuntu’s release cycle, OpenStack ensures that a very large audience of system administrators, developers and enterprise decision makers can plan for their OpenStack deployment, knowing they will have a robust and very widely deployed LTS platform together with a very widely supported release of OpenStack. Every dependency that Essex needs is provided in 12.04 LTS, exactly as the major public clouds based on OpenStack use it. By adopting a common message on releases, we make both OpenStack and Ubuntu stronger, and do so in a way which is entirely transparent and accessible to other distributions.

Quality. Design. Cadence. You can count on them in Ubuntu, and OpenStack.

… for human beings

Monday, March 5th, 2012

Our mission with Ubuntu is to deliver, in the cleanest, most economical and most reliable form, all the goodness that engineers love about free software to the widest possible audience (including engineers :)). We’ve known for a long time that free software is beautiful on the inside – efficient, accurate, flexible, modifiable. For the past three years, we’ve been leading the push to make free software beautiful on the outside too – easy to use, visually pleasing and exciting. That started with the Ubuntu Netbook Remix, and is coming to fruition in 12.04 LTS, now in beta.

For the first time with Ubuntu 12.04 LTS, real desktop user experience innovation is available on a full production-ready enterprise-certified free software platform, free of charge, well before it shows up in Windows or MacOS. It’s not ‘job done’ by any means, but it’s a milestone. Achieving that milestone has tested the courage and commitment of the Ubuntu community – we had to move from being followers and integrators, to being designers and shapers of the platform, together with upstreams who are excited to be part of that shift and passionate about bringing goodness to a wide audience. It’s right for us to design experiences and help upstreams get those experiences to be amazing, because we are closest to the user; we are the last mile, the last to touch the code, and the first to get the bug report or feedback from most users.

Thank you, to those who stood by Ubuntu, Canonical and me as we set out on this adventure. This was a big change, and in the face of change, many wilt, many panic, and some simply find that their interests lie elsewhere. That’s OK, but it brings home to me the wonderful fellowship that we have amongst those who share our values and interests – their affiliation, advocacy and support is based on something much deeper than a fad or an individualistic need, it’s based on a desire to see all of this intellectual wikipedia-for-code value unleashed to support humanity at large, from developers to data centre devops to web designers to golden-years-ganderers, serving equally the poorest and the bankers who refuse to serve them, because that’s what free software and open content and open access and level playing fields are all about.

To those of you who rolled up your sleeves and filed bugs and wrote the documentation and made the posters or the cupcakes, thank you.

You’ll be as happy as I was to read this comment on unity-design:

I’m very serious about loving the recent changes. I think I’m a fair representative of the elderly community… someone who doesn’t particularly care to learn new things, but just wants things to make sense. I think we’re there! – Lance

You’ll be as delighted as I am with the coverage of Ubuntu for Android at MWC in Barcelona last week:

“one of the more eye-catching concepts being showcased” - V3
“sleeker, faster, potentially more disruptive” - IT Pro Portal
“you can also use all the features of Android” - The Inquirer
“I can easily see the time when I will be carrying only my smartphone” - UnwiredView
“everything it’s been claimed to be” - Engadget
“Efficiency, for the win!” - TechCrunch
“phones that become traditional desktops have the potential to benefit from the extra processing power” - GigaOM
“This, ladies and gentlemen, is the future of computing” - IntoMobile

Free software distils the smarts of those of us who care about computing, much like Wikipedia does. Today’s free software draws on the knowledge and expertise of hundreds of thousands of individuals, all over the world, all of whom helped to make this possible, just like Wikipedia. It’s only right that the benefits of that shared wisdom should accrue to everyone without charge, which is why contributing to Ubuntu is the best way to add leverage to the contributions made everywhere else, to ensure they have the biggest possible impact. It wouldn’t be right to have to pay to have a copy of Wikipedia on your desk at the office, and the same is true of the free software platform. The bits should be free, and the excellent commercial services optional. That’s what we do at Canonical and in the Ubuntu community, and that’s why we do it.

Engineers are human beings too!

We set out to refine the experience for people who use the desktop professionally, and at the same time, make it easier for the first-time user. That’s a very hard challenge. We’re not making Bob, we’re making a beautiful, easy to use LCARS ;-). We measured the state of the art in 2008 and it stank on both fronts. When we measure Ubuntu today, based on how long it takes heavy users to do things, and a first-timer to get (a different set of) things done, 12.04 LTS blows 10.04 LTS right out of the water and compares favourably with both MacOS and Windows 7. Unity today is better for both hard-core developers and first-time users than 10.04 LTS was. Hugely better.

For software developers:

  • A richer set of keyboard bindings for rapid launching, switching and window management
  • Pervasive search results in faster launching for occasional apps
  • Far less chrome in the shell than any other desktop; it gets out of your way
  • Much more subtle heuristics to tell whether you want the launcher to reveal, and to hint it’s about to
  • Integrated search presents a faster path to find any given piece of content
  • Magic window borders and the resizing scrollbar make for easier window sizing despite razor-thin visual borders
  • Full screen apps can include just the window title and indicators – a full screen terminal with all the shell benefits

… and many more. In 12.04 LTS, multi-monitor use cases got a first round of treatment; we will continue to refine and improve that every six months now that the core is stable and effective. But the general commentary from professionals, and software developers in particular, is “wow”. In this last round we have focused testing on more advanced users and use cases, with user journeys that include many terminal windows, and there is a measurable step up in the effectiveness of Unity in those cases. There are still rough edges to be sure, even in this 12.04 release (we are not going to be able to land locally-integrated menus in time, given the freeze dates and the need to focus on bug fixes), but we will SRU key items and of course continue to polish it in 12.10 onwards. We are all developers, and we all use it all the time, so this is in our interests too.

For the adventurous, who really want to be on the cutting edge, the (totally optional) HUD is our first step to a totally new kind of UI for complex apps. We’re deconstructing the traditional UI, expressing goodness from the inside out. It’s going to be a rich vein of innovation and exploration, and the main beneficiaries will be those who use computers to create amazing things, whether it’s the kernel, or movies. Yes, we are moving beyond the desktop, but we are also innovating to make the desktop itself, better.

We care about efficiency, performance, quality, reliability. So do developers and engineers. We care about beauty and ease of use – turns out most engineers and developers care about that too. I’ve had lots of hard-core engineers tell me that they “love the challenges the design team sets”, because it’s hard to make easy software, and harder to make it pixel-perfect. And lots have switched back to Ubuntu from MacOS, because devops on Ubuntu… rocks.

The hard core Linux engineers can use… anything, really. Linus is probably equally comfortable with Linux-from-scratch as with Ubuntu. But his daughter Daniela needs something that works for human beings of all shapes, sizes, colours and interests. She’s in our audience. I hope she’d love Ubuntu if she tried it. She could certainly install it for herself while Dad isn’t watching ;) Linus and other kernel hackers are our audience too, of course, but they can help themselves if things get stuck. We have to shoulder the responsibility for the other 99%. That’s a really, really hard challenge – for engineers and artists alike. But we’ve made huge progress. And doing so brings much deeper meaning to the contributions of all the brilliant people that make free software, everywhere.

Again, thanks to the Ubuntu community, 500 amazing people at Canonical, the contributors to all of the free software that makes it possible, and our users.

The desktop remains central to our everyday work and play, despite all the excitement around tablets, TVs and phones. So it’s exciting for us to innovate in the desktop too, especially when we find ways to enhance the experience of both heavy “power” users and casual users at the same time. The desktop will be with us for a long time, and for those of us who spend hours every day using a wide diversity of applications, here is some very good news: 12.04 LTS will include the first step in a major new approach to application interfaces.

This work grows out of observations of new users and of established, sophisticated users making extensive use of the broader set of capabilities in their applications. We noticed that both groups spent a lot of time, relatively speaking, navigating the menus of their applications, either to learn about the capabilities of the app, or to take a specific action. We were also conscious of the broader theme in Unity design of leading from user intent. And that set us on a course which led to today’s first public milestone on what we expect will be a long, fruitful and exciting journey.

The menu has been a central part of the GUI since Xerox PARC invented ’em in the ’70s. It’s the M in WIMP and has been there, essentially unchanged, for 30 years.

The original Macintosh desktop, circa 1984, courtesy of Wikipedia

We can do much better!

Say hello to the Head-Up Display, or HUD, which will ultimately replace menus in Unity applications. Here’s what we hope you’ll see in 12.04 when you invoke the HUD from any standard Ubuntu app that supports the global menu:

Snapshot of the HUD in Ubuntu 12.04

The intenterface – it maps your intent to the interface

This is the HUD. It’s a way for you to express your intent and have the application respond appropriately. We think of it as “beyond interface”: it’s the “intenterface”. This concept of intent-driven interface has been a primary theme of our work in the Unity shell, with dash search as a first-class experience pioneered in Unity. Now we are bringing the same vision to the application, in a way which is completely compatible with existing applications and menus.

The HUD concept has been the driver for all the work we’ve done in unifying menu systems across Gtk, Qt and other toolkit apps in the past two years. So far, that’s shown up as the global menu. In 12.04, it also gives us the first cut of the HUD.

Menus serve two purposes. They act as a standard way to invoke commands which are too infrequently used to warrant a dedicated piece of UI real-estate, like a toolbar button, and they serve as a map of the app’s functionality, almost like a table of contents that one can scan to get a feel for ‘what the app does’. It’s command invocation that we think can be improved upon, and that’s where we are focusing our design exploration.

As a means of invoking commands, menus have some advantages. They are always in the same place (top of the window or screen). They are organised in a way that’s quite easy to describe over the phone, or in a textbook (“click the Edit->Preferences menu”), and they are pretty fast to read, since they are generally arranged in tight vertical columns. They also have some disadvantages: when they get nested, navigating the tree can become fragile. They require you to read a lot when you probably already know what you want. They are more difficult to use from the keyboard than they should be, since they generally require you to remember something special (hotkeys) or use a very limited subset of the keyboard (arrow navigation). They force developers to make often arbitrary choices about the menu tree (“should Preferences be in Edit or in Tools or in Options?”), and then they force users to make an equally arbitrary effort to memorise and navigate that tree.

The HUD solves many of these issues, by connecting users directly to what they want. Check out the video, based on a current prototype. It’s a “vocabulary UI”, or VUI, and closer to the way users think. “I told the application to…” is common user paraphrasing for “I clicked the menu to…”. The tree is no longer important, what’s important is the efficiency of the match between what the user says, and the commands we offer up for invocation.

In 12.04 LTS, the HUD is a smart look-ahead search through the app and system (indicator) menus. The image is showing Inkscape, but of course it works everywhere the global menu works. No app modifications are needed to get this level of experience. And you don’t have to adopt the HUD immediately, it’s there if you want it, supplementing the existing menu mechanism.

It’s smart, because it can do things like fuzzy matching, and it can learn what you usually do so it can prioritise the things you use often. It covers the focused app (because that’s where you probably want to act) as well as system functionality; you can change IM state, or go offline in Skype, all through the HUD, without changing focus, because those apps all talk to the indicator system. When you’ve been using it for a little while it seems like it’s reading your mind, in a good way.
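
To make that concrete, here is a toy sketch – emphatically not the real HUD code, which lives in Unity and its indicator stack – of the general shape of the idea: flatten the menu tree, fuzzy-score every entry against what the user typed, and boost the entries they invoke most often:

    # Toy sketch of HUD-style look-ahead matching; not the actual Unity implementation
    import difflib
    from collections import Counter

    usage = Counter()  # how often each command has been invoked

    def flatten(menu, path=()):
        # Yield (label, full_path) for every leaf in a nested menu dict
        for label, child in menu.items():
            if isinstance(child, dict):
                yield from flatten(child, path + (label,))
            else:
                yield label, path + (label,)

    def hud_match(query, menu, limit=5):
        # Rank entries by fuzzy similarity to the query, weighted by usage
        scored = []
        for label, path in flatten(menu):
            similarity = difflib.SequenceMatcher(
                None, query.lower(), label.lower()).ratio()
            scored.append((similarity * (1 + usage[label]), " > ".join(path)))
        scored.sort(reverse=True)
        return [entry for _, entry in scored[:limit]]

    menu = {"File": {"Save As...": None, "Print...": None},
            "Edit": {"Preferences": None, "Paste In Place": None}}
    usage["Preferences"] += 3           # learned behaviour: a favourite command
    print(hud_match("prefs", menu))     # "Edit > Preferences" ranks first

The real thing obviously goes much further – it searches indicator menus as well, learns across sessions, and matches descriptive text rather than just labels – but the flow is the same: intent in, ranked commands out.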

We’ll resurrect the (boring) old ways of displaying the menu in 12.04, in the app and in the panel. In the past few releases of Ubuntu, we’ve actively diminished the visual presence of menus in anticipation of this landing. That proved controversial. In our defence, in user testing, every user finds the menu in the panel, every time, and it’s obviously a cleaner presentation of the interface. But hiding the menu before we had the replacement was overly aggressive. If the HUD lands in 12.04 LTS, we hope you’ll find yourself using the menu less and less, and be glad to have it hidden when you are not using it. You’ll definitely have that option, alongside more traditional menu styles.

Voice is the natural next step

Searching is fast and familiar, especially once we integrate voice recognition, gesture and touch. We want to make it easy to talk to any application, and for any application to respond to your voice. The full integration of voice into applications will take some time. We can start by mapping voice onto the existing menu structures of your apps. And it will only get better from there.

But even without voice input, the HUD is faster than mousing through a menu, and easier to use than hotkeys since you just have to know what you want, not remember a specific key combination. We can search through everything we know about the menu, including descriptive help text, so pretty soon you will be able to find a menu entry using only vaguely related text (imagine finding an entry called Preferences when you search for “settings”).

There is lots to discover, refine and implement. I have a feeling this will be a lot of fun in the next two years :-)

Even better for the power user

The results so far are rather interesting: power users say things like “every GUI app now feels as powerful as VIM”. EMACS users just grunt and… nevermind ;-). Another comment was “it works so well that the rare occasions when it can’t read my mind are annoying!”. We’re doing a lot of user testing on heavy multitaskers, developers and all-day-at-the-workstation personas for Unity in 12.04, polishing off loose ends in the experience that frustrated some in this audience in 11.04 and 11.10. If that describes you, the results should be delightful. And the HUD should be particularly empowering.

Even casual users find typing faster than mousing. So while there are modes of interaction where it’s nice to sit back and drive around with the mouse, we observe people staying more engaged and more focused on their task when they can keep their hands on the keyboard all the time. Hotkeys are a sort of mental gymnastics; the HUD is a continuation of mental flow.

Ahead of the competition

There are other teams interested in a similar problem space. Perhaps the best-known new alternative to the traditional menu is Microsoft’s Ribbon. Introduced first as part of a series of changes called Fluent UX in Office, the ribbon is now making its way to a wider set of Windows components and applications. It looks like this:

Sample of Microsoft Ribbon

You can read about the ribbon from a supporter (like any UX change, it has its supporters and detractors ;-)), and if you’ve used it yourself, you will have your own opinion about it. The ribbon is highly visual, making options and commands very visible. It is, however, also a hog of space (I’m told it can be minimised). Our goal in much of the Unity design has been to return screen real estate to the content with which the user is working; the HUD meets that goal by appearing only when invoked.

Instead of cluttering up the interface ALL the time, let’s clear out the chrome, and show users just what they want, when they want it.

Time will tell whether users prefer the ribbon, or the HUD, but we think it’s exciting enough to pursue and invest in, both in R&D and in supporting developers who want to take advantage of it.

Other relevant efforts include Enso and Ubiquity from the original Humanized team (hi Aza & co), then at Mozilla.

Our thinking is inspired by many works of science, art and entertainment, from Minority Report to Modern Warfare to Jef Raskin’s The Humane Interface. We hope others will join us and accelerate the shift from pointy-clicky interfaces to natural and efficient ones.

Roadmap for the HUD

There’s still a lot of design and code to do. For a start, we haven’t addressed the secondary aspect of the menu, as a visible map of the functionality in an app. That discoverability is of course entirely absent from the HUD; the old menu is still there for now, but we’d like to replace it altogether, not just supplement it. And all the other patterns of interaction we expect in the HUD remain to be explored. Regardless, there is a great team working on this, including folk who understand Gtk and Qt such as Ted Gould, Ryan Lortie, Gord Allott and Aurelien Gateau, as well as designers Xi Zhu, Otto Greenslade, Oren Horev and John Lea. Thanks to all of them for getting this initial work to the point where we are confident it’s worthwhile for others to invest time in.

We’ll make sure it’s easy for developers working in any toolkit to take advantage of this and give their users a better experience. And we’ll promote the apps which do it best – it makes apps easier to use, it saves time and screen real-estate for users, and it creates a better impression of the free software platform when it’s done well.

From a code quality and testing perspective, even though we consider this first cut a prototype-grown-up, folk will be glad to see this:

Overall coverage rate:
   lines......: 87.1% (948 of 1089 lines)
   functions..: 97.7% (84 of 86 functions)
   branches...: 63.0% (407 of 646 branches)

Landing in 12.04 LTS is gated on more widespread testing. You can of course try this out from a PPA or branch the code in Launchpad (you will need these two branches). Or dig deeper with blogs on the topic from Ted Gould, Olli Ries and Gord Allott. Welcome to 2012, everybody!

Technical Board 2011

Wednesday, October 5th, 2011

After the recent poll of Ubuntu developers I’m delighted to introduce the Technical Board 2011-2013. I think it’s worth noting that three of the members of this generation of technical leaders are not Canonical employees, though admittedly they are all former members of that team. I think there’s cause for celebration on both fronts: broader institutional and independent representation in the senior governance structures of Ubuntu is valuable, and the fact that personal interest persists regardless of company affiliation is also indicative of the character of the whole community, both full-time and volunteer. We’re in this together, for mutual interests.

Without further ado, here they are, in an order you are welcome to guess ;-)

  • Stéphane Graber
  • Kees Cook
  • Martin Pitt
  • Matt Zimmerman
  • Colin Watson
  • Soren Hansen
Please join me in congratulating each of them, and thanking those who were willing to stand, those who were nominated, and those who participated in the poll.

From my perspective, it was a very rich field of nominations. We had several candidates with no historic link to Canonical, which was very encouraging in terms of the diversity of engagement in the project. For the first time, I felt we had too many candidates, and so I whittled down the final list of nominations – as it happens, all of the non-Canonical nominees made the shortlist, though that was not a criterion for my support.

Welcome aboard, all!

Note to the impatient: this is a long post and it only gets to free software ecosystem dynamics towards the end. The short version is that we need to empower software companies to participate in the GNU/Linux ecosystem, and not fear them. Rather than undermining their power, we need to balance it through competition.

Church schools in apartheid South Africa needed to find creative ways to teach pupils about the wrongs of that system. They couldn’t actively foment revolt, but they could teach alternative approaches to governance. That’s how, as a kid in South Africa, I spent a lot of time studying the foundations of the United States, a system of governance defined by underdogs who wanted to defend not just against the abuses of the current power, but abuses of power in general.

My favourite insight in that regard comes from James Madison in the Federalist Papers, where he describes the need to understand and harness human nature as a force: to pit ambition against ambition, as it is often described. The relevant text is worth a read if you don’t have time for the whole letter:

But the great security against a gradual concentration of the several powers in the same department, consists in giving to those who administer each department the necessary constitutional means and personal motives to resist encroachments of the others. The provision for defense must in this, as in all other cases, be made commensurate to the danger of attack. Ambition must be made to counteract ambition. The interest of the man must be connected with the constitutional rights of the place. It may be a reflection on human nature, that such devices should be necessary to control the abuses of government. But what is government itself, but the greatest of all reflections on human nature? If men were angels, no government would be necessary. If angels were to govern men, neither external nor internal controls on government would be necessary. In framing a government which is to be administered by men over men, the great difficulty lies in this: you must first enable the government to control the governed; and in the next place oblige it to control itself. A dependence on the people is, no doubt, the primary control on the government; but experience has taught mankind the necessity of auxiliary precautions.

When we debate our goals, principles and practices in the FLOSS community, we devote a great deal of energy to “how things should be”, and to the fact that “men are not angels”. I think the approach of James Madison is highly relevant to those discussions.

The conservation of power

Just as energy, momentum, charge and other physical properties of a system are conserved, so in a sense is power. If your goal is to reduce the power of one agency in government, the most effective strategy is to strengthen the position of another. We know that absolute monarchies are bad: they represent unbalanced power.

Within a system, power will tend to consolidate. We have antitrust agencies specifically to monitor the consolidation of economic power and to do something about it. We set up independent branches of government to ensure that some kinds of power simply cannot be consolidated.

Undermining power in one section of an ecosystem inevitably strengthens the others.

Since we humans tend to think the grass is greener on the other side of the fence, and since power takes a little while to get properly abused, you can often see societies oscillate in the allocation of power. When things seem a little out of control, we give more power to the police and other securocrats. Then, when they become a little thuggish, we squeeze their power through regulation and oversight, and civil liberties gain in power, until the pendulum swings again.

The necessity of concentrated power

Any power can be abused. I had a very wise headmaster at that same school who used to say that the only power worth having was power that was worth abusing. This was not a call to the abuse of power, you understand, merely a reflection on the fact that power comes with the real responsibility of restraint.

So, if power can be abused, why do we tolerate it at all? Why not dissolve authority down to the individual? Because the absence of power leads to chaos, which ironically is an easy place to establish despotic authority. Power isn’t seized – it’s given. We give people power over us. And in a state of chaos, all it takes is a few people to gain some power and they have a big advantage over everyone else. That’s why early leaders in new ecosystems tend to become unbeatable very quickly.

Also, power clears the path for action. In a world with no power, little gets done at all. We are better off with large companies that have the power to organise themselves around a goal than trying to achieve the same goal with a collection of individuals; try making a Boeing from an equivalent group of artisans, and you’ll see what I mean. Artisans form guilds and companies to increase their reach and impact. Individual volunteers join professional institutions to get more effective: consider the impact of handing out food yourself, versus helping sustain a network of soup kitchens, even in the purely non-profit world. Having some clout on your side is nothing to sniff at, even if you have purely philanthropic goals.

Power and innovation

If you have all the power already, there’s no spur to innovate. So kingdoms stagnate, eventually.

But power makes space for good things, too. It’s the powerful (and rich) who fund the arts in most societies. Innovation needs breathing space; companies with economic power can incubate new ideas to the point where they become productive.

Too much competition can thus limit innovation: look how difficult it has been for the Windows-based PC manufacturers, who live in a brutally competitive world and have little margin, to innovate. They are trapped between a highly efficient parts supply ecosystem, which feeds them all the same stuff at the same price, and a consumer market that requires them all to provide PCs which run the same stuff the same way. As a result, they have little power, little margin, little innovation.

The trick is not to fear power itself, but instead, to shape, balance and channel it. You don’t want to aim for the absence of power, you want the Goldilocks effect of having “just enough”. And that was James Madison’s genius.

Verticals, competition and the balance of power

Of course, competition between rivals is the balance of power in business. We resent monopolies because they are either abusing their power, or stagnating.

In economics, we talk about “verticals” as the set of supply dependencies needed for a particular good. So, to make an aircraft, you need various things like engines and alloys, and those suppliers all feed the same pool of aircraft manufacturers.

In order to have a healthy ecosystem, you need a balance of power both between suppliers at the same level of the stack, and vertically, between the providers of parts and providers of the finished product. That’s because innovation needs both competition AND margin to stimulate and nurture it.

In the PC case, the low margins in the PC sector helped reinforce the Windows monopoly. Not only was there no competition for Microsoft, there was no ability for a supplier further down the chain to innovate around them. The only player in that ecosystem that had the margin to innovate was Microsoft, and since they faced no competition, there was little stimulus to capitalise on their own R&D, no matter how much they spent on it.

Power in the FLOSS ecosystem: upstreams and distributions

So, where do we stand in the free software and open source ecosystem?

The lines between upstreams and distributions aren’t perfectly clear, of course. Simplistic versions of that picture are often used to prove points, but in fact, all the distributions are also in some sense upstreams, and even derivative distributions end up being leaders of those they derive from in some pieces or markets. Nevertheless, I think it’s worth looking at the balance of power between upstream projects and distributions, as it is today and as it could be.

Also, I think it’s worth looking at related parties, companies and institutions which work a lot with FLOSS but have orthogonal interests.

If one uses margin, or profit, as an indicator of power, it’s clear that the distributions today are in a far stronger position than most individual projects or upstreams. The vast majority of software-related revenue in the FLOSS ecosystem goes to distributions.

Within that segment, Red Hat claims 80% market share of paid Linux, a number that is probably accurate. Novell, the de facto #2, is in the midst of some transition, but indicators are that it continues to weaken. Oracle’s entry into the RHEL market has had at best marginal impact on RHEL economics (the substantial price rises in RHEL 6 are a fairly clear signal of the degree to which Red Hat believes it faces real competition). The existence of “unpaid RHEL” in the form of CentOS, as well as OEL, essentially strengthens the position of RHEL itself. Ubuntu and Debian have large combined levels of adoption, but low revenue.

So clearly, there is work to do just to balance power in the distribution market. And it will take work – historically, platforms tend towards monopoly, and in the absence of a definitive countervailing force that establishes strength outside the RHEL gravity well, that’s what we’ll have. But that’s not the most interesting piece. What’s more interesting is the dynamic between distributions and upstreams.

Today, most upstreams are weak. They have little institutional strength. It’s generally difficult to negotiate and do business with an upstream. In many cases, that’s by design – the teams behind a project are simply not interested, or they are explicitly non-profit, as in the case of the FSF, which makes them good leaders of specific values, but difficult to engage with commercially.

As a result, those who need to do business with open source go to distributions, even in cases where they really want to be focused on a particular component. This greatly amplifies the power of the distributions: they essentially are the commercial vehicles for ALL of open source. The weakness of individual upstreams turns into greater strength for distributions.

You can imagine that distributions like it that way, and it would be surprising to see a distribution, or company that backs a distribution, arguing for stronger upstreams. But that’s exactly the position I take: FLOSS needs stronger upstreams, and as a consequence, weaker distributions.

Stronger upstreams will result in more innovation in FLOSS than stronger distributions. Essentially, like Microsoft, a distribution receives cash for the whole platform and allocates it to specific areas of R&D. That means the number of good ideas that receive funding in our ecosystem, today, is dependent on the insights of a very few companies. Just as Microsoft invested a lot in R&D and yet seemed to fall behind, upstream innovation will be strangled if it’s totally dependent on cash flow via distributions.

It’s not just innovation that suffers when power and economic leverage are missing from the hands of upstreams. It’s also the myriad of things beyond code itself. When you have a company behind a project, they tend to take care of a lot more than just the code: QA, documentation, testing, promotion. It’s easy, as a developer, to undervalue those things, or to see them as competing for resources with the “real work” of code. But that competition is necessary, and they make a great contribution to the dynamism of the final product.

Consider the upstream projects which have been very successful over the long term. Qt and MySQL, for example, both had companies behind them that maintained strong leverage over the product. That leverage was often unpopular, but the result was products available to all of us under a free license that continued to grow in stature, quality and capability despite the ups and downs of the broader market, and without being too dependent on the roving spotlight of “coolness”, which tends to move quickly from project to project.

There are of course successful upstream projects which do not have such companies. The best example is probably the Linux kernel itself. However, those projects fall into a rather unusual category: they are critical to some large number of companies that make money in non-software ways, and those companies are thus forced to engage with the project and contribute. In the case of the kernel, hardware companies directly and indirectly underwrite the vast majority of the boring but critical work that, in other projects, would be covered by the sponsoring institution. And despite that, there are many gaps in the kernel. You don’t have to dig very hard to find comments from key participants bemoaning the lack of testing and documentation. Nevertheless, it gets by quite well under the circumstances.

But most ecosystems will have very few projects that are at such a confluence. Most upstream projects are the work of a few people, the “coolness” spotlight shines on them briefly if at all. They need either long term generosity from core contributors, or an institution to house and care for them, if they want to go the distance. The former rarely works for more than a few years.

Projects which depend on indirect interests, such as those sponsored by hardware companies, have another problem. Their sponsoring institutions are generally not passionate about software. They don’t really need or want to produce GREAT software. And if you look at the projects which get a lot of such contributions, that becomes very obvious. Compare the quality of apps from companies which deeply care about software with those which come from hardware companies, and you’ll see what I mean.

We FLOSS folk like to tell ourselves that the Windows hegemony was purely a result of the manipulations of its sponsor, and that FLOSS as we do it today could do much more if it only had a fair chance. I don’t think, having watched the success of iOS and Android as new ecosystems, that we can justify that position any longer. I think we have to be willing to think hard about what we are willing to change if we want the chance of building an ecosystem as strong, but around GNU/Linux. Since that’s my goal, I’m thinking very hard about that, and creatively. I think it’s possible, but not without challenging some sacred cows and figuring out which values we want to preserve and which we can remould.

Power is worth having in your ecosystem, despite its occasional abuse

There’s no doubt that power created will be abused. That’s true of a lot of important rights and powers. For example, we know that free speech is often abused, but we nevertheless value it highly in many societies that are also big contributors to FLOSS. You probably know the expression, “I disagree with what you are saying entirely, but I will defend to the death your right to say it”.

Similarly, in our ecosystem, power will be abused. But it’s still worth helping institutions acquire it, even those we dislike or distrust, or those we compete with. At Canonical, we’ve directly and indirectly helped lots of institutions that you could describe that way – Oracle, Novell, Red Hat, Intel and many others. The kneejerk reaction is usually “no way”, but upon deeper thought, we figured that it is better to have an ecosystem of stronger players, considering the scale of the battle with the non-FLOSS world.

I often find people saying “I would help an institution if I thought I could trust it”. And I think that’s a red herring, because just as power will be abused, trust will be abused too. If you believe that this is a battle of ecosystems and platforms, you want to have as many powerful competitors in your ecosystem as possible, even though you probably cannot trust any of them in the very long term. It’s the competition between them that really creates long term stability, to come back to the thinking of James Madison. It’s pitting ambition against ambition, not finding angels, which makes that ecosystem a winner. If you care about databases, don’t try to weaken MySQL, because you need it strong when you need it. Rather, figure out how to strengthen Postgres alongside it.

How Canonical fits in

Canonical is in an interesting position with regard to all of this. As a distribution, we could stay silent on the issue, and reasonably expect to grow in power over time, on the same basis that Red Hat has. And there are many voices in Canonical that say exactly that: don’t rock the boat, essentially.

However, perhaps unlike other Linux distributions, Canonical very much wants to see end users running free software, and not just IT professionals. That raises the bar dramatically in terms of the quality of the individual pieces. It means that it’s not good enough for us to work in an ecosystem which produces prototype or rough cut products, which we then aggregate and polish at the distribution level. Unlike those who have gone before, we don’t want to be the sole guarantor of quality in our ecosystem, because that will not scale.

For that reason, looking at the longer term, it’s very important to me that we figure out how to give more power to upstreams, so that they in turn can invest in producing components or works which have the completeness and quality that end-users expect. I enjoy working with strong commercial institutions in the open source ecosystem – while they always represent some competitive tension, they also represent the opportunity to help our ecosystem scale and out-compete the proprietary world. So I’d like to find ways to strengthen the companies that publish products under free software licenses, and to encourage more companies with proprietary products to make them available under free licenses, even if that’s not the only way they publish them.

If you’ve read this far, you probably have a good idea where I’m going with this. But I have a few more steps before actually getting there. More soon.

Till then, I’m interested in how people think we can empower upstream projects to be stronger institutionally.

There are a couple of things that are obvious and yet don’t work. For example, lots of upstreams think they should form a non-profit institution to house their work. The track record of those is poor: they get set up, and they fail as soon as they have to file their annual paperwork, leaving folks like the SFLC to clean up the mess. Not cool. At the end of the day, such new institutions add paperwork without adding funding or other sources of energy. They don’t broaden out the project the way a company writing documentation and selling services usually does. On the other hand, non-profits like the FSF which have critical mass are very important, which is why on occasion we’ve been happy to contribute to them in various ways.

Also, I’m interested in how we can reshape our attitudes to power. Today, the tenor of discussion in most FLOSS debates is simplistic: we fear power, and we attempt to squash it always, celebrating the individual. But that misses the point that we are merely strengthening the power elsewhere; in distributions, in other ecosystems. We need a richer language for describing “the Goldilocks power” balance, and how we can move beyond FUD.

So, what do you think we could do to create more Mozillas, more MySQLs, more Qts and more OpenStacks?

I’ll summarise interesting comments and threads in the next post.