Archive for the 'thoughts' Category

Competition is tough on the contestants, but it gets great results for everyone else. We like competitive markets, competitive technologies, competitive sports, because we feel the end result for consumers or the audience is as good as it possibly could be.

In larger organisations, you get an interesting effect. You get *internal* competition as well as external competition. And that’s healthy too. You get individuals competing for responsibility, and of course you want to make sure that the people who will make the wisest choices carry the responsibilities that demand wisdom, while those who have the most energy carry the responsibilities for which that’s most needed. You get teams competing with each other too – for resources, for attention, for support elsewhere, for moral authority, for publicity. And THAT’s the hardest kind of competition to manage, because it can be destructive to the organisation as a whole.

Even though it’s difficult to manage, internal competition is extremely important, and should not be avoided out of fear. The up side is that you get to keep the best ideas because you allowed them to compete internally. If you try and avoid it, you crowd out new ideas, and end up having to catch up to them. Usually, what goes wrong is that one group gets control of the organisation’s thinking, and takes the view that any ideas which do not come from within that group are a threat, and should be stopped. That’s very dangerous – it’s how great civilisations crash; they fail to embrace new ideas which are not generated at the core.

In Ubuntu, we have a lot of internal competition. Ubuntu and Kubuntu and Xubuntu and Edubuntu and *buntu-at-large have to collaborate and also, to a certain extent, compete. We handle that very well, I think, though occasionally some muppet calls Kubuntu the blue-headed-stepchild etc etc. It’s absolutely clear to everyone, though, that we have a shared interest in delivering ALL these experiences together with as much shared vision and commonality as possible. I consider the competition between these teams healthy and constructive and worth maintaining, even though it requires some fancy footwork and causes occasional strains.

The challenge for Gnome leadership

The sound and fury writ large in blog comments this week is all about how competition is managed.

Gnome is, or should be, bigger than any of the contributing individuals or companies. Gnome leadership should be in a position to harness competition effectively for the good of the project. That does, however, require very firm leadership, and very gutsy decisions. And it requires vigilance against inward thinking. For example, I’ve seen the meme reiterated multiple times that “one should not expect Gnome to embrace ideas which were not generated and hosted purely within Gnome”. That’s chronic inward thinking. Think of all the amazing bits of goodness in the free software stack which were NOT invented in Gnome but are a part of it today. Think how much better it is when goodness is adopted across multiple desktop environments, and how much harder it is to achieve that when something is branded “K” or “G”.

When we articulated our vision for Unity, we were very clear that we wanted to deliver it under the umbrella of Gnome. We picked Gnome-friendly technologies by and large, and where we felt we needed to do something different, that decision required substantial review. We described Unity as “a shell for Gnome” from the beginning, and we have been sincere in that view. We have worked successfully and happily with many, many Gnome projects to integrate Unity API’s into their codebase.

This is because we wanted to be sure that whatever competitive dynamics arose were *internal* to Gnome, and thus contributing to a better result overall in Gnome in the long term.

We’ve failed.

Much of the language, and much of the decision making I’ve observed within Gnome, is based on the idea that Unity is competition WITH Gnome, rather than WITHIN Gnome.

The key example of that is the rejection of Unity’s indicator API’s as external dependencies. That was the opportunity to say “let’s host this competition inside Gnome”. Even now, there’s a lack of clarity as to what was intended by that rejection, with some saying “it was just a reflection of the fact that the API’s were new and not used in any apps”. If that were the case, there would be no need for prior approval as an external dependency; the rejection was clearly an attempt to prevent Gnome applications from engaging around these API’s. That attempt has substantially failed, as many apps have happily done the work to blend in beautifully in the Unity environment, but there has been a clear effort to prevent that by those who feel that Unity is a threat to Gnome rather than an opportunity for it.

Dave Neary has, to his credit, started to ask “what’s really going on here?”

In his blog post, he quoted the rationale given for the rejection of Canonical’s indicator API’s, which I’ll re-quote here and analyze in this light:

it doesn’t integrate with gnome-shell

That’s it – right there. Remember, this was a proposal for the indicator API’s to be an *external* dependency for Gnome. That means, Gnome apps can use those API’s *optionally* when they are being run on a platform where they are useful. It has NOTHING to do with the core Gnome vision. External API’s exist precisely BECAUSE it’s useful to encourage people to use Gnome apps on all sorts of platforms, including proprietary ones like Windows and MacOS and Solaris, and they should shine there too.

So the premier reason given for the rejection of these API’s is a reason that, as best we can tell, has never been used against an external dependency proposal before: “it’s different to Gnome”. At the heart of this statement is something deeper: “it’s competition with an idea someone in Gnome wants to pursue”.

What made this single statement heartbreaking for me to see was that it spoke clearly to the end of one of Gnome’s core values: code talks. Here we had API’s which were real, tested code, with patches to many Gnome apps available, that implemented a spec that had been extensively discussed on FreeDesktop.org. This was real code. Yet it was blocked because someone – a Gnome Shell designer – wanted to explore other ideas, ideas which at the time were not working code at all. There’s been a lot of commentary on that decision. Most recently, Aaron Seigo pointed out that this decision was as much a rejection of cross-desktop standards as it was a rejection of Canonical’s code.

Now, I can tell you that I was pretty disgusted with this result.

We had described the work we wanted to do (cleaning up the panel, turning panel icons into menus) to the Gnome Shell designers at the 2008 UX hackfest. McCann denies knowledge today, but it was a clear decision on our part to talk about this work with him at the time, it was reported to me that the conversation had happened, and that we’d received the assurance that such work would be “a valued contribution to the shell”. Clearly, by the time it was delivered, McCann had decided that such assurances were not binding, and that his interest in an alternative panel story trumped both that assurance and the now-extant FreeDesktop.org discussions and spec.

But that’s not the focus of this blog. My focus here is on the management of healthy competition. And external dependencies are the perfect way to do so: they signal that there is a core strategy (in this case whatever Jon McCann wants to do with the panel) and yet there are also other, valid approaches which Gnome apps can embrace. This decision failed to grab that opportunity with both hands. It said “we don’t want this competition WITHIN Gnome”. But the decision cannot remove the competitive force. What that means is that the balance was shifted to competition WITH Gnome.

probably depends on GtkApplication, and would need integration in GTK+ itself

Clearly, both of these positions are flawed. The architecture of the indicator work was designed both for backward compatibility with the systray at the time, and for easy adoption. We have lots of apps using the API’s without either of these points being the case.
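For what it’s worth, adoption really was designed to be cheap. The sketch below is my own illustration, not code from any of the patches in question; it assumes the AppIndicator3 introspection bindings and GTK are installed. It shows roughly what an application has to do to expose an indicator: create it, hand it an ordinary GtkMenu, and let the panel render it.

    # A minimal, hypothetical sketch of an app adopting the indicator API.
    # Assumes the AppIndicator3 GObject-Introspection bindings are available;
    # this is an illustration, not code taken from any Gnome application.
    import gi
    gi.require_version('Gtk', '3.0')
    gi.require_version('AppIndicator3', '0.1')
    from gi.repository import Gtk, AppIndicator3

    def on_quit(_item):
        Gtk.main_quit()

    def build_menu():
        # The indicator exposes a plain GtkMenu; the panel renders it natively.
        menu = Gtk.Menu()
        quit_item = Gtk.MenuItem(label="Quit")
        quit_item.connect("activate", on_quit)
        menu.append(quit_item)
        menu.show_all()
        return menu

    indicator = AppIndicator3.Indicator.new(
        "example-app",                  # application id
        "dialog-information",           # themed icon name
        AppIndicator3.IndicatorCategory.APPLICATION_STATUS)
    indicator.set_status(AppIndicator3.IndicatorStatus.ACTIVE)
    indicator.set_menu(build_menu())

    Gtk.main()

As I understand it, libappindicator was also documented as falling back to the ordinary systray when no indicator host is running, which is the backward compatibility referred to above.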

we wished there was some constructive discussion around it, pushed by the libappindicator developers; but it didn’t happen

We made the proposal, it was rejected. I can tell you that the people who worked on the proposal consider themselves Gnome people, and they feel they did what was required, and stopped when it was clear they were not going to be accepted. I’ve had people point to this bullet and say “you should have pushed harder”. But proposing an *external* dependency is not the same as trying to convince Shell to adopt something as the mainstream effort. It’s saying “hey, here’s a valid set of API’s apps might want to embrace, let’s let them do so”.

there’s nothing in GNOME needing it

This is a very interesting comment. It’s saying “no Gnome apps have used these API’s”. But the Gnome apps in question were looking to this very process for approval of their desire to use the API’s. You cannot have a process to pre-approve API’s, then decline to do so because “nobody has used the API’s which are not yet approved”. You’re either saying “we just rubber stamp stuff here, go ahead and use whatever you want”, or you’re being asinine.

It’s also saying that Unity is not “in GNOME”. Clearly, a lot of Unity work depends on the adoption of these API’s for a smooth and well-designed panel experience. So once again, we have a statement that Unity is “competition with Gnome” and not “competition within Gnome”.

And finally, it’s predicating this decision on the idea that being “in Gnome” is the sole criterion of goodness. There is a cross-desktop specification which defines the appindicator work clearly. The fact that KDE apps Just Work on Unity is thanks to the work done to make this a standard. Gnome devs participated in the process, but appeared not to stick with it. Many or most of the issues they raised were either addressed in the spec or in the implementations of it. They say now that they were not taken seriously, but a reading of the mailing list threads suggests otherwise.

It’s my view that cross-desktop standards are really important. We host both Kubuntu and Ubuntu under our banner, and without such collaboration, that would be virtually impossible. I want Banshee to work as well under Kubuntu as Amarok can under Ubuntu.

What can be done?

This is a critical juncture for the leadership of Gnome. I’ll state plainly that I feel the long tail of good-hearted contributors to Gnome and Gnome applications are being let down by a decision-making process that has let competitive dynamics diminish the scope of Gnome itself. Ideas that are not generated “at the core” have to fight incredibly and unnecessarily hard to get oxygen. Ask the Zeitgeist team. Federico is a hero, but getting room for ideas to be explored should not feel like a frontal assault on a machine gun post.

This is no way to lead a project. This is a recipe for a project that loses great people to environments that are more open to different ways of seeing the world. Elementary. Unity.

Embracing those other ideas and allowing them to compete happily and healthily is the only way to keep the innovation they bring inside your brand. Otherwise, you’re doomed to watching them innovate and then having to “relayout” your own efforts to keep up, badmouthing them in the process.

We started this with a strong, clear statement: Unity is a shell for Gnome. Now Gnome leadership have to decide if they want the fruit of that competition to be an asset to Gnome, or not.

A blessing in disguise

Aaron’s blog post made me think that the right way forward might be to bolster and strengthen the forum for cross-desktop collaboration: FreeDesktop.org.

I have little optimism that the internal code dynamics of Gnome can be fixed – I have seen too many cases where a patch which implements something needed by Unity is dissed, then reimplemented differently, or simply left to rot, to believe that the maintainers in Gnome who have a competitive interest on one side or the other will provide a level playing field for this competition.

However, we have shown a good ability to collaborate around FD.o with KDE and other projects. Perhaps we could strengthen FreeDesktop.org and focus our efforts at collaboration around the definition of standards there. Gnome has failed to take that forum seriously, as evidenced by the frustrations expressed elsewhere. But perhaps if we had both Unity and KDE working well there, Gnome might take a different view. And that would be very good for the free software desktop.

I spent a lot of time observing our community, this release. For some reason I was curious to see how our teams work together, what the dynamic is, how they work and play together, how they celebrate and sadly, also how they mourn. So I spent a fair amount more time this cycle reading lists from various Ubuntu teams, reading minutes from governance meetings for our various councils, watching IRC channels without participating, just to get a finger on the pulse.

Everywhere I looked I saw goodness: organised, motivated, cheerful and constructive conversations. Building a free OS involves an extraordinary diversity of skills, and what’s harder is that it requires merging the contributions from so many diverse disciplines and art forms. And yet, looking around the community, we seem to have found patterns for coordination and collaboration that buffer the natural gaps between all the different kinds of activities that go on.

There are definitely things we can work on. We have to stay mindful of the fact that Ubuntu is primarily a reflection of what gets done in the broader open source ecosystem, and stay committed to transmitting that work effectively, in high quality (and high definition :-)) to the Ubuntu audience. We have to remind those who are overly enthusiastic about Ubuntu that fanboyism isn’t cool; I saw a bit of “we rock, you suck”, and that’s not appropriate. But I also saw folks stepping in and reminding those who cross the line that our values as a community are important, and the code of conduct most important of all.

So I have a very big THANK YOU for everyone. This is our most valuable achievement: making Ubuntu a great place to get stuff done that has a positive impact on literally millions of people. Getting that right isn’t technical, but it’s hard and complex work. And that’s what makes the technical goodness flow.

In particular, I’d like to thank those who have stepped into responsibilities as leaders in large and small portions of our Ubuntu universe. Whether it’s organising a weekly newsletter, coordinating the news team, arranging the venue for a release party, reviewing translations from new translators in your language, moderating IRC or reviewing hard decisions by IRC moderators, planning Kubuntu or leading MOTU’s, the people who take on the responsibility of leadership are critical to keeping Ubuntu calm, happy and productive.

But I’d also like to say that what made me most proud was seeing folks who might not think of themselves as leaders, stepping up and showing leadership skills.

There are countless occasions when something needs to be said, or something needs to get done, but where it would be easy to stay silent or let it slip, and I’m most proud of the fact that many of the acts of leadership and initiative I saw weren’t by designated or recognised leaders, they were just part of the way teams stayed cohesive and productive. I saw one stroppy individual calmly asked to reconsider their choice of words and pointed to the code of conduct by a newcomer to Ubuntu. I saw someone else step up and lead a meeting when the designated chairman couldn’t make it. That’s what makes me confident Ubuntu will continue to grow and stay sane as it grows. That’s the really daunting thing for me – as it gets bigger, it depends on a steady supply of considerate and thoughtful people who are passionate about helping do something amazing that they couldn’t do on their own. It’s already far bigger than one person or one company – so we’re entirely dependent on broader community commitment to the values that define the project.

So, to everyone who participates, thank you and please feel empowered to show leadership whenever you think we could do better as a community. That’s what will keep us cohesive and positive. That’s what will make sure the effort everyone puts into it will reach the biggest possible audience.

With that said, well done everyone on a tight but crisp post-LTS release. Maverick was a challenge: we wanted to realign the cycle slightly, which compressed matters but hopefully gives us a more balanced April / October cadence going forward, based on real data about real global holiday and weather patterns :-). There was an enormous amount of change embraced and also change deferred, wisely. You all did brilliantly. And so, ladies and gentlemen, I give you Mr Robbie Williamson and the Maverick Release Announcement. Grab your towel and let’s take the Meerkat out on a tour of the Galaxy ;-)

Tribalism is the enemy within

Friday, July 30th, 2010

Tribalism is when one group of people start to think people from another group are “wrong by default”. It’s the great-granddaddy of racism and sexism. And the most dangerous kind of tribalism is completely invisible: it has nothing to do with someone’s “birth tribe” and everything to do with their affiliations: where they work, which sports team they support, which linux distribution they love.

There are a couple of hallmarks of tribal argument:

1. “The other guys have never done anything useful”. Well, let’s think about that. All of us wake up every day, with very similar ambitions and goals. I’ve travelled the world and I’ve never met a single company, or country, or church, where *everybody* there did *nothing* useful. So if you see someone saying “Microsoft is totally evil”, that’s a big red flag for tribal thinking. It’s just like someone saying “All black people are [name your prejudice]”. It’s offensive nonsense, and you would be advised to distance yourself from it, even if it feels like it would be fun to wave that pitchfork for a while.

2. “Evidence contrary to my views doesn’t count.” So, for example, when a woman makes it to the top of her game, “it’s because she slept her way there”. Offensive nonsense. And similarly, when you see someone saying “Canonical didn’t actually sponsor that work by that Canonical employee, that was done in their spare time”, you should realize that’s likely to be offensive nonsense too.

Let’s be clear: tribalism makes you stupid. Just like it would be stupid not to hire someone super-smart and qualified because they’re purple, or because they are female, it would be stupid to refuse to hear and credit someone with great work just because they happen to be associated with another tribe.

The very uncool thing about being a fanboy (or fangirl) of a project is that you’re openly declaring both a tribal affiliation and a willingness to reject the work of others just because they belong to a different tribe.

One of the key values we hold in the Ubuntu project is that we expect everyone associated with Ubuntu to treat people with respect. It’s part of our code of conduct – it’s probably the reason we *pioneered* the use of codes of conduct in open source. I and others who founded Ubuntu have seen how easily open source projects descend into nasty, horrible and unproductive flamewars when you don’t exercise strong leadership away from tribal thinking.

Now, bad things happen everywhere. They happen in Ubuntu – and because we have a huge community, they are perhaps more likely to happen there than anywhere else. If we want to avoid human nature’s worst consequences, we have to work actively against them. That’s why we have strong leadership structures, which hopefully put people who are proven NOT to be tribal in nature into positions of responsibility. It takes hard work and commitment, but I’m grateful for the incredible efforts of all the moderators and council members and leaders in LoCo teams across this huge and wonderful project, for the leadership they exercise in keeping us focused on doing really good work.

It’s hard, but sometimes we have to critique people who are associated with Ubuntu, because they have been tribal. Hell, sometimes I and others have to critique ME for small-minded and tribal thinking. When someone who calls herself “an Ubuntu fan” stands up and slates the work of another distro, we quietly reach out to that person and point out that it’s not the Ubuntu way of doing things. We don’t spot them all, but it’s a consistent practice within the Ubuntu leadership team: our values are more important than winning or losing any given debate.

Do not be drawn into a tribal argument on Ubuntu’s behalf

Right now, for a number of reasons, there is a fever pitch of tribalism in plain sight in the free software world. It’s sad. It’s not constructive. It’s ultimately going to be embarrassing for the people involved, because the Internet doesn’t forget. It’s certainly not helping us lift free software to the forefront of public expectations of what software can be.

I would like to say this to everyone who feels associated with Ubuntu: hold fast to what you know to be true. You know your values. You know how hard you work. You know what an incredible difference your work has made. You know that you do it for a complex mix of love and money, some more the former, others more the latter, but fundamentally you are all part of Ubuntu because you think it’s the most profound and best way to spend your time. Be proud of that.

There is no need to get into a playground squabble about your values, your ethics, your capabilities or your contribution. If you can do better, figure out how to do that, but do it because you are inspired by what makes Ubuntu wonderful: free software, delivered freely, in a way that demonstrates real care for the end user. Don’t do it because you feel intimidated or threatened or belittled.

The Gregs are entitled to their opinions, and folks like Jono and Dylan have set an excellent example in how to rebut and move beyond them.

I’ve been lucky to be part of many amazing things in life. Ubuntu is, far and away, the best of them. We can be proud of the way we are providing leadership: on how communities can be a central part of open source companies, on how communities can be organised and conduct themselves, on how the economics of free software can benefit more than just the winning distribution, on how a properly designed user experience combined with free software can beat the best proprietary interfaces any day. But remember: we do all of those things because we believe in them, not because we want to prove anybody else wrong.

Linaro: Accelerating Linux on ARM

Thursday, June 3rd, 2010

At our last UDS in Belgium it was notable how many people were interested in the ARM architecture. There have always been sessions at UDS about lightweight environments for the consumer electronics and embedded community, but this felt tangibly different. I saw questions being asked about ARM in server and cloud tracks, for example, and in desktop tracks. That’s new.

So I’m very excited at today’s announcement of Linaro, an initiative by the ARM partner ecosystem including Freescale, IBM, Samsung, ST-Ericsson and TI, to accelerate and unify the field of Linux on ARM. That is going to make it much easier for developers to target ARM generally, and build solutions that can work with the amazing diversity of ARM hardware that exists today.

The ARM platform has historically been superspecialized and hence fragmented – multiple different ARM-based CPU’s from multiple different ARM silicon partners all behaved differently enough that one needed to develop different software for each of them. Boot loaders, toolchains, kernels, drivers and middleware are all fragmented today, and of course there’s additional fragmentation associated with Android vs mainline on ARM, but Linaro will go a long way towards cleaning this up and making it possible to deliver a consistent platform experience across all of the major ARM hardware providers.

Having played with a prototype ARM netbook, I was amazed at how cool it felt. Even though it was just a prototype it was super-thin, and ran completely cool. It felt like a radical leap forward for the state of the art in netbooks. So I’m a fan of fanless computing, and can’t wait to get one off the shelf :-)

For product developers, the big benefit from Linaro will be reduced time to market and increased choice of hardware. If you can develop your software for “linux on ARM”, rather than a specific CPU, you can choose the right hardware for your project later in the development cycle, and reduce the time required for enablement of that hardware. Consumer electronics product development cycles should drop significantly as a result. That means that all of us get better gadgets, sooner, and great software can spread faster through the ecosystem.

Linaro is impressively open: www.linaro.org has details of open engineering summits, an open wiki, mailing lists etc. The teams behind the work are committed to upstreaming their output so it will appear in all the distributions, sooner or later. The images produced will all be royalty free. And we’re working closely with the Linaro team, so the cadence of the releases will be rigorous, with a six month cycle that enables Linaro to include all work that happens in Ubuntu in each release of Linaro. There isn’t a “whole new distribution”, because a lot of the work will happen upstream, and where bits are needed, they will be derived from Ubuntu and Debian, which is quite familiar to many developers.

The nature of the work seems to break down into four different areas.

First, there are teams focused on enabling specific new hardware from each of the participating vendors. Over time, we’ll see real convergence in the kernel used, with work like Grant Likely’s device tree forming the fabric by which differences can be accommodated in a unified kernel. As an aside, we think we can harness the same effort in Ubuntu on other architectures as well as ARM to solve many of the thorny problems in linux audio support.

Second, there are teams focused on the middleware which is common to all platforms: choosing APIs and ensuring that those are properly maintained and documented, so that people can deliver many different user experiences with best-of-breed open tools.

Third, there are teams focused on advancing the state of the art. For example, these teams might accelerate the evolution of the compiler technology, or the graphics subsystem, or provide new APIs for multitouch gestures, or geolocation. That work benefits the entire ecosystem equally.

And finally, there are teams aimed at providing out of the box “heads” for different user experiences. By “head” we mean a particular user experience, which might range from the minimalist (console, for developers) to the sophisticated (like KDE for a netbook). Over time, as more partners join, the set of supported “heads” will grow – ideally in future you’ll be able to bring up a Gnome head, or a KDE head, or a Chrome OS head, or an Android head, or a MeeGo head, trivially. We already have good precedent for this in Ubuntu with support for KDE, Gnome, LXDE and server heads, so everyone’s confident this will work well.

The diversity in the Linux ecosystem is fantastic. In part, Linaro grows that diversity: there’s a new name that folks need to be aware of and think about. But importantly, Linaro also serves to simplify and unify pieces of the ecosystem that have historically been hard to bring together. If you know Ubuntu, then you’ll find Linaro instantly familiar: we’ll share repositories to a very large extent, so things that “just work” in Ubuntu will “just work” with Linaro too.

Six-month cycles are great. Now let’s talk about meta-cycles: broader release cycles for major work. I’m very interested in a cross-community conversation about this, so will sketch out some ideas and then encourage people from as many different free software communities as possible to comment here. I’ll summarise those comments in a follow-up post, which will no doubt be a lot wiser and more insightful than this one :-)

Background: building on the best practice of cadence

The practice of regular releases, and now time-based releases, is becoming widespread within the free software community. From the kernel, to GNOME and KDE, to X, to distributions like Ubuntu and Fedora, the idea of a regular, predictable cycle is now better understood and widely embraced. Many smarter folks than me have articulated the benefits of such a cadence: energising the whole community, REALLY releasing early and often, shaking out good and bad code, rapid course correction.

There has been some experimentation with different cycles. I’m involved in projects that have 1 month, 3 month and 6 month cycles, for different reasons. They all work well.

…but addressing the needs of the longer term

But there are also weaknesses to the six-month cycle:

  • It’s hard to communicate to your users that you have made some definitive, significant change,
  • It’s hard to know what to support for how long; you obviously can’t support every release indefinitely.

I think there is growing insight into this, on both sides of the original “cadence” debate.

A tale of two philosophies, perhaps with a unifying theory

A few years back, at AKademy in Glasgow, I was in the middle of a great discussion about six month cycles. I was a passionate advocate of the six month cycle, and interested in the arguments against it. The strongest one was the challenge of making “big bold moves”.

“You just can’t do some things in six months” was the common refrain. “You need to be able to take a longer view, and you need a plan for the big change.” There was a lot of criticism of GNOME for having “stagnated” due to the inability to make tough choices inside a six month cycle (and with perpetual backward compatibility guarantees). Such discussions often become ideological, with folks on one side saying “you can evolve anything incrementally” and others saying “you need to make a clean break”.

At the time of course, KDE was gearing up for KDE 4.0, a significant and bold move indeed. And GNOME was quite happily making its regular releases. When the KDE release arrived, it was beautiful, but it had real issues. Somewhat predictably, the regular-release crowd said “see, we told you, BIG releases don’t work”. But since then KDE has knuckled down with regular, well managed, incremental improvements, and KDE is looking fantastic. Suddenly, the big bold move comes into focus, and the benefits become clear. Well done KDE :-)

On the other side of the fence, GNOME is now more aware of the limitations of indefinite regular releases. I’m very excited by the zest and spirit with which the “user experience MATTERS” campaign is being taken up in Gnome, there’s a real desire to deliver breakthrough changes. This kicked off at the excellent Gnome usability summit last year, which I enjoyed and which quite a few of the Canonical usability and design folks participated in, and the fruits of that are shaping up in things like the new Activities shell.

But it’s become clear that a change like this represents a definitive break with the past, and might take more than a single six month release to achieve. And most important of all, that this is an opportunity to make other, significant, distinctive changes. A break with the past. A big bold move. And so there’s been a series of conversations about how to “do a 3.0”, in effect, how to break with the tradition of incremental change, in order to make this vision possible.

It strikes me that both projects are converging on a common set of ideas:

  • Rapid, predictable releases are super for keeping energy high and code evolving cleanly and efficiently, they keep people out of a deathmarch scenario, they tighten things up and they allow for a shakeout of good and bad ideas in a coordinated, managed fashion.
  • Big releases are energising too. They are motivational, they make people feel like it’s possible to change anything, they release a lot of creative energy and generate a lot of healthy discussion. But they can be a bit messy, things can break on the way, and that’s a healthy thing.

Anecdotally, there are other interesting stories that feed into this.

Recently, the Python community decided that Python 3.0 will be a shorter cycle than the usual Python release. The 3.0 release is serving to shake out the ideas and code for 3.x, but it won’t be heavily adopted itself so it doesn’t really make sense to put a lot of effort into maintaining it – get it out there, have a short cycle, and then invest in quality for the next cycle because 3.x will be much more heavily used than 3.0. This reminds me a lot of KDE 4.0.

So, I’m interested in gathering opinions, challenges, ideas, commitments, hypotheses etc. about the idea of meta-cycles and how we could organise ourselves to make the most of this. I suspect that we can define a best practice, which includes regular releases for continuous improvement on a predictable schedule, and ALSO defines a good practice for how MAJOR releases fit into that cadence, in a well structured and manageable fashion. I think we can draw on the experiences in both GNOME and KDE, and other projects, to shape that thinking.

This is important for distributions, too

The major distributions tend to have big releases, as well as more frequent releases. RHEL has Fedora, Ubuntu makes LTS releases, Debian takes cadence to its logical continuous integration extreme with Sid and Testing :-).

When we did Ubuntu 6.06 LTS we said we’d do another LTS in “2 to 3 years”. When we did 8.04 LTS we said that the benefits of predictability for LTS’s are such that it would be good to say in advance when the next LTS would be. I said I would like that to be 10.04 LTS, a major cycle of 2 years, unless the opportunity came up to coordinate major releases with one or two other major distributions – Debian, Suse or Red Hat.

I’ve spoken with folks at Novell, and it doesn’t look like there’s an opportunity to coordinate for the moment. In conversations with Steve McIntyre, the current Debian Project Leader, we’ve identified an interesting opportunity to collaborate. Debian is aiming for an 18 month cycle, which would put their next release around October 2010, which would be the same time as the Ubuntu 10.10 release. Potentially, then, we could defer the Ubuntu LTS till 10.10, coordinating and collaborating with the Debian project for a release with very similar choices of core infrastructure. That would make sharing patches a lot easier, a benefit both ways. Since there will be a lot of folks from Ubuntu at Debconf, and hopefully a number of Debian developers at UDS in Barcelona in May, we will have good opportunities to examine this opportunity in detail. If there is goodwill, excitement and broad commitment to such an idea from Debian, I would be willing to promote the idea of deferring the LTS from 10.04 to 10.10 LTS.

Questions and options

So, what would the “best practices” of a meta-cycle be? What sorts of things should be considered in planning for these meta-cycles? What problems do they cause, and how are those best addressed? How do short term (3 month, 6 month) cycles fit into a broader meta-cycle? Asking these questions across multiple communities will help test the ideas and generate better ones.

What’s a good name for such a meta-cycle? Meta-cycle seems… very meta.

Is it true that the “first release of the major cycle” (KDE 4.0, Python 3.0) is best done as a short cycle that does not get long term attention? Are there counter-examples, or better examples, of this?

Which release in the major cycle is best for long term support? Is it the last of the releases before major new changes begin (Python 2.6? GNOME 2.28?) or is it the result of a couple of quick iterations on the X.0 release (KDE 4.2? GNOME 3.2?) Does it matter? I do believe that it’s worthwhile for upstreams to support an occasional release for a longer time than usual, because that’s what large organisations want.

Is a cycle that runs a whole number of years beneficial? For example, is 2.5 years a good idea? Personally, I think not: conferences and holidays tend to happen at the same time of the year every year, and it’s much, much easier to think in terms of whole numbers of years. But in informal conversations about this, some people have said 18 months, others have said 30 months (2.5 years) might suit them. I think they’re craaaazy – what do you think?

If it’s 2 years or 3 years, which is better for you? Hardware guys tend to say “2 years!” to get the benefit of new hardware, sooner. Software guys say “3 years!” so that they have less change to deal with. Personally, I am in the 2 years camp, but I think it’s more important to be aligned with the pulse of the community, and if GNOME / KDE / Kernel wanted 3 years, I’d be happy to go with it.

How do the meta-cycles of different projects come together? Does it make sense to have low-level, hardware-related things on a different cycle to high-level, user visible things? Or does it make more sense to have a rhythm of life that’s shared from the top to the bottom of the stack?

Would it make more sense to stagger long term releases based on how they depend on one another, like GCC then X then OpenOffice? Or would it make more sense to have them all follow the same meta-cycle, so that we get big breakage across the stack at times, and big stability across the stack at others?

Are any projects out there already doing this?

Is there any established theory or practice for this?

A cross-community conversation

If you’ve read this far, thank you! Please do comment, and if you are interested then please do take up these questions in the communities that you care about, and bring the results of those discussions back here as comments. I’m pretty sure that we can take the art of software to a whole new level if we take advantage of the fact that we are NOT proprietary, and this is one of the key ways we can do it.

This is not the end of capitalism

Tuesday, November 4th, 2008

Some of the comments on my last post on the economic unwinding of 2008 suggested that people think we are witnessing the end of capitalism and the beginning of a new socialist era.

I certainly hope not.

I think a world without regulated capitalism would be a bleak one indeed. I had the great privilege to spend a year living in Russia in 2001/2002, and the visible evidence of the destruction wrought by central planning was still very much present. We are all ultimately human, with human failings, whether we work for a state planning agency or a private company, and those failings have consequences either way. To think that moving all private enterprise into state hands will somehow create a panacea of efficiency and sustainability is to ignore the stark lessons of the 20th century.

The leaders and decision makers in a centrally-planned economy are just as fallible as those in a capitalist one – they would probably be the same people! But state enterprises lack the forces of evolution that apply in a capitalist economy – state enterprises are rarely if ever allowed to fail. And hence bad ideas are perpetuated indefinitely, and an economy becomes dysfunctional to the point of systemic collapse. It is the fact that private enterprises fail which keeps industries vibrant. The tension between the imperative to innovate and the consequences of failure drives capitalist economies to evolve quickly. Despite all of the nasty consequences that we have seen, and those we have yet to see, of capitalism gone wrong, I am still firmly of the view that society must tap into its capitalist strengths if it wants to move forward.

But I chose my words carefully when I said “regulated capitalism”. I used to be a fan of Adam Smith’s invisible hand, and a great admirer of Ayn Rand’s vision. Now, I feel differently. Left to its own devices, the market will tend to reinforce the position of those who were successful in the past, to the exclusion of those who might create future successes. We see evidence of this all the time. The heavyweights that define an industry tend to do everything in their power to prevent innovation from changing the rules that enrich them.

A classic example of that is the RIAA’s behaviour – in the name of “saving the music industry” they have spent the past ten years desperately trying to keep it in the analog era to save their members, with DRM and morally unjustifiable special-interest lobbying around copyright rules that affect the whole of society.

Similarly, patent rules tend to evolve to suit the companies that hold many patents, rather than the people who might generate the NEXT set of innovative ideas. Of course, the lobbying is dressed up in language that describes it as being “in the interests of innovation”, but at heart it is really aimed at preserving the privileged position of the incumbent.

In South Africa, the incumbent monopoly telco, which was a state enterprise until it was partially privatized in 1996, has systematically delayed, interfered with, challenged and obstructed the natural process of deregulation and the creation of a healthy competitive sector. Private interests act in their own interest, by definition, so powerful private interests tend to drive the system in ways that make THEM healthier rather than ways that make society healthier.

Left to their own devices, private companies will tend to gobble one another up, and create monopolies. Those monopolies will then undermine every potential new entrant, using whatever tactics they can dream up, from FUD to lobbying to thuggery.

So, I’m a fan of regulated capitalism.

We need regulation to ensure that society’s broader needs, like environmental sustainability, are met while private companies pursue their profits. We also need regulation to ensure that those who manage national and international infrastructure, whether it’s railways or power stations or financial systems, don’t cook the books in a way that lets them declare fat profits and fatter bonuses while driving those systems into crisis.

But effective regulation is not the same as state management and supervision. I would much rather have private companies managing power stations competitively, than state agencies doing so as part of a complacent government monopoly.

Good regulation is very hard. Over the years I’ve interacted with a few different regulatory authorities, and I sympathise with the problems they encounter.

First, to be an effective regulator, you need superb talent. And for that you need to pay – talent follows the money and the lights, whether we like it or not, so to design a system on other assumptions is to design it for failure. My ideal regulator is an insightful genius working for the common good, but since I’m never likely to meet that person, a practical goal is to encourage regulators to be small but very well funded, with key salaries and performance measures that are just behind the industries they are supposed to regulate. Regulators must be able to be fired – no sense in offering someone a private sector salary and public sector accountability. Unfortunately, most regulators end up going the other way, hiring more and more people of average competence, so that they become both expensive and ineffective.

Second, a great regulator needs to be independent. You’re the guy who tells people to stop doing what will hurt society; it’s very hard to do that to your friends. A regulatory job is a lonely job, which is why you hear so many stories of regulators being wined and dined by the industries they regulate only to make sure they don’t look too hard in the back room. A great regulator needs to know a lot about an industry, but be independent of that industry. Again, my ideal is someone who has made a good living in a sector, knows it backwards, can justify their high price, but wants to make a contribution to society.

Third, a great regulator needs to have teeth and muscle. It has been very frustrating for me to watch the South African telecomms regulator get tied up in court by Telkom, and stymied by government department inadequacy. Regulators need to be able to drive things forward, they need to be able to change the way companies behave, and they cannot rely on moral suasion to do so.

And fourth, a regulator has to make very tough decisions about innovation, which amount to venture capital decisions – to make them well, you have to be able to tell the future. For example, when an industry changes, as all industries change, how should the rules evolve? When a new need for society is identified, like the need to address climate change early and systemically, how should the rules evolve? Regulators need to move forward as fast as the industries they regulate, and they need to make decisions about things we don’t yet understand. And even when you regulate, you may not be able to stop an impending crisis. It’s very easy to criticize Greenspan for his light touch regulation on hedge funds and derivatives today, but it’s not at all clear to me that regulation would have made a difference; I think it would simply have moved the shadow global financial system offshore.

So regulation is extremely difficult, but also very much worth investing in if you are trying to run a healthy, vibrant, capitalist society.

Coming back to the original suggestion that sparked this blog – I’m sure we will see a lot of failed capitalists in the future. Hell, I might join their ranks, I wouldn’t be the first ;-). But that doesn’t spell the end of capitalism, only the opportunity to start again – smarter.

There is no victor of a flawed election

Monday, February 4th, 2008

The tragedy unfolding in Kenya is a reminder of the fact that a flawed election leaves the “winner” worse off than he would have been had he lost a fair contest.

Whoever is President at the conclusion of this increasingly nasty standoff inherits an economy that is wounded, a parliament that is angry and divided, and a populace that know their will has been disregarded. And he will face a much increased risk of personal harm at the hands of those who see assassination as no worse a crime than electoral fraud. That is at best a Pyrrhic victory. It will be extremely difficult to get anything done under those circumstances.

There is, however, some cause for optimism amidst all the gloom. It seems that many Kenyan MP’s who were fingered for corruption during their previous terms were summarily dismissed by their constituencies, despite tribal affiliations. In other words, if your constituents think you’re a crook, they will vote you out even if you share their ethnicity.

That shows the beginnings of independent-minded political accountability – it shows that voting citizens in Kenya want leaders who are not tainted with corruption, even if that means giving someone from a different tribe their vote. And that is the key shift that is needed in African countries, to give democracy teeth. Ousted MP’s and former presidents are subject to investigation and trial, and no amount of ill-gotten loot in the bank is worth the indignity of a stint in jail at the hands of your successor. As Frederick Chiluba has learned, there’s no such thing as an easy retirement from a corrupt administration.

Of course, that makes it likely that those with skeletons in their closets will try even harder to cling to power, for fear of the consequences if they lose their grip on it. Robert Mugabe is no doubt of the opinion that a bitter time in power is preferable to a bitter time after power. But increasingly, voters in Africa are learning that they really can vote for change. And neighboring countries are learning that it hurts their own investment and economic profiles to certify elections as free and fair when they are far from it. It would be much harder for Robert Mugabe to stay in power illegally if he didn’t have *nearly* enough votes to stay there legally. You can fudge an election a little, but it’s very difficult to fudge it when the whole electorate abandons you, and when nobody will lend you their credibility.

The best hope a current president has of a happy retirement is to ensure that the institutions which will pass judgement on him (or her) in future are independent and competent, to ensure that they will stay that way, and to keep their hands clean. It will take time, but I think we are on track to see healthy changes in governance becoming the norm and not the exception in Africa.

Economic oversteering

Wednesday, January 23rd, 2008

Yesterday, we saw the most extraordinary failure of economic leadership in recent years, when the US Federal Reserve pressed the “emergency morphine” button and cut Federal Reserve rates by 0.75%. It will not help.

These are extremely testing times, and thus far, the US Fed under Bernanke has been found wanting. Historians may well lay the real blame for current distress at the door of Alan Greenspan, who pioneered the use of morphine to dull economic pain, but they will probably also credit him with a certain level of discretion in its prescription. During Greenspan’s tenure at the Fed, economic leaders became convinced that the solution to market distress was to ensure that the financial system had access to easy money.

This proved effective in the short term. When LTCM looked set to explode (a private fund, leveraged up dramatically and managed by Nobel prize-winning financial theorists, which placed a bet on a sure thing that didn’t pan out quite as expected), Greenspan engineered an orderly unwinding of its affairs. When the dot com bubble burst, Greenspan kept the financial system energised by lowering rates so far that real rates were, for a substantial period, negative.

A negative real interest rate means we are effectively paid to take out loans. That might sound good, but how would you feel if I used the words “paid to take a few more hits of crack cocaine”? The underlying problem was that people had become accustomed to high rates of return and did not want to accept that real rates of return in the US were moving down. They had become accustomed to easy money, and Greenspan’s policy ensured that money remained accessible at a time when people had demonstrated a low ability to invest that easy money well.
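To make the arithmetic behind “negative real rates” concrete – the numbers below are purely illustrative, not the actual figures of the period – the standard approximation relates the real rate to the nominal rate and inflation:

    % real rate is roughly the nominal rate minus inflation (illustrative numbers only)
    r_{\text{real}} \approx r_{\text{nominal}} - \pi
                    \approx 1\% - 2.5\% = -1.5\%

With the real rate below zero, what a borrower repays is worth less in purchasing power than what they were lent – that is the sense in which easy money “pays” people to borrow.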

Low rates give people an incentive to invest in stocks, even if those stocks are not earning very much. This meant stock prices recovered quickly, and the effect was amplified by the fact that low rates increased corporate earnings. This was a so-called “soft landing” – disaster averted. He must have known the risks, but the one big warning sign that would likely have convinced Greenspan to return to normal rates was missing: inflation. Low rates, and especially negative rates, have historically always resulted in inflation. Greenspan kept rates low because there were no signs of inflation. It seemed as if the US had entered a new era where the correlation of rates and inflation no longer held true. People explained it by saying that the US was increasing its productivity dramatically (productivity increases are like anti-inflation medicine). Now, with hindsight, it appears that the real reason for the absence of inflation was that the Chinese were increasing their productivity dramatically, and that US consumers were spending so much on Chinese goods that Chinese productivity growth, not US productivity growth, was keeping US prices low.

When tech came off the boil and people should have been using the pause to clean up their affairs, Greenspan made it easy for people to get themselves into a worse position. Easy money made stock market prices artificially high, so stock market investors felt rich. Worse, easy money made house prices artificially high (by about 45%), so everybody felt wealthier than they had planned or expected to.

To make matters worse, a series of financial innovations created a whole industry designed to help people go back into debt on their houses. I remember trying to watch TV in the US and being amazed at the number of advertisements for “home equity withdrawals”. They made it sound like turning your major personal financial asset – your paid-off house – into an ATM machine was a good thing. In fact, it was a means to spend all of your primary store of wealth. And with inflated house prices, it was a way to spend money that you did not really have. A convenient way to get into a deep, dark hole of family debt. The result? The average American owns less of her home today than she did 30 years ago – 55% as opposed to 68%. Easy money makes people poorer. The company with the most irritating ads, Ditech (and I feel ashamed to be contributing to their website search ranking with the mention, perhaps it will help instead to link to their customer feedback), has a tagline “People are smart” and a business model built on the idea that “People are dumb”. Their “most popular” product strikes me as being tailor-made to make it easy to turn home equity – an asset – into new debt.

Why did Greenspan do it? I think he genuinely believed that there was something different about the modern world that had altered the laws of economic gravity. I suspect he no longer feels that way.

But Greenspan is no longer Chairman of the Fed. Ben Bernanke blinked, yesterday, and in that blink we have the measure of the man.

Greenspan acted carefully, logically, and basically prudently. Several years of anomalous economic data are a reasonable basis to think that the rules have evolved. You would have to have a Swiss (700 years of stability) or Chinese (“we think it’s too early to tell if the French Revolution was a good idea”) approach to stick with economic theories that are at odds with the facts for very long. Greenspan made a mistake, and it will have huge consequences for the US for a generation, but he had reasons for that mistake. Bernanke just blinked, he panicked, despite knowing better.

We now have rigorous economic explanations for all that is happening. We have come to understand, quite clearly, what is going on in the world. The deflationary Eastern wind has been identified. We know there is no productivity miracle in the US, no change in the laws of physics or economics. So we know that the US patient is addicted to easy money morphine, medicine that was prescribed with good intentions by Dr Greenspan, medicine that has in the last 7 years made the patient more ill and not less. More morphine today constitutes malpractice, not economic innovation. We know the consequences of more morphine – stock prices will rise artificially (4% yesterday, on the news of the shot), house prices will stumble along, companies will take longer to default on their loans.

Bernanke might be hoping to do what Greenspan did – retire before the addiction becomes entirely obvious. Too late. While the Fed is clearly not willing to admit it, the markets have just as clearly taken their own view, that the prognosis is not good. They are smart enough to see that all Bernanke has done is cover up the symptoms of malaise, and many are using the temporary pain relief to head for safer territory. I expect that any relief will be brief, market recoveries will fade, the rout has been deferred but not averted.

I started out by describing the Fed’s actions as a failure of economic leadership. Some folks are lucky enough to lead from the bottom of the cycle, up – they take over when things are miserable and can only really get better. They look like heroes even if their voodoo has no mojo, so to speak. Others are less lucky, they get handed custodianship of an asset that is at the peak. As for Bernanke, he’s in that latter category. He needs to be able to speak clearly and frankly about the hard work that lies ahead in the US. He needs to appeal to the very best of American industriousness – a traditional willingness to work hard, be smart, and accept the consequences of refusing to do so. He needs to lead under the most difficult circumstances. But that’s what leadership is about.

Fortunately for Bernanke, central bank independence is widely believed to be the only credible approach to economic governance. That independence gives Bernanke the right to stand at odds with political leaders if needed. Given the recent White House announcements – more morphine, further indebtedness for the world’s most indebted country – there’s no stomach for a real program of rehabilitation in the Bush Administration. Bernanke will have to lead without political support, a very difficult task indeed. Our greatest and most memorable leaders are those who lead through difficult times. The same is true of failures of leadership. Appeasement, or rehabilitation. Chamberlain, or Churchill. Thus far, Chamberlain.

Good architectural layering, and Bzr 1.1

Wednesday, January 9th, 2008

I completely failed to blog the release of Bzr 1.0 last year, but it was an excellent milestone and by all accounts, very well received. Congratulations to the Bazaar community on their momentum! I believe that the freeze for 1.1 is in place now so it’s great to see that they are going to continue to deliver regular releases.

I’ve observed a surge in the number of contributors to Bazaar recently, which has resulted in a lot of small but useful branches with bugfixes for various corner cases, operating systems and integrations with other tools. One of the most interesting projects that’s getting more attention is BzrEclipse, integrating Bzr into the Eclipse IDE in a natural fashion.

I think open source projects go through an initial phase where they work best with a tight group of core contributors who get the basics laid out to the point where the tool or application is usable by a wider audience. Then, they need to make the transition from being “closely held” to being open to drive-by contributions from folks who just want to fix a small bug or add a small feature. That’s quite a difficult transition, because the social skills required to run the project are quite different in those two modes. It’s not only about having good social skills, but also about having good processes that support the flow of new, small contributions from new, unproven contributors into the code-base.

It seems that one of the key “best practices” that has emerged is the idea of plug-in architectures, which allow new developers to contribute an extension, plug-in or add-on to the codebase without having to learn too much about the guts of the project, or participate in too many heavyweight processes. I would generalize that and say that good design, with clearly thought-through and pragmatic layers, allows new contributors to make useful contributions to the code-base quickly because it presents useful abstractions early on.
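
To make that concrete, here is a minimal sketch of the classic “hello”-style Bazaar plugin: a small Python package dropped into the per-user plugins directory that registers one new command. The names come from the bzrlib API as I recall it, so treat the details as approximate rather than gospel.

    # ~/.bazaar/plugins/hello/__init__.py -- the usual per-user plugin location
    """A tiny Bazaar plugin that adds a 'bzr hello' command."""
    from bzrlib.commands import Command, register_command

    class cmd_hello(Command):
        """Print a friendly greeting."""
        def run(self):
            # self.outf is the command's output stream in bzrlib
            self.outf.write('Hello from a drive-by contribution!\n')

    register_command(cmd_hello)

The point is not the plugin itself, but how little of Bazaar’s internals a new contributor needs to understand before they can ship something useful.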

Firefox really benefited from their decision to support cross-platform add-ons. I’m delighted to hear that OpenOffice is headed in the same direction.

Bazaar is very nicely architected. Not only is there a well-defined plug-in system, but there’s also a very useful and pragmatic layered architecture which keeps the various bits of complexity contained for those who really need to know. I’ve observed how different teams of contributors, or individuals, have introduced whole new on-disk formats with new performance characteristics, completely orthogonally to the rest of the code. So if you are interested in the performance of status and diff, you can delve into working tree state code without having to worry about long-term revision storage or branch history mappings.
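
For those who haven’t poked around in the code, here is a rough sketch of how that layering surfaces in the Python API, as I understand it – the object and method names are from the bzrlib of this era and may shift between releases, so take it as illustrative:

    from bzrlib import workingtree

    wt = workingtree.WorkingTree.open('.')  # working-tree layer: the state that 'status' and 'diff' read
    branch = wt.branch                      # branch layer: branch history mappings
    repo = branch.repository                # repository layer: long-term revision storage

    # A 'status'-style question only needs the working-tree layer:
    delta = wt.changes_from(wt.basis_tree())
    print delta.modified                    # Python 2, as bzrlib itself is

Someone tuning the performance of status can work entirely above the repository layer, which is exactly the orthogonality described above.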

Layering can also cause problems, when the layers are designed too early and don’t reflect the pragmatic reality of the code. For example, witness the “exchange of views” between the ZFS folks and the Linux filesystem community, who have very different opinions on the importance and benefits of layering.

Anyhow, kudos to the Bazaar guys for the imminent 1.1, and for adopting an architecture that makes it easier for contributors to get going.

It’s too early to say for certain, but there are very encouraging signs that the world’s standards bodies will vote in favour of a single unified ISO (International Organization for Standardization) document format standard. There is already one document format standard – ODF – and the ISO is currently considering a proposal to bless an alternative, Microsoft’s OpenXML, as another standard. In the latest developments, standards committees in South Africa and the United States have both said they will vote against a second standard, thereby issuing a strong call for unity and a sensible, open, common standard for business documents in word processing, spreadsheets and presentations.

It’s very important that we build on those brave decisions and call on all of our national standards committees to support the idea of a single common standard for these critical documents.

The way the ISO works is interesting. There are about 150 member countries who can vote on any particular proposal. Usually, about 40 countries actually vote. In order to pass, a proposal needs to get a 75% “yes” vote. Countries can vote yes, no, or “abstain”. So normally, 10 “no” or “abstain” votes would be sufficient to send the proposal back for further consideration. In this case, however, Microsoft has been working very hard, and spending a lot of money, to convince many countries that don’t normally vote to support their proposed format.
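
To make the arithmetic concrete, here is a back-of-the-envelope sketch in Python of the simplified rule described above. The real ISO procedure has more nuance (member classes, separate thresholds), so treat the numbers as illustrative only.

    import math

    def yes_votes_needed(votes_cast, threshold=0.75):
        # Simplified model of the rule described above: 'no' and 'abstain'
        # both count against the 75% 'yes' requirement.
        return int(math.ceil(threshold * votes_cast))

    print yes_votes_needed(40)  # 30 -- so on the order of ten non-'yes' votes out of 40 can stall a proposal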

So there is something concrete you can do, right now, today, this week! Find out which body in your country is responsible for your national representation on the ISO. In SA it is the South African Bureau of Standards (SABS), and in the US I believe it is ANSI. Your country will likely have such a body. There is a list of some of them here, but it may not be complete, so don’t stop if your country isn’t listed there!

Call them, or email them, and ask them which committee will be voting on the OpenXML proposal. Then prepare a comment for that committee. It is really important that your comment be professional and courteous. You are dealing with strong technical people who have a huge responsibility and take it seriously – they will not take you seriously if your comment is not well thought out, politely phrased and logically sound.

If you have a strong technical opinion, focus on a single primary technical issue that you think is a good reason to decline the proposal from Microsoft. There are some good arguments outlined here. Don’t just resend an existing submission – find a particular technical point which means a lot to you and express it carefully and succinctly for yourself. It can be brief – a single paragraph, or longer. There are some guidelines for “talking to standards bodies” here.

Here are the points I find particularly compelling, myself:

  1. This is not a vote “for or against Microsoft”.
    In fact, this is a vote for or against a unified standard. Microsoft is a member of the body that defines ODF (the existing ISO standard) but is hoping to avoid participating in that, in favour of getting their own work blessed as a standard. A vote of “no OpenXML” is a vote against multiple incompatible standards, and hence a vote in favour of unity.
    If the ISO vote is “no”, then there is every reason to expect that Microsoft will adopt ODF, and help to make that a better standard for everybody, including themselves. If we send a firm message to Microsoft that the world wants a single, unified standard, and that ODF is the appropriate place for that standard to be set, then we will get a unified global standard that includes Microsoft.
    The reason this point is important is that many government officials recognise the essential position Microsoft holds in their operations and countries, and they will be afraid to vote in a way that could cost their country money. If they perceive that a “no” vote might make it impossible for them to work with Microsoft, they will vote yes. Of course Microsoft is telling them this, but the reality is that Microsoft will embrace a unified standard if the global standards organisation clearly says that’s a requirement.
  2. Open, consensus-based document standards really WORK WELL – consider HTML
    We already have an extraordinary success in defining a document format openly, in the form of HTML. The World Wide Web Consortium (W3C), which includes Microsoft and many other companies, defines HTML and CSS. While Microsoft initially resisted the idea, preferring to push Internet Explorer’s proprietary web extensions, it was ultimately forced to participate in W3C discussions.
    The result is a wonderfully rich document format, with many different implementations. Much of the richness of the web today comes directly from the fact that there is an open standard for web documents and web interactions. Look at a classy web page, and then look at a classy Word document, and ask yourself which is the more impressive format! Clearly, Word would be better with an open standard, not one defined by a single company.
  3. A SINGLE standard with many implementations is MUCH more valuable than multiple standards
    Imagine what would happen if there were multiple incompatible web document standards. You couldn’t go to any web site and just expect it to work; you would need to know which format it used. The fact that there is one web document standard – HTML – is the key driver of the efficiency of the web as a repository of information. The web is a clear example of why ODF is the preferred structure for a public standard.
    ODF, the existing standard, is defined openly by multiple companies, and Microsoft can participate there along with everyone else. They know they can – and they participate in other standards discussions in the same organisation.
    Microsoft will say that “multiple standards give customers choice”. But we know that it is far more valuable to have a single standard which evolves efficiently and quickly, like HTML. The network effects of document exchange mean that one standard will in any event emerge as dominant, and it is important to governments, businesses and consumers that it be a standard which ITSELF offers great choice in implementation. People don’t buy a standard, and they don’t use a standard directly; they use a software or hardware tool. If the “standard” only has one set of tools from one vendor, then that “choice of standards” has effectively resulted in zero choice of provider for customers. Consider the richness of the GSM cellular world, with hundreds of solution providers following a single global standard, compared to the inefficiency of countries which allowed proprietary networks to be installed on public frequencies.
    ODF is already implemented by many different companies. This means that there are many different tools which people can choose to do different things with their ODF documents. Some of those tools are optimised for the web, others for storage, others for data analysis, and others for editing. In the case of OpenXML, there is not even one single complete implementation – because even Microsoft Office 12 does not exactly implement OpenXML. There is also no other company with a tool to edit or manage OpenXML documents. Microsoft is trying to make it look like there is broad participation, but dig beneath the surface and it is all funded by one company. The ODF standard is a much healthier place to safeguard all of our data.

I’d like to thank the team at TSF for the work they put into briefing the South African standards committee. I hope that each of you – folks who have read this far – will pick up the phone and contact your own standards body to help them make a smart decision.

The USA, South Africa, China, and other countries will be voting “no”. Let’s not allow heavy lobbying to influence what should be a calm, rational, sensible and ultimately technical discussion. Standards are important, and best defined in transparent and open forums. Pick up the phone!