Archive for the 'thoughts' Category

Economic oversteering

Wednesday, January 23rd, 2008

Yesterday, we saw the most extraordinary failure of economic leadership in recent years, when the US Federal Reserve pressed the “emergency morphine” button and cut its benchmark rate by 0.75 percentage points. It will not help.

These are extremely testing times, and thus far, the US Fed under Bernanke has been found wanting. Historians may well lay the real blame for current distress at the door of Alan Greenspan, who pioneered the use of morphine to dull economic pain, but they will probably also credit him with a certain level of discretion in its prescription. During Greenspan’s tenure at the Fed, economic leaders became convinced that the solution to market distress was to ensure that the financial system had access to easy money.

This proved effective in the short term. When LTCM looked set to explode (private investments, leveraged up dramatically, managed by Nobel prize-winning financial theorists, placed a bet on a sure thing which didn’t pan out quite as expected), Greenspan engineered an orderly unwinding of its affairs. When the dot com bubble burst, Greenspan kept the financial system energised by lowering rates so far that they were, for a substantial period, at negative levels in real terms.

A negative real interest rate means we are effectively paid to take out loans. That might sound good, but how would you feel if I used the words “paid to take a few more hits of crack cocaine”? The underlying problem was that people had become accustomed to high rates of return and did not want to accept that real rates of return in the US were moving down. They had become accustomed to easy money, and Greenspan’s policy ensured that money remained accessible at a time when people had demonstrated a low ability to invest that easy money well.

Low rates give people an incentive to invest in stocks, even if those stocks are not earning very much. This meant stock prices recovered quickly, and the effect was amplified by the fact that low rates increased corporate earnings. This was a so-called “soft landing” – disaster averted. He must have known the risks, but the one big warning sign that would likely have convinced Greenspan to return to normal rates was missing: inflation. Low rates, and especially negative rates, have historically always resulted in inflation. Greenspan kept rates low because there were no signs of inflation. It seemed as if the US had entered a new era where the correlation of rates and inflation no longer held true. People explained it by saying that the US was increasing its productivity dramatically (productivity increases are like anti-inflation medicine). Now, with hindsight, it appears that the real reason for the absence of inflation was that the Chinese were increasing their productivity dramatically, and that US consumers were spending so much on Chinese goods that Chinese productivity growth, not US productivity growth, was keeping US prices low.

When tech came off the boil and people should have been using the pause to clean up their affairs, Greenspan made it easy for people to get themselves into a worse position. Easy money made stock market prices artificially high, so stock market investors felt rich. Worse, easy money made house prices artificially high (by about 45%), so everybody felt wealthier than they had planned or expected to.

To make matters worse, a series of financial innovations created a whole industry designed to help people go back into debt on their houses. I remember trying to watch TV in the US and being amazed at the number of advertisements for “home equity withdrawals”. They made it sound like turning your major personal financial asset – your paid-off house – into an ATM was a good thing. In fact, it was a means to spend all of your primary store of wealth. And with inflated house prices, it was a way to spend money that you did not really have. A convenient way to get into a deep, dark hole of family debt. The result? The average American owns less of her home today than she did 30 years ago – 55% as opposed to 68%. Easy money makes people poorer. The company with the most irritating ads, Ditech (and I feel ashamed to be contributing to their website search ranking with the mention; perhaps it will help instead to link to their customer feedback), has a tagline “People are smart” and a business model built on the idea that “People are dumb”. Their “most popular” product strikes me as being tailor-made to make it easy to turn home equity – an asset – into new debt.

Why did Greenspan do it? I think he genuinely believed that there was something different about the modern world that had altered the laws of economic gravity. I suspect he no longer feels that way.

But Greenspan is no longer Chairman of the Fed. Ben Bernanke blinked, yesterday, and in that blink we have the measure of the man.

Greenspan acted carefully, logically, and basically prudently. Several years of anomalous economic data are a reasonable basis to think that the rules have evolved. You would have to have a Swiss (700 years of stability) or Chinese (“we think it’s too early to tell if the French Revolution was a good idea”) approach to stick with economic theories that are at odds with the facts for very long. Greenspan made a mistake, and it will have huge consequences for the US for a generation, but he had reasons for that mistake. Bernanke just blinked; he panicked, despite knowing better.

We now have rigorous economic explanations for all that is happening. We have come to understand, quite clearly, what is going on in the world. The deflationary Eastern wind has been identified. We know there is no productivity miracle in the US, no change in the laws of physics or economics. So we know that the US patient is addicted to easy money morphine, medicine that was prescribed with good intentions by Dr Greenspan, medicine that has in the last 7 years made the patient more ill and not less. More morphine today constitutes malpractice, not economic innovation. We know the consequences of more morphine – stock prices will rise artificially (4% yesterday, on the news of the shot), house prices will stumble along, companies will take longer to default on their loans.

Bernanke might be hoping to do what Greenspan did – retire before the addiction becomes entirely obvious. Too late. While the Fed is clearly not willing to admit it, the markets have just as clearly taken their own view: that the prognosis is not good. They are smart enough to see that all Bernanke has done is cover up the symptoms of malaise, and many are using the temporary pain relief to head for safer territory. I expect any relief will be brief and market recoveries will fade; the rout has been deferred, not averted.

I started out by describing the Fed’s actions as a failure of economic leadership. Some folks are lucky enough to lead from the bottom of the cycle, up – they take over when things are miserable and can only really get better. They look like heroes even if their voodoo has no mojo, so to speak. Others are less lucky, they get handed custodianship of an asset that is at the peak. As for Bernanke, he’s in that latter category. He needs to be able to speak clearly and frankly about the hard work that lies ahead in the US. He needs to appeal to the very best of American industriousness – a traditional willingness to work hard, be smart, and accept the consequences of refusing to do so. He needs to lead under the most difficult circumstances. But that’s what leadership is about.

Fortunately for Bernanke, central bank independence is widely believed to be the only credible approach to economic governance. That independence gives Bernanke the right to stand at odds with political leaders if needed. Given the recent White House announcements – more morphine, further indebtedness for the world’s most indebted country – there’s no stomach for a real program of rehabilitation in the Bush Administration. Bernanke will have to lead without political support, a very difficult task indeed. Our greatest and most memorable leaders are those who lead through difficult times. The same is true of failures of leadership. Appeasement, or rehabilitation. Chamberlain, or Churchill. Thus far, Chamberlain.

Good architectural layering, and Bzr 1.1

Wednesday, January 9th, 2008

I completely failed to blog the release of Bzr 1.0 last year, but it was an excellent milestone and by all accounts, very well received. Congratulations to the Bazaar community on their momentum! I believe the freeze for 1.1 is in place now, so it’s great to see that they are going to continue to deliver regular releases.

I’ve observed a surge in the number of contributors to Bazaar recently, which has resulted in a lot of small but useful branches with bugfixes for various corner cases, operating systems and integrations with other tools. One of the most interesting projects that’s getting more attention is BzrEclipse, integrating Bzr into the Eclipse IDE in a natural fashion.

I think open source projects go through an initial phase where they work best with a tight group of core contributors who get the basics laid out to the point where the tool or application is usable by a wider audience. Then, they need to make the transition from being “closely held” to being open to drive-by contributions from folks who just want to fix a small bug or add a small feature. That’s quite a difficult transition, because the social skills required to run the project are quite different in those two modes. It’s not only about having good social skills, but also about having good processes that support the flow of new, small contributions from new, unproven contributors into the code-base.

It seems that one of the key “best practices” that has emerged is the idea of plug-in architectures, that allow new developers to contribute an extension, plug-in or add-on to the codebase without having to learn too much about the guts of the project, or participate in too many heavyweight processes. I would generalize that and say that good design, with clearly thought-through and pragmatic layers, allows new contributors to make useful contributions to the code-base quickly because it presents useful abstractions early on.

Firefox really benefited from their decision to support cross-platform add-ons. I’m delighted to hear that OpenOffice is headed in the same direction.

Bazaar is very nicely architected. Not only is there a well-defined plug-in system, but there’s also a very useful and pragmatic layered architecture which keeps the various bits of complexity contained for those who really need to know. I’ve observed how different teams of contributors, or individuals, have introduced whole new on-disk formats with new performance characteristics, completely orthogonally to the rest of the code. So if you are interested in the performance of status and diff, you can delve into working tree state code without having to worry about long-term revision storage or branch history mappings.
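
As a rough illustration of how low that barrier is, here is a minimal sketch of a Bazaar plug-in. It is written from memory of the 1.x bzrlib API, so treat the module paths and the hypothetical command name as assumptions rather than a definitive recipe; the point is simply that a new contributor can add a command without touching the core at all.

    # hello_plugin.py -- drop into ~/.bazaar/plugins/ (a sketch, assuming
    # the bzrlib 1.x plug-in conventions; details may need adjusting)
    from bzrlib.commands import Command, register_command

    class cmd_hello(Command):
        """Print a friendly greeting, to show how plug-in commands register."""

        def run(self):
            # self.outf is the command's output stream in bzrlib
            self.outf.write('Hello from a plug-in!\n')

    register_command(cmd_hello)

If those assumptions hold, “bzr hello” would then appear alongside the built-in commands, which is exactly the kind of drive-by contribution the plug-in layer is meant to invite.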

Layering can also cause problems, when the layers are designed too early and don’t reflect the pragmatic reality of the code. For example, witness the “exchange of views” between the ZFS folks and the Linux filesystem community, who have very different opinions on the importance and benefits of layering.

Anyhow, kudos to the Bazaar guys for the imminent 1.1, and for adopting an architecture that makes it easier for contributors to get going.

It’s too early to say for certain, but there are very encouraging signs that the world’s standards bodies will vote in favour of a single unified ISO (International Organization for Standardization) document format standard. There is already one document format standard – ODF – and currently ISO is considering a proposal to bless an alternative, Microsoft’s OpenXML, as another standard. In the latest developments, standards committees in South Africa and the United States have both said they will vote against a second standard and thereby issue a strong call for unity and a sensible, open, common standard for business documents in word processing, spreadsheets and presentations.

It’s very important that we build on those brave decisions and call on all of our national standards committees to support the idea of a single common standard for these critical documents.

The way the ISO works is interesting. There are about 150 member countries who can vote on any particular proposal. Usually, about 40 countries actually vote. In order to pass, a proposal needs to get a 75% “yes” vote. Countries can vote yes, no, or “abstain”. So normally, 10 “no” or “abstain” votes would be sufficient to send the proposal back for further consideration. In this case, however, Microsoft has been working very hard, and spending a lot of money, to convince many countries that don’t normally vote to support their proposed format.

So there is something concrete you can do, right now, today, this week! Find out which body in your country is responsible for your national representation on ISO. In SA it is the South African Bureau of Standards (SABS), and in the US I believe it is ANSI. Your country will likely have such a body. There is a list of some of them here, but it may not be complete, so don’t stop if your country isn’t listed there!

Call them, or email them, and ask them which committee will be voting on the OpenXML proposal. Then prepare a comment for that committee. It is really important that your comment be professional and courteous. You are dealing with strong technical people who have a huge responsibility and take it seriously – they will not take you seriously if your comment is not well thought out, politely phrased and logically sound.

If you have a strong technical opinion, focus on a single primary technical issue that you think is a good reason to decline the proposal from Microsoft. There are some good arguments outlined here. Don’t just resend an existing submission – find a particular technical point which means a lot to you and express that carefully and succinctly for yourself. It can be brief – a single paragraph, or longer. There are some guidelines for “talking to standards bodies” here.

Here are the points I find particularly compelling, myself:

  1. This is not a vote “for or against Microsoft”.
    In fact, this is a vote for or against a unified standard. Microsoft is a member of the body that defines ODF (the existing ISO standard) but is hoping to avoid participating in that, in favor of getting their own work blessed as a standard. A vote of “no OpenXML” is a vote against multiple incompatible standards, and hence a vote in favour of unity. If the ISO vote is “no”, then there is every reason to expect that Microsoft will adopt ODF, and help to make that a better standard for everybody including themselves. If we send a firm message to Microsoft that the world wants a single, unified standard, and that ODF is the appropriate place for that standard to be set, then we will get a unified global standard that includes Microsoft. The reason this point is important is that many government officials recognise the essential position Microsoft holds in their operations and countries, and they will be afraid to vote in a way that could cost their country money. If they perceive that a vote “no” might make it impossible for them to work with Microsoft, they will vote yes. Of course Microsoft is telling them this, but the reality is that Microsoft will embrace a unified standard if the global standards organisation clearly says that’s a requirement.
  2. Open, consensus based document standards really WORK WELL – consider HTML
    We already have an extraordinary success in defining a document format openly, in the form of HTML. The W3 Consortium, which includes Microsoft and many other companies, defines HTML and CSS. While Microsoft initially resisted the idea, preferring to push Internet Explorer’s proprietary web extensions, it was ultimately forced to participate in W3C discussions. The result is a wonderfully rich document format, with many different implementations. Much of the richness of the web today comes directly from the fact that there is an open standard for web documents and web interactions. Look at a classy web page, and then look at a classy Word document, and ask yourself which is the more impressive format! Clearly, Word would be better with an open standard, not one defined by a single company.
  3. A SINGLE standard with many implementations is MUCH more valuable than multiple standards
    Imagine what would happen if there were multiple incompatible web document standards? You couldn’t go to any web site and just expect it to work, you would need to know which format they used. The fact that there is one web document standard – HTML – is the key driver of the efficiency of the web as a repository of information. The web is a clear example of why ODF is the preferred structure for a public standard. ODF, the existing standard, is defined openly by multiple companies, and Microsoft can participate there along with everyone else. They know they can – and they participate in other standards discussions in the same organisation. Microsoft will say that “multiple standards give customers choice”. But we know that it is far more valuable to have a single standard which evolves efficiently and quickly, like HTML. The network effects of document exchange mean that one standard will in any event emerge as dominant, and it is important to governments, businesses and consumers that it be a standard which ITSELF offers great choice in implementation. People don’t buy a standard, and they don’t use a standard document, they use a software or hardware tool. If the “standard” only has one set of tools from one vendor, then that “choice of standards” has effectively resulted in zero choice of provider for customers. Consider the richness of the GSM cellular world, with hundreds of solution providers following a single global standard, compared to the inefficiency of countries which allowed proprietary networks to be installed on public frequencies. ODF is already implemented by many different companies. This means that there are many different tools which people can choose to do different things with their ODF documents. Some of those tools are optimised for the web, others for storage, others for data analysis, and others for editing. In the case of OpenXML, there is not even one single complete implementation – because even Microsoft Office12 does not exactly implement OpenXML. There is also no other company with any tool to edit or manage OpenXML documents. Microsoft is trying to make it look like there is broad participation, but dig beneath the surface and it is all funded by one company. The ODF standard is a much healthier place to safeguard all of our data.

I’d like to thank the team at TSF for the work they put into briefing the South African standards committee. I hope that each of you – folks who have read this far – will pick up the phone and contact your own standards body to help them make a smart decision.

The USA, South Africa, China, and other countries will be voting “no”. Let’s not allow heavy lobbying to influence what should be a calm, rational, sensible and ultimately technical discussion. Standards are important, and best defined in transparent and open forums. Pick up the phone!

Continuing my discussion of version control tools, I’ll focus today on the importance of the merge capability of the tool.

The “time to branch” is far less important than the “time to merge”. Why? Because merging is the act of collaboration – it’s when one developer sits down to integrate someone else’s work with their own. We must keep the cost of merging as low as possible if we want to encourage people to collaborate as much as possible. If a merge is awkward, or slow, or results in lots of conflicts, or breaks when people have renamed files and directories, then I’m likely to avoid merging early and merging often. And that just makes it even harder to merge later.

The beauty of distributed version control comes in the form of spontaneous team formation, as people with a common interest in a bug or feature start to work on it, bouncing that work between them by publishing branches and merging from one another. These teams form more easily when the cost of branching and merging is lowered, and taking this to the extreme suggests that it’s very worthwhile investing in the merge experience for developers.

In CVS and SVN, the “time to branch” is low, but merging itself is almost always a painful process. Worse, merging a second time from the same branch is even more painful, so the incentives for developers to merge regularly are exactly the wrong way around. For merge to be a smooth experience, the tools need to keep track of what has been merged before, so that you never end up redoing work that you’ve already done. Bzr and Git both handle this pretty well, remembering which revisions in someone else’s branch you have already integrated into yours, and making sure that you don’t need to bother to do it again.
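
To make the merge-tracking point concrete, here is a toy sketch in Python. It is not real bzr or git code, just invented revision ids and a small ancestry graph, but it shows the bookkeeping that makes a second merge cheap: once your branch’s ancestry already contains someone’s revisions, only the genuinely new ones need to be considered.

    # Toy model of merge tracking (illustration only, not real VCS internals).
    # Each revision lists its parents; a branch is identified by its tip.
    GRAPH = {
        'a1': [], 'a2': ['a1'], 'a3': ['a2'],       # your branch
        'b1': ['a1'], 'b2': ['b1'], 'b3': ['b2'],   # their branch
        'm1': ['a3', 'b2'],                         # your earlier merge, up to b2
    }

    def ancestry(tip):
        """All revisions reachable from tip, including tip itself."""
        seen, todo = set(), [tip]
        while todo:
            rev = todo.pop()
            if rev not in seen:
                seen.add(rev)
                todo.extend(GRAPH[rev])
        return seen

    def revisions_to_merge(my_tip, their_tip):
        """Only the revisions you have not already integrated."""
        return ancestry(their_tip) - ancestry(my_tip)

    print(revisions_to_merge('m1', 'b3'))   # {'b3'}: b1 and b2 are not redone

Tools that throw this ancestry information away have to re-merge b1 and b2 every time, which is exactly the repeated, conflict-prone work described above.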

When we encourage people to “do their own thing” with version control, we must also match that independence with tools to facilitate collaboration.

Now, what makes for a great merge experience?

Here are a couple of points:

  1. Speed of the merge, or time it will take to figure out what’s changed, and do a sane job of applying those changes to your working tree. Git is the undisputed champion of merge speed. Anything less than a minute is fine.
  2. Handling of renames, especially renamed directories. If you merge from someone who has modified a file, and you have renamed (and possibly modified) the same file, then you want their change to be applied to the file in your working tree under the name YOU have given it (there is a short sketch of this idea just after the list). It is particularly important, I think, to handle directory renames as a first class operation, because this gives you complete freedom to reshape the tree without worrying about messing up other people’s merges. Bzr does this perfectly – even if you have subsequently created a file with the same name that the modified file USED to have, it will correctly apply the change to the file you moved to the new name.
  3. Quality of merge algorithm. This is the hardest thing to “benchmark” because it can be hugely subjective. Some merge algorithms take advantage of annotation data, for example, to minimise the number of conflicts generated during a merge. In my experience Bzr is fantastic in merge quality, with very few cases of “stupid” conflicts even when branches are being bounced around between ad-hoc squads of developers. I don’t have enough experience of merging with tools like Darcs, which have unusual characteristics and potentially higher-quality merges (albeit with lots of opportunity for unexpected outcomes).
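
Here is the sketch promised in point 2, a minimal illustration of rename-aware merging. The file-ids and structures are invented for the example; the idea (which Bazaar applies in a far more sophisticated form) is that changes are keyed to a stable file identity rather than to a path, so a modification made under the old name lands wherever you have moved the file.

    # Toy illustration of merging by file-id rather than by path.
    my_tree = {
        'id-001': 'docs/README.txt',   # you renamed README -> docs/README.txt
        'id-002': 'setup.py',
    }

    # Their branch modified the file while it was still called README.
    their_change = {'file_id': 'id-001', 'new_text': 'Updated introduction...'}

    def apply_change(tree, change, contents):
        """Apply the change to whatever path the file has in YOUR tree."""
        path = tree[change['file_id']]
        contents[path] = change['new_text']
        return path

    contents = {}
    print(apply_change(my_tree, their_change, contents))   # docs/README.txt

A purely path-based tool would instead look for README, find nothing (or worse, find an unrelated new file by that name), and either drop the change or produce a spurious conflict.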

I like the fact that the Bazaar developers made merging a first-class operation from the start. Rather than saying “we have a few shell scripts that will help you with that”, they focused on techniques to reduce the time that developers spend fixing up merges. A clean merge that takes 10 seconds longer to do saves me a huge amount of time compared to a dirty (conflict-ridden, or rename-busted) merge that happened a few seconds faster.

Linus is also a very strong advocate of merge quality. For projects which really want as much participation as possible, merge quality is a key part of the developer experience. You want ANYBODY to feel empowered to publish their contribution, and you want ANYBODY to be willing to pull those changes into their branches with confidence that (a) nothing will break and (b) they can revert the merge quickly, with a single command.

No negotiations with Microsoft in progress

Saturday, June 16th, 2007

There’s a rumour circulating that Ubuntu is in discussions with Microsoft aimed at an agreement along the lines they have concluded recently with Linspire, Xandros, Novell etc. Unfortunately, some speculation in the media (thoroughly and elegantly debunked in the blogosphere but not before the damage was done) posited that “Ubuntu might be next”.

For the record, let me state my position, and I think this is also roughly the position of Canonical and the Ubuntu Community Council though I haven’t caucused with the CC on this specifically.

We have declined to discuss any agreement with Microsoft under the threat of unspecified patent infringements.

Allegations of “infringement of unspecified patents” carry no weight whatsoever. We don’t think they have any legal merit, and they are no incentive for us to work with Microsoft on any of the wonderful things we could do together. A promise by Microsoft not to sue for infringement of unspecified patents has no value at all and is not worth paying for. It does not protect users from the real risk of a patent suit from a pure-IP-holder (Microsoft itself is regularly found to violate such patents and regularly settles such suits). People who pay protection money for that promise are likely buying nothing more than a false sense of security.

I welcome Microsoft’s stated commitment to interoperability between Linux and the Windows world – and believe Ubuntu will benefit fully from any investment made in that regard by Microsoft and its new partners, as that code will no doubt be free software and will no doubt be included in Ubuntu.

With regard to open standards on document formats, I have no confidence in Microsoft’s OpenXML specification to deliver a vibrant, competitive and healthy market of multiple implementations. I don’t believe that the specifications are good enough, nor that Microsoft will hold itself to the specification when it does not suit the company to do so. There is currently one implementation of the specification, and as far as I’m aware, Microsoft hasn’t even certified that their own Office12 completely implements OpenXML, or that OpenXML completely defines Office12’s behavior. The Open Document Format (ODF) specification is a much better, much cleaner and widely implemented specification that is already a global standard. I would invite Microsoft to participate in the OASIS Open Document Format working group, and to ensure that the existing import and export filters for Office12 to Open Document Format are improved and available as a standard option. Microsoft is already, I think, a member of OASIS. This would be a far more constructive open standard approach than OpenXML, which is merely a vague codification of current practice by one vendor.

In the past, we have surprised people with announcements of collaboration with companies like Sun, that have at one time or another been hostile to free software. I do believe that companies change their position, as they get new leadership and new management. And we should engage with companies that are committed to the values we hold dear, and disengage if they change their position again. While Sun has yet to fully deliver on its commitments to free software licensing for Java, I believe that commitment is still in place at the top.

I have no objections to working with Microsoft in ways that further the cause of free software, and I don’t rule out any collaboration with them, in the event that they adopt a position of constructive engagement with the free software community. It’s not useful to characterize any company as “intrinsically evil for all time”. But I don’t believe that the intent of the current round of agreements is supportive of free software, and in fact I don’t think it’s particularly in Microsoft’s interests to pursue this agenda either. In time, perhaps, they will come to see things that way too.

My goal is to carry free software forward as far as I can, and then to help others take the baton to carry it further. At Canonical, we believe that we can be successful and also make a huge contribution to that goal. In the Ubuntu community, we believe that the freedom in free software is what’s powerful, not the openness of the code. Our role is not to be the ideologues-in-chief of the movement; our role is to deliver the benefits of that freedom to the widest possible audience. We recognize the value in “good now to get perfect later” (today we require free apps, tomorrow free drivers too, and someday free firmware to be part of the default Ubuntu configuration), but we always act in support of the goals of the free software community as we perceive them. All the deals announced so far strike me as “trinkets in exchange for air kisses”. Mua mua. No thanks.

One of the tough choices VCS designers make is “what do we REALLY care about”. If you can eliminate some use cases, you can make the tool better for the other use cases. So, for example, the Git guys choose not to care too much about annotate. By design, annotate is slow on Git, because by letting go of that they get it to be super-fast in the use cases they care about. And that’s a very reasonable position to take.

My focus today is lossiness, and I’m making the case for starting out a project using tools which are lossless, rather than tools which discard useful information in the name of achieving performance that’s only necessary for the very largest projects.

It’s a bit like saying “shoot your pictures in RAW format, because you can always convert to JPEG and downscale resolution for Flickr, but you can’t always get your top-quality images back from a low-res JPEG”.

When you choose a starting VCS, know that you are not making your final choice of tools. Projects that started with CVS have moved to SVN and then to Bitkeeper and then to something else. Converting is often a painful process, sometimes so painful that people opt to throw away history rather than try and convert properly. We’ll see new generations of tools over the next decade, and the capability of machines and the network will change, so of course your optimal choice of tools will change accordingly.

Initially, projects do best if they choose a tool which makes it as easy as possible to migrate to another tool. Migrating is a little bit like converting from JPEG to PNG, or PNG to GIF. Or PNG to JPEG2000. You really want to be in the situation where your current format has as much of the detail as possible, so that your conversion can be as clean and as comprehensive as possible. Of course, that comes at a price, typically in performance. If you shoot in RAW, you get fewer frames on a memory stick. So you have to ask yourself “will this bite me?”. And it turns out that, for 99% of photographers, you can get SO MANY photos on a 1GB memory stick, even in RAW mode, that the slower performance is worth trading for the higher quality. The only professional photographers I know who shoot in JPEG are the guys who shoot 3,000-4,000 pictures at an event and publish them instantly to the web, with no emphasis on image quality because they are not the sort of pics anyone will blow up as a poster.

What’s the coding equivalent?

Well, you are starting a free software project. You will have somewhere between 50 and 500 files in your project initially, and it will take a while before you have more than 5,000 files. During that time, you need performance to be good enough. And you want to make sure that, if you need to migrate, you have captured as much of your history, in as much detail, as you can, so that your conversion can be as easy, and as rich and complete, as possible.

I’ve watched people try to convert CVS to SVN, and it’s a nightmare, because CVS never recorded details that SVN needs, such as which file-specific changes form a consistent changeset. It’s all interpolation, guesswork, voodoo and ultimately painful work that results often enough in people capitulating, throwing history away and just doing a fresh start in SVN. What a shame.

The Bazaar guys, I think, thought about this a lot. It’s another reason perfect rename tracking is so important. You can convert a Bazaar tree to Git trivially, whenever you want to, if you need to scale past 10,000 files up to 100,000 files with blazing performance. In the process, you’ll lose the renaming information. But going the other way is not so simple, because Git never recorded that information in the first place. You need interpolation and an unfortunate goat under a full moon, and even then there’s no guarantee. You chose a lossy tool, you lost the renaming data as you used it, and you can’t get that data back.

Now, performance is important, but “good enough performance” is the threshold we should aim for in order to get as much out of other use cases as possible. If my tool is lossless, and still gives me a “status” in less than a heartbeat, which Bazaar does up to about 7,000 files, then I have perfectly adequate performance and perfectly lossless recording. If my project grows to the point where Bazaar’s performance is not good enough, I can convert to any of the other systems and lose ONLY the data that I choose to lose in my selection of new tool. And perhaps, by then, Git will have gained perfect renaming support, so I can get perfect renaming AND blazing performance. But I made the smart choice by starting in RAW mode.
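
As a toy illustration of what “lossy” means here, consider the difference between a history that records a rename explicitly and one that only stores snapshots. The data below is invented and no real converter works this way, but it shows which direction of conversion is trivial and which requires guesswork.

    # Lossless record: the rename of util.py to helpers.py is stated explicitly.
    lossless_history = [
        {'op': 'add',    'file_id': 'id-001', 'path': 'util.py'},
        {'op': 'rename', 'file_id': 'id-001', 'path': 'helpers.py'},
    ]

    def flatten(history):
        """Convert to snapshot-only form: easy, but the rename edge is discarded."""
        paths, snapshots = {}, []
        for event in history:
            paths[event['file_id']] = event['path']
            snapshots.append(sorted(paths.values()))
        return snapshots

    print(flatten(lossless_history))
    # [['util.py'], ['helpers.py']] -- was that a rename, or a delete plus an add?

Going from the lossless form to snapshots is a one-way street: recovering the rename afterwards means content-similarity guesswork, which is exactly the interpolation (and the unfortunate goat) described above.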

Now, there are projects out there for which the optimisations and tradeoffs made for Git are necessary. If you want to see what those tradeoffs are, watch Linus describe Git here. But the projects which immediately need to make those tradeoffs are quite unusual – they are not multiplatform, they need extraordinary performance from the beginning, and they are willing to lose renaming data and have slow annotate in order to achieve that. X, OpenSolaris, the Linux kernel… those are hardly representative of the typical free software project.

Those projects, though, are also the folks who’ve spoken loudest about version control, because they have the scale and resources to do detailed assessments. But we should recognise that their findings are filtered through the unique lenses of their own constraints, and we should not let that perspective colour the decision for a project that does not operate under those constraints.

What’s good enough performance? Well, I like to think in terms of “heartbeat time”. If the major operations which I have to do regularly (several times in an hour) take less than a heartbeat, then I don’t ever feel like I’m waiting. Things which happen 3-5 times in a day can take a bit longer, up to a minute, and those fit with the regular work breaks that I would take anyhow to clear my head for the next phase of work, or rest my aching fingers.

In summary – I think new and smaller (<10,000 files) projects should care more about correctness, completeness and experience in their choice of VCS tools. Performance is important, but it is perfectly adequate if it takes less than a heartbeat to do the things you do regularly while working on your code. Until you really have to lose them, don’t discard the ability to work across multiple platforms (lots of free software projects have more users on Windows than on Linux), don’t discard perfect renames, and don’t opt for “lossy over lossless” just because another project – which might be awesomely cool but has totally different requirements from yours – did so.

Further thoughts on version control

Monday, June 11th, 2007

I’ve had quite a lot of positive email feedback on my posting on renaming as the killer app of distributed version control. So I thought it would be interesting to delve into this subject in more detail. I’ll blog over the next couple of months, starting tomorrow, about the things I think we need from this set of tools – whether they be Git, Darcs, Mercurial, Monotone or Bazaar.

First, to clear something up, Ubuntu selected Bazaar based on our assessment of what’s needed to build a great VCS for the free software community. Because of our work with Ubuntu, we know that what is important is the full spectrum of projects, not just the kernel, or X, or OpenOffice. It’s small and large projects, Linux and Windows projects, C and Python projects, Perl and Scheme projects… the best tools for us are the ones that work well across a broad range of projects, even if those are not the ones that are optimal for a particular project (in the way that Git works brilliantly for the kernel, because its optimisations suit that use case well – it’s a single-platform, single-workflow, super-optimised approach).

I’ve reviewed our choice of Bazaar in Ubuntu a couple of times, when projects like OpenSolaris and X made other choices, and in each case been satisfied that it’s still the best tool for our needs. But we’re not tied to it; we could move to a different one. Canonical has no commercial interest in Bazaar (it’s ALL GPL software) and no cunning secret plans to launch a proprietary VCS based on it. We integrated Bazaar into Launchpad because Bazaar was our preferred VCS, but Bazaar could just as well be integrated into SourceForge and Collab since it’s free code.

So, what I’m articulating here is a set of values and principles – the things we find important and the rationale for our decisions – rather than a ra-ra for a particular tool. Bazaar itself doesn’t meet all of my requirements, but right now it’s the closest tool for the full spectrum of work we do.

Tomorrow, I’ll start with some commentary on why “lossless” tools are a better starting point than lossy tools, for projects that have that luxury.

Fantastic science

Friday, June 1st, 2007

This sort of discovery, I guess, is why I wanted to be a physicist. Alas, after two days at the amazing CERN when I was 18, I was pretty sure I wasn’t clever enough to do that, and so pursued other interests into IT and business and space. But I still get a thrill out of living vicariously in a life that involves some sort of particle accelerator. What an incredible rush for the scientists involved.

Also very glad to have exited the “prime number products are hard to factor so it helps if you generated the beasties in the first place” business.

Microsoft is not the real threat

Monday, May 21st, 2007

Much has been written about Microsoft’s allegation of patent infringements in Linux (by which I’m sure they mean GNU/Linux ;-)). I don’t think Microsoft is the real threat, and in fact, I think Microsoft and the Linux community will actually end up fighting on the same side of this issue.

I’m in favour of patents in general, but not software or business method patents. I’ll blog separately some day about why that’s the case, but for the moment I’ll just state for the record my view that software patents hinder, rather than help, innovation in the software industry.

And I’m pretty certain that, within a few years, Microsoft themselves will be strong advocates against software patents. Why? Because Microsoft is irrevocably committed to shipping new software every year, and software patents represent landmines in their roadmap which they are going to step on, like it or not, with increasing regularity. They can’t sit on the sidelines of the software game – they actually have to ship new products. And every time they do that, they risk stepping on a patent landmine.

They are a perfect target – they have deep pockets, and they have no option but to negotiate a settlement, or go to court, when confronted with a patent suit.

Microsoft already spends a huge amount of money on patent settlements (far, far more than they could hope to realise through patent licensing of their own portfolio). That number will creep upwards until it’s abundantly clear to them that they would be better off if software patents were history.

In short, Microsoft will lose a patent trench war if they start one, and I’m sure that cooler heads in Redmond know that.

But let’s step back from the coal-face for a second. I have high regard for Microsoft. They produce some amazing software, and they made software much cheaper than it ever was before they were around. Many people at Microsoft are motivated by a similar ideal to one we have in Ubuntu: to empower people for the digital era. Of course, we differ widely on many aspects of the implementation of that ideal, but my point is that Microsoft is actually committed to the same game that we free software people are committed to: building things which people use every day.

So, Microsoft is not the real patent threat to Linux. The real threat to Linux is the same as the real threat to Microsoft, and that is a patent suit from a person or company that is NOT actually building software, but has filed patents on ideas that the GNU project and Microsoft are equally likely to be implementing.

Yes, Nathan, I’m looking at you!

As they say in Hollywood, where there’s a hit there’s a writ. And Linux is a hit. We should expect a patent lawsuit against Linux, some time in the next decade.

There are three legs to IP law: copyright, trademark and patents. I expect a definitive suit associated with each of them. SCO stepped up on the copyright front, and that’s nearly dealt with now. A trademark-based suit is harder to envisage, because Linus and others did the smart thing and established clear ownership of the “Linux” trademark a while ago. The best-practice trademark framework for free software is still evolving, and there will probably be a suit or two, but none that could threaten the continued development of free software. And the third leg is patent law. I’m certain someone will sue somebody else about Linux on patent grounds, but it’s less likely to be Microsoft (starting a trench war) and more likely to be a litigant who only holds IP and doesn’t actually get involved in the business of software.

It will be a small company, possibly just a holding company, that has a single patent or small portfolio, and goes after people selling Linux-based devices.

Now, the wrong response to this problem is to label pure IP holders as “patent trolls”. While I dislike software patents, I deeply dislike the characterisation of pure IP holders as “patent trolls”. They are only following the rules laid out in law, and making the most of a bad system; they are not intrinsically bad themselves. Yes, Nathan, all is forgiven ;-). One of the high ideals of the patent system is to provide a way for eccentric genius inventors to have brilliant insights in industries where they don’t have any market power, but where their outsider-perspective leads them to some important innovation that escaped the insiders. Ask anyone on the street if they think patents are good, and they will say, in pretty much any language, “yes, inventors should be compensated for their insights”. The so-called “trolls” are nothing more than inventors with VC funding. Good for them. The people who call them trolls are usually large, incumbent players who cross-license their patent portfolios with other incumbents to form a nice, cosy oligopoly. “Trolling” is the practice of interrupting that comfortable and predictably profitable arrangement. It’s hard to feel any sympathy for the incumbents at all when you look at it that way.

So it’s not the patent-holders who are the problem, it’s the patent system.

What to do about it?

Well, there are lots of groups that are actively engaged in education and policy discussion around patent reform. Get involved! I recently joined the FFII: Foundation for a Free Information Infrastructure, which is doing excellent work in Europe in this regard. Canonical sponsored the EUPACO II conference, which brought together folks from across the spectrum to discuss patent reform. And Canonical also recently joined the Open Invention Network, which establishes a Linux patent pool as a defensive measure against an attack from an incumbent player. You can find a way to become part of the conversation, too. Help to build better understanding about the real dynamics of software innovation and competition. We need to get consensus from the industry – including Microsoft, though it may be a bit soon for them – that software patents are a bad thing for society.

In defense of independent governance

Saturday, May 19th, 2007

My message of support for Ms Machado has touched a nerve, most strongly amongst free software advocates who live in Venezuela.

Every country will have its own culture and way of doing things, and we should pay great respect to the choices and decisions of that country. It is a tragic thing to impose one’s own cultural, religious or political views on people who see things differently. That tragedy has played out far too many times – from Apartheid, to the Holocaust, to the invasion of Iraq in recent history, to the acts of the Conquistadors centuries ago. It shows up when a new government renames the streets and cities of the old government, which renamed them from the previous government. We lose our own identity when we lose the voice of history, even if it is a history of which we are ashamed. It also shows up in the homogenization of global culture, with McDonalds and Disney turning the rich culture of the world into large swathes of barren desert. I am very sensitive to the beauty of the cultures that I’ve been privileged to experience in depth – South Africa, Russia, England, America. And I find it sad when one culture arrogantly suppresses another. I believe in letting people make their own choices. The future belongs to those who embrace global thinking without losing their identity and their culture.

At its largest, grandest level, “making choices” is what democracy is all about. However, sometimes the illusion of democracy is used to give legitimacy to choices that were not, at all, democratic.

In Zimbabwe, for example, we have a government that is in power “democratically” because of the systematic culture of fear that was created every time people expressed an interest in making a different choice. I cannot therefore pay much respect to the idea that the government of Zimbabwe is a true reflection of the cultural choices of Zimbabweans.

In such cases, we are obliged to question the decisions made by governments who claim to hold power by democratic mandate, when in fact they hold it by brute force. They may make some good claims and have some noble ideals, but the foundation of their authority is rotten, and it’s highly unlikely that much good will come of it for the long term.

I’m not going to comment directly on the policies of Mr Chavez. Frankly, I’m not qualified to speak on the details of his administration. But I will say that my experience of countries and governance, across continents and decades, has taught me the value of certain key principles:

First, that human nature is unchanging across the world and across time. This, as they say, is why history rhymes with itself. We make the same mistakes, we inspire ourselves to fix them, rinse and repeat. It’s human nature that makes absolute power corrupt absolutely. And it’s human nature to seek additional power. It’s rare to find someone who will create checks and balances on themselves. This is most eloquently described in the early writings of the American constitutional authors, who sought to “pit ambition against ambition”, and create checks and balances in society, so that neither the authorities, nor the judges, nor the media, could dominate the decisions we make for ourselves.

Second, that the presumption of innocence until the proof of guilt is a vital choice in the maintenance of a free society. In a world where even good countries can elect bad governments, we cannot let the unchallenged word of a government, any government, be sufficient to silence and stifle the lives of their citizens. I find it equally disturbing that American citizens can be locked up without confidential access to attorneys, and that Zimbabwean opposition members can be arrested and held without charge for long periods. I also find it equally disturbing that residents of the United Kingdom can find themselves in Guantanamo Bay, on what is clearly flimsy or false evidence, without the UK fighting for their release or impartial trial. I am neither for Mr Bush, nor Mr Mugabe, nor Mr Blair; I am simply for the presumption of innocence until an impartial trial finds one guilty.

Third, that freedom of speech is essential for a healthy society. This is a freedom which we cannot take for granted. There is constantly a desire on the part of those in power to reduce the volume of criticism they must face. We have to constantly remind ourselves that those in authority have chosen to play a public role, and they must accept a level of public accountability and criticism, even from people who may have a personal agenda. Of course, not all speech is truth, and conspiracies often arise which seek to use the media to spread misinformation. But we are all better off when multiple viewpoints can be expressed. I’m no believer in media infallibility – we’ve seen very bad journalism from the biggest media networks in the world, for example when they get “embedded” in a controlled fashion into armies at war. But I’m a big believer in allowing calm voices to be heard, globally.

These principles are not written in the laws of physics – we create them in society, and we must defend them. They cannot be taken for granted, even in countries like the USA, which have them written into their constitutional DNA. Since they are a choice that society makes, and since society is reborn in each generation, they are a choice that society must make, and remake, constantly. Sometimes, we fail. Usually, we fail for fear when we are confronted by a perceived threat to security, or for greed when we are presented with the opportunity to benefit ourselves at great cost to others. And it is at times like that, when there is great stress, noise, fear, anger and shouting, that it is most important for calm voices to be heard.

At times like these, we are our own worst enemy. We hear what we want to hear. It is painful to hear that one might be wrong, that one’s hero might have flaws, that one’s leaders might not be all that we wished them to be. The awful truth of the media is that it pays to tell people what they want to hear, much more than it pays to tell people what they need to hear, and so society can whip itself into a frenzy of mistaken greed or fear or anger, and make poor decisions.

It takes great courage to speak out, when these basic principles are at risk. In a free society, there is nevertheless pressure to conform, to stay with the herd. In a society that is not free, one speaks out at some considerable personal cost to life and liberty. I salute those who do.