Archive for the 'thoughts' Category

Continuing my discussion of version control tools, I’ll focus today on the importance of the merge capability of the tool.

The “time to branch” is far less important than the “time to merge”. Why? Because merging is the act of collaboration – it’s when one developer sits down to integrate someone else’s work with their own. We must keep the cost of merging as low as possible if we want to encourage people to collaborate as much as possible. If a merge is awkward, or slow, or results in lots of conflicts, or breaks when people have renamed files and directories, then I’m likely to avoid merging early and merging often. And that just makes it even harder to merge later.

The beauty of distributed version control comes in the form of spontaneous team formation, as people with a common interest in a bug or feature start to work on it, bouncing that work between them by publishing branches and merging from one another. These teams form more easily when the cost of branching and merging is lowered, and taking this to the extreme suggests that it’s very worthwhile investing in the merge experience for developers.

In CVS and SVN, the “time to branch” is low, but merging itself is almost always a painful process. Worse, merging a second time from the same branch is even more painful, so the incentives for developers to merge regularly are exactly the wrong way around. For merge to be a smooth experience, the tools need to keep track of what has been merged before, so that you never end up re-resolving conflicts you’ve already solved. Bzr and Git both handle this pretty well, remembering which revisions in someone else’s branch you have already integrated into yours, and making sure that you don’t have to do that work again.
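Here’s a minimal sketch of what that merge tracking looks like in practice, using Git (Bazaar behaves the same way with bzr merge); the remote and branch names are hypothetical:

    # First merge: integrates everything on alice/feature not yet in our branch.
    git fetch alice
    git merge alice/feature

    # Later, after Alice has committed more work: the tool consults the merge
    # history and applies only the revisions we haven't seen before, so the
    # conflicts resolved the first time around are not replayed.
    git fetch alice
    git merge alice/feature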

When we encourage people to “do their own thing” with version control, we must also match that independence with tools to facilitate collaboration.

Now, what makes for a great merge experience?

Here are a couple of points:

  1. Speed of the merge: the time it takes to figure out what’s changed and do a sane job of applying those changes to your working tree. Git is the undisputed champion of merge speed. Anything less than a minute is fine.
  2. Handling of renames, especially renamed directories. If you merge from someone who has modified a file, and you have renamed (and possibly modified) the same file, then you want their change to be applied to the file in your working tree under the name YOU have given it. It is particularly important, I think, to handle directory renames as a first class operation, because this gives you complete freedom to reshape the tree without worrying about messing up other people’s merges. Bzr does this perfectly – even if you have subsequently created a file with the same name that the modified file USED to have, it will correctly apply the change to the file you moved to the new name (there’s a sketch of this scenario just after this list).
  3. Quality of merge algorithm. This is the hardest thing to “benchmark” because it can be hugely subjective. Some merge algorithms take advantage of annotation data, for example, to minimise the number of conflicts generated during a merge. This is a highly subjective thing but in my experience Bzr is fantastic in merge quality, with very few cases of “stupid” conflicts even when branches are being bounced around between ad-hoc squads of developers. I don’t have enough experience of merging with tools like Darcs which have unusual characteristics and potentially higher-quality merges (albeit with lots of opportunity for unexpected outcomes).
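To make the rename scenario in point 2 concrete, here is a minimal sketch using Bazaar; the file names are hypothetical:

    # In my branch: rename the file (and perhaps modify it too).
    bzr mv docs/README docs/GUIDE
    bzr commit -m "Rename README to GUIDE"

    # A colleague, branched from the old tree, edits docs/README in her branch.
    # Merging applies her change to docs/GUIDE -- the name I chose -- even if
    # I have since created a brand-new file called docs/README.
    bzr merge ../colleague-branch
    bzr diff docs/GUIDE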

I like the fact that the Bazaar developers made merging a first-class operation from the start: rather than saying “we have a few shell scripts that will help you with that”, they focused on techniques to reduce the time that developers spend fixing up merges. A clean merge that takes 10 seconds longer saves me a huge amount of time compared to a dirty (conflict-ridden, or rename-busted) merge that happened a few seconds faster.

Linus is also a very strong advocate of merge quality. For projects which really want as much participation as possible, merge quality is a key part of the developer experience. You want ANYBODY to feel empowered to publish their contribution, and you want ANYBODY to be willing to pull those changes into their branches with confidence that (a) nothing will break and (b) they can revert the merge quickly, with a single command.
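Point (b) is meant literally in both Bazaar and Git. A hedged sketch (the revision identifier is a placeholder):

    # Bazaar: throw away an uncommitted merge, including its pending-merge record.
    bzr revert

    # Git: back out a merge that has already been committed, keeping your own
    # line of development (the first parent) as the baseline.
    git revert -m 1 <merge-commit>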

No negotiations with Microsoft in progress

Saturday, June 16th, 2007

There’s a rumour circulating that Ubuntu is in discussions with Microsoft aimed at an agreement along the lines of those recently concluded with Linspire, Xandros, Novell etc. Unfortunately, some speculation in the media (thoroughly and elegantly debunked in the blogosphere, but not before the damage was done) posited that “Ubuntu might be next”.

For the record, let me state my position, and I think this is also roughly the position of Canonical and the Ubuntu Community Council, though I haven’t caucused with the CC on this specifically.

We have declined to discuss any agreement with Microsoft under the threat of unspecified patent infringements.

Allegations of “infringement of unspecified patents” carry no weight whatsoever. We don’t think they have any legal merit, and they are no incentive for us to work with Microsoft on any of the wonderful things we could do together. A promise by Microsoft not to sue for infringement of unspecified patents has no value at all and is not worth paying for. It does not protect users from the real risk of a patent suit from a pure-IP-holder (Microsoft itself is regularly found to violate such patents and regularly settles such suits). People who pay protection money for that promise are likely living with a false sense of security.

I welcome Microsoft’s stated commitment to interoperability between Linux and the Windows world – and believe Ubuntu will benefit fully from any investment made in that regard by Microsoft and its new partners, as that code will no doubt be free software and will no doubt be included in Ubuntu.

With regard to open standards on document formats, I have no confidence in Microsoft’s OpenXML specification to deliver a vibrant, competitive and healthy market of multiple implementations. I don’t believe that the specifications are good enough, nor that Microsoft will hold itself to the specification when it does not suit the company to do so. There is currently one implementation of the specification, and as far as I’m aware, Microsoft hasn’t even certified that their own Office12 completely implements OpenXML, or that OpenXML completely defines Office12’s behavior. The Open Document Format (ODF) specification is a much better, much cleaner and widely implemented specification that is already a global standard. I would invite Microsoft to participate in the OASIS Open Document Format working group, and to ensure that the existing import and export filters for Office12 to Open Document Format are improved and available as a standard option. Microsoft is already, I think, a member of OASIS. This would be a far more constructive open standard approach than OpenXML, which is merely a vague codification of current practice by one vendor.

In the past, we have surprised people with announcements of collaboration with companies like Sun, which have at one time or another been hostile to free software. I do believe that companies change their position as they get new leadership and new management. And we should engage with companies that are committed to the values we hold dear, and disengage if they change their position again. While Sun has yet to fully deliver on its commitments to free software licensing for Java, I believe that commitment is still in place at the top.

I have no objections to working with Microsoft in ways that further the cause of free software, and I don’t rule out any collaboration with them, in the event that they adopt a position of constructive engagement with the free software community. It’s not useful to characterize any company as “intrinsically evil for all time”. But I don’t believe that the intent of the current round of agreements is supportive of free software, and in fact I don’t think it’s particularly in Microsoft’s interests to pursue this agenda either. In time, perhaps, they will come to see things that way too.

My goal is to carry free software forward as far as I can, and then to help others take the baton to carry it further. At Canonical, we believe that we can be successful and also make a huge contribution to that goal. In the Ubuntu community, we believe that the freedom in free software is what’s powerful, not the openness of the code. Our role is not to be the ideologues-in-chief of the movement; our role is to deliver the benefits of that freedom to the widest possible audience. We recognize the value in “good now to get perfect later” (today we require free apps, tomorrow free drivers too, and someday free firmware to be part of the default Ubuntu configuration), but we always act in support of the goals of the free software community as we perceive them. All the deals announced so far strike me as “trinkets in exchange for air kisses”. Mua mua. No thanks.

One of the tough choices VCS designers make is “what do we REALLY care about”. If you can eliminate some use cases, you can make the tool better for the other use cases. So, for example, the Git guys chose not to care too much about annotate. By design, annotate is slow on Git, because by letting go of that they get it to be super-fast in the use cases they care about. And that’s a very reasonable position to take.
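You can feel that tradeoff directly – an illustrative measurement, nothing more, with a hypothetical file name:

    # Annotate has to walk a file's whole history to attribute every line; on
    # Git this is deliberately the slow path, while whole-tree operations fly.
    time git blame src/main.c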

My focus today is lossiness, and I’m making the case for starting out a project using tools which are lossless, rather than tools which discard useful information in the name of achieving performance that’s only necessary for the very largest projects.

It’s a bit like saying “shoot your pictures in RAW format, because you can always convert to JPEG and downscale resolution for Flickr, but you can’t always get your top-quality images back from a low-res JPEG”.

When you choose a starting VCS, know that you are not making your final choice of tools. Projects that started with CVS have moved to SVN, and then to Bitkeeper, and then to something else. Converting is often a painful process, sometimes so painful that people opt to throw away history rather than try to convert properly. We’ll see new generations of tools over the next decade, and the capability of machines and the network will change, so of course your optimal choice of tools will change accordingly.

Initially, projects do best if they choose a tool which makes it as easy as possible to migrate to another tool later. Migrating is a little bit like converting from JPEG to PNG, or PNG to GIF. Or PNG to JPEG2000. You really want to be in the situation where your current format has as much of the detail as possible, so that your conversion can be as clean and as comprehensive as possible. Of course, that comes at a price, typically in performance. If you shoot in RAW, you get fewer frames on a memory stick. So you have to ask yourself “will this bite me?”. And it turns out that, for 99% of photographers, you can get SO MANY photos on a 1GB memory stick, even in RAW mode, that the slower performance is worth trading for the higher quality. The only professional photographers I know who shoot in JPEG are the guys who shoot 3-4,000 pictures at an event and publish them instantly to the web, with no emphasis on image quality because they are not the sort of pics anyone will blow up as a poster.

What’s the coding equivalent?

Well, you are starting a free software project. You will have somewhere between 50 and 500 files in your project initially, and it will take a while before you have more than 5,000 files. During that time, you need performance to be good enough. And you want to make sure that, if you need to migrate, you have captured your history in as much detail as possible, so that your conversion can be as easy, and as rich and complete, as possible.

I’ve watched people try to convert CVS to SVN, and it’s a nightmare, because CVS never recorded details that SVN needs, such as which per-file changes belong together as a single consistent changeset. It’s all interpolation, guesswork, voodoo and ultimately painful work that often enough results in people capitulating, throwing history away and just making a fresh start in SVN. What a shame.
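The standard migration tool illustrates the point: cvs2svn has to reconstruct changesets heuristically, by clustering per-file commits on author, log message and timestamp. A typical invocation, with hypothetical paths:

    # Guess changesets from CVS's independent per-file revisions and build a
    # fresh Subversion repository from the result.
    cvs2svn -s /path/to/new-svn-repo /path/to/cvs-repo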

The Bazaar guys, I think, thought about this a lot. It’s another reason the perfect rename tracking is so important. You can convert a Bazaar tree to Git trivially, whenever you want to, if you need to scale past 10,000 files up to 100,000 files with blazing performance. In the process, you’ll lose the renaming information. But going the other way is not so simple, because Git never recorded that information in the first place. You need interpolation and an unfortunate goat under a full moon, and even then there’s no guarantee. You chose a lossy tool, you lost the renaming data as you used it, you can’t get that data back.
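A hedged sketch of such a conversion, assuming the bzr fast-import plugin is available (paths are hypothetical):

    # Stream the full Bazaar history into a fresh Git repository.
    mkdir project.git && cd project.git && git init
    bzr fast-export ../project.bzr | git fast-import

    # Every revision survives the trip, but Git stores tree snapshots rather
    # than rename records, so the explicit rename history is discarded.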

Now, performance is important, but “good enough performance” is the threshold we should aim for in order to get as much out of other use cases as possible. If my tool is lossless, and still gives me a “status” in less than a heartbeat – which Bazaar does up to about 7,000 files – then I have perfectly adequate performance and perfectly lossless recording. If my project grows to the point where Bazaar’s performance is not good enough, I can convert to any of the other systems and lose ONLY the data that I choose to lose in my selection of new tool. And perhaps, by then, Git will have gained perfect renaming support, so I can get perfect renaming AND blazing performance. But I made the smart choice by starting in RAW mode.

Now, there are projects out there for which the optimisations and tradeoffs made for Git are necessary. If you want to see what those tradeoffs are, watch Linus describe Git here. But the projects which immediately need to make those tradeoffs are quite unusual – they are not multiplatform, they need extraordinary performance from the beginning, and they are willing to lose renaming data and have slow annotate in order to achieve that. X, OpenSolaris, the Linux kernel… those are hardly representative of the typical free software project.

Those projects, though, are also the folks who’ve spoken loudest about version control, because they have the scale and resources to do detailed assessments. But we should recognise that their findings are filtered through the unique lenses of their own constraints, and we shouldn’t let that perspective colour the decision for a project that does not operate under those constraints.

What’s good enough performance? Well, I like to think in terms of “heartbeat time”. If the major operations which I have to do regularly (several times in an hour) take less than a heartbeat, then I don’t ever feel like I’m waiting. Things which happen 3-5 times in a day can take a bit longer, up to a minute; those fit with the regular work breaks I would take anyhow to clear my head for the next phase of work, or rest my aching fingers.
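It’s easy to measure this against your own tree. A rough, illustrative check:

    # Time the operations you run several times an hour; on a tree of a few
    # thousand files these should finish well inside a heartbeat.
    time bzr status
    time bzr diff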
In summary – I think new and smaller (<10,000 files) projects should care more about correctness, completeness and experience in their choice of VCS tools. Performance is important, but it is perfectly adequate if the things you do regularly while working on your code take less than a heartbeat. Until you really have to lose them, don’t discard the ability to work across multiple platforms (lots of free software projects have more users on Windows than on Linux), don’t discard perfect renames, and don’t opt for lossy over lossless just because another project – which might be awesomely cool, but has totally different requirements from yours – did so.

Further thoughts on version control

Monday, June 11th, 2007

I’ve had quite a lot of positive email feedback on my posting on renaming as the killer app of distributed version control. So I thought it would be interesting to delve into this subject in more detail. I’ll blog over the next couple of months, starting tomorrow, about the things I think we need from this set of tools – whether they be Git, Darcs, Mercurial, Monotone or Bazaar.

First, to clear something up, Ubuntu selected Bazaar based on our assessment of what’s needed to build a great VCS for the free software community. Because of our work with Ubuntu, we know that what is important is the full spectrum of projects, not just the kernel, or X, or OpenOffice. It’s small and large projects, Linux and Windows projects, C and Python projects, Perl and Scheme projects… the best tools for us are the ones that work well across a broad range of projects, even if they are not optimal for a particular project (in the way that Git works brilliantly for the kernel, because its optimisations suit that use case well – it’s a single-platform, single-workflow, super-optimised approach).

I’ve reviewed our choice of Bazaar in Ubuntu a couple of times, when projects like OpenSolaris and X made other choices, and in each case been satisfied that it’s still the best tool for our needs. But we’re not tied to it, we could move to a different one. Canonical has no commercial interest in Bazaar (it’s ALL GPL software) and no cunning secret plans to launch a proprietary VCS based on it. We integrated Bazaar into Launchpad because Bazaar was our preferred VCS, but Bazaar could just as well be integrated into SourceForge and Collab since it’s free code.

So, what I’m articulating here is a set of values and principles – the things we find important and the rationale for our decisions – rather than a ra-ra for a particular tool. Bazaar itself doesn’t meet all of my requirements, but right now it’s the closest tool for the full spectrum of work we do.

Tomorrow, I’ll start with some commentary on why “lossless” tools are a better starting point than lossy tools, for projects that have that luxury.

Fantastic science

Friday, June 1st, 2007

This sort of discovery, I guess, is why I wanted to be a physicist. Alas, after two days at the amazing CERN when I was 18 I was pretty sure I wasn’t clever enough to do that, and so pursued other interests into IT and business and space. But I still get a thrill out of living vicariously in a life that involves some sort of particle accelerator. What an incredible rush for the scientists involved.

Also very glad to have exited the “prime number products are hard to factor so it helps if you generated the beasties in the first place” business.

Microsoft is not the real threat

Monday, May 21st, 2007

Much has been written about Microsoft’s allegation of patent infringements in Linux (by which I’m sure they mean GNU/Linux ;-)). I don’t think Microsoft is the real threat, and in fact, I think Microsoft and the Linux community will actually end up fighting on the same side of this issue.

I’m in favour of patents in general, but not software or business method patents. I’ll blog separately some day about why that’s the case, but for the moment I’ll just state for the record my view that software patents hinder, rather than help, innovation in the software industry.

And I’m pretty certain that, within a few years, Microsoft themselves will be strong advocates against software patents. Why? Because Microsoft is irrevocably committed to shipping new software every year, and software patents represent landmines in their roadmap which they are going to step on, like it or not, with increasing regularity. They can’t sit on the sidelines of the software game – they actually have to ship new products. And every time they do that, they risk stepping on a patent landmine.

They are a perfect target – they have deep pockets, and they have no option but to negotiate a settlement, or go to court, when confronted with a patent suit.

Microsoft already spends a huge amount of money on patent settlements (far, far more than they could hope to realise through patent licensing of their own portfolio). That number will creep upwards until it’s abundantly clear to them that they would be better off if software patents were history.

In short, Microsoft will lose a patent trench war if they start one, and I’m sure that cooler heads in Redmond know that.

But let’s step back from the coal-face for a second. I have high regard for Microsoft. They produce some amazing software, and they made software much cheaper than it ever was before they were around. Many people at Microsoft are motivated by a similar ideal to one we have in Ubuntu: to empower people for the digital era. Of course, we differ widely on many aspects of the implementation of that ideal, but my point is that Microsoft is actually committed to the same game that we free software people are committed to: building things which people use every day.

So, Microsoft is not the real patent threat to Linux. The real threat to Linux is the same as the real threat to Microsoft, and that is a patent suit from a person or company that is NOT actually building software, but has filed patents on ideas that the GNU project and Microsoft are equally likely to be implementing.

Yes, Nathan, I’m looking at you!

As they say in Hollywood, where there’s a hit there’s a writ. And Linux is a hit. We should expect a patent lawsuit against Linux, some time in the next decade.

There are three legs to IP law: copyright, trademark and patents. I expect a definitive suit associated with each of them. SCO stepped up on the copyright front, and that’s nearly dealt with now. A trademark-based suit is harder to envisage, because Linus and others did the smart thing and established clear ownership of the “Linux” trademark a while ago. The best-practice trademark framework for free software is still evolving, and there will probably be a suit or two, but none that could threaten the continued development of free software. And the third leg is patent law. I’m certain someone will sue somebody else about Linux on patent grounds, but it’s less likely to be Microsoft (starting a trench war) and more likely to be a litigant who only holds IP and doesn’t actually get involved in the business of software.

It will be a small company, possibly just a holding company, that has a single patent or small portfolio, and goes after people selling Linux-based devices.

Now, the wrong response to this problem is to label pure IP holders as “patent trolls”. While I dislike software patents, I deeply dislike the characterisation of pure IP holders as “patent trolls”. They are only following the rules laid out in law, and making the most of a bad system; they are not intrinsically bad themselves. Yes, Nathan, all is forgiven ;-). One of the high ideals of the patent system is to provide a way for eccentric genius inventors to have brilliant insights in industries where they don’t have any market power, but where their outsider-perspective leads them to some important innovation that escaped the insiders. Ask anyone on the street if they think patents are good, and they will say, in pretty much any language, “yes, inventors should be compensated for their insights”. The so-called “trolls” are nothing more than inventors with VC funding. Good for them. The people who call them trolls are usually large, incumbent players who cross-license their patent portfolios with other incumbents to form a nice, cosy oligopoly. “Trolling” is the practice of interrupting that comfortable and predictably profitable arrangement. It’s hard to feel any sympathy for the incumbents at all when you look at it that way.

So it’s not the patent-holders who are the problem, it’s the patent system.

What to do about it?

Well, there are lots of groups that are actively engaged in education and policy discussion around patent reform. Get involved! I recently joined the FFII: Foundation for a Free Information Infrastructure, which is doing excellent work in Europe in this regard. Canonical sponsored the EUPACO II conference, which brought together folks from across the spectrum to discuss patent reform. And Canonical also recently joined the Open Invention Network, which establishes a Linux patent pool as a defensive measure against an attack from an incumbent player. You can find a way to become part of the conversation, too. Help to build better understanding about the real dynamics of software innovation and competition. We need to get consensus from the industry – including Microsoft, though it may be a bit soon for them – that software patents are a bad thing for society.

In defense of independent governance

Saturday, May 19th, 2007

My message of support for Ms Machado has touched a nerve, most strongly amongst free software advocates who live in Venezuela.

Every country will have its own culture and way of doing things, and we should pay great respect to the choices and decisions of that country. It is a tragic thing to impose one’s own cultural, religious or political views on people who see things differently. That tragedy has played out far too many times – from Apartheid, to the Holocaust, to the invasion of Iraq in recent history, to the acts of the Conquistadors centuries ago. It shows up when a new government renames the streets and cities of the old government, which renamed them from the previous government. We lose our own identity when we lose the voice of history, even if it is a history of which we are ashamed. It also shows up in the homogenization of global culture, with McDonalds and Disney turning the rich culture of the world into large swathes of barren desert. I am very sensitive to the beauty of the cultures that I’ve been privileged to experience in depth – South Africa, Russia, England, America. And I find it sad when one culture arrogantly suppresses another. I believe in letting people make their own choices. The future belongs to those who embrace global thinking without losing their identity and their culture.

At its largest, grandest level, “making choices” is what democracy is all about. However, sometimes the illusion of democracy is used to give legitimacy to choices that were not, at all, democratic.

In Zimbabwe, for example, we have a government that is in power “democratically” because of the systematic culture of fear that was created every time people expressed an interest in making a different choice. I cannot therefore pay much respect to the idea that the government of Zimbabwe is a true reflection of the cultural choices of Zimbabweans.

In such cases, we are obliged to question the decisions made by governments who claim to hold power by democratic mandate, when in fact they hold it by brute force. They may make some good claims and have some noble ideals, but the foundation of their authority is rotten, and it’s highly unlikely that much good will come of it for the long term.

I’m not going to comment directly on the policies of Mr Chavez. Frankly, I’m not qualified to speak on the details of his administration. But I will say that my experience of countries and governance, across continents and decades, has taught me the value of certain key principles:

First, that human nature is unchanging across the world and across time. This, as they say, is why history rhymes with itself. We make the same mistakes, we inspire ourselves to fix them, rinse and repeat. It’s human nature that makes absolute power corrupt absolutely. And it’s human nature to seek additional power. It’s rare to find someone who will create checks and balances on themselves. This is most eloquently described in the early writings of the American constitutional authors, who sought to “pit ambition against ambition”, and create checks and balances in society, so that neither the authorities, nor the judges, nor the media, could dominate the decisions we make for ourselves.

Second, that the presumption of innocence until proof of guilt is a vital choice in the maintenance of a free society. In a world where even good countries can elect bad governments, we cannot let the unchallenged word of a government, any government, be sufficient to silence and stifle the lives of their citizens. I find it equally disturbing that American citizens can be locked up without confidential access to attorneys, and that Zimbabwean opposition members can be arrested and held without charge for long periods. I also find it equally disturbing that residents of the United Kingdom can find themselves in Guantanamo Bay, on what is clearly flimsy or false evidence, without the UK fighting for their release or impartial trial. I am neither for Mr Bush, nor Mr Mugabe, nor Mr Blair; I am simply for the presumption of innocence until an impartial trial finds one guilty.

Third, that freedom of speech is essential for a healthy society. This is a freedom which we cannot take for granted. There is constantly a desire on the part of those in power to reduce the volume of criticism they must face. We have to constantly remind ourselves that those in authority have chosen to play a public role, and they must accept a level of public accountability and criticism, even from people who may have a personal agenda. Of course, not all speech is truth, and conspiracies often arise which seek to use the media to spread misinformation. But we are all better off when multiple viewpoints can be expressed. I’m no believer in media infallibility – we’ve seen very bad journalism from the biggest media networks in the world, for example when they get “embedded” in a controlled fashion into armies at war. But I’m a big believer in allowing calm voices to be heard, globally.

These principles are not written in the laws of physics – we create them in society, and we must defend them. They cannot be taken for granted, even in countries like the USA, which have them written into their constitutional DNA. Since they are a choice that society makes, and since society is reborn in each generation, they are a choice that society must make, and remake, constantly. Sometimes, we fail. Usually, we fail for fear when we are confronted by a perceived threat to security, or for greed when we are presented with the opportunity to benefit ourselves at great cost to others. And it is at times like that, when there is great stress, noise, fear, anger and shouting, that it is most important for calm voices to be heard.

At times like these, we are our own worst enemy. We hear what we want to hear. It is painful to hear that one might be wrong, that one’s hero might have flaws, that one’s leaders might not be all that we wished them to be. The awful truth of the media is that it pays to tell people what they want to hear, much more than it pays to tell people what they need to hear, and so society can whip itself into a frenzy of mistaken greed or fear or anger, and make poor decisions.

It takes great courage to speak out, when these basic principles are at risk. In a free society, there is nevertheless pressure to conform, to stay with the herd. In a society that is not free, one speaks out at some considerable personal cost to life and liberty. I salute those who do.

Support for Maria Corina Machado

Thursday, May 17th, 2007

I read today of the renewed efforts of the Venezuelan authorities to clamp down on Sumate and their leaders, in particular Maria Corina Machado. Most recently they prevented her from attending a World Economic Forum event.

One of the privileges of working in the free software community is the interaction between different groups trying to bring together social and economic change. People like Maria are inspiring leaders, because they devote themselves to a cause much greater than any one person’s life, but in the process they sacrifice many of the comforts that many of us take for granted. It would be much easier to watch from the sidelines, emigrate, or simply ignore the situation.

I know that the Ubuntu community is very active in Venezuela and I hope they will not also some day face repression. It seems the country is on a knife-edge, facing tough decisions that will have a major impact on the quality of life of citizens there for decades.

DRM *really* doesn’t work

Tuesday, May 8th, 2007

Well, that didn’t take long. Ars Technica is reporting that further vulnerabilities in the HD DVD content protection system have been uncovered. As I noted previously, any DRM system that depends on offline key distribution will be cracked. This latest vulnerability is one step closer to the complete dismantling of the HD DVD protection system.

How long before these guys ask the question: “what do our customers want”? From experience, 5-7 years.

Trademarks redux

Wednesday, April 25th, 2007

One of the very interesting issues-du-jour is the interaction between the three “legs” of “intellectual property”. Traditionally, those three are copyrights, patents and trademarks, and they have quite different laws and contractual precedents that are associated with them.

Recently, however, I’ve observed an increase in the cross-talk between them.

Classically, “software freedom” was about the copyright license associated with the code. But patents and trademarks are now being brought into the mix. For example, the discussion around Mozilla’s trademark policy directly linked the concept of “freedom” to trademark policy as much as to the code’s copyright license. And much of the very hard debate in the GPLv3 process is about linkages between copyright license and relevant patents. And like it or not, the GPL is widely considered the reference implementation of freedom, so GPLv3’s approach will be, for many, definitive on the subject.

In the Ubuntu community we’ve recently gone through a process to agree a trademark policy. This was recently approved by the Community Council, and the final draft is here:

http://www.ubuntu.com/aboutus/trademarkpolicy

We’ve tried to strike a balance that keeps the trademarks of Ubuntu meaningful (i.e. if it says Ubuntu, it really is Ubuntu) but also recognizes the fact that Ubuntu is a shared work, in which many different participants of our community make a personal investment, and which they should have the right to share. So we’ve made explicit the idea of a remix – a reworking of Ubuntu that addresses the needs of a specific community (could be national, could be an industry like medical or educational) but preserves the key things that people would expect from Ubuntu, like hardware support and certification.

I’m sure this isn’t the last word on the subject, but I hope it’s a useful contribution to the debate, and would welcome other projects adopting similar licenses. For that reason, our trademark license is published under the Creative Commons Sharealike with Attribution license (CC-BY-SA).