Archive for the 'thoughts' Category

In defense of independent governance

Saturday, May 19th, 2007

My message of support for Ms Machado has touched a nerve, most strongly amongst free software advocates who live in Venezuela.

Every country will have its own culture and way of doing things, and we should pay great respect to the choices and decisions of that country. It is a tragic thing to impose one’s own cultural, religious or political views on people who see things differently. That tragedy has played out far too many times – from Apartheid, to the Holocaust, to the invasion of Iraq in recent history, to the acts of the Conquistadors centuries ago. It shows up when a new government renames the streets and cities of the old government, which renamed them from the previous government. We lose our own identity when we lose the voice of history, even if it is a history of which we are ashamed. It also shows up in the homogenization of global culture, with McDonald’s and Disney turning the rich culture of the world into large swathes of barren desert. I am very sensitive to the beauty of the cultures that I’ve been privileged to experience in depth – South Africa, Russia, England, America. And I find it sad when one culture arrogantly suppresses another. I believe in letting people make their own choices. The future belongs to those who embrace global thinking without losing their identity and their culture.

At its largest, grandest level, “making choices” is what democracy is all about. However, sometimes the illusion of democracy is used to give legitimacy to choices that were not, at all, democratic.

In Zimbabwe, for example, we have a government that is in power “democratically” because of the systematic culture of fear that was created every time people expressed an interest in making a different choice. I cannot therefore pay much respect to the idea that the government of Zimbabwe is a true reflection of the cultural choices of Zimbabweans.

In such cases, we are obliged to question the decisions made by governments who claim to hold power by democratic mandate, when in fact they hold it by brute force. They may make some good claims and have some noble ideals, but the foundation of their authority is rotten, and it’s highly unlikely that much good will come of it for the long term.

I’m not going to comment directly on the policies of Mr Chavez. Frankly, I’m not qualified to speak on the details of his administration. But I will say that my experience of countries and governance, across continents and decades, has taught me the value of certain key principles:

First, that human nature is unchanging across the world and across time. This, as they say, is why history rhymes with itself. We make the same mistakes, we inspire ourselves to fix them, rinse and repeat. It’s human nature that makes absolute power corrupt absolutely. And it’s human nature to seek additional power. It’s rare to find someone who will create checks and balances on themselves. This is most eloquently described in the early writings of the American constitutional authors, who sought to “pit ambition against ambition”, and create checks and balances in society, so that neither the authorities, nor the judges, nor the media, could dominate the decisions we make for ourselves.

Second, that the presumption of innocence until the proof of guilt is a vital choice in the maintenance of a free society. In a world where even good countries can elect bad governments, we cannot let the unchallenged word of a government, any government, be sufficient to silence and stifle the lives of its citizens. I find it equally disturbing that American citizens can be locked up without confidential access to attorneys, and that Zimbabwean opposition members can be arrested and held without charge for long periods. I also find it equally disturbing that residents of the United Kingdom can find themselves in Guantanamo Bay, on what is clearly flimsy or false evidence, without the UK fighting for their release or impartial trial. I am neither for Mr Bush, nor Mr Mugabe, nor Mr Blair; I am simply for the presumption of innocence until an impartial trial finds one guilty.

Third, that freedom of speech is essential for a healthy society. This is a freedom which we cannot take for granted. There is constantly a desire on the part of those in power to reduce the volume of criticism they must face. We have to constantly remind ourselves that those in authority have chosen to play a public role, and they must accept a level of public accountability and criticism, even from people who may have a personal agenda. Of course, not all speech is truth, and conspiracies often arise which seek to use the media to spread misinformation. But we are all better off when multiple viewpoints can be expressed. I’m no believer in media infallibility – we’ve seen very bad journalism from the biggest media networks in the world, for example when they get “embedded” in a controlled fashion into armies at war. But I’m a big believer in allowing calm voices to be heard, globally.

These principles are not written in the laws of physics – we create them in society, and we must defend them. They cannot be taken for granted, even in countries like the USA, which have them written into their constitutional DNA. Since they are a choice that society makes, and since society is reborn in each generation, they are a choice that society must make, and remake, constantly. Sometimes, we fail. Usually, we fail for fear when we are confronted by a perceived threat to security, or for greed when we are presented with the opportunity to benefit ourselves at great cost to others. And it is at times like these, when there is great stress, noise, fear, anger and shouting, that it is most important for calm voices to be heard.

At times like these, we are our own worst enemy. We hear what we want to hear. It is painful to hear that one might be wrong, that one’s hero might have flaws, that one’s leaders might not be all that we wished them to be. The awful truth of the media is that it pays to tell people what they want to hear, much more than it pays to tell people what they need to hear, and so society can whip itself into a frenzy of mistaken greed or fear or anger, and make poor decisions.

It takes great courage to speak out, when these basic principles are at risk. In a free society, there is nevertheless pressure to conform, to stay with the herd. In a society that is not free, one speaks out at some considerable personal cost to life and liberty. I salute those who do.

Support for Maria Corina Machado

Thursday, May 17th, 2007

I read today of the renewed efforts of the Venezuelan authorities to clamp down on Sumate and their leaders, in particular Maria Corina Machado. Most recently they prevented her from attending a World Economic Forum event.

One of the privileges of working in the free software community is the interaction between different groups trying to bring about social and economic change. People like Maria are inspiring leaders, because they devote themselves to a cause much greater than any one person’s life, but in the process they sacrifice many of the comforts that the rest of us take for granted. It would be much easier to watch from the sidelines, emigrate, or simply ignore the situation.

I know that the Ubuntu community is very active in Venezuela and I hope they, too, will not some day face repression. It seems the country is on a knife-edge, facing tough decisions that will have a major impact on the quality of life of citizens there for decades.

DRM *really* doesn’t work

Tuesday, May 8th, 2007

Well, that didn’t take long. Ars Technica is reporting that further vulnerabilities in the HD DVD content protection system have been uncovered. As I noted previously, any DRM system that depends on offline key distribution will be cracked. This latest vulnerability is one step closer to the complete dismantling of the HD DVD protection system.

How long before these guys ask the question: “what do our customers want”? From experience, 5-7 years.

Trademarks redux

Wednesday, April 25th, 2007

One of the very interesting issues du jour is the interaction between the three “legs” of “intellectual property”. Traditionally, those three are copyrights, patents and trademarks, and they have quite different laws and contractual precedents associated with them.

Recently, however, I’ve observed an increase in the cross-talk between them.

Classically, “software freedom” was about the copyright license associated with the code. But patents and trademarks are now being brought into the mix. For example, the discussion around Mozilla’s trademark policy directly linked the concept of “freedom” to trademark policy as much as to the code’s copyright license. And much of the very hard debate in the GPLv3 process is about linkages between copyright license and relevant patents. And like it or not, the GPL is widely considered the reference implementation of freedom, so GPLv3’s approach will be, for many, definitive on the subject.

In the Ubuntu community we’ve recently gone through a process to agree a trademark policy. This was recently approved by the Community Council, and the final draft is here:

We’ve tried to strike a balance that keeps the trademarks of Ubuntu meaningful (i.e. if it says Ubuntu, it really is Ubuntu) but also recognizes the fact that Ubuntu is a shared work, in which many different participants of our community make a personal investment, and which they should have the right to share. So we’ve made explicit the idea of a remix – a reworking of Ubuntu that addresses the needs of a specific community (could be national, could be an industry like medical or educational) but preserves the key things that people would expect from Ubuntu, like hardware support and certification.

I’m sure this isn’t the last word on the subject, but I hope it’s a useful contribution to the debate, and would welcome other projects adopting similar licenses. For that reason, our trademark license is published under the Creative Commons Attribution-ShareAlike license (CC-BY-SA).

Note to content owners: DRM doesn’t work

Saturday, April 7th, 2007

There are some ideas that are broken, but attractive enough to some people that they are doomed to be tried again and again.

DRM is one of them.

I was thrilled to see recently that the processing key for *all* HD discs produced to date has been discovered and published. I expect this to lead to the complete unraveling of the Blu-Ray and HD-DVD content protection schemes before even 1% of the potential market for those players has been reached. Good news indeed, because it may inspire the people who set up such schemes to reconsider.

We’ve been here before. The DVD-CSS encryption system was cracked very quickly – stylishly and legally so. Content owners – Hollywood Inc – were outraged and pursued anybody who even referred to the free software which could perform the trivial decryption process. They used the DMCA as a way to extend the laws of copyright well beyond their original intent. They behaved like deer in the headlights – blinded by the perceived oncoming doom of a world where their content flows quickly and efficiently, unable to see potential routes to safety while those headlights approach. Their market was changing, facing new opportunities and new threats, and they wanted to slow down the pace of change.

Content owners think that DRM can slow down the natural evolution of a marketplace.

In the case of movies, a big driver of DRM adoption was the unwillingness of the industry to get out of the analog era. Movies are typically distributed to theaters on celluloid film, great big reels of it. It costs a lot to print and distribute those films to the cinemas that will screen them. So the realities of real-world distribution have come to define the release strategy of most movies. Companies print a certain number of films, and ship those to cinemas in a few countries. When the movie run is finished there, those same films are shipped to new countries. This is why a movie is typically released at different times in different countries. It’s purely a physical constraint on the logistics of moving chunks of celluloid, and has no place in today’s era of instant, global, digital distribution.

Of course, when DVDs came along, content owners did not want people to buy the DVD in the USA, then ship that to Australia before the film was showing in cinemas there. Hence the brain damage that we call region encoding – the content owners designed DVD-CSS so that it was not only encrypted, but contained a region marker that is supposed to prevent it from being played anywhere other than the market for which it was released. If you live outside the US, and have ever tried to buy a small-run por^W documentary movie from the US, you’ll know what I mean by brain damage: it doesn’t play outside the US, and the demand in your region is not sufficient to justify a print run with your region code, so sorry for you.

The truth is that survival in any market depends on your ability to keep up with what is possible. The movie owners need to push hard for global digital distribution – that will let them get movies into cinemas globally on the same day (modulo translation), the same way that you and I can see everything on YouTube the day it is uploaded.

The truth is also that, as the landscape changes, different business models come and go in their viability. Those folks who try to impose analog rules on digital content will find themselves on the wrong side of the tidal wave. Sorry for you. It’s necessary to innovate (again, sometimes!) and stay ahead of the curve, perhaps even being willing to cannibalize your own existing business – though to be honest cannibalizing someone else’s is so much more appealing.

Right now the content owners need to be thinking about how they turn this networked world to their advantage, not fight the tide, and also how to restructure the costs inherent in their own businesses to make them more in line with the sorts of revenues that are possible in a totally digital world.

Here are some reality bites:

  • Any DRM that involves offline key storage will be broken. It doesn’t matter if that key is mostly stored on protected hardware, either, because sooner or later one of those gets broken too. And if you want your content to be viewable on most PCs you will have software viewers. They get broken even faster. So, even if you try to protect every single analog pathway (my favourite is the push for encrypted channels between the hifi and the speakers!) someone, somewhere will get raw access to your content. All you are doing is driving up the cost of your infrastructure – I wonder what the cost of all the crypto associated with HD DVD/BluRay is, when you factor in the complexity, the design, and the incremental cost of IP, hardware and software for every single HD-capable device out there.
  • The alternative to offline key storage is streaming-only access, and that is equally unprotectable. The classic streaming system, TV broadcast, was hacked when the VCR came out, and that was blessed as fair use. Today we see one of the digital satellite radio companies (Sirius or XM, I think) being sued by content owners for their support of a device which records their CD-quality broadcasts to MP3 players. Web content streaming services that don’t allow you to save the content locally are a useless form of protection, easily and regularly subverted. And of course not everyone wants to be online when they are watching your content.
  • It only takes one crack. For any given piece of content, all it takes is one unprotected copy, and you have to assume that anyone who wants it will get it. Whether it is software off a warez site, or music from an MP3 download service in Russia, or a file sharing system, you cannot plug all the holes. Face it, people either want to pay you for your content, or they don’t, and your best strategy is to make it as easy as possible for people who want to comply with the law to do so. That does not translate into suing grannies and schoolkids, it translates into effective delivery systems that allow everyone to do the right thing, easily.
  • Someone will find a business model that doesn’t depend on the old way of thinking, and if it is not you, then they will eat you alive. You will probably sue them, but this will be nothing but a defensive action as the industry reforms around their new business model, without you. And by the industry I don’t mean your competitors – they will likely be in the same hole – but your suppliers and your customers. The distributors of content are the ones at risk here, not the creators or the consumers.

The music industry’s fear of Napster led them down the DRM rabbit-hole. Microsoft, Apple, SONY and others all developed DRM systems and pitched those to the music industry as a “sane” approach to online music distribution. It was a nice pitch: “All the distribution benefits of download, all the economic benefits of vinyl”, in a nutshell.

Of these contenders, SONY was clearly ruled out because they are a content owner and there’s no way the rest of the industry would pay a technology tax to a competitor (much as Nokia’s Symbian never gained much traction with the other biggies, because it was too tied to Nokia). Microsoft was a non-starter, because they are too obviously powerful and the music industry could see a hostile takeover coming a mile away. But cute, cuddly Apple wouldn’t harm anyone! So iTunes and AAC were roundly and widely embraced, and Apple succeeded in turning the distribution and playing of legal digital music into a virtual monopoly. Apple played a masterful game, and took full advantage of the music industry’s fear.

The joyful irony in this, of course, is Steve Jobs’ recent call for the music industry to adopt DRM-free distribution, giving Apple the moral high ground. Very, very nicely played indeed!

A few years back I was in Davos, at the World Economic Forum. It was perhaps 2002 or 2003, a few years after the dot-com bust. It was the early days of the iPaq, and everyone at the conference had been loaned one. I remember clearly sitting in on a session that was more or less a CEO confessional, a sort of absolution-by-admission-of-stupidity gig. One by one, some well known figures stood up and told horror stories about how they’d let the inmates run the asylum, and allowed twenty-somethings to tell them how to spend their shareholder capital on dot-com projects. This was really interesting to me, as I’d spent the dot-com period telling big companies NOT to over-invest, and to focus on improving their relationships with existing customers and partners using the net, not taking over the world overnight.

But the real kicker came at the very end, when the head of SONY USA, also responsible for its music division, Sir Stringer, stood up to make his peace. He gloated on at length about how SONY had NOT invested in the dot-com, and thus how he felt he must be the only person in the room who had not been taken in by the kids. It was a very funny, very witty speech that earned a round of applause and laughter. I was left wondering whether he had any clue whatsoever how many songs would fit on the iPaq in his pocket, or how long it would take to download them. I suspected not. Of all the CEO’s who had spoken that day, I thought he was the one most likely to be hit hard, and soon, by the digital train.

Sir Stringer is now CEO of SONY worldwide. Funny, then, that the SONY PS3 should have been delayed so that work could be completed on its DRM system.

Some bad ideas are just too attractive to die, once and for all.

Conflicting goals create tension in communities

Saturday, September 9th, 2006

Matthew Garrett expressed frustration with Debian recently, in a blog post that’s become rather famous.

I’m of the opinion that Ubuntu could not exist without Debian. So it’s absolutely my intention to see that Ubuntu is a constructive part of the broader Debian landscape. It’s vital that Ubuntu help to sustain and grow Debian, because it’s the breadth and strength of Debian which make up the “shoulders of greatness” on which we in the Ubuntu community stand when we reach for the stars. That doesn’t mean I’m naive enough to think this relationship will ever be an easy one, but I would hope that the discussion doesn’t turn into one of “Ubuntu versus Debian”. Because it isn’t the case that one of them will succeed and the other fail. You could only perceive that as an outcome if you assumed that the two have exactly the same goals.

And that’s where I think a lot of tension is created: it’s hard to know what Debian’s goals are. Those goals are technically articulated in some founding documents but I don’t believe the specific, detailed nature of those goals is actually matched by the personal goals of many members of the community, or users. Debian, in many senses, is at that dangerous stage where it’s a victim of its own success. Its infrastructure and developer recruitment model are for many people what define Debian, and they have been so successful that a community has been created of people who, drawn together by the same things, nonetheless have subtly different personal agendas and goals.

Those differences are a cause of tremendous stress.

When a flamewar erupts, the notional topic of the flameage is often less relevant than the underlying tension between people’s true goals. It’s hard to come to agreement on how to address a specific issue, if there’s no agreement on the very high-level goals that everyone is working towards. Arguments go on forever because one person REALLY wants to see Debian get even more stable on the server, and another person wants to see it get even more cutting edge on the desktop. One person wants more translation of stable versions of applications, another wants newer versions which are by definition not as well translated. One person wants fewer architectures, another wants the full power of Debian on a small embedded architecture.

And all of them have every right to BE RIGHT. All of them ARE right.

The problem comes if anybody believes that one institution, one product, one single leadership team can synthesise all of that into something which is optimal for EVERYBODY. It’s just not possible to deliver one thing which is optimal for two sets of conflicting requirements, let alone those of a thousand or so of the smartest, most passionate, and let’s face it most eclectic of the world’s free software developers. Debian has almost unlimited capacity for some things, by virtue of its openness and democratic governance. That is a wonderful thing. At a time when we all must play to our strengths, many organisations out there would love to have a strength as potent as that. But openness and democracy come at a price if you have narrow goals. No one person or institution can bend that democratic forum to its own specific goals, whether they be desktop, server, embedded, global, local or whatever. Debian, like any institution or product, cannot be all things to all people. It can also not be perfect for one group at the expense of another.

To me, this is the real joy of Debian – it can provide a forum for almost every part of the free software world to come together to hammer out differences and find common ground to the extent that common ground exists. It’s a level playing field – independent of company agendas or technical historical baggage. Debian is the Tibetan Plateau of the free software landscape – elevated through the grinding efforts of conflicting passions to the point of forcing those who visit to get along in a somewhat rarified atmosphere. It can be difficult to breathe up there, sometimes :-). It’s a bit like the Linux kernel itself: show up, with code, and take your place at the table. And the results are spectacular – Debian as a community creates what I believe is one of the great digital artistic works of the era, and frankly comes as close as I think possible to actually delivering something that does meet all those conflicting agendas and goals.

Consider Sid. Yes, it breaks your toys now and then, but by and large it represents an extraordinary achievement – pretty much the latest releases of the upstream communities, packaged and categorised. Nothing else, from Ubuntu or Red Hat or Novell (or Microsoft) comes anywhere close. Debian Developers are at their happiest running and working on Sid – a recent survey found that something like 76% of Debian users run Sid, while only something like 6% of Ubuntu users run the equivalent beta code. And remember, Ubuntu only has an Edgy or an Edgy+1 because of Sid. When I look at the ebb and flow of discussions on the Debian mailing lists, I see that Sid is in fact where the very best of Debian comes forth. It’s forward looking, it’s focused on the next generation, it requires exceptional skill and up to date technical knowledge to participate, and it’s not subject to the same political tradeoffs that are inevitable when dealing with releases, architectures, dates, deliverables, translation, documentation and so on. There are very few flamewars about Sid.

If Debian were a business, now would be the time for a careful review of strengths and weaknesses, and perhaps for a plan to focus the resources of the organisation on the things it does best. There’s nothing wrong with cutting goals. Jane, the COO at Canonical, keeps me on the straight and narrow with a fairly regular pruning of Canonical’s focus points too :-). Every conflicting goal sucks resources from the overall cohesiveness and strength of the group. If there is no consensus in the community, and the leadership don’t think they can get consensus, then it might be better to cut out those conflicting goals altogether. To my mind, the two things that Debian developers absolutely agree on are first, the uncompromising emphasis on free software, and second, the joy of Sid. If I were to try to resolve the bickering and frustration that I see evident in the community, that’s where I would direct the focus of my efforts. Of course, that’s a tough approach, and leaves many other goals for other people and other communities. But it’s where I think Debian, and DD’s, would be most productive and ultimately happiest. There are many things that Debian does brilliantly – celebrate that, focus on it, and trust that others will fill in the gaps.

By contrast with Debian’s Plateau, Ubuntu is a cluster of peaks. By narrowing the focus and allowing the KDE, Gnome and server communities to leverage the base of Debian without treading on one another’s toes, we can create a K2, and a Kangchenjunga and a Lhotse. Ubuntu’s peaks depend on the plateau for their initial start, and their strong base. Ubuntu needs to be humble about its achievements, because much of its elevation comes from Debian. At the same time, Ubuntu can be proud of the way it has lifted beyond the plateau, drawing together people with specific goals to raise the bar and deliver specific releases that meet ambitious, but narrow, goals.

Many people have asked why I decided to build Ubuntu alongside, or on top of, Debian, rather than trying to get Debian to turn into a peak in its own right. The reason is simple – I believe that Debian’s breadth is too precious to compromise just because one person with resources cares a lot about a few specific use cases. We should not narrow the scope of Debian. The breadth of Debian, its diversity of packages and architectures, together with the social equality of all DD’s, is its greatest asset.

So, what’s to be done about the current furore?

A little introspection is healthy, and Debian will benefit from the discussion. Matt is to be credited for his open commentary – a lesser person would simply have disengaged, quietly. I hope that Matt will in fact stay involved in Debian, either directly or through Ubuntu, because his talent and humour are both of enormous benefit to the project. I also hope that Debian developers will make better use of the work we do in Ubuntu, integrating relevant bits of it back into Debian so as to help uplift some of those other peaks – Xandros, Linspire, Maemo, Skolelinux and of course Etch. And most of all, I hope that Debian will start to appreciate its strengths even more, and to play to them, rather than dividing itself along the lines of its weaknesses. Debian/rules, remember?

US visa-waiver program

Monday, May 29th, 2006

Joi Ito has had a few stern looks from the US INS regarding visa waiver forms.

I can relate.

I have a UK passport by virtue of the fact that my father was born in the UK (mostly by accident – another fun story). So I also know about the visa waiver program – it used to cover me too. Until one day I flew into the US briefly, on my own plane, to visit friends in DC as part of a long trip. When we arrived at Dulles, the immigration officer said there was a small problem. The operator of my plane had never signed the visa-waiver treaty, and so despite the fact that I had entered the US 27 times previously on that same passport, without a visa, they would now have to decline me entry.

But before doing that they would:

  • take me in for questioning
  • search me (I objected to the strip search; they relented)
  • fingerprint me and send those fingerprints off around the world (no, Mossad is not looking for me, yet)
  • examine for obvious tattoos and other distinguishing features
  • ask me to sign a statement of wrongdoing (I declined)
  • terminate my visa waiver access – from then on I would need a visa

A complication was that, because they did not have records of all the times I left the USA, they believed I had previously stayed for longer than the 90 days. Fortunately I was able to get copies of all my inbound and outbound tickets faxed to them, so I think they eventually came to believe that I had not actually overstayed the visa program ever.

Then they let me back on the plane, we flew to Ottawa, the US embassy kindly gave me a visa, and we returned to the USA.

Now, flying into the USA I am ALWAYS sent off for extra questions and paperwork. And on applying for a new visa, I have to fill out the form for “people with a criminal record” (cross out the criminal record part, write in “visa waiver declined”, I kid you not). It’s a joyless process.

Hello, land of the free, knock knock.

I fell in love with the USA once. It was built on beautiful principles. Alas, it appears to have forsaken those in the name of security and expediency. As a result, I think the world is looking for a new source of inspiration – a new country where the most interesting people of the world can arrive, feel welcome, and feel free. Joi, best you be sure to hand that little green form back, every time.

Kudos to the Shadowman

Wednesday, May 24th, 2006

There was a bit of grumbling about Red Hat at Debconf this year (along with plenty of grumbling about Ubuntu too). While I agree that the Red Hat Network legalese effectively makes RHEL a proprietary product, I think it’s important to give credit to Red Hat for the role they have played and continue to play in bringing Linux, GNU and the free software stack to the wider world.

Red Hat was, and remains, essential to the free software and Linux ecosystem.

Most of the world’s computer users have never seen or touched a Linux environment directly. Yes, of course we all use it in the form of Google or Akamai or a WiFi base station that we installed ourselves last weekend… but in terms of actually staring at the keyboard and typing into a Linux console or X-Term, adoption is still very much in the low single digits. Which means that fewer than 1 in 20 people who consider themselves IT people have hands-on experience with free software. For most of those people, Red Hat “speaks their language”. Which means that they are far more likely to get their first taste of free software via Red Hat than they are, say, compiling their personal kernels from scratch.

And we all win, when Red Hat has a win.

Every Red Hat server installed at UBS is a win for Free Software.

Every Fedora Core 5 installation at a school is a win for Linus Torvalds, for me, for Ubuntu, for Debian and even for Richard Stallman.

Every time Red Hat gets a mention on CNBC or that stupid Cramer show that really should have died with the dot com bomb, it’s a win for Free Software.

Because people learn by taking single steps. They learn by tasting new ideas little by little and not getting burned. They learn that “Linux” works, is reliable, is predictable, and that not running Microsoft Office is potentially a feature because it also means that it does not run all those pesky macro viruses.

And they learn that “normal”, which they used to define as “Windows”, is just part of the full universe of options they have at their disposal.

That of course inevitably touches their curiosity bone… and who knows where they might actually end up – in Gentoo, in Ubuntu, in Debian, perhaps even in OpenSolaris-land. It doesn’t matter – it’s all GNU goodness. Even if they stick with Red Hat.

In the free software community, we are as likely to turn viciously on one another as we are to stand united. And that’s a big weakness for us, as a community. It’s too easy to get free software advocates, who agree on 99% of things, to shred one another over the remaining 1%. That just makes us a confusing place for new users – and a disturbing place for more corporate adopters. It makes it easy for proprietary software companies to divide and conquer the free software world. We divide spontaneously.

Instead, we should strongly affirm the things in which we all agree.

  • Software CAN be Free (using Richard’s terminology) and therefore we believe it will ALL END UP FREE. And we’re committed to reinventing everything we need until the free software stack is a genuinely complete computing universe. We’re already pretty far along.
  • Free software is not just cheaper. It’s BETTER. It’s produced using a better process attracting better talent and it evolves faster, resulting in better innovation. All of that adds up to great value.

If we keep reminding ourselves that we agree on all of that, then our disagreements come into perspective. Instead of criticism for Red Hat, I think it would be more constructive to remind ourselves of the things that Red Hat does for the free software community first – and then perhaps to talk about why we prefer another system, for whatever personal reasons.

Just off the top of my head, here are a couple of things for which all of us free software advocates have Red Hat to thank:

  • When the EU was voting on software patents, it was the surprising sight of Red Hat and SUN jointly appealing for clearer thinking that tipped the scales in favour of the defeat of the motion.
  • Red Hat has in many ways been the public vehicle of IBM’s major Linux initiative. Without Red Hat, much of that work would have had less “punch”, because Red Hat was able to encapsulate it into a platform that could be presented to traditional IT people. We’ve all benefitted from that punch.
  • Red Hat was a leading proponent of GNOME, and to date has put far more active resources into the GNOME desktop even than Ubuntu. I do intend to match that as soon as Ubuntu stands on its own two feet, but until that time, hats off (har) to Red Hat.
  • The NSA and Red Hat teamed up to make SE Linux feasible. Even though that hasn’t yet become widely adopted, it still was a crucial step in getting the Feds to treat Linux seriously in the data center – and that’s in turn brought a rush of hardware vendors and ISVs on board.

There’s a danger in something we love to hate. It’s that we can forget to love it.

Free software spotlight at SUN

Tuesday, May 23rd, 2006

I have to admit I was a little edgy, facing 15,000 people at the Java One event in the Moscone Center on Tuesday. Being on stage like that makes me break out in incomprehensible Swahili-nglish, every time. So I was grateful when Jonathan Schwartz gently pointed out that the licensing change we were discussing meant we could make *Java* easily available to free software desktop users, not Linux, as I said in my nervousness. Doh. Most free software desktop users already have Linux :-). Some day maybe it will be easier to be in the spotlight. Till then, I can only hope for hosts that gracious.

So, down to the nitty gritty.

Even though this was not the announcement we were all hoping for (a complete shift to free software Java) I was pleased to be part of the “Distro Licence for Java” announcement. As best I can tell, the new leadership at SUN clearly recognises the importance of the free software model AND the role of the community. That’s a big step forward and important to the progress of free software. If my being there could help accelerate the date when we really do have a free software licence for Java, then I was happy to take the time.

The new licence does not mean that we can include Java in Ubuntu by default. It does not yet meet our criteria for free software, so it cannot go into “main”. But it DOES mean we can put it in the Multiverse or Commercial repositories, and people who want it can trivially get it after they have installed Ubuntu on a desktop or a server. Three clicks, and you’re done. Also, it can certainly be part of the default install of distros like MEPIS, which leverage Ubuntu but add interesting and, to many people, useful proprietary bits.
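For command-line users the same result is a couple of commands rather than three clicks. A minimal sketch, assuming an Ubuntu 6.06 “dapper” system and a Multiverse package named sun-java5-jre (both the release name and the package name are assumptions on my part, not confirmed above):

```shell
# Enable the "multiverse" component by extending the archive line in
# /etc/apt/sources.list, e.g.:
#   deb http://archive.ubuntu.com/ubuntu dapper main restricted universe multiverse

sudo apt-get update                 # refresh the package index
sudo apt-get install sun-java5-jre  # hypothetical package name for the Sun JRE
```

The package lives outside “main” precisely because of the licence terms discussed here, so it stays opt-in rather than part of the default install.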

So it’s a constructive step.

I wouldn’t expect a “big bang” conversion at SUN – it takes a long time to build consensus throughout a large organisation and it also takes time to straighten out the details of potentially thousands of other legal commitments. But the quote of the day was certainly Rich Green saying that the question is not “if” Java should be open source, but “how”. SUN wants to live in a free software world but also wants to continue to deliver on the promise of Java as a “compatible everywhere” platform. Even if there are already warts on the universal compatibility front (“write once, debug everywhere”) we’re all better off for the fact that Java is basically Java is basically Java. Developers don’t want to have to deal with “extended double remix” versions from a multitude of vendors. Especially not from vendors that specialise in “embrace, extend and extinguish” lines of attack.

So I thought I would ponder a little the strategic options and hypothesize a game plan that might do what SUN needs, while remaining firmly in the free software world. Here it is, inspired by the view of the Pacific coast as I make my way back down from San Francisco to Mexico City and Debconf.

  1. Pick a copyleft licence, like the GPL
    If SUN were to make Java available under a free software licence it would benefit from having it under a copyleft licence, which means that folks taking it freely would also have to contribute their work back. Following the lead of the Free Software Foundation and requiring copyright assignment would preserve SUN’s undisputed ability to defend the ownership and licence of Java while broadening the base of talent that can help make Java perform on a wider variety of platforms. For many companies, particularly those who want to embed Java in their own infrastructure, a copyleft licence will not do (it would force them to make their own applications Free) and there will thus be plenty of incentive for them to strike a reasonable agreement on commercial terms for a custom licence to the Java code. This dual-licensing approach has worked for companies like MySQL. Java is both a client and a server technology, so the reach of such a strategy is even wider than server-only platforms like MySQL.
  2. Manage the trademark very, very well
    Copyright is only one leg of the intellectual property table. The trademark is the piece MOST associated with compatibility. SUN gets to say who can use that trademark even if it makes the code itself available under the GPL. Of course, use of the trademark should be conditional upon the maintenance of compatibility. Those test suites will have to get even better. If SUN is smart it can deploy the trademark as free advertising on all the platforms which are not in any event revenue-generating (like, for example, the free software desktop) while at the same time benefitting from royalties in places where it can reasonably claim a share of the revenue flow (the proprietary desktop and the embedded marketplace).
  3. Keep the patent option open
    I don’t believe software patents are a good thing (they’re a no-win deal for society, we shouldn’t grant a monopoly in exchange for a disclosure of something that could never be kept a secret anyway). That said, since the US allows them, SUN has them and SUN’s competitors have them too, it’s smart to tie Java-related patent licences to good behaviour on the part of licensees. The GPLv3 touches on this – in short, SUN can leverage the popularity of Java in order to provide itself with some protection from patent suits.

All of us at some stage or another have dreamed of having a brilliant idea then resting on our laurels while the cash flowed in. Truth is, life is never that easy. While Java was and still is a brilliant idea, there is plenty of competition out there and the field is hotting up, especially amongst the free software alternatives. I’d say there are no more than two years between today, and the day that a free software widget comes along that has all the characteristics of Java and *doesn’t* come from SUN. So, much as SUN would like the world + dog to rush to licence Java, that isn’t happening and won’t happen. More to the point, when that other contender exists it will rapidly become ubiquitous, at which point SUN will have lost the opportunity to lead that platform – forever.

Today’s markets are created by tools and standards that become pervasive and ubiquitous. Apache became pervasive because it is free software, and Java could do the same. This opportunity is SUN’s to lose, but everything I’ve seen suggests that they do want to grasp it with both hands. I can only applaud that commitment.

As a small incentive, I’ll close with a thought on the value of being the platform where innovation happens.

Right now, there’s plenty of development on “Java on Linux”. Think of every major company that deploys heavyweight J2EE infrastructure on Linux today. But that sort of development is not what drives innovation. When I look at the Java code that’s out there it’s industrial, but not exciting. The really exciting stuff tends to happen at the fringes, in student dorm rooms and tiny companies. And they tend to use the tools that are most immediately accessible to them.

  • If Java had been free five years ago, PHP and LAMP might never have come into existence, because it would have been possible to deliver that vision using a lightweight integration of Java, scripting and the database.
  • If Java had been free software four years ago, then it might have less of the reputation as a “corporate overkill platform” that it’s at risk of being saddled with right now.
  • If Java had been free software three years ago, then it would have even higher penetration on cellphones and mobile platforms, and there would not be competing free software implementations of the mobile Java VM and APIs.
  • If Java had been free two years ago, then Ruby-on-Rails might have been “Java on Rails” (though truth be told I’d rather it was Python on Rails!).
  • If Java were free today, more of the AJAX stuff that is popping up all over the show would be done in Java (on the server side), because every student with a Linux box would have Java installed by default.

A big enterprise-focused company like SUN will always convince big enterprises like Deutsche Bank that they should buy technology and services from SUN. Fair enough. But they can’t convince a smart Comp.Sci. student to envision the world using their tools – unless they make those tools pervasively and freely available. And it’s that smart Comp.Sci. student who writes the magic that will be headline grabbing material three years from now. Innovation starts in unexpected, lightweight sorts of places. So you need to ensure that your tools can reach into those lightweight sorts of places. When Java is free software, it will flow further out into the network – and that can only be a good thing for SUN.

One of the big debates we are having at the moment in the Foundation is all about how to design a curriculum to stimulate the development of analytical skills. The thing I care most about is that we focus not on the specific set of tools, but on the ability to “learn and apply a current tool set”.

The truth is that we constantly acquire and discard sets of tools. So we should not be fixated on one specific set of tools for all of life. Society, technology and the times change so fast that any fact, process or algorithm we learn at school is by definition not going to be useful for any length of time. The real skills that serve us are the ability to adapt, learn, apply the products of that learning, and participate in the discussions and challenges of the day. That doesn’t mean that facts are useless, nor that specific tools don’t matter. Unless you can demonstrate an ability to absorb and apply both, fast, you haven’t actually gained the knack of becoming effective in a given environment.

I was thinking about the toolsets I’ve had to acquire over the past fifteen years since I left school.

In university you are solving the problem-du-jour as set by lecturers and tutors. Each year you learn a new set of theorems, axioms, rules, laws, analytical techniques, best practices, algorithms, formulae etc. And you have to learn how to make them dance for you so that you can do well in that year. Then, by and large, you file those away never to be used again, and learn new tools for the next year of study. Sometimes, the tools and laws and rules are additive: you build new knowledge on the old stuff. Sometimes, however, you just learn the tools because you need them to get through the year, and that strikes me as being make-work. See my rant on the study of economics below.

In work, you’ll have to learn the tools of the trade or the company and how to get things done. If you’re a nutcase like me, you change your toolset entirely every few years – I spent two years consulting and training (late university and early Thawte), two years writing database-driven web applications for crypto and PKI services (later Thawte), a year studying ballistics and space vehicle operations (Star City and the ISS), two years learning cooking, dancing, and the intricate details of playboyhood, and now two years learning how to build a distribution (Ubuntu), and how to build *big* web applications. In each of those phases the tools have been different. It’s hard to know what kind of schooling could have made a meaningful impression on my ability to be a better cosmonaut – or a better programmer – or a better man of leisure.

And I’ve no idea what set of tools I’ll have to learn next.

My experience might be extreme, but for ALL of us life consists of a constant process of reinvention, learning and discovery. You are not doing the same job today that you were five years ago – the world is changing around you. The most successful people learn how to spot the best tools and trends and to take advantage of them. They also learn to LET THEM GO when the time is right. Rather than being convinced your tools are the One True Way, recognise that they are rocking good tools right now and will almost certainly be obsolete within five years. That gives you an incentive to keep an eye out for the things you need to learn next.

Not everything that gets offered to you is likely to be of use. I hated economics at university because it epitomised the disposability of old knowledge. The problem was that first-year economics was basically a history lesson disguised as a science lesson. We learned one classical set of ways of looking at the world, and how to apply them to assess an economy. This was a bit like learning science circa 1252 and being told that you need to be able to draw up an alchemical recipe for lead-to-gold conversions that could pass for authentic in that era.

Then in second year they said “luckily, the world has since decided that those ideas are utter crap, you can’t really manage an economy using them, but here’s a new set of ideas about economics”. So we set about learning economics circa 1910, and being expected to reproduce the thinking of the Alan Greenspans of that era. The same people who orchestrated 1929-1935 and all the economic joy that brought the world. We knew when we were studying it that the knowledge was obsolete. And of course, when I looked into the things we were supposed to study in third year, fourth year and masters economics programs, the pattern repeated itself.

There is some value in disposable knowledge. I like to hire guys who set out to learn a new programming language every year, as long as they are smart enough to stick to core tools for large-scale productive work, and not to try and rewrite their worlds in the new language every year. The exercise of learning new APIs, new syntactical approaches, new styles is like jogging: it keeps you fit and energised. It’s useful even if you aren’t a marathon runner by profession. But it should be kept in balance with everything else you have to do.

So, back to the topic of curriculum.

We want to create a curriculum that can:

  • be self taught, peer mentored, and effectively evaluated without expert supervision
  • provide tools for analysis that will be generally useful across the range of disciplines being taught at any given age
  • be an exercise machine for analysis, process and synthesis

The idea is not that kids learn tools they use for the rest of their lives. That’s not realistic. I don’t use any specific theorems or other mathematics constructs from school today. They should learn tools which they use AT SCHOOL to develop a general ability to learn tools. That general ability – to break a complex problem into pieces, identify familiar patterns in the pieces, solve them using existing tools, and synthesise the results into a view or answer… that’s the skill of analysis, and that’s what we need to ensure kids graduate with.