Q&A: Linux founder Linus Torvalds talks about open-source identity
By Rodney Gedda
Computerworld
January 22, 2009
Computerworld Australia - Linus Torvalds is a regular visitor to Australia in January. He comes out for some sunshine and to attend the annual Linux.conf.au organized by Linux Australia. He took some time out to speak to Rodney Gedda about a host of topics, including point releases, file systems and what it's like switching to GNOME. He also puts Windows 7 in perspective.
It's 2009 and Linux development is approaching 20 years. How do you look back at the past two decades? I feel like it's very natural, and I don't think it will go away. I have a suspicion I will be doing this for a long time, and there is no feeling of "it is done".
I don't feel any need to pass it on [maintenance of the Linux kernel], but I let the people I trust make the decisions. I can't second-guess them, as that wouldn't work and I would waste a lot of people's time. All the submaintainers sync their Git trees with the main code, and I check that they haven't done something horrible, but that's rare.
In recent years, there have been more "point releases" than major version upgrades. How is this going? The point-release thing has worked well, and we have added new features to point releases. It's both worrying and gratifying.
We have point releases so as not to screw development up in a big way. That's why we have stable trees, but we have not gotten to the point where we are adding code so fast we are losing stability. The point releases are getting bigger, even though we are keeping the release time consistent at about two or three months. And now we do more changes in those two or three months than we were doing a few years ago. So we are scaling our development well.
There's always the worry that we are going to lose it and have huge stability problems. Andrew Morton keeps talking about this: that we have to make sure quality does not degrade. We have stats on regressions, how long it takes to fix them and how many have to wait for a stable kernel. And some regressions show odd behavior; it might be a hardware issue or an old bug that was hidden before.
I'm happy with the point-release model, and I don't see how we could have anything but 2.6, so for now we have done nothing. In the end it's just the numbering. What I don't want to go back to is a development tree that breaks things for a few years. There may be architectural rewrites in the future, but we have been getting good at that, even in point releases. So there is nothing that would cause an upheaval that would require a new major version number. We can do unstable development now and not let it impact users.
What about older code in the kernel? Do you want to remove this? Some people want us to remove old code more aggressively, but I think that if some people are still using it, we should keep it. Maintaining old code is usually almost free, so we will keep maintaining it for as long as is humanly possible. Occasionally we remove old device drivers.
There has been a lot of buzz about file systems lately, including Sun's ZFS. What would you like to see Linux adopt here? File systems are easy to get excited about. They are easy to add to the kernel, so there is almost no risk. We have something like 35 file systems supported, and a lot are not realistically used much. They are candidates for removal, but people are still using them. We add file systems easily and let history take its course.
In the development community, there are two camps: people who want stability and people who want to release often. End users will do crazy things that no amount of testing infrastructure will catch, so there are competing pressures. You want file systems to be stable, but you can't be in beta forever. Btrfs is developmental, but it was merged into the main kernel to help people test it.
To some degree, Btrfs does what ZFS does. Some university ran ZFS as a module in Linux, so using it with Linux can be done. The biggest thing Sun did with ZFS is that they were good with PR and marketing. There are other projects that wanted to do what ZFS does on Linux, but Sun started fighting the NetApp patents, and the NetApp patents kept people from doing things they wanted to do. I hope ZFS clears the patent issue.
A few years ago, you were forced to change revision control systems for Linux development and Git was born. Now Git has a groundswell of support, just like Linux. How is the Git project going? I want all my code to be open source, but I will use the best tool for the job, and BitKeeper was the best tool, and at the time the alternatives sucked so bad. When the alternatives are so bad, I will take proprietary code. Proprietary was a downside, but what choice did I have? Hey, I usually do my presentation slides in PowerPoint.
In the end, BitKeeper was causing too many issues, so I said I'm not using a version control system until another one was suitable. I had used CVS in the past, and I knew enough to know that I hated it. And I won't use Subversion, as it has the same fundamental problems as CVS. In the open-source world, there were some small projects. Mercurial came about at the same time as Git, so they were parallel, and there were existing ones like Bazaar. The one I liked most was a project called Monotone. I looked at it, and there were things I really liked about it and many things I disliked, and performance was one of them.
It took me two weeks to get to a Git that was usable, though for nobody but me. The user interface was something only I could love, but after two weeks it did something that CVS and Subversion didn't do, and that was merge correctly with history and everything. Then it took two years after that to make it useful and give it an interface that people could use.
If you have the right idea you can do that well, but it takes a long time to refine it.
It's not uncommon for a project to have an official source control system, like Subversion, but then people use Git for merges and export the end result back to the official project. Now, just in the last couple of months, projects have started switching to Git outright. Perl is one of the better-known ones, and Git uses Perl internally, so it was a positive feedback cycle.
To some degree, source control management is such a core technology that it's important to know how it works; if you don't know it, you spend a lot of mental effort on working out what it's doing. Just through the kernel, there were thousands of people who knew Git, and then it went viral, with other projects starting to use it.
Git has always had a "Unixy" mind-set where you create commands. So we never did what other projects do, where they have an API and a scripting language built in. It's partly a design decision and partly for other reasons. So it can be hard to write a program that accesses Git internally, as there are no libraries. It turns out the best way to interface with it is with Java.
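In practice, that "Unixy" design means a program that doesn't reimplement Git drives it by running Git's commands and parsing their text output, rather than linking against a library. A minimal sketch of that pattern (an illustration, not from the interview), assuming Python 3 with Git installed and the script run inside a repository:

    import subprocess

    def git(*args: str) -> str:
        # With no library API, scripts and GUI tools shell out to
        # Git's commands and parse the text they print.
        return subprocess.check_output(["git", *args], text=True).strip()

    head = git("rev-parse", "HEAD")      # resolve the current commit ID
    print(git("cat-file", "-p", head))   # pretty-print that commit object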
One of the things I like about Git, and am quite proud of, is that the data structures are simple and you can reimplement it if you wish; it's a well-defined data model. There are Git-related projects like GUI tools, for example with the Eclipse IDE. And if you come from the Windows world, people are used to TortoiseSVN, so there is TortoiseGit. Those people were saying, "We want Git, but we are Windows developers and don't want the command line."
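To show how simple that data model is (a sketch, not from the interview): Git names every object by the SHA-1 hash of a short header plus the raw contents, so the object ID Git assigns to a file's contents can be recomputed in a few lines of Python:

    import hashlib

    def git_blob_id(content: bytes) -> str:
        # Git stores a file's contents as "blob <size>\0<bytes>" and
        # names the object by the SHA-1 of that whole buffer.
        header = b"blob %d\x00" % len(content)
        return hashlib.sha1(header + content).hexdigest()

    # Matches `echo hello | git hash-object --stdin`
    print(git_blob_id(b"hello\n"))  # ce013625030ba8dba906f756967f9e9ca394464a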
Also, the Git Web front end came out of the developer ecosystem and into the main Git tree.
While there are hundreds of Linux distributions, in recent times Ubuntu, OpenSUSE and Fedora have captured most of the mind share. Do you think this will continue, and will the new netbook paradigm be how people end up getting Linux? It's a huge job to do a distribution. The reason there are hundreds is that it is easy to start your own, but if you want to be a leader and introduce new code, the testing and QA involved are enormous. It depends on having enough users that you get coverage, and it is unreasonable to expect too many large distributions. Ubuntu grew surprisingly quickly, and maybe that can happen again.
I use Fedora for historical reasons. I have one of the Eee PC laptops, and I reinstalled my own distribution on it, so I am the wrong person to ask. However, most users don't want to do the installation and configuration.
We are in the first phase of netbooks, and there are some teething problems. The dumbed-down interface was a teething problem, and the first netbooks were underpowered.
I'm hoping the next generation will be more powerful and offer a better user experience. I was doing kernel development on a netbook and it was not at all horrible. The screen was too small, but we are getting to the stage where you can get a good, cheap laptop.
A few years ago, you could get a laptop that small, but it would cost twice as much. The netbook market changed the game -- netbooks are not seen as an executive toy but as a low-end laptop, which is much healthier.
With netbooks, a lot of the desktops have trouble going to smaller screens. All of a sudden, you can't press the OK button because it's outside the screen. As screens get as small as a phone's, Google's Android could be a contender for netbooks, so you may see Android growing up instead of desktops growing down.
Linux on phones is hard, as there are so many regulations, but I was really happy about Nokia's decision to release Qt under the LGPL.
What do you think of Windows 7 and Microsoft's operating system development cycle? Windows 7 being better than Vista is not saying a lot. Microsoft may have a huge PR advantage, as people will compare it to Vista and think it is good, so "angels will sing again" like they did with Windows 95 compared to Windows 3.1. So maybe Microsoft did this on purpose.
I think Microsoft has realized the Vista development cycle was way too long and it would be insane to do that again. They might aim for a two-year development cycle, and I think that is too long. They should decouple the operating system from the applications and release sooner.
For Linux, six months is quite tight. With all the pieces you put together, you hope they are stable, but there will be surprises, and six months is a short cycle when you are assembling so many packages. An annual release is a reasonable cycle for doing a whole distribution.
In the Linux space, once a year is reasonable, and then you have the incremental releases. It's hard for a commercial company like Microsoft, which wants people to pay for releases, to do a yearly upgrade. Apple has done faster upgrades, but it has charged less for the releases. This is not a problem for open source, as it's free software, but it is one of the things Microsoft has to balance. They want people to rent the software, but users don't want to. If you do development over five years and make so many changes, it is more painful for the user. The cost of the pain is likely to be higher than the cost of the operating system, which is why people are slow to upgrade.
A lot of what made Microsoft successful in the '90s is gone. There is a reason why people don't think they are successful anymore, but hey, I don't have a business model at all!
Another area where Linux is used extensively is in hosted software or "cloud computing," where users don't have access to the source code. Is this a good or bad thing? It is to some degree inevitable, as within certain classes of software it's the only model that makes sense. Look at Google Maps. It does not make sense to have it on a device. The whole idea is to have it on the cloud because the information is huge and "out there." If that means the user never sees the source code, it's not something you complain about. I'm happy with Linux being used for that.
Projects that are specifically designed for software as a service in the back end, where only the output of the project gets distributed, should use the Affero GPL. Linux is not that kind of project.
One of the problems I had with GPL Version 3 was that it was possible to add Affero-like extensions, and "license creep" happened, which can make the license of future versions of software incompatible with previous versions.
Another open-source project that underwent a big change was KDE with Version 4.0. They released a lot of fundamental architectural changes with 4.0, and it received some negative reviews. As a KDE user, how has this impacted you? I used to be a KDE user. I thought KDE 4.0 was such a disaster that I switched to GNOME. I hate the fact that my right button doesn't do what I want it to do. But the whole "break everything" model is painful for users, and they can choose to use something else.
I realize the reason for the 4.0 release, but I think they did it badly. They made so many changes that it was a half-baked release. It may turn out to be the right decision in the end, and I will retry KDE, but I suspect I'm not the only person they lost.
I got the update through Fedora, and there was a mismatch from KDE 3 to KDE 4.0. The desktop was not as functional, and it was just a bad experience for me. I'll revisit it when I reinstall the next machine, which tends to be every six to eight months.
The GNOME people are talking about doing major surgery, so it could also go the other way.
How is life at the moment? Are you enjoying work at the Linux Foundation in Portland, Ore.? I'm happy with my life. The reason I come to Linux.conf.au is that it is summer here and freezing in Portland. My job is the same: I do the kernel, nobody tells me what to do, and they pay me for it, which is just the way I like it.
Are you going to say 2009 is the year of the Linux desktop? I make controversial statements without thinking a lot. I'm not going to say it's the year of the Linux desktop, as it is a gradual encroachment process. But look at what Firefox has achieved and how it is creeping up on Windows. It is important that projects like Firefox and OpenOffice.org are spreading the whole notion of open source wider. They work cross-platform, and a project shouldn't be tied to the platform. People will realize the lack of tie-in means you can choose a platform, and that is much healthier from a market standpoint than having to make a platform decision for an application.
In a fair market, Linux will have a much easier time competing.
Copyright 2009