Agile software methodologies
November 16, 2002
Over the last few days, we have heard a lot about pervasive computing. And unfortunately for you, pervasive computing requires software. Software is everywhere: in your mobile phone, in your TV remote control, and in less obvious devices like the Big Mouth Billy Bass novelty singing fish.
As designers, I think you should know a lot more about how software is developed, because it is affecting your life and your work more and more. When you write software, you have to follow a process. And these processes have a lot of impact on the quality of the software produced. So, my aim today is to show you what has been done in the past, why it really sucked, and how we as software architects are trying to improve the situation.
I will start off with a question: how would you feel if, six times out of every ten you went to a restaurant, the waiter presented you with food that was bad, spoiled, or burned? Or if you weren’t presented with any food at all, but still had to pay the bill at the end? You would be very upset. Yet between 60 and 75 per cent of large software projects routinely fail, and the companies that initiated them still have to pay the bill! So it is likely they are even more upset than you would be about your restaurant.
These methodologies have evolved from the history of computer science. If we look, for example, at how the software was developed for the Apollo spacecraft, we realise that it was actually written by physically threading copper wires through small magnetic beads (cores). Every time there was a 1 in the program, the wire went through a bead; for a 0, it didn’t. The process was entirely manual. There was a lady (this is an original picture) who threaded these wires by hand every time there was a new release of the software. This is not practical, and obviously every time you want to make a small change, it’s a huge undertaking. So this led to the development of methodologies that were very rigid, because every change was very painful.
In the past, when you wanted to develop software, you started out with a group of people called analysts. They would look at the problem and talk to the customer for months, producing a large quantity of documentation. After these people had done their work, the customer still had no idea of what was going to end up in their software, but maybe six months had already passed.
Then the designers came in, the software designers, so the amount of paper vastly increased. These people would take the analysis, and design the software on the basis of that. Still the customer wasn’t seeing anything.
Then, after this phase, the developers came in. Developers, that is, programmers, are the people who take the design documents and translate them into a language the computer can understand. They take all this information and put it into the computer, generating even more documentation.
Then we go into the debugging phase, which means finding defects in the software and trying to fix them. It’s a very long and painful process, because you spend hours and hours testing your software, finding faults and defects called bugs, and trying to remove them, and this generates even more paper. This development process requires a lot of people: we’re talking about 50 to 100 people, or even more, to develop large systems like payroll systems or large enterprise applications.
These methodologies are normally called waterfall methodologies, because whenever you move from one stage to another, there’s no way you can go back. So if I were coming down from that waterfall, I would be badly bruised when I got to the bottom, and this is what happens in a lot of projects. In my opinion, a lot of those 60 or 75 per cent (the figure depends on who you ask) software project failures are due to the process that was used to develop the software.
What’s more, after this lengthy process, the customer might see the software and say, “Oh, but this is completely different from what I wanted.” Which happens a lot. There are situations where the software that comes out doesn’t do what it’s supposed to do. Or, while the software was being developed, the customer has completely changed their line of business, so it is now completely useless, or it’s so full of bugs that it can’t be fixed. So you’ve waited for a year, you’ve spent 20 million dollars, and you still haven’t got anything that works.
Unfortunately, there have also been cases where defects and bugs have caused deaths. There was the famous case of an X-ray machine (the Therac-25), where a bug that was very difficult to track down caused overexposure to X-rays, leading to death. There have been very expensive satellites that were lost because the two teams developing the software weren’t talking to each other. The process was wrong: one piece of software was sending data in inches and pounds, the other was interpreting it in centimetres and kilograms, and the satellite was lost. This is what can happen when the software development methodology is wrong.
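That unit-mismatch failure can be sketched in a few lines. This is a hypothetical illustration, not the actual spacecraft code (the names and numbers are invented): when raw numbers are wrapped in small unit-aware types, the sender’s assumption becomes explicit, and forgetting the conversion turns into a visible error at the interface rather than a silent corruption of the data.

```python
# Hypothetical sketch of the unit-mismatch bug class: one team sends a
# distance in inches, the other reads the same bare number as centimetres.
# Wrapping values in small unit types forces an explicit conversion.

from dataclasses import dataclass

@dataclass(frozen=True)
class Centimetres:
    value: float

@dataclass(frozen=True)
class Inches:
    value: float

    def to_centimetres(self) -> Centimetres:
        return Centimetres(self.value * 2.54)  # 1 inch = 2.54 cm exactly

def log_distance(d: Centimetres) -> float:
    """Receiving side: expects metric. A bare float would hide the unit."""
    return d.value

# Sending side produces imperial data; passing Inches directly would be an
# obvious type mismatch, so the conversion can no longer be forgotten.
reading = Inches(10.0)
print(log_distance(reading.to_centimetres()))  # about 25.4
```

The point is not this particular Python idiom but the discipline it illustrates: the two teams’ incompatible assumptions lived in untyped numbers, exactly the kind of defect that a process with more communication, or an interface that encodes its units, would have caught.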
There’s another issue associated with these methodologies, which is that they take a rigidly linear view of software development: they assume that everything is predictable long-term, that everybody does their work in a predictable way, with very predictable productivity, and so on. This is just not true.
In fact, there is a person called Larry Wall, who is a hero of software developers. I’m sure none of you have heard of him, but he’s famous. He once wrote that the three chief virtues of a programmer are laziness, impatience and hubris. I would like to add a fourth virtue: over-optimism. Why over-optimism? Because when developers are asked, “So how long do you think it’s going to take to write this piece of software?” they reply, “Ha ha, don’t worry, we can do everything in one week.” Three months later, the situation is critical. So whenever you think about software development, these elements have to be taken into account.
By the mid-1990s, software developers working on everyday applications (this was not something that happened in universities and research centres) were finding it more and more difficult to deal with these strict and compartmentalised methodologies. So they began to look at different ways of developing software. They wanted to start from what people are actually like in real life, as opposed to viewing the developer as a machine producing a certain number of screws.
This led to the development of what we call agile methodologies. These include a number of ways of developing software that take into account what people are like, and value their ability to embrace and adapt to change. Because one of the biggest problems of the previous waterfall model is that whenever you ask for a change in the software, it’s very difficult, painful, and expensive. If you ask for a change in the first phase, the analysis phase, that can be handled, it’s only paper. The second phase is still paper, but it’s more difficult because it’s changing more paper. And as you move along the chain, you fall down the waterfall, and it becomes more and more expensive.
Companies no longer have the luxury of waiting years for software to be delivered. What people need is a method of developing software that uses far fewer resources, because nobody today has the resources to employ 100 developers. We need more flexible ways of developing software. Agile methodologies are people-oriented. They explicitly make a point of trying to work with human nature, rather than against it. They emphasize that software development is an enjoyable activity. Now, I’m not really sure about the enjoyable activity bit, but what we are trying to build is a methodology that takes into account the fact that large groups of people are not very organised, and so need to communicate more rigorously. If you put 200 people on different floors, it’s very difficult to communicate, and the paperwork creates more issues.
So, what are the principles of these agile methodologies? I have summarized them here:
The previous model featured a supposedly heroic developer working 12 hours a day, seven days a week. This is not the right combination. People should work for 40 hours a week, have a life and enjoy that life. Then they will work much better.
So those are the major points of agile methodologies.
How about the future? Well, I have worked on software projects that have used this approach with great success. But as we enter the age of pervasive computing, there is unfortunately a mixture of graphical user interfaces and tangible user interfaces, calling for product design skills that software engineers don’t have. And sometimes even classically trained designers can’t help you.
So we need to improve the methodology still further, by adding people like interaction designers who understand users and can design products for them as a starting point. The future is in taking these lightweight, adaptable methodologies, and combining them with input from interaction designers. If we do this, the software we develop in the pervasive computing environment will be much better.
Transcript of a talk given at Doors Of Perception conference.
Copyright 2002 http://www.potemkin.org/