Interesting (if rather thin) article by Peter De Jager on the problems of abundance.
Here's the final line of the article:
> Any technology which creates abundance poses problems for any process which existed to benefit from scarcity.
Let's run with this for a second. Suppose the statement is true. Then we can run it in reverse: if a process that previously worked well now has problems, looking for technology changes that have created an abundance might be a good way of diagnosing, and solving, the process's problems.
Take, say...software process, for example (who didn't see that one coming?). Perhaps we can understand the problems faced by traditional software development efforts by asking whether the technologies have changed while the processes built around them have not. That, in turn, might shed some light on why Extreme Programming is not as crazy as its detractors would have us believe.
Many software development processes have their roots in the 70's and earlier, so let's start there. What technological abundances have appeared in the last 30 years that affect software development?
There are several candidates; here are a few:
- Processing Time. Once a scarce commodity, now abundant due to the falling cost of hardware and the vast increase in computing speed.
- Programming Languages. Higher-level languages reduce (in theory) the amount of code that has to be written to produce the same result, and raise the level of abstraction available; see the short sketch below.
- Information Publishing. Yes, the Internet, but not just mailing lists & web pages: the barrier to publishing new tools and technologies is also much lower, making the amount of information available to a programming team staggeringly large.
- Information Storage. Hard drives instead of tape or punch cards; no further explanation necessary.
There are probably more that are relevant, but even these alone should be enough to cause the effect described by De Jager.
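To make the Programming Languages point concrete, here's a minimal sketch -- in Java, purely for illustration, since the article names no language -- of a word-frequency counter. The growable map, the string handling, and the memory management all come free from the language and its libraries, bookkeeping that a 70's-era program would have had to hand-roll.

```java
import java.util.HashMap;
import java.util.Map;

public class WordCount {
    // Count how often each word appears in a chunk of text. The map
    // grows on demand; no buffer sizing or memory management in sight.
    public static Map<String, Integer> count(String text) {
        Map<String, Integer> freq = new HashMap<String, Integer>();
        for (String word : text.toLowerCase().split("\\s+")) {
            Integer seen = freq.get(word);
            freq.put(word, seen == null ? 1 : seen + 1);
        }
        return freq;
    }

    public static void main(String[] args) {
        System.out.println(count("the quick brown fox jumps over the lazy dog the end"));
    }
}
```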
Extreme Programming can thus be understood as an attempt to engage this sudden overflow of riches. Predictive approaches make perfect sense with day-long compiles, punch cards, and expensive shared machine time. Simple, expressive code -- even if it means more method calls -- is too costly on slow machines where every clock tick is vital and every block of memory already spoken for. Encouraging changes in requirements after coding has begun is suicidal under those conditions. Running test suites a dozen times a day is too expensive. Checking every single change into a repository to be compiled and tested would be unthinkable.
And if you have predictive development, delayed coding, and cryptic, highly optimized code, can collective ownership be at all feasible?
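To put a shape on "running test suites a dozen times a day": here's a minimal JUnit 3-style sketch, with a hypothetical `Account` class invented just for illustration. On 70's hardware, burning a compile-and-run cycle to confirm a three-line method would have been indefensible; today it costs effectively nothing.

```java
import junit.framework.TestCase;

// Hypothetical class under test -- stands in for any small unit of
// production code an XP team grows test-first.
class Account {
    private int balanceCents = 0;
    public void deposit(int cents) { balanceCents += cents; }
    public int balance() { return balanceCents; }
}

// Cheap to write, nearly free to run, so it *can* be run after every
// change -- a dozen times a day or more.
public class AccountTest extends TestCase {
    public void testDepositsAccumulate() {
        Account account = new Account();
        account.deposit(500);
        account.deposit(250);
        assertEquals(750, account.balance());
    }
}
```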
The only (controversial) XP practice that makes sense -- unchanged in both environments -- is Pair Programming. In fact it made more sense back when programmer time was cheaper than it is now. But then, this is the one practice people often tell us they've been doing for a long time, so maybe it's not so surprising after all.
So maybe XP isn't radical; it's just, well, keeping up with the times. Sure, a dozen builds a day is expensive -- but we can afford it. Sure, compiling and testing after every line of code is wasteful -- again, we can afford it.
Continuing to develop software like we don't have these resources? Now that's waste we can't afford.