Companies often develop crisp stories about how they have nurtured a “culture of innovation.” They say that all their employees can be innovators, and even take some initiative to move their ideas forward. Not only that, they claim, but failures are tolerated, if not celebrated. Failure is, after all, an integral part of the innovation game.
That may be, but as Financial Times economics columnist Tim Harford points out in his new book, there is one small problem: Contrary to the gospel of open innovation and dorm-room programmers, failure is rarely free. In fact, it’s becoming more and more expensive.
When real dollars are at stake, there is a big difference between a bad failure and a good one. A bad failure is expensive and emotional; it results from a willingness to experiment without the discipline to learn. A good failure, by contrast, comes as quickly and as inexpensively as possible; it is managed through a rigorous process of testing clearly stated hypotheses.
It is not easy to run disciplined experiments. But a company without such discipline — and without a way to reward innovators for only good failures — is a company without a true culture of innovation. It is a company that spends money aimlessly in hopes of stumbling upon success, instead of one that smartly and swiftly adapts.
— Chris Trimble
An excerpt from Chapter 3 of Adapt: Why Success Always Starts with Failure
Look at the world’s leading companies and consider how many of them — Google, Intel, Pfizer — make products that would either fit into a matchbox, or have no physical form at all. Each of these large islands of innovation is surrounded by an archipelago of smaller high-tech start-ups, all with credible hopes of overturning the established order — just as a tiny start-up called Microsoft humbled the mighty IBM, and a generation later Google and Facebook repeated the trick by outflanking Microsoft itself.
This optimistic view is true as far as it goes. Where it’s easy for the market to experiment with a wide range of possibilities, as in computing, we do indeed see change at an incredible pace. The sheer power and interconnectedness of modern technology means that anyone can get hold of enough computing power to produce great new software. Thanks to outsourcing, even the hardware business is becoming easy to enter. Three-dimensional printers, cheap robots and ubiquitous design software mean that other areas of innovation are opening up, too. Yesterday it was customised T-shirts. Today, even the design of niche cars is being ‘crowd-sourced’ by companies such as Local Motors, which also outsource production. Tomorrow, who knows? In such fields, an open game with lots of new players keeps the innovation scoreboard ticking over. Most ideas fail, but there are so many ideas that it doesn’t matter: the internet and social media expert Clay Shirky celebrates ‘failure for free.’
Here’s the problem, though: failure for free is still all too rare. These innovative fields are still the exception, not the rule. Because open-source software and iPad apps are a highly visible source of innovation, and because they can be whipped up in student dorms, we tend to assume that anything that needs innovating can be whipped up in a student dorm. It can’t. Cures for cancer, dementia and heart disease remain elusive. In 1984, HIV was identified, and the US health secretary Margaret Heckler announced that a vaccine preventing AIDS would be available within a couple of years. It’s a quarter of a century late. And what about a really effective source of clean energy — nuclear fusion, or solar panels so cheap you could use them as wallpaper?
What these missing-in-action innovations have in common is that they are large and very expensive to develop. They call for an apparently impossible combination of massive resources with an array of wildly experimental innovative gambles. It is easy to talk about ‘skunk works’, or creating safe havens for fledgling technologies, but when tens of billions of dollars are required, highly speculative concepts look less appealing. We have not thought seriously enough about how to combine the funding of costly, complex projects with the pluralism that has served us so well with the simpler, cheaper start-ups of Silicon Valley.
When innovation requires vast funding and years or decades of effort, we can’t wait for universities and government research laboratories to be overtaken by dorm-room innovators, because it may never happen.
If the underlying innovative process were somehow becoming cheaper and simpler and faster, all this might not matter. But the student-startup successes of Google and Facebook are the exceptions, not the rule. Benjamin F. Jones, an economist at the Kellogg School of Management, has looked beyond the eye-catching denizens of Silicon Valley, painstakingly interrogating a database of 3 million patents and 20 million academic papers.
What he discovered makes him deeply concerned about what he calls ‘the burden of knowledge.’ The size of teams listed in patent citations has been increasing steadily since Jones’s records began in 1975. The age at which inventors first produce a patent has also been rising. Specialisation seems sharper, since lone inventors are now less likely to produce multiple patents in different technical fields. This need to specialise may be unavoidable, but it is worrying, because past breakthroughs have often depended on the inventor’s sheer breadth of interest, which allowed concepts from different fields to bump together in one creative mind. Now such cross-fertilisation requires a whole team of people — a more expensive and complex organisational problem. ‘Deeper’ fields of knowledge, whose patents cite many other patents, need bigger teams. Compare a typical modern patent with one from the 1970s and you’ll find a larger team filled with older and more specialised researchers. The whole process has become harder, and more expensive to support in parallel, on separate islands of innovation.
In academia, too, Jones found that teams are starting to dominate across the board. Solo researchers used to produce the most highly cited research, but now that distinction, too, belongs to teams of researchers. And researchers spend longer acquiring their doctorates, the basic building blocks of knowledge they need to start generating new research. Jones argues that scientific careers are getting squashed both horizontally and vertically by the sheer volume of knowledge that must be mastered. Scientists must narrow their field of expertise, and even then must cope with an ever-shorter productive life between the moment they've learned enough to get started and the time their energy and creativity start to fade.
This is already becoming true even in some areas of that hotbed of dorm-room innovation, software. Consider the computer game. In 1984, when gamers were still enjoying Pac-Man and Space Invaders, the greatest computer game in history was published. Elite offered space combat depicted in three dimensions, realistic trade, and a gigantic universe to explore, despite taking up no more memory than a small Microsoft Word document. Like so many later successes of the dot-com era, this revolutionary game was created by two students during their summer vacation.
Twenty-five years later, the game industry was awaiting another gaming blockbuster, Duke Nukem Forever. The sequel to a runaway hit, Duke Nukem Forever was a game on an entirely different scale. At one stage, thirty-five developers were working on the project, which took twelve years and cost $20 million. In May 2009, the project was shut down, incomplete. (As this book was going to press, there were rumours of yet another revival.)
While Duke Nukem Forever was exceptional, modern games projects are far larger, more expensive, more complex and more difficult to manage than they were even ten years ago. Gamers have been eagerly awaiting Elite 4 since rumours of its development surfaced in 2001. They are still waiting.
— Tim Harford
Excerpted with permission of Farrar, Straus and Giroux LLC. Copyright © 2011 by Tim Harford. All rights reserved.