For business executives, separating true technological breakthroughs from wishful thinking is inherently challenging. As Danish physicist Niels Bohr stated, “Prediction is very difficult, especially about the future.” Most of us can recall examples of misguided technology predictions great and small, such as the Internet bubble and the Segway. Perhaps the best example today of an innovation whose surrounding hype may be obscuring its substance is 3D printing. It thus provides an ideal case study on the importance of applying time-tested forecasting tools before getting too caught up in new-technology excitement.
Claims that 3D printing (also known as additive manufacturing) is poised to shake up the manufacturing industry in dramatic fashion have been on the rise. A September 2013 report from investment advisor the Motley Fool even went so far as to assert that the new technology will “close down 112,000 Chinese factories...and launch a 21st-century industrial revolution right here in the U.S.A.” As much as we would like to see manufacturing return to Western shores, we’re a bit less sanguine than the prognosticators. Indeed, before we send pink slips to millions of Chinese workers, we need to step back and analyze 3D printing through the lens of the experience curve, and how it both drives and responds to consumer adoption of new technologies. And before we predict widespread change to the manufacturing industry’s structure, we must reflect on how economies of scale and total landed cost drive investment decisions.
There’s no question that 3D printing offers a new manufacturing model. It eliminates the need for expensive, customized tooling. And because it is an additive manufacturing approach rather than a subtractive one, it uses less material. The cost of 3D printers continues to decline; startups are now offering hobbyist versions for less than US$250. But as our technology forecasting analysis will show, 3D printing isn’t poised to take the place of factory production anytime soon.
Tools of the Trade
One of the most effective tools for forecasting technology dates back to the observations of aerospace engineer Theodore P. Wright. He began his career with the U.S. Navy Reserve Flying program, where he received pilot training and became an aircraft inspector. He later joined the Curtiss Aeroplane Company, eventually rising to vice president of engineering. In 1936, Wright published an article in the Journal of the Aeronautical Sciences in which he offered a mathematical model, based on years of observing airplane manufacturing, for predicting cost reductions over time. Specifically, he proposed that the number of labor hours required to build an airplane declined predictably as a function of the cumulative number of units produced, thanks to the increases in skill and efficiency that come with experience and practice. For every doubling of cumulative unit production, the number of labor hours dropped by a fixed percentage. Wright’s resulting power curve, dubbed “the learning curve,” drops swiftly at first but eventually flattens, because the number of units required to double cumulative production keeps growing.
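Wright’s relationship reduces to a simple formula. The sketch below (in Python, using an illustrative 80 percent curve and a hypothetical 1,000-hour first unit, neither of which comes from Wright’s article) shows how a fixed percentage drop per doubling translates into a per-unit calculation:

```python
import math

def wright_labor_hours(first_unit_hours, cumulative_units, doubling_ratio):
    """Wright's law: labor hours per unit fall to `doubling_ratio`
    (e.g., 0.80 for an 80 percent curve) each time cumulative
    production doubles.  Equivalently, the hours for unit N equal
    the hours for unit 1 times N raised to log2(doubling_ratio)."""
    exponent = math.log2(doubling_ratio)
    return first_unit_hours * cumulative_units ** exponent

# Illustrative 80 percent curve, starting from 1,000 hours for unit 1:
# unit 2 takes about 800 hours, unit 4 about 640, unit 8 about 512 --
# each doubling of cumulative output cuts labor hours by 20 percent.
for n in (1, 2, 4, 8):
    print(n, round(wright_labor_hours(1000, n, 0.80), 1))
```

Note how the absolute savings shrink: moving from unit 4 to unit 8 saves far fewer hours than moving from unit 1 to unit 2, which is why the curve flattens as cumulative production grows.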
In the 1960s, Bruce Henderson, founder of the Boston Consulting Group, built on Wright’s idea with the concept of “the experience curve.” He argued that Wright’s curve could be extended to a broader range of products if one focused on the total manufacturing cost per unit rather than labor cost alone. Around the same time, Gordon Moore, director of research and development for Fairchild Semiconductor Inc., made an observation based on his deep knowledge of the computer chip industry. But Moore used time as his driving factor rather than cumulative production volume. He asserted that the number of transistors per computer chip had doubled every year and would continue to do so for another 10 years, thereby reaching what then seemed a mind-boggling total of 65,000 transistors on a single chip.
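Moore’s time-driven projection is easy to reproduce. A minimal sketch follows, assuming a starting count of roughly 64 transistors — a figure chosen only so the arithmetic lands near the 65,000 cited above; the article does not state Moore’s actual baseline:

```python
def moore_projection(start_count, years):
    """Project transistor counts per chip by doubling every year,
    per Moore's mid-1960s observation."""
    return start_count * 2 ** years

# From an assumed 64 transistors, doubling annually for 10 years:
# 64 * 2**10 = 65,536, close to the roughly 65,000 transistors
# on a single chip that Moore foresaw.
print(moore_projection(64, 10))
```

The contrast with Wright is the driving variable: Moore’s count grows exponentially with elapsed time, whereas Wright’s costs fall as a function of cumulative units produced, regardless of how long producing them takes.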