At first, the professor and his assistant were so taken aback by their results that they suspected they’d made a mistake. They checked their own calculations extensively but could find no errors at all. The initial shock now over, they began to cheer up. If there’s one thing that makes up for an academic proving himself wrong, it’s the opportunity to show that other eminent authorities are wrong too. So the professor submitted a paper on his surprising and important findings to a prestigious learned journal and waited for the plaudits to start rolling in. This in itself turned out to be another forecasting error. The paper was rejected on the grounds that the results didn’t square with statistical theory! Fortunately, another journal did decide to publish the paper, but it insisted on including comments from the leading statisticians of the day. The experts were not impressed. Among the many criticisms was a suggestion that the poor performance of the sophisticated methods was due to the inability of the author to apply them properly.
Undaunted, the valiant statistician and his faithful assistant set out to prove their critics wrong. This time around they collected and made forecasts for even more sets of data (1,001 in total, as computers were much faster by this time), from the worlds of business, economics and finance. As before, the series were separated into two parts: the first used to develop forecasting models and make predictions; and the second used to measure the accuracy of the various methods. But there was a new and cunning plan. Instead of doing all the work himself, the author asked the most renowned experts in their fields — both academics and practitioners — to forecast the 1,001 series. All in all, fourteen experts participated and compared the accuracy of seventeen methods.
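The evaluation scheme described above is what forecasters now call a holdout test. As a rough sketch (the data, methods, and accuracy measure here are illustrative assumptions, not the authors' actual competition setup), it might look like this: fit each method on the first part of a series, forecast the second part, and compare errors.

```python
import random

# Hypothetical illustration of a holdout evaluation: the series, the two
# methods, and the error measure are invented for this sketch, not taken
# from the competitions described in the text.

random.seed(42)
# A synthetic "economic" series: a gentle trend plus noise.
series = [100 + 0.5 * t + random.gauss(0, 5) for t in range(60)]

# First part to develop the forecast, second part to measure accuracy.
fit, holdout = series[:48], series[48:]

def naive_forecast(history, horizon):
    """Simplest possible method: repeat the last observed value."""
    return [history[-1]] * horizon

def drift_forecast(history, horizon):
    """Slightly more elaborate: extrapolate the average historical slope."""
    slope = (history[-1] - history[0]) / (len(history) - 1)
    return [history[-1] + slope * (h + 1) for h in range(horizon)]

def mape(actual, forecast):
    """Mean absolute percentage error over the holdout period."""
    return 100 * sum(abs(a - f) / abs(a)
                     for a, f in zip(actual, forecast)) / len(actual)

for name, method in [("naive", naive_forecast), ("drift", drift_forecast)]:
    preds = method(fit, len(holdout))
    print(f"{name}: MAPE = {mape(holdout, preds):.1f}%")
```

Because the methods never see the holdout data, the comparison rewards genuine predictive accuracy rather than the ability to fit the past, which is the point of the design described in the text.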
This time, there were no bad surprises for the professor. The findings were exactly the same as in his previous research. Simpler methods were at least as accurate as their complex and statistically sophisticated cousins. The only difference was that there were no experts to criticize, as most of the world’s leading authorities had taken part.
That was way back in 1982. Since then, the author has organized two further forecasting “competitions” to keep pace with new developments and to address the new criticisms that academics have ingeniously managed to concoct. The latest study, published in 2000, covered 3,003 economic series, an expanding range of statistical methods, and a growing army of experts. However, the basic conclusion — supported by many other academic studies over the past three decades — remains steadfast. That is, when forecasting, always use the KISS principle: Keep It Simple, Statistician.
— Spyros Makridakis, Robin Hogarth, and Anil Gaba