Escape from Model Land: How Mathematical Models Can Lead Us Astray and What We Can Do About It
by Erica Thompson, Basic Books, 2022
The single most famous weather forecast in British history is also one of the worst. In October 1987, a meteorologist working for the British Broadcasting Corporation reassured a concerned viewer that rumors of an approaching hurricane were unfounded. Hours later, 22 people had been killed and billions of pounds of damage done by highly unusual hurricane-force winds. Although the erroneous forecast stemmed from a lack of observational data over parts of the North Atlantic, it was the meteorologist, Michael Fish, whose name became synonymous with flawed prediction. The work of everyone who uses mathematical models to explain how complicated things work is the subject of a new book by Erica Thompson, an academic at the London School of Economics. Her contention is that too many of us have become ensconced in a comfortable but ultimately unhelpful place, which she dubs Model Land.
Thompson believes Model Land is a great place for theorists—economists, climatologists, financiers, political scientists—because models are entirely controllable. Experimenters can set the parameters, run their tests, and write with confidence about their results. There are no messy or confounding factors. “Whole careers can be spent in Model Land,” Thompson writes, “doing difficult and exciting things.” Except these things are not real. Or rather, they do not apply to the real world. It is this delusion that has led governments and businesses that accept model results unquestioningly into trouble—and prompted Thompson to write her corrective, Escape from Model Land.
In the late 1940s, one of Thompson’s predecessors at LSE, an undergraduate named Bill Phillips, and his colleague Walter Newlyn built a physical model of the UK economy out of water tanks, pipes, and pumps to demonstrate how something very complicated works in a simplified, visual way. The water represented the flow of money through the economy and was held in tanks representing banks and the government, while pumps represented taxation. The aim, as Thompson interprets it, was to set the pumps and valves at a level “that allows for a closed loop that prevents all of the water ending up in one place and the other tanks running dry.”
But to Thompson, Phillips’s work is a product of Model Land. “The only way that the Phillips–Newlyn machine can represent economic failure, for example, is by the running-dry of the taxation pumps; there is no concept of political failure by…failing to provide adequate public services. And the only success is a continued flow of money.” In other words, the land of Phillips’s model is a creative and admirable approximation of how real economies work, but it has edges and limits to its scope that do not apply to the real world.
And when we let these models dictate behavior in the real world, we flirt with disaster. Thompson writes an excellent section on financial crises to illustrate her point. The classic mistake made by banks, hedge funds, and other investors is “assuming the data we have are relevant for the future we expect.” During times of stability, this approach can be profitable. But when events that models deem very unlikely suddenly materialize, as with the collapse of Southeast Asian currencies in 1997–98 or the unravelling of the US mortgage market a decade later, model-guided investors can be caught out.
Thompson believes these failures often stem from misaligned incentives: “Those who correctly estimate significant tail risks [i.e., deviations from the normal distribution in a statistical model] may not be recognized or rewarded for doing so. Before the event, tail risks are unknown anyway if they can only be estimated from past data,” and “after the event, there are other things to worry about.” In short, it was in investors’ interest to design models that characterized unlikely risks as all but impossible, and regulators weren’t paying attention.
So why should we bother with models at all? Occasionally, Thompson believes, they do get it right. Her preferred example concerns research by two chemists, F. Sherwood Rowland and Mario Molina, who in the 1970s modeled the potential impact on the ozone layer of the continued release of chlorofluorocarbons, or CFCs. Within 15 years of their research, an international agreement, the Montreal Protocol, had been signed to limit CFC use, and it is now possible that the ozone layer could recover to its 1980 level by 2050. “The acceptability of the model was a function of the relatively simple context and the low costs of action,” Thompson explains, before warning, “this is in direct contrast with the situation for climate change.”
In the end, Thompson comes back to experts. Michael Fish interpreted his data correctly, but the data itself was insufficient. The real mistake made by the Met Office, the UK’s national weather service, was putting too much faith in its model. She also recounts the Challenger disaster of 1986. Previous missions had revealed several faults in the space shuttle’s O-rings, which sealed its rocket boosters. Some engineers had calculated that the likelihood of a major disaster was high. Others saw it differently: the fact that Challenger had completed its previous flights provided a data set that underlined its strength. “On the face of it, and with reference to the data,” Thompson argues, “either scenario is feasible.” It is only when modeling is supplemented with expert judgment that we stand a chance of escaping from Model Land and finding ourselves with information that can be applied in the real world.
- Mike Jakeman is a freelance journalist and has previously worked for PwC and the Economist Intelligence Unit.