
Poll Positions

The failure of prognosticators to call the U.K. election, and before that, the U.S. presidential election, should make us take another look at our own preconceived notions about big data and analytics. It takes human contact to predict an election.

[Updated on June 12, 2017]

First Brexit. Then the U.S. presidential election. Now the U.K. election. The data-aggregation industrial complex that has grown up around political events has failed again, in spectacular and highly public fashion.

In recent years, as big data and analytics have become all the rage in the professional and corporate worlds, a crowd of pollsters, aggregators, and number crunchers has assumed a central and outsized role in our prognostication of contemporary events.

Polls predicted that Prime Minister Theresa May would comfortably win the U.K. election on June 8. And before that, on the eve of the U.S. election, the consensus among seers like Nate Silver’s FiveThirtyEight, the New York Times’ Upshot, the Princeton Election Consortium, and the RealClearPolitics polling aggregator was that November 8 would bring a highly predictable, landslide victory for Democratic nominee Hillary Clinton. At FiveThirtyEight, the odds of a Clinton victory posted on November 7 were about 70 percent; at the Princeton Election Consortium, they were 99 percent!

And these projections of odds and probabilities were more than enough for many veteran observers. After all, data-crunching, objective, code-driven tools are perfectly in tune with the zeitgeist. In the last few cycles, they had forecast state-by-state results with uncanny accuracy. Instead of relying on shoe-leather reporters, who do the spade work in key districts, or pundits, who use experience and inside contacts to make projections, why not let the machines do the heavy lifting? Whether you are trying to predict the outcome of a baseball game or a presidential campaign, gathering historic and current data, writing code, and running regressions is a surefire way to, as the title of Nate Silver’s book puts it, distinguish between The Signal and the Noise.
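To make that abstract description a bit more concrete, here is a minimal, purely illustrative sketch in Python of the basic move: fit a regression on historical results, then project a new one. The variables, numbers, and single-predictor model are my own invented example, not any forecaster's actual data or method.

```python
# Purely illustrative sketch: regress past vote share on one demographic
# variable and project a hypothetical district. All numbers are invented.

import numpy as np

# Past districts: share of college graduates vs. a candidate's vote share
college_share = np.array([0.20, 0.30, 0.40, 0.50, 0.60])
vote_share = np.array([0.42, 0.46, 0.51, 0.55, 0.60])

# Ordinary least squares fit: vote_share ~ slope * college_share + intercept
slope, intercept = np.polyfit(college_share, vote_share, deg=1)

# Project a hypothetical new district with 45 percent college graduates
projection = slope * 0.45 + intercept
print(f"Projected vote share: {projection:.1%}")
```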

Intuitively, this approach makes sense. We know that computers are far less fallible and emotional than people. The amount of data at our fingertips is immense — historic voting behavior, voter registration trends, demographic insights. Thanks to cheap and powerful computing, we can construct, test, feed, and manipulate models quite easily. The fact that they deal in odds and probability — not guarantees and declarations — lends an air of humility to the project.

To a degree, it was precisely the lack of human touch that made this approach so appealing. Data supposedly doesn’t lie. Anecdotes — the stories you glean from reporting and conversations — may be telling, but they’re not determinative. The plural of anecdote is not data, as the saying goes. In our age of data analytics, competitive advantage often accrues to those organizations that can combine different data sets in interesting ways, that understand the correlations between wishful thinking and real-world behavior, and that have a granular view of the makeup of a target market. That holds for political prognosticators as much as it does for retailers.

This is the eighth election cycle I have covered in some fashion as a journalist. This year, I spent far less time talking to voters or watching pundits on television than I did checking in at FiveThirtyEight, constructing my own maps at 270toWin, and looking at the early voting data dispensed by the indispensable U.S. Elections Project and comparing it with previously published data on 2012 turnout. Given the surfeit of information and aggregation sites, people could act as their own big data analysts, marrying the available information to their own experiences and insights — and in some cases, hopes and biases — to draw conclusions. I was certain, for example, based on early voting data, polls, and a deep dive into the 2012 results and demographic statistics, that Hillary Clinton would win Florida. She didn’t, obviously.

The failure of big data prognosticators to accurately call these elections should make us take another look at our own assumptions about analytics. It should make us realize that the purely data-driven approach isn’t quite as evidence-based or infallible as its advocates like to think. There are a few reasons.

First, these efforts rely a great deal on polls. To be sure, polls and consumer surveys have become more sophisticated and efficient in the Internet era. But, ironically, in this age of mass personalization and near-perfect knowledge about consumer behavior, polls and surveys offer a false sense of certainty. Polls capture what people say they will do, not what they actually do. And there’s often a big gulf between the two. (Ask yourself to estimate how many times you’ll exercise in the coming month. Then at the end of the month, write down how many times you actually did.)

Second, there’s a strong human element in the work of aggregation and number-crunching, an element that is often overlooked. Poll-takers must decide what questions to ask, how to phrase them, and how to weight their samples — how many people from various gender, age, and other demographic groups are necessary to accurately extrapolate the findings from 600 interviews to the potential behavior of 20 million. They make these decisions in a highly complex context. If you’re polling in Connecticut, whose population is quite stable and doesn’t vary much from year to year, extrapolating from a sample might be easy. But in swing states that were polled a great deal, like Florida, the population in 2016 was quite different from the population in 2012 or 2008. The state’s population is growing rapidly, fueled by migrants from other U.S. states and Puerto Rico, and by immigrants from a host of countries. These constantly shifting demographics make it much harder to blow up a snapshot into an accurate, larger picture.
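To see what that weighting step involves, here is a minimal sketch in Python of how a tiny sample might be reweighted so its demographic mix matches assumed population shares before a topline number is extrapolated. The age groups, population shares, and responses are invented for illustration, not drawn from any real poll.

```python
# Hypothetical example: reweight poll respondents so the sample's age mix
# matches assumed population shares, then compute a weighted vote estimate.
# All figures are invented for illustration.

from collections import Counter

# Assumed population shares by age group (e.g., from census-style estimates)
population_share = {"18-34": 0.30, "35-54": 0.35, "55+": 0.35}

# A toy sample of 10 respondents: (age_group, stated_vote)
sample = [
    ("18-34", "A"), ("18-34", "B"),
    ("35-54", "A"), ("35-54", "A"), ("35-54", "B"),
    ("55+", "B"), ("55+", "B"), ("55+", "A"), ("55+", "B"), ("55+", "B"),
]

# Share of the sample that falls in each age group
group_counts = Counter(group for group, _ in sample)
sample_share = {g: n / len(sample) for g, n in group_counts.items()}

# Each respondent's weight = population share / sample share for their group
weights = [population_share[g] / sample_share[g] for g, _ in sample]

# Weighted estimate of candidate A's support
weighted_a = sum(w for (g, vote), w in zip(sample, weights) if vote == "A")
support_a = weighted_a / sum(weights)

print(f"Unweighted share for A: {sum(1 for _, v in sample if v == 'A') / len(sample):.2f}")
print(f"Weighted share for A:   {support_a:.2f}")
```

Even in this toy case, the answer depends on the population shares the pollster assumes; in a fast-changing state, those assumptions are exactly where the error creeps in.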

Third, there’s an equally strong human element in the synthesis and presentation of findings. What weight do you give to polls that have proved more accurate in the past? How do you build the possibility of unexpected results and uncertainty into the model? Should you adjust the weighting of the polls based on early voting behavior in key states?
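As a rough illustration of those judgment calls, here is a small sketch in Python of weighting polls by historical accuracy and freshness. The pollster names, accuracy ratings, recency decay, and results are all invented assumptions, not how FiveThirtyEight or any other aggregator actually computes its averages.

```python
# Hypothetical sketch: combine several polls into one average, giving more
# weight to polls judged more accurate historically and to fresher polls.
# Pollster names, ratings, and results are invented for illustration.

polls = [
    # (pollster, lead_in_points, historical_accuracy_0_to_1, days_before_vote)
    ("Pollster X", +4.0, 0.9, 2),
    ("Pollster Y", +1.5, 0.6, 1),
    ("Pollster Z", +6.0, 0.4, 7),
]

def poll_weight(accuracy: float, days_out: int) -> float:
    """More weight for higher past accuracy and for more recent polls."""
    recency = 1.0 / (1.0 + days_out)  # simple decay as the poll ages
    return accuracy * recency

total_weight = sum(poll_weight(acc, days) for _, _, acc, days in polls)
weighted_lead = sum(lead * poll_weight(acc, days) for _, lead, acc, days in polls)

print(f"Weighted average lead: {weighted_lead / total_weight:+.1f} points")
```

Change the accuracy ratings or the decay rule and the headline number moves; those choices are made by people, not by the data.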

In the end, predicting an event like an election isn’t simply a matter of gathering data points and analyzing them according to predetermined algorithms and patterns. Tens of millions of people make their own voting decisions, spurred by a plethora of emotions, desires, incentives, and fears. I don’t think we’ve yet designed the computer or coded the program that can fully capture this complexity. It may not take a village to predict an election, but it certainly takes a strong intuitive feeling for humanity. 

Daniel Gross

Daniel Gross is editor-in-chief of strategy+business.

 