
The algorithmic trade-off between accuracy and ethics

In The Ethical Algorithm, two University of Pennsylvania professors explain how social values such as fairness and privacy can be designed into machines.

A version of this article appeared in the Summer 2020 issue of strategy+business.

The Ethical Algorithm: The Science of Socially Aware Algorithm Design

by Michael Kearns and Aaron Roth, Oxford University Press, 2019

Strava, a San Francisco–based fitness website whose users upload data from their Fitbits and other devices to track their exercise routines and routes, didn’t set out to endanger U.S. military personnel. But in November 2017, when the company released a data visualization of the aggregate activity of its users, that’s what it did.

Strava’s idea was to provide its users with a map of the most popular running routes, wherever they happened to be located. As it turns out, the resulting visualization, which was composed from three trillion GPS coordinates, also showed routes in areas, such as Afghanistan’s Helmand Province, where the few Strava users were located almost exclusively on military bases. Their running routes inadvertently revealed the regular movements of soldiers in a hot zone of insurgency.

The problem, explain University of Pennsylvania computer and information scientists Michael Kearns and Aaron Roth, authors of The Ethical Algorithm: The Science of Socially Aware Algorithm Design, is “that blind, data-driven algorithmic optimization of a seemingly sensible objective can lead to unexpected and undesirable side effects.” The solution, which they explore for nontechnical leaders and other lay readers in this slim book, is embodied in the emerging science of ethical algorithm design.

“Instead of people regulating and monitoring algorithms from the outside,” the authors say, “the idea is to fix them from the inside.” To achieve this, companies need to consider the fairness, accuracy, transparency, and ethics — the so-called FATE — of algorithm design.


Kearns and Roth don’t deal with the FATE traits in a sequential manner. Instead, they describe the pitfalls associated with algorithms and discuss the ever-evolving set of solutions for avoiding them.

The first pitfall concerns privacy, which is too often addressed simply by anonymizing data. As has been repeatedly demonstrated over the years, anonymized data can be relatively easily de-anonymized. Twenty-five years ago, Massachusetts governor William Weld learned this lesson after publicly proclaiming that a release of anonymized data summarizing the hospital records of state employees adequately protected patient privacy. An MIT Ph.D. student named Latanya Sweeney analyzed the data, picked out the governor’s medical records, and sent them to his office.

Kearns and Roth’s solution to “re-identification” is differential privacy. Boiled down, differential privacy mathematically guarantees that what an algorithm learns from a data set is essentially unchanged whether or not any one individual’s record is included. This protects individual privacy. It doesn’t, however, protect secrets that can be derived from the records in aggregate — witness Strava.
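For readers who want a taste of the mechanics, here is a minimal sketch, not drawn from the book, of the Laplace mechanism that underlies many differentially private analyses: a count query is answered with calibrated random noise, so the released answer barely depends on the presence of any single record. The data and function names below are invented for illustration.

```python
import numpy as np

def private_count(records, predicate, epsilon=0.5):
    """Differentially private count of records matching a predicate.

    A count query has sensitivity 1: adding or removing one person's
    record changes the true answer by at most 1. Laplace noise scaled
    to 1/epsilon makes the released count look essentially the same
    whether or not any single record is in the data set.
    """
    true_count = sum(1 for r in records if predicate(r))
    return true_count + np.random.laplace(loc=0.0, scale=1.0 / epsilon)

# Hypothetical hospital records; no single patient's presence is exposed.
records = [{"age": 34, "diagnosis": "flu"}, {"age": 51, "diagnosis": "asthma"}]
print(private_count(records, lambda r: r["diagnosis"] == "flu"))
```

The smaller epsilon is, the stronger the privacy guarantee and the noisier the answer — the first of the book’s sliding scales.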

The second algorithmic pitfall — and also a core element of FATE — is fairness. In 2018, Amazon disbanded a team building an algorithm to evaluate the resumes of software engineers after it discovered that the algorithm was explicitly downgrading those resumes that contained the word women’s and the names of two all-women colleges. Was this sexism? Misogyny? Not according to Kearns and Roth, who write, “Those explanations might have been more reassuring than the truth, which is that the bias was the natural if unexpected outcome of professional scientists and engineers carefully applying rigorous and principled machine learning methodology to massive and complex data sets.”

Building fairness into algorithms requires identifying a model that minimizes unfairness. This rather tautological quest is pursued by purposely imposing constraints on the algorithm, such as equalizing the false rejection rate for bank loans across different groups of people. Deciding what these constraints should be is a chore more appropriate for leaders than for engineers, because it entails human judgment, policy, and ethics.
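As a concrete illustration of that constraint (the data and names here are invented, not the book’s), auditing for equalized false rejections means comparing, across groups, how often qualified applicants are wrongly turned down:

```python
import numpy as np

def false_rejection_rate(y_true, y_pred, group, g):
    """Share of truly creditworthy applicants in group g who were rejected."""
    mask = (group == g) & (y_true == 1)   # qualified applicants in group g
    return float(np.mean(y_pred[mask] == 0)) if mask.any() else 0.0

# Invented toy data: 1 = creditworthy / approved, 0 = not.
y_true = np.array([1, 1, 1, 0, 1, 1, 0, 1])
y_pred = np.array([1, 0, 1, 0, 0, 0, 0, 1])
group  = np.array(["A", "A", "A", "A", "B", "B", "B", "B"])

gap = abs(false_rejection_rate(y_true, y_pred, group, "A")
          - false_rejection_rate(y_true, y_pred, group, "B"))
print(f"False-rejection gap between groups: {gap:.2f}")  # 0.33 here
# A fairness constraint would cap this gap while the model is
# trained to be as accurate as possible.
```

Choosing which groups to compare, and how large a gap to tolerate, is exactly the leadership decision the authors have in mind.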

The remaining pitfalls described by Kearns and Roth are caused not so much by algorithms as by humans trying to optimize algorithmic outcomes for themselves. For instance, people who live in residential neighborhoods that offer alternative routes to traffic-jammed freeways have been known to report nonexistent accidents to the navigation app Waze to induce it to steer drivers away from them.

The solution set for these pitfalls includes teaching algorithms to anticipate and adjust for efforts to game them, using concepts such as simulated self-play. Gerald Tesauro of IBM Research first applied this idea successfully in 1992, when he created TD-Gammon, a world-class backgammon program that learned by playing against itself. In the same way, simulated self-play can help algorithms teach themselves to counter the effects of human gaming.
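A minimal sketch of the self-play idea, assuming a far simpler game than backgammon: the same value table plays both sides of one-pile Nim and improves from the outcomes of its own games. Everything here — the game, the table, the hyperparameters — is invented for illustration, not taken from Tesauro’s work.

```python
import random
from collections import defaultdict

# One-pile Nim: players alternate taking 1 or 2 stones; whoever takes
# the last stone wins. A single value table plays both sides, so the
# program improves by playing against itself.
Q = defaultdict(float)      # Q[(stones_left, move)] -> estimated value
ALPHA, EPSILON = 0.1, 0.2   # learning rate, exploration rate

def choose(stones):
    moves = [m for m in (1, 2) if m <= stones]
    if random.random() < EPSILON:
        return random.choice(moves)                  # explore
    return max(moves, key=lambda m: Q[(stones, m)])  # exploit

for _ in range(20000):
    stones, history = 10, []
    while stones > 0:
        move = choose(stones)
        history.append((stones, move))
        stones -= move
    # The player who took the last stone won; since moves alternate,
    # credit alternates sign as we walk back through the game.
    reward = 1.0
    for state, move in reversed(history):
        Q[(state, move)] += ALPHA * (reward - Q[(state, move)])
        reward = -reward

# From 10 stones, taking 1 (leaving a multiple of 3) is optimal.
print(max((1, 2), key=lambda m: Q[(10, m)]))   # typically prints 1
```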

Some of the FATE traits are more easily attained than others, but there is an ever-present tension underlying the acronym that Kearns and Roth point to throughout The Ethical Algorithm. This is the tension between accuracy and ethics, and it will need to be addressed and resolved in any company that uses algorithms.

If you run a credit card company, for instance, you want an algorithm that will continuously refine its ability to deliver up the names of consumers who will not default on their debt. The better the algorithm is at this task, the more profitable your company becomes. The algorithm isn’t going to limit itself: It will encroach on consumer privacy, and it will eliminate people wholesale based on their age or race or zip code unless you stop it in the ways described by Kearns and Roth. But when you limit the algorithm, its ability to identify creditworthy consumers is diminished. Thus, there is a trade-off between algorithmic accuracy and ethics — which the authors describe as “sliding scales that are under our control.”
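To make the “sliding scales” image concrete, here is a hedged sketch — synthetic data, an invented penalty weight lam — of how turning the fairness dial trades accuracy against the false-rejection gap in a simple threshold-based credit model:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 2000
group = rng.integers(0, 2, n)                   # two demographic groups
y = rng.integers(0, 2, n)                       # 1 = creditworthy
# Scores track creditworthiness, but group 1's scores are shifted down.
score = y + rng.normal(0, 0.8, n) - 0.4 * group

def evaluate(t0, t1):
    """Accuracy and false-rejection gap for group-specific thresholds."""
    pred = np.where(group == 0, score > t0, score > t1)
    acc = np.mean(pred == y)
    frr = [np.mean(~pred[(group == g) & (y == 1)]) for g in (0, 1)]
    return acc, abs(frr[0] - frr[1])

thresholds = np.linspace(-1, 2, 31)
for lam in (0.0, 0.5, 2.0):                     # the "slider"
    (acc, gap), t0, t1 = max(
        ((evaluate(t0, t1), t0, t1)
         for t0 in thresholds for t1 in thresholds),
        key=lambda x: x[0][0] - lam * x[0][1])
    print(f"lam={lam}: accuracy={acc:.3f}, false-rejection gap={gap:.3f}")
```

As lam rises, the search favors threshold pairs with a smaller gap at some cost in accuracy — the trade-off in miniature.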

Designing social values into algorithms (versus regulating and monitoring them) presumes not only that companies will be savvy enough to operate these scales, but also that their leaders will use them in ways that could reduce the bottom line. You can draw your own conclusions about the likelihood of both conditions being met in any given company.

