
When Robots Miss the Minutiae

As tasks such as ad placement become automated, it's obvious that machines will need to make a huge leap to discern context.

Automation is on the rise because machines, software, and algorithms can do many tasks more effectively and more efficiently than humans. Operating without humans' frailties, tics, and biases, they can produce better outcomes. One of the largest asset managers in the world, the New York Times reports, is increasingly relying on computers rather than humans to pick stocks.

But the case for automation isn't universally clear. Software programs may lack certain human foibles, but they also lack human attributes that can be enormously useful in business. As I've argued, artificial intelligence desperately needs some emotional intelligence. Now we're seeing that the lack of such emotional intelligence can, in fact, prove detrimental. Exhibit A: the programmatic purchase and distribution of advertising on platforms like YouTube.

Buying ad space on media outlets used to be a relatively simple proposition. Back in the 1960s, when there were only three television networks, a few dozen national magazines, and scores of large-scale newspapers, a few ad agency executives could get together with a client over midday martinis and plot out a strategy. A couple of account executives would place some phone calls, and the space would be booked. On the sell side, a small group would draw up schedules and direct the placement of the ads in the appropriate spaces or time slots.

In today's digital world, with its millions of websites, its endless mix of portals, narrowcast sites, and vast social platforms, placing ads carefully and effectively is an entirely different proposition. Clients want their ads to be sprayed around the world, targeting consumers wherever and whenever they spend their time online. And it makes sense to have computers make those decisions and execute the plans. For YouTube, which places ads in millions of videos each day and whose inventory is expanding daily, it makes no business sense to have a person examine every video and determine whether a brand's placement inside it is appropriate. And so its brilliant coders have taught the machines to place the ads, based in part on the content and in part on what they know about the viewers.

It’s all really effective and efficient — until, suddenly, it isn’t. The margins may be higher when computers place the ad orders than when humans do it. But the long-term consequences of letting bots run the show can wipe out margins as well. To a large degree, the online advertising business now lets computers decide, without human intervention and judgment, what content a brand’s message gets displayed near. And that’s an enormously sensitive task. In recent weeks, for example, the Wall Street Journal reported that ads from major brands on YouTube were displayed on videos that had offensive and racist content — even after the company had assured customers the practice had stopped. When the reports came to light, big-spending brands including Coca-Cola, Walmart, and Dish Network pulled their spending from the platform. “The content with which we are being associated is appalling and completely against our company values,” Walmart said. Google has apologized to the advertisers and outlined some changes to address the issue.

The screening processes the platform had in place clearly weren’t functioning perfectly. As the Journal reported: “Google has said it uses software to automatically screen videos’ titles, descriptions, images and dozens of other signals to prevent ads from appearing on inappropriate content. But the software has been imperfect. It has pulled ads from innocuous content, allowed ads that violate Google’s existing policies and can miss context or nuance.”


Here's the problem: When it comes to content, context really matters. In one context — for example, a documentary about the life of Nobel Prize winner Elie Wiesel — images of Nazi-era concentration camps are appropriate and make sense, and one could see a brand wanting to be associated with the writer/activist. In another context — say, a video posted by a Holocaust denier — the same images would be regarded as transparently abhorrent, clearly not something an advertiser would want to be a part of. A Fortune 500 company may want to associate its brand with a documentary on the inspiring history of the American civil-rights movement. But it would explicitly not want to associate its brand with a video posted by a modern-day white supremacist. A filter that blocked ads from appearing alongside content that had images of cross-burning or Ku Klux Klan members wouldn't necessarily differentiate between the two.
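To see why, consider a deliberately simplified sketch in Python. The blocklist, the function, and the sample video metadata below are all hypothetical, and this is nothing like Google's actual multi-signal screening system; it only illustrates the failure mode the Journal describes, in which two pieces of content with opposite meanings share the same surface vocabulary:

# Hypothetical sketch: a naive keyword blocklist for brand safety.
# This is not Google's system; it illustrates why matching on
# surface signals alone cannot distinguish context.
BLOCKED_TERMS = {"concentration camp", "ku klux klan", "cross-burning"}

def is_brand_safe(title: str, description: str) -> bool:
    """Return False if any blocked term appears in the video's metadata."""
    text = f"{title} {description}".lower()
    return not any(term in text for term in BLOCKED_TERMS)

# A documentary and a denialist video share the same vocabulary,
# so the filter treats them identically: both lose the ad.
documentary = ("Elie Wiesel: A Life", "Archival footage of concentration camps")
hate_video = ("What they hide", "Concentration camps were a hoax")

print(is_brand_safe(*documentary))  # False
print(is_brand_safe(*hate_video))   # False

Adding more signals, such as titles, descriptions, images, and viewer data, makes the blocklist richer, but the signals still describe what the content contains, not what it means. That gap is precisely the nuance the software misses.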

In time, of course, it's possible that computers will develop the capacities for empathy, historical understanding, and social awareness that humans have. But it's clear they are not there yet. In the meantime, this gap poses a dilemma. The business of online advertising, like so many others today, relies increasingly on automation. Insert more people into the process, and the price goes up while the pace of execution declines dramatically. That's bad for margins. But there are clearly times when leaving the computers to their own devices can lead to results that drive clients to rethink their decisions to do business on a platform in the first place. And that's even worse for margins.

Daniel Gross

Daniel Gross is editor-in-chief of strategy+business.
