
Understanding the potential of artificial intelligence

Daniel Hulme, CEO of the AI solutions startup Satalia, offers other chief executives a primer on the technology that will shape the future of work and business.

A version of this article appeared in the Spring 2019 issue of strategy+business.

This interview is part of the Inside the Mind of the CEO series, which explores a wide range of critical decisions faced by chief executives around the world.

In 2008, Daniel Hulme started Satalia, a company that uses data science, machine learning, and optimization (making the best use of resources) to build customized platforms that solve tough logistics problems involving products, services, and people. Lately, Hulme has spent a good portion of his time explaining the ins and outs of artificial intelligence to other CEOs. He sees a big information gap at the top of most companies — yet this is where technology investment decisions are made. Misunderstanding AI, Hulme believes, can lead people to both overestimate its value and underestimate its impact.

Satalia’s work is a leading example of what AI is currently good at. Not coincidentally, it is also the commercialization of Hulme’s research at University College London (UCL), where he is the director of the business analytics master’s degree program. Satalia’s clients are household names in the U.K.; they include Tesco, DFS, and the British Broadcasting Corporation. (Disclosure: PwC, which publishes strategy+business, uses Satalia’s technology and is working with the company to develop an offering to take to mutual clients.)

The increasingly competitive market for AI expertise is both a blessing and a curse for Satalia. The company, with a staff of 30 that is expected to grow quickly in 2019, can’t attract talent through salaries alone, so it also relies on an innovative management concept. This organizational structure reflects what Hulme believes will become the prevailing model of successful corporations in the future. Satalia, a loose-knit operational hub, based in a trendy North London shared workspace, taps into a “gig economy”–style global talent network and offers flexibility and fun, with few if any layers of management.

The setting is a long way from Morecambe, a sleepy resort town on the chilly northwestern coast of the United Kingdom, where Hulme, 38, grew up. Morecambe is famous not for computer science but for potted shrimp, a late British comedian (Eric Morecambe, who took the town’s name as his own), and a very British playwright, Alan Bennett. A girl Hulme met at age 16 came from London, and inspired him to move 250 miles south after finishing secondary school. There, he entered UCL and the world of artificial intelligence. In September 2018, Hulme sat down with strategy+business in the cafeteria of Satalia’s shared offices to explain the artificial intelligence revolution and why there are no truly intelligent machines — yet.

S+B: What drew you to artificial intelligence?
HULME:
I’ve always been interested in what it means to be human, and in the nature of the universe. I did my undergraduate [degree] in AI and my master’s in AI, and my Ph.D. in AI, so I guess I’ve been doing 19 years’ worth of activity in AI.

S+B: What is your definition of AI?
HULME:
There are two definitions of AI, and the more popular one is the weakest. This first definition [concerns] machines that can do tasks that were traditionally in the realm of human beings. Over the past decade, due to advances in technologies like deep learning, we have started to build machines that can do things like recognize objects in images, and understand and respond to natural language. Humans are the most intelligent things we know in the universe, so when we start to see machines do tasks once constrained to the human domain, then we assume that is intelligence.

 
But I would argue that humans are not that intelligent. Humans are good at finding patterns in, at most, four dimensions, and we’re terrible at solving problems that involve more than seven things. Machines can find patterns in thousands of dimensions and can solve problems that involve millions of things. Even these technologies aren’t AI — they’re just algorithms. They do the same thing over and over again. In fact, my definition of stupidity is doing the same thing over and over again and expecting a different result.

S+B: And the second definition of AI?
HULME:
The best definition of intelligence — artificial or human — that I’ve found is goal-directed adaptive behavior. I use goal-directed in the sense of trying to achieve an objective, which in business might be to roster your staff more effectively, or to allocate marketing spend to sell as [much] ice cream as possible. It might be whatever goal you’re seeking.

Behavior is how quickly or frictionlessly I can move resources to achieve the objective. For example, if my goal is to sell lots of ice cream, how can I allocate my resources to make sure that I’m achieving the objective?

But the key word for me in the definition of goal-directed adaptive behavior is adaptive. If your computer system is not making a decision and then learning whether that decision was good or bad and adapting its own internal model of the world, I would argue that it’s not true AI. And it’s OK for companies at the moment to be calling machine learning AI. So for me, the true [definition of] AI [involves] systems that can learn and adapt themselves without the aid of a human. Adaptability is synonymous with intelligence.
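
To make that distinction concrete, here is a minimal sketch of goal-directed adaptive behavior, not Satalia’s own software: a system picks a decision, observes whether it was good or bad, and updates its internal model of the world without human intervention. The ice-cream scenario, the stock options, and all figures are illustrative assumptions.

```python
import random

# Hypothetical scenario: each day, decide how much ice cream to stock.
# The system keeps its own estimate of profit per stock level (its
# "internal model of the world") and updates it from observed outcomes.

STOCK_OPTIONS = [50, 100, 150, 200]           # candidate decisions
estimates = {s: 0.0 for s in STOCK_OPTIONS}   # expected profit per option
counts = {s: 0 for s in STOCK_OPTIONS}

def observe_profit(stock):
    """Stand-in for the real world: true demand is unknown to the system."""
    demand = max(random.gauss(120, 30), 0)
    sold = min(stock, demand)
    return sold * 2.0 - stock * 0.8           # revenue minus cost

for day in range(365):
    # Goal-directed: usually pick the option currently believed best,
    # but keep exploring so the model can adapt if the world changes.
    if random.random() < 0.1:
        choice = random.choice(STOCK_OPTIONS)
    else:
        choice = max(STOCK_OPTIONS, key=lambda s: estimates[s])

    profit = observe_profit(choice)           # was the decision good or bad?

    # Adaptive: update the internal model from the outcome, no human in the loop.
    counts[choice] += 1
    estimates[choice] += (profit - estimates[choice]) / counts[choice]

print({s: round(v, 1) for s, v in estimates.items()})
```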

In fact, most companies don’t have machine learning problems — they have optimization problems. Optimization is the process of allocating resources to achieve an objective, subject to some constraints. Optimization problems are exceptionally hard to solve. For example, how should I route my vehicles to minimize travel time, or how do I allocate staff to maximize utilization, or how do I spend marketing money to maximize impact, or how do I allocate sales staff to opportunities to maximize yield? There are only a handful of people across the world who are good at solving problems like this with AI.
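
As an illustration of the kind of optimization problem he describes, here is a toy version of allocating sales staff to opportunities to maximize yield, with the constraint that each person takes exactly one opportunity. The names and figures are made up, and brute force is used only because the instance is tiny; the combinatorial growth is precisely why realistic instances are so hard.

```python
from itertools import permutations

# Toy optimization problem: allocate sales staff to opportunities to maximize
# total expected yield, subject to the constraint that each person takes
# exactly one opportunity. Names and figures are made up for illustration.
yield_estimate = {
    ("Asha", "RetailCo"): 90, ("Asha", "BankCo"): 40, ("Asha", "MediaCo"): 65,
    ("Ben",  "RetailCo"): 55, ("Ben",  "BankCo"): 80, ("Ben",  "MediaCo"): 30,
    ("Chen", "RetailCo"): 60, ("Chen", "BankCo"): 70, ("Chen", "MediaCo"): 85,
}
staff = ["Asha", "Ben", "Chen"]
opportunities = ["RetailCo", "BankCo", "MediaCo"]

best_value, best_plan = -1, None
for assignment in permutations(opportunities):    # every feasible allocation
    value = sum(yield_estimate[(person, opp)] for person, opp in zip(staff, assignment))
    if value > best_value:
        best_value, best_plan = value, dict(zip(staff, assignment))

print(best_plan, best_value)
# Brute force checks 3! = 6 plans here; with 20 staff there are about 2.4e18,
# which is why these problems become exceptionally hard at scale.
```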

S+B: How do you think CEOs see AI today?
HULME:
Many CEOs feel they need to bring AI into their organization. There’s this fear factor that if you’re not on the AI bandwagon, then you’re going to lose out to competitors that are going to be eating your market, because they’re using technologies to make decisions faster and better than you.

They may ask the chief information officer, “What are we doing in AI?” And the CIO will then hire or try to hire data scientists, whose work represents a kind of proxy for AI. But data scientists only have a certain type of skill. They understand how to use statistics and machine learning to find patterns in data. They’re not necessarily good at building production-grade systems that can make decisions or that can adapt themselves.

S+B: So you don’t believe that machine learning, in itself, can evolve into the kinds of adaptive systems that companies need.
HULME:
A lot of people are saying, “Well, [with] these deep learning models, these data scientists will solve all our problems,” and actually, they won’t. As I said, machine learning, data science, and statistics are great at finding patterns in data. But the most important thing is making decisions that leverage the patterns found in data. This requires a completely different set of skills: discrete mathematics, operations research, and optimization. These skills are massively underrepresented in industry.

S+B: What kinds of questions, then, should CEOs be asking about AI?
HULME:
One is, what technologies and solutions should they be bringing into their organization to remove the biggest frictions? So first they need to identify the big frictions that are aligned to their core competencies and assess what technologies they need to innovate around those competencies. The biggest frictions might come from having lots of costs associated with employing people. Or the company might have frictions associated with customer experience. Or, if a company has a lot of analysts reading lots of reports and then trying to synthesize them into information, machine learning can do that better.

CEOs also need to have a very clear understanding about the competitive landscape. Most companies don’t just have direct competition; they also have indirect competition from the Googles and the Facebooks and the Alibabas. Lots of those big companies can enter almost any market and shake it up. So companies need to be looking tangentially at indirect competitors and assessing what these competitors could do, given all the data that they’re currently sitting on — because once they figure out how to mobilize that data, they can cannibalize those markets.

And the third question CEOs should ask is how to bring the right talent into their organization to execute a strategy that keeps them competitive. It’s hard for most companies to do this, so you have to learn to work with startups and third-party vendors to deliver innovations and help you quickly adapt to a changing world.

S+B: On the question of talent and skills: You say data science is only part of it. What other categories should companies be looking at?
HULME:
There are four categories of AI skills. The first category is the data. Companies should ask: Are we getting our data into the shape where people can consume it? There are lots of companies out there that are throwing money at building data lakes — that’s all the raw data that a company holds from code generation to sales information — because they think at some point in the future data lakes will be useful. That’s not a bad investment, but I would also suggest that you need to be building applications straight away on top of that data lake that drive value into your business. Companies…should be thinking about building digital twins of their organizations, i.e., a perfect digital representation of their physical assets, like their infrastructure and employees.

S+B: What’s the second AI category?
HULME:
Next is recruiting data scientists who have the machine learning and statistics skills to find insights from the data. Then, the third is [finding] what I call the decision scientists: people who can understand how to make decisions or solve optimization problems that leverage those insights.

And fourth, crucially, for true AI, you need to have an AI architect who understands how to glue these three components together: the data, the machine learning, and the optimization to build adaptive systems. And at the moment, it’s the CIO who is trying to step into that role of overseeing this. But I don’t know of many companies out there that have true AI architects. For now, companies are managing maybe part of this, but not all four categories.
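
What follows is a hedged sketch of how those pieces might be glued together in a hypothetical retail scenario: a learned demand estimate (the machine learning layer) feeds an allocation decision (the optimization layer), and the observed outcome flows back to update the estimate (the adaptive loop). None of the names or numbers come from Satalia.

```python
# Hypothetical sketch of the "glue": data feeds machine learning, machine
# learning feeds optimization, and outcomes flow back so the system adapts.
# The retail scenario, names, and figures are illustrative only.

forecast = {"north": 100.0, "south": 100.0}   # learned demand estimates per region
ALPHA, SUPPLY = 0.3, 200                      # learning rate, units available each week

def optimise_allocation(demand_forecast):
    """Decision layer: split limited supply in proportion to forecast demand."""
    total = sum(demand_forecast.values())
    return {r: SUPPLY * d / total for r, d in demand_forecast.items()}

def observe_demand():
    """Data layer stand-in: the demand actually seen this week."""
    return {"north": 130.0, "south": 70.0}

for week in range(12):
    plan = optimise_allocation(forecast)      # optimization consumes the ML estimate
    seen = observe_demand()                   # new data arrives (plan would be executed here)
    for region in forecast:                   # learning layer updates its own model
        forecast[region] += ALPHA * (seen[region] - forecast[region])

print("final forecasts:", {r: round(v, 1) for r, v in forecast.items()})
print("next plan:", {r: round(v, 1) for r, v in optimise_allocation(forecast).items()})
```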

S+B: Can you say more about the digital twin concept? What can digital twins be used for?
HULME:
Digital twins are the next evolution of digital transformation. To be able to adapt more quickly to a changing world, companies need to create a digital replica of all of their physical assets, their infrastructure and people. Once you have a twin, you can start to run experiments and simulate scenarios to operate your business more effectively. Further down the line, we may even have AI designing those experiments and running them without the aid of a human. The role of the strategist and of leadership is to develop a strong vision and purpose, i.e., [determining] what key objective the organization needs to aspire to. I hope that organizations will realize that this objective needs to be much more sophisticated than a financial return to be able to attract, empower, and motivate talent. Exceptional talent wants to align with a strong purpose and inspirational leaders.
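
To illustrate the kind of what-if experiment a digital twin enables, here is a deliberately simple sketch: a toy replica of a delivery fleet, and a simulator that answers one question (is a fourth van worth it?) without touching the real operation. The structure, capacities, and demand figures are assumptions for illustration only.

```python
import random

# A toy "digital twin": a data model of physical assets plus a simulator for
# what-if experiments. The fleet, capacities, and demand are illustrative only.
twin = {
    "vans": 3,
    "deliveries_per_van_per_day": 25,
}

def simulate_month(twin, extra_vans=0, days=30, seed=0):
    """Replay a month of synthetic demand against a modified copy of the twin."""
    rng = random.Random(seed)
    capacity = (twin["vans"] + extra_vans) * twin["deliveries_per_van_per_day"]
    missed = 0
    for _ in range(days):
        orders = rng.randint(60, 110)          # orders arriving that day
        missed += max(0, orders - capacity)
    return missed

# Experiment without touching the real fleet: is a fourth van worth it?
print("missed deliveries, current fleet:", simulate_month(twin))
print("missed deliveries, one more van:", simulate_month(twin, extra_vans=1))
```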

S+B: Are we going to have a wave of wasted AI spending?
HULME:
There is a bit of a bubble in AI. I don’t think that it’s going to go to waste. I think that all this investment will be additive, but there’s an over-expectation of what machine learning can bring right now, because of a lack of appreciation of the fact that machine learning is only part of the journey. And the next part of the journey for most big companies is optimization and decision making.

S+B: You’re saying that AI is simultaneously overhyped and underexploited.
HULME:
As [futurist] Roy Amara noted, the impact of technology tends to be overestimated in the short run and underestimated in the long run. For now, you can probably ignore the idea of having adaptive systems in your business. That will come later. In the short run, you can use AI to remove the friction of mundane and repetitive tasks across the organization. If used correctly, this can absolutely change your business. But there’s a lot of hype out there, and a lot of people investing in these technologies don’t know what they’re doing.

S+B: How well equipped is AI to help business leaders forecast the future?
HULME:
The world is changing so quickly, it’s very difficult to actually have all the necessary data points to be able to help you forecast accurately. At the moment, that’s still in the realm of human beings.

I’ll give you an example. Someone I know who worked at a loan company told me this story. They were trying to predict who might default on their loans, and they decided to collect social data from LinkedIn and Facebook to see whether they could find indicators there. Actually, all that data was useless. But there are two really good predictors of defaulters: One is the Internet caches [browsing histories] that contain websites with a particular font that is often found on gambling sites. The other is the number of mistakes people make when they are filling out the loan application, which [can be] an indication of whether they are intoxicated.
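
For illustration only, the two predictors Hulme describes might be turned into model-ready features along these lines. The field names, the keyword list standing in for the font signal, and the function itself are hypothetical rather than the lender’s actual method.

```python
# Hypothetical feature extraction for the two predictors described above.
# The field names, keyword list (standing in for the font signal), and the
# example inputs are illustrative assumptions, not the lender's actual method.
GAMBLING_HINTS = {"casino", "poker", "bet", "slots"}

def loan_features(application_edits, browsing_history):
    """Turn raw application and browsing signals into model-ready features."""
    # 1. Number of corrections made while filling in the application form.
    form_mistakes = len(application_edits)

    # 2. Share of visited sites that look gambling-related.
    gambling_hits = sum(
        any(hint in url.lower() for hint in GAMBLING_HINTS)
        for url in browsing_history
    )
    gambling_share = gambling_hits / max(len(browsing_history), 1)

    return {"form_mistakes": form_mistakes, "gambling_share": gambling_share}

print(loan_features(
    application_edits=["salary retyped", "address corrected", "surname fixed"],
    browsing_history=["news.example.com", "superslots.example", "mail.example.com"],
))
```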

So you don’t need all the data in the world. You just need the right data, and the right amount of data. It all stems from: What is the problem we’re trying to solve, and how are we solving that problem at the moment? What data are people consuming, and what algorithms are they using? Because it’s most likely that data will solve the problem. We just need to start by replicating what’s in experts’ heads.

S+B: As the CEO of an AI company, what do you think are the greatest threats your company faces?
HULME:
I used to think that technology was a threat, in the sense that my competitors had access to advanced technologies and data. But now I think [the worry] is not getting the talent to use that technology. How do you attract and retain that talent? And that comes back to culture and purpose.

One of the things I’m worried about is my team going to work for other companies that can pay twice as much. Bigger companies now have access to very cheap capital, and they understand how to get users, even by losing money. They know that users are going to be valuable in the future. They know how to attract the right talent; they can pay high salaries; they know how to keep them happy by giving them beanbags and free food and all of this kind of stuff. And it’s going to be very, very difficult, I think, for traditional startups to compete with those organizations without them being hoovered up very quickly. Even in academia now we’re seeing the really good professors just being hoovered up by these large organizations. So the biggest threat for me is large companies that have access to infinite amounts of cheap capital that will be able to out-innovate me because of their access to [people].

S+B: Do you expect AI experts to be in high demand in general?
HULME:
I do. Lots of companies will be telling their investors, “OK, we need to get money to build our own AI teams or data science teams.” And in reality, it’s going to be very difficult for most companies out there to attract and retain that talent. In some respects, that’s a problem we are trying to solve at Satalia. We try to help companies understand what kind of people they need and be honest with themselves.

S+B: Which countries do you think will move fastest with the next stages of AI?
HULME:
It comes back to the ethical questions around GDPR [the European Union’s recently enacted General Data Protection Regulation] and building “explainable algorithms.” So if you’re building algorithms now that are making decisions in people’s lives, in Europe you need to be able to explain how those algorithms are making those decisions.

Unfortunately, countries that don’t have constraints — maybe the Chinas and Russias of this world — may be able to out-innovate countries that do have those restrictions, because it’s very, very hard to build explainable algorithms. And if there is no legislation for this, you may find unscrupulous organizations trying out systems that could have horrible outcomes, and the jurisdictional repercussions will be unclear. In a hospital, for example, is it you or the algorithm that just made the mistake? That’s why it’s important to understand and to explain how a computer is making its decisions. And we are not there yet.

S+B: Overall, what effect do you think AI will have on jobs? Will it create more?
HULME:
In the short term, over the coming decade, I believe that AI will create jobs. In the long term, it will remove more jobs than it creates. I spend a lot of time thinking about the concept of economic singularity. This is the point at which AI will free people from their jobs and those people won’t be able to retrain fast enough to get another job, because AI will have taken it, too. Some experts believe that this could happen in the next 10 to 20 years, and that governments and our economy aren’t prepared for it. Satalia’s purpose is to try to address these future problems. We need to somehow create a global infrastructure that supports those people who are going to be out of work.

There’s another concept called the technological singularity, in which we build AI smarter than us in every possible way. It will be the last invention humanity needs to create, because it will be able to think infinitely faster and better than humans. Many scholars predict we will birth a superintelligence around the middle of our century. It will either be the most glorious thing to happen to humanity or perhaps our biggest existential threat. My concern is that if we are not cooperating as a global species by the time we create it, then it will see us as a threat and remove us from the equation. My purpose is to steer the world toward cooperation, and that means reinventing our political and economic models, and agreeing on a new objective function for humanity. The impulse for countries to increase GDP and companies to make profits means that more and more investment will be made to drive efficiencies and profits, which is leading us to a global economic and environmental crisis. We need a sustainable objective function, and we need to get everyone on the planet contributing to it; otherwise, we may destroy ourselves. I don’t believe that governments are prepared or can act quickly enough, so I hope the change will come from business leaders who have a huge influence and responsibility to steer us toward a positive future.

S+B: This implies that we need to put in safeguards now to ensure the ethical development of AI.
HULME:
For millennia, philosophers have been debating how society should be structured and what it means to live a “good life.” As our environments start to intelligently interact with us, we’re giving them the power to create and destroy. We have to embed ethical behaviors into these systems, which makes it an extremely exciting time for humanity, because we now have to agree on what those ethical behaviors should be.

S+B: Are there other technologies — perhaps blockchain — that will deliver this AI-enhanced world?
HULME:
Blockchain technology is giving the world a trusted data platform, and AI is providing the means to collaborate and connect without friction. Over the coming decade we might see the emergence of a DAO [decentralized autonomous organization] that will allow for truly decentralized and distributed decisions and actions. I can imagine a world in which anyone could boot up a project by launching a DAO that enables contributions from anywhere in the world. The DAO is similar to the open source movement, but in this new paradigm, anyone — software engineers, designers, marketers, accountants, and even strategists — will be able to rally around an idea and contribute to its development. Work won’t be provided for free or for kudos, as in the open source model. Instead, fiscal remuneration will be determined by the quantity and quality of the contribution. This means that anyone will be able to contribute to a project, even for just a few hours, and they will be rewarded fairly for their work. As people work on these open projects, the DAO captures their contribution on a public blockchain. These contributions accumulate to form a reputation that determines the rate of remuneration on future projects. People develop different rates for different skills, and the rate evolves dynamically over time. You would be paid a different rate for marketing work than for software development, depending on your relative skill in each.
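
One possible reading of that contribution-and-reputation mechanism, sketched as a plain data model rather than an actual blockchain implementation: contributions are appended to a ledger, reputation accumulates per person and per skill, and the remuneration rate for future work is derived from that reputation. All names, scores, and the rate formula are assumptions.

```python
from dataclasses import dataclass
from collections import defaultdict

# Hypothetical sketch of the contribution-and-reputation model described above.
# In the scenario Hulme outlines, the ledger would live on a public blockchain;
# here it is a plain list, and the rate formula is an invented placeholder.

@dataclass
class Contribution:
    contributor: str
    skill: str        # e.g. "software", "marketing"
    hours: float
    quality: float    # 0..1 peer-assessed score

ledger = []                         # stand-in for the on-chain record
reputation = defaultdict(float)     # (contributor, skill) -> accumulated score
BASE_RATE = 40.0                    # illustrative base hourly rate

def record(c: Contribution):
    """Append the contribution to the ledger and update per-skill reputation."""
    ledger.append(c)
    reputation[(c.contributor, c.skill)] += c.hours * c.quality

def rate(contributor: str, skill: str) -> float:
    """Remuneration rate grows with reputation in that particular skill."""
    return BASE_RATE * (1.0 + 0.1 * reputation[(contributor, skill)])

record(Contribution("contributor_a", "software", hours=6, quality=0.9))
record(Contribution("contributor_a", "marketing", hours=2, quality=0.5))
print(round(rate("contributor_a", "software"), 2),
      round(rate("contributor_a", "marketing"), 2))
```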

S+B: This upends the financing models we have today, similar to the ICOs (initial coin offerings) that have been developed out of bitcoin. Will this continue, and what is your vision of this autonomous world?
HULME:
Many of these open projects will use digital tokens as their economic model. A Cambrian explosion [a rapid diversification among animals found in the fossil record] of funding models will appear, such as ICOs and other types of token sales. Selling tokens will give DAO projects the capital to get started. By reducing the waste and friction, we may reach a point at which new innovations help ensure that everyone’s basic needs are met. Giving everyone seamless access to healthcare, nutrition, and education will mean that people have the freedom to create and contribute to DAO projects without the need for initial funding. Since digital tokens have no jurisdiction, contributors from anywhere in the world can be remunerated with the same currency. Someone in Europe who contributed the same value to a DAO project as someone in India would receive the same remuneration. And because everyone has a fair opportunity to contribute to DAO projects, there may be a rapid redistribution of wealth.

One of the founding principles of the DAO is that all products are open source. The creation of a completely frictionless free market, where the cheapest and best-placed people could contribute, means that toxic companies are starved of labor and customers. Efficient markets coupled with conscientious consumption could spawn tens of thousands of new organizations whose products and services are developed to meet real needs and provide real benefits.

People will be able to work anywhere they want, which could cause mass migration. Digital nomads could force governments to reassess and innovate their policies to attract and retain corporations and talent by reducing taxes and slackening employment laws. The freedom to work anywhere will cause substantial population shifts and reenergize communities, with people growing their own food, harnessing natural energy sources, and turning away from mass-produced or packaged solutions. This reemergence of community after years of isolated self-interest could have a huge impact on the happiness levels of all age groups.

S+B: Back to the here and now. How far away are we from solving technology problems so that being a small company today in this world of AI won’t be a disadvantage?
HULME:
The tools themselves are not too far away. One of Satalia’s aspirations is to build a platform that allows somebody to boot up a company or an organization, and then attract people from anywhere in the world to contribute to a particular product or service. I have a personal mission to try to get the world cooperating as a global community because I don’t believe that our current economic and political systems are sustainable for the planet.

In some respects, the biggest threat to a company like Satalia is the limitation of its own internal structure. In the future, the success of Satalia will be to remove the concept of a centralized organization, and that means completely decentralizing Satalia itself. That’s OK. And that’s why I’m trying to operate Satalia as a self-organizing company. I want to build a platform to create a decentralized world and be an exemplar for what amazing innovations you can build if you harness and empower amazing talent. In a decentralized world, people could learn to cooperate and contribute more positively to society, and potentially help address some of the threats I’ve mentioned above. If I could create a world where anybody could boot up an idea and get people to surround it and drive it forward, then that’s a world that I want to live in.

Read more: PwC's 2019 AI Predictions and the six priorities businesses must consider in the coming year.

Author profiles:

  • Euan Cameron is a partner with PwC UK based in London. He is the U.K. artificial intelligence leader, focusing on designing and deploying applied machine learning and AI solutions for both clients and PwC itself. Prior to his current role, he worked for two decades in corporate strategy development and M&A.
  • Deborah Unger is a senior editor of strategy+business.