
Aiming AI at society’s toughest challenges

As Chief Science Officer of Alphabet’s innovation arm, David Andre applies AI to moonshot challenges—and sees generative AI as a pivotal new tool in his efforts.

Portrait of David Andre

From a young age, David Andre was deeply fascinated with both human and artificial intelligence, a passion kindled by watching his mother, Sheryl, pursue a pioneering career as a computer programmer. Over the span of three decades, Andre has applied his deep AI expertise and entrepreneurial spirit to diverse fields, including asset management and wearable technology. Yet, it is his current role as Chief Science Officer at Alphabet’s X, the moonshot factory, that truly allows him to fulfill a childhood aspiration: leading a visionary science lab that’s dedicated to addressing humanity’s most perplexing challenges.

Established by Google in 2010 and now under Alphabet, X serves as a hub of innovation, embarking on ambitious projects that seek to solve complex problems with radical technological solutions. It has birthed trailblazing Alphabet subsidiaries such as Waymo, the self-driving vehicle rideshare provider, and Wing, a pioneer in autonomous delivery drones. Currently, over half of X’s endeavors take aim at climate change.

Tackling such daunting problems demands not only ambitious problem-solving and technical acumen but also steadfast resilience to a high—and expected—incidence of failure. In our conversation with Andre at last fall’s World Summit AI in Amsterdam, he shared how he keeps his team motivated, as well as the ways generative AI is unlocking new avenues for breakthroughs at X. An edited version of our conversation follows.

S+B: How do you create a culture of innovation at X?
ANDRE:
Innovation is a tricky thing. We do a lot of work to keep the culture at X appropriate for it. A key to innovation is to iterate quickly, and then to accept the idea that most of the things you try are not going to work. With that in mind, you always go after the hardest part of the problem first, because if you can’t solve that part, why are you wasting your time on the rest? 

Besides that, we focus on achieving 10x improvement rather than 10%. We don’t want to iterate our way to a great solution. We want to immediately jump for the thing that’s going to be a revolution for the world. Our job is to create other opportunities for Alphabet that are the same size as Google, and the only way to do that is to go after really big problems.

We do that with a portfolio-based approach where we start incredibly small, often with only one person, or even part of a person’s time, on a project. We have dozens of those happening at any time. What that means is, of those dozens of projects, we expect some of them to stop and fail so other ones can succeed.


S+B: How do you keep employees motivated when their projects fail more often than not?
ANDRE:
Everything we work on is trying to solve a big problem for humanity. When we are more passionate about it, it’s easier to get the team to have high morale. And failures can help us learn. There’s a project that we’re working on called Taara, and it’s a fascinating project because it came out of another one that failed, called Loon. Loon was focused on bringing internet to people living in rural areas where there are no cell towers. It involved deploying weather balloons that were floating cell towers. In order to make that work, the team developed point-to-point laser communications that would send gigabits of information through the air between balloons that were up to 20 kilometers away from one another.

Taara took this laser technology and put it into terminals that they can deploy anywhere in the world—for example, on towers or on buildings—and basically beam high-speed internet wherever they want it to go. They’re using it in 13 countries around the world, and their goal is to connect the 3 billion people who don’t have any connectivity at all today.

S+B: You’ve been in computer engineering and data science for three decades. What strikes you most as you think about how the field has evolved over that time?
ANDRE:
It has, as you say, changed a lot. There have been winters. There have been summers. But there are two major things that are still true. First, the fundamentals matter. Understanding linear regression, an ancient technique, is still incredibly important, as is understanding what can go wrong with it. The biggest problem I see with startups or projects misapplying machine learning or AI is overfitting on their data. It is an age-old problem that is still showing up.
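A minimal sketch of the failure mode Andre is pointing to (not from the interview; the data and model choices are illustrative): a very flexible regression can fit its training points almost perfectly yet fall apart on held-out data, which is exactly the signal a simple train/test split exposes.

    # Illustrative only: a flexible model can look perfect on training data
    # yet fail badly on data it has never seen (overfitting).
    import numpy as np
    from sklearn.linear_model import LinearRegression
    from sklearn.preprocessing import PolynomialFeatures
    from sklearn.pipeline import make_pipeline
    from sklearn.model_selection import train_test_split
    from sklearn.metrics import mean_squared_error

    rng = np.random.default_rng(0)
    X = rng.uniform(-3, 3, size=(40, 1))            # small, noisy dataset
    y = np.sin(X).ravel() + rng.normal(0, 0.3, 40)

    X_train, X_test, y_train, y_test = train_test_split(
        X, y, test_size=0.5, random_state=0)

    for degree in (1, 15):                          # plain line vs. wildly flexible fit
        model = make_pipeline(PolynomialFeatures(degree), LinearRegression())
        model.fit(X_train, y_train)
        print(degree,
              mean_squared_error(y_train, model.predict(X_train)),   # looks great
              mean_squared_error(y_test, model.predict(X_test)))     # the honest number

The degree-15 fit will typically show a near-zero training error and a much larger test error; the held-out number is the one that matters.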

Number two is that the idea of doing unsupervised and semi-supervised training has been magical for the field, because this has meant that all of the data that’s out there—the documents online, the photographs online—becomes the training data for algorithms. So now you don’t have to have these very expensive supervised training examples.
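A toy sketch of the idea (illustrative only, with made-up text): in self-supervised training, unlabeled text supplies its own targets, for example by predicting each next word from the words before it, so ordinary documents become training data without any hand labeling.

    # Toy sketch: unlabeled text provides its own (input, target) pairs,
    # so no expensive supervised labels are needed.
    raw_text = "the balloons beamed internet across twenty kilometers of open air"
    tokens = raw_text.split()

    # Next-word prediction: every position in the text is a free training example.
    examples = [(tokens[:i], tokens[i]) for i in range(1, len(tokens))]

    for context, target in examples[:3]:
        print(" ".join(context), "->", target)
    # the -> balloons
    # the balloons -> beamed
    # the balloons beamed -> internet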

The last thing is the advent of the transformer model that powers today’s large language models [LLMs]. This has meant that you can go from natural language to just about anything: from natural language to code, from one language to another, from natural language to images. And all of those are being powered by the same core technology, which has been a game changer because now, an advance in one field—say, in vision processing—suddenly affects language processing and everything else as well, because they’re all using the same model.

S+B: Many top AI researchers have said that the advanced capabilities achieved by the large language models powering generative AI took them by surprise. Did they surprise you?
ANDRE:
I will say that I was surprised at one point when some of my colleagues were working on the Minerva paper, which used LLMs to solve math problems. I was seeing the kinds of results they were able to get and the kinds of problems the system was able to solve, and I was just blown away. I had been thinking that LLMs were mostly just predicting the next word or token, but it turns out that the transformer architecture enables all kinds of higher-level reasoning, and all kinds of algorithms can essentially be run inside the computation that happens as data flows through one of these neural nets. That allows the model to solve incredibly complicated problems. It allows you to do things like metaphorical reasoning. It’s a really powerful technique. So while, yes, I was initially surprised, once I understood this computational power, it started making sense.

S+B: How have the advances affected your approach to solving problems?
ANDRE:
Interestingly, it’s largely stayed the same. The hard part of almost every problem is not the AI or the machine learning. What really matters is all of the connective tissue around it that attaches that algorithm to the real world. If you solve the problem in the machine, and it doesn’t end up matching up with what you’re really trying to do, that’s not helpful at all. So the things that we learned back in the ’90s in trying to apply machine learning still apply because we were building connective tissue to match up an algorithm with the real world.

“The hard part of almost every problem is not the AI or the machine learning. What really matters is all of the connective tissue around it that attaches that algorithm to the real world.”

S+B: Are you using generative AI at X right now?
ANDRE:
We use these kinds of techniques in many of our applications. Sometimes they’re straight-up language applications, but other times we’re applying them to hard sciences, and we’re seeing amazing benefits in areas from computational biology to logical reasoning to geospatial reasoning. Personally, I use generative AI when I’m programming something that is pretty basic, usually in a language I don’t know terribly well, but I want to get it done quickly. I’ll ask it to get me started. I use it a little bit for writing, but I tend to find that it’s not quite where I need it to be for that yet.

The last thing I use it for is something called inverse design. This is a different spin on generative AI than what most people are playing with today. It’s where you ask a computer to solve a problem for you. So, you lay out a problem that you want to solve. You give it all the details and constraints around it, and then you let large compute solve it for you. You get lots of machines banging on the problem in the background, trying different solutions until you find one that you like. That’s the one I use almost every day.
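A toy sketch of inverse design as search (illustrative only; the objective, constraints, and random-search strategy here are assumptions, not X’s tooling): you encode what you want and what you are constrained by, then let compute propose and score candidates, keeping the best feasible one it finds.

    # Toy sketch of inverse design: state an objective and constraints,
    # then let compute try many candidate designs and keep the best one.
    import random

    def score(design):
        # Hypothetical objective: maximize the reach of a laser terminal.
        power, aperture = design
        return power * aperture

    def feasible(design):
        # Hypothetical constraints: a power budget and a size limit.
        power, aperture = design
        return power <= 10.0 and aperture <= 0.5 and power * aperture**2 <= 1.5

    best = None
    for _ in range(100_000):            # in practice, many machines search in parallel
        candidate = (random.uniform(0, 10), random.uniform(0, 0.5))
        if feasible(candidate) and (best is None or score(candidate) > score(best)):
            best = candidate

    print("best design found:", best, "score:", score(best))

Real inverse-design systems use far more sophisticated solvers than random search, but the shape of the workflow is the same: the human specifies the problem, and compute explores the solution space.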

S+B: How can companies position themselves to continue to take advantage of innovations in AI when they’re happening so rapidly right now?
ANDRE:
The most important thing is to start now, because you’re always going to wish you had started yesterday. And if you haven’t started yet, now’s always the right time. The second thing is, start playing with the tools. Work with large language models like Bard; try solving your problems with them, and see what happens. Learn where they fail. Once you see where they don’t work, that gives you the target for what to focus on and learn about.

Certainly, hiring more people with mathematics and AI skills is also helpful. But everyone can play with these tools today. That’s a big part of what’s happening in this movement—it’s democratizing access to artificial intelligence so everyone can get involved.
