Something's obviously not quite right about the definition of intelligence.
If we play this out, what's the limit here?
The best use case of AI was to improve human health. It was the moment I've been waiting for that could achieve something no other system could. I want to use AI as a tool to help us understand the nature of reality around us.
Governments are going to use AI. What would you hope that they use it for?
There's two things to worry about. One is—
That's Demis Hassabis, the CEO of Google DeepMind, Nobel Prize winner. He is one of the most important people alive in what is quickly becoming the biggest technological leap of our lifetime. Because the biggest way that AI is going to impact our lives isn't something that we can see. It's not a chatbot. It's not an image generator. It's tools that are invisible to us in drug design and natural disaster detection and nuclear fusion and quantum computing. Tools that he and his team are building here. He won the Nobel Prize for just one of those tools.
So, who he is and what he chooses to build matters a lot for you and me. And he's fascinating. He's a childhood chess prodigy who at 17 turned down a reported million-dollar job offer from a gaming company to go to college instead and then got a PhD in cognitive neuroscience. He founded his company DeepMind with a mission to solve intelligence, starting with beating video games. He then sold that company to Google specifically because they promised to let DeepMind focus on scientific research.
But as this has turned into the most intense technological battle in recent history, Demis is now in charge of much, much more. He's now behind everything Google does in AI. He's making decisions that affect your life and millions of other lives every single day. So, what is he planning to do with all of that power? My goal is to show you the future that Demis Hassabis wants to build so that you can decide for yourself what you think of it. Welcome to Huge Conversations.
Thanks so much for doing this.
It's great to be here.
Really appreciate it. You already know that Huge Conversations is a different kind of interview. I'm not going to ask you about financials. I'm not going to ask you about your management style. That's all well covered elsewhere. What I'm hoping to do in this conversation is think about it more like an explainer that we're making live together. And I have some props. This is not meant to be a Jenga game.
We're going to play Jenga?
Each block represents a project or a model and I want to talk about them and how they fit together. And so they were meant to be visual aids, but as we were setting up we started playing Jenga with them and it turned out to be way more fun than anything I had planned. Also I know that you like games.
Yes, I love games. So this is great—a first in an interview, anyway. So yeah.
So my hope in this conversation is to make this explainer together and to help people see what's happening right now in AI really and what is the future that you see coming. What are you hoping to do in this conversation?
A big part of the reason I got into AI, 30-plus years ago now, was to advance science and medicine, and I've always thought of AI as potentially the ultimate tool to do that. So I'm hoping we're going to talk about that today. And really, that's been my passion for what to apply AI to, although of course it can be applied to many things.
Oh, this is going to be a lot of fun. So, in this Jenga game that we have, a lot of these are blocks that people will have heard of, right? This one is Gemini. But I would argue that the ways in which AI is most meaningfully shaping people's lives are the ones that are invisible to them most of the time. So I want to start by talking about the project that you won the Nobel Prize for—AlphaFold. Yeah, good Jenga playing.
I want to tell the story of AlphaFold with all of its drama because some people might not have heard it, but then I want to get really quickly to the cutting edge of this sort of category of science. Why did you decide to tackle this problem out of all of the many?
Well, I came across it as an undergrad in Cambridge. So I had a lot of biologist friends and one of them specifically was obsessed with what's called the protein folding problem. So proteins are what everything in your body relies on. They make biology possible and living possible. And what's important about them is their 3D structure. So in the body they fold up into kind of 3D structures and those structures determine what function they have or partially determine what function they have.
And so the protein folding problem is really about: can you predict this 3D structure just from the one-dimensional amino acid sequence? So that's the 50-year grand challenge of protein folding. I love challenges. I love puzzles. So I couldn't resist it from a scientific point of view—it was described to me as the equivalent of Fermat's Last Theorem, but for biology. So who couldn't be interested in that?
But also when I first heard it, I thought the kind of problem it was would be suitable for AI one day. Even though we, of course—this is in the late '90s—we didn't have any kind of AI that would be possible to work on this. But I thought one day that would be possible. And then the final thing was just the impact it would make if you cracked it because it would open up all these downstream possibilities for research and especially in things like drug discovery and understanding disease. So, which I think is the most important thing to apply AI to is improving human health.
And the reason that this would be huge for human health is that up until now, in order to develop new medicines, we'd had to spend hundreds of thousands of dollars and years of human effort to find out the structure of a single protein by shooting X-rays at it. So, we had figured out some protein structures, but it was slow and expensive. So I'm skipping over an enormous amount of hard work here by you and your team, but I think by the way that I'm asking the questions, it is very obvious to people that you solved it.
Yeah.
So there's this moment where you realize that it is genuinely useful and you have solved what had been called one of the most important unsolved problems in modern medicine. And it's 2021. You're in a meeting—I am so glad that there was a camera in this meeting. It is one of the most incredible moments I have ever seen. I think you're talking with your team about setting up a system where scientists could send in a request for a specific protein, like a website, and then get the protein folded.
Yes.
And then someone else has a very different idea. Can you walk me through what happens in that meeting? Your reaction is incredible, and I really want to know what you were thinking.
Yeah, sure. Well, look, it was funny that the cameras happened to be in that particular meeting. It was crazy that it was that day; they very rarely followed us, but it was for that meeting. And normally what happens for these sorts of prediction models is you—the traditional thing is you kind of set up a server and then other scientists send you their protein sequences and they say, \"Oh, I'm interested in this protein. Can you send me back the predicted structure?\"
And that's how it had been done in the whole field for the last 40-plus years. And the reason is because most of the prediction algorithms are quite slow. So maybe it would take a few days and then you'd email back the structure and then you'd ask for the next one. But once I realized in that meeting not only how accurately we could fold the proteins but how quickly—in a matter of seconds—I was just doing the back-of-the-envelope calculation: how many proteins are there known to science, known in nature? 200 million. And then how many computers do we have, and how many would we need?
If we folded one every 10 seconds, I realized in the middle of that meeting—I was fiddling on my phone—that it would be possible in a year. So why go to all the effort of building the servers and the databases and the email client and all of that when we could just fold everything ourselves—everything anyone could ever request and ever want—and then put it on a database somewhere for free for all the scientists in the world to use? It just suddenly hit me: why don't we just do that?
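That back-of-the-envelope calculation is easy to check. The figures (200 million known proteins, roughly 10 seconds per fold) come straight from the conversation; the number of machines is the unknown being solved for:

```python
# Checking the back-of-the-envelope estimate from the meeting:
# ~200 million known protein sequences, ~10 seconds to fold each.

proteins = 200_000_000
seconds_per_fold = 10
total_seconds = proteins * seconds_per_fold   # 2 billion compute-seconds

seconds_per_year = 365 * 24 * 3600            # ~31.5 million

# Machines needed, folding in parallel, to finish within one year
# (ceiling division, since you can't run a fraction of a machine):
machines = -(-total_seconds // seconds_per_year)

print(f"{total_seconds:,} compute-seconds; {machines} machines for one year")
```

So on the order of a few dozen machines running in parallel for a year covers every protein known to science, which is why folding everything up front beat building a request server.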
Well, so that's one of the options... \"We should just do that,\" that's a great idea. We should just run every protein in existence and then release that.
All these things must have been going on in the back of my mind, and I suddenly realized that would be the obvious thing to do—and it would probably be less effort than standing up the server. So it would save us time.
And in that meeting your reaction is something like, \"Why don't we just do that? That would be way better. We should clearly do that.\" And then you do. All of a sudden, this crucial process that had been so hard is suddenly fast and easy and it's being used by scientists all over the world. This huge unsolved problem, now solved. Is it correct to say that we have now predicted the structure of almost all proteins known to science?
Yes. And we keep updating it. Every time somebody scoops a pail out of the ocean somewhere—and there are loads of different types of organisms in that bucket of seawater—they sequence them all. Sequencing technology has obviously improved by many orders of magnitude since the human genome was sequenced. So the problem was that structural biology—finding these 3D structures—was lagging far behind the genetic sequencing.
So now with these computational resources like AlphaFold 2, we can keep up with, \"Oh, here's a new million genetic sequences from some new strange organisms we found; oh, here are the structures.\" And so we have a kind of small team at the European Bioinformatics Institute that keeps updating every year all the new sequences that have been found that year. So, we're now always at the cutting edge. We know what all of these different protein structures mostly look like.
That's so awesome.
It is pretty amazing. It's especially amazing for the researchers that work on slightly more obscure organisms or animals. For example, wheat. I found out a lot of plants have way more genomic data than mammals and humans, which is very strange. They seem to have multiple copies of their genome and things; it's a kind of strange and bizarre world, the plant world.
My plant scientist friends tell me they don't have the resources—a lot of work's been done on the human genome, but some of these more obscure organisms that are still really important for humanity, like crops and things like that, now we're able to immediately jump to the science around what they want to do with the proteins. Maybe help them be more resilient to climate change, things like that. And they can jump straight to the problem they're interested in rather than getting bogged down with trying to crystallize the proteins.
Another boon is for researchers who work on neglected diseases that affect primarily the developing parts of the world—things like malaria or Chagas disease or leishmaniasis. These affect hundreds of millions of people around the world, but there's not a lot of money in it for Big Pharma to research them and find cures, because they occur in the poorer parts of the world, so the research tends to be neglected. There are these amazing nonprofit organizations that do the research on that, but they don't have a lot of money or resources. So giving them the structures of the proteins that are involved in, say, the malaria parasite is a huge boon for them too, because they can go straight to the drug discovery phase.
That was one of the hardest things to figure out as I was doing this research because there's this moment where scientists all around the world have access to AlphaFold. You can see the map lights up; you can see that people are using it. But I wasn't easily able to figure out what a great example would be of a scientist using AlphaFold and then that speeding up a drug process that results in a drug that I could now take. What is your favorite example of a scientist using AlphaFold for something the audience might understand or have seen?
So over 3 million scientists are now using AlphaFold. We think it's pretty much every biologist in the world at this point. And one scientist at a pharma company said to me that almost every drug developed from now on will probably have used AlphaFold in its process, which is mind-blowing and amazing. But drug discovery still takes time. So we're still mostly in the fundamental biology stage of understanding the disease: what is the protein we're targeting, is that the right biological mechanism? And then, as I understand it, some of these drugs are now in the clinical trials phase, and then hopefully in a few years' time we'll see dozens of drugs that were at least partially helped by AlphaFold.
In terms of my favorite breakthrough that has happened with the help of AlphaFold, there's this protein called the Nuclear Pore Complex, and it's one of the biggest proteins in the body. It's huge for a protein and it does a very important job. It's the gateway that opens and closes to let nutrients come in and out of the cell nucleus. It's like a big donut ring that opens and closes. And we didn't know until very recently what the structure of this was because it's so big and complicated; it's pretty hard to crystallize and see.
And I think it was six months to a year after we put AlphaFold out that some teams used it, along with experimental data, to finally work out the beautiful shape of this gateway protein, and that was amazing to me. It's one of the biggest proteins in the body, and AlphaFold was very useful in helping determine that structure.
And so perhaps we can design drugs or treatments that use that somehow, that better access the nucleus?
Yes, potentially. I think that was more for fundamental biology understanding, but obviously, we ourselves have spun out a new company, Isomorphic Labs, that tries to build on AlphaFold and uses it—as in this block here—as one of the pieces of the puzzle to massively speed up drug discovery. So on average it takes like 10 years to develop a drug. It's a crazy long time, an unbelievable amount of hard work, very expensive, and huge failure rates—only about 10% of drugs get through all the clinical stages.
So we need to vastly improve that if we want to improve human health, I think. And I think the way to do that is by using in silico methods, AlphaFold 2 being one of those components. But knowing the structure of a protein is only one small part of the drug discovery process. You need a lot of chemistry—like what compound should you design to bind to it—all of these things. So at Isomorphic, we're building these systems, you can think of them as adjacent systems that work with AlphaFold—more advanced AlphaFold, AlphaFold 3, AlphaFold 4 you could call it—and then end-to-end create these drugs that have very minimal side effects and are incredibly effective at addressing the type of disease we're trying to help with. We're working on, I think at this point, like 18 or 19 different drug programs across the gamut of things from cardiovascular disease to cancer to immunology. So I think eventually these types of technologies should be able to help across almost every therapeutic area.
In prep for this, I did a background interview with your fellow Nobel Prize winner John Jumper. He really stressed that it's one part of a larger problem of drug discovery. And so that brings us to the cutting edge today. I've taken some of the examples that I want to talk about. What is the cutting edge now?
Sure. So we're building many different components that can go together. AlphaFold is one of the linchpins; that's the structure of the protein. But if you think about it, let's say you understand what the shape of the protein is—okay, then which bit of the protein is the important part that does its function? So now if you think about drug discovery—say you want to block the effect of that protein or enhance it in some way—which part of the protein surface do you have to bind to?
Now you have to discover a chemical compound that will attach to the right place on the protein, and you want to know how strongly it will attach. And then on top of that, even more important is not just will it attach to the thing you're interested in, but make sure it doesn't attach to other things, because if it does, that would be toxic. We call those side effects with drugs; you want to minimize those.
Because now we have all of these amazing algorithmic tools, we can do a virtual screen: "Oh, here's a compound one of our AI systems has designed. This is our prediction of how strongly it binds to the protein surface." And then we can check very quickly—in a matter of hours—how that particular compound attaches to any of the other 20,000 proteins in the human body. So we can iterate like that and keep modifying the compound so that it has fewer and fewer side effects—ideally none on any of the other proteins—but an increasingly strong effect on the one that you want.
You can see I've just outlined a self-improvement or self-modification process, and this is extremely fast and efficient if you can do it in silico on computers. Then, only at the final stage do you check it in the wet lab. So you still have to validate it. You do all your search in silico, but then at the final stage you check your final proposed compounds in the wet lab and see that they really do what the predictions say. But you can imagine that would save—you can search thousands of times more compounds, or maybe even millions at some point, more quickly and efficiently that way and then just at the end check that they're correct. That's so much more efficient than doing the search in the wet lab, which is effectively what's done today.
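The design-predict-refine loop described here can be sketched in code. Everything below is an illustrative stand-in: the compound names are invented, and the hash-based "affinity predictor" is a deterministic toy, not Isomorphic's actual models. The point is the shape of the loop — score binding to the target, screen against the rest of the proteome for off-target binding, keep the most selective candidate, and only validate in the wet lab at the end:

```python
import hashlib

# Toy sketch of the in-silico screening loop described above.
# The "predictor" is a deterministic hash-based stand-in for a learned model.

N_HUMAN_PROTEINS = 20_000  # roughly the size of the human proteome

def predict_affinity(compound: str, protein: int) -> float:
    """Stand-in binding-affinity predictor returning a score in [0, 1)."""
    digest = hashlib.md5(f"{compound}|{protein}".encode()).digest()
    return int.from_bytes(digest[:4], "big") / 2**32

def design_candidate(target: int, n_rounds: int = 50,
                     off_target_sample: int = 200) -> tuple[str, float]:
    """Propose compounds in rounds; keep the one that binds the target
    strongly while binding a sampled set of off-targets weakly."""
    best, best_score = "", float("-inf")
    for i in range(n_rounds):
        compound = f"compound-{i}"  # stand-in for an AI-designed molecule
        on_target = predict_affinity(compound, target)
        # Screen against a sample of other proteins to estimate side effects.
        worst_off_target = max(predict_affinity(compound, p)
                               for p in range(off_target_sample))
        score = on_target - worst_off_target  # reward selectivity
        if score > best_score:
            best, best_score = compound, score
    return best, best_score  # final candidates then go to wet-lab validation

candidate, selectivity = design_candidate(target=N_HUMAN_PROTEINS - 1)
print(candidate, round(selectivity, 3))
```

In reality each "round" would involve generative chemistry and far more expensive predictions, but the economics are the same: the search happens in silico, and only the survivors are synthesized and tested physically.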
One of my favorites also is AlphaGenome.
Yes.
So, I reached out to yet another Nobel Prize winner, Dr. Jennifer Doudna, who I've had on the show.
Fantastic.
And she sent a question for you. So, I'm going to read this question from Dr. Doudna: \"CRISPR, the gene editing technology that she pioneered, can now target nearly any DNA sequence. But for most genetic diseases, we still don't fully understand which changes in the DNA are driving the problem, especially in the 98% of the genome that doesn't code for proteins. With tools like AlphaGenome starting to decode that 98%, how close do you think we are to the moment where AI can reliably point to the exact genetic change causing a patient's disease so that technologies like CRISPR can fix it?\"
Yeah, what an awesome question. I've discussed this with her in the past, and it is really exciting. AlphaGenome is exactly that kind of technology. It takes the big, long genetic sequences and then it tries to predict: if you have made a mutation to this particular single position in the genetic sequence, will that be a harmful mutation that might cause disease, or is it benign and it won't do anything? AlphaGenome, which we just released, is the best system in the world for predicting that.
That's exactly what you then want. It's probably not good enough yet, but you can imagine a future version of AlphaGenome that is accurate enough to really know, \"Oh, that particular mutation in combination with this other one...\" That's the hard part—what if they are multigenic diseases where cascades of mutations cause the problem? Those are even harder to detect but perfect for AI to help with. Then you could go in with something like CRISPR one day and fix that mutation and fix the problem. So a combination of things like AlphaGenome and CRISPR could be incredibly powerful, and hopefully one day we'll be collaborating with the likes of Jennifer on that.
Last year you said something to The Guardian that I found really interesting. You said that if you'd had your way, you would have left AI in the lab for longer. And the quote is: \"done more things like AlphaFold, maybe cured cancer or something like that.\" From the outside, it looks like the story goes: you founded DeepMind with the mission to solve intelligence and use it to solve everything else. Then you sell to Google specifically because they will allow the freedom to explore science in this way. For a long time, that's your exclusive focus. And then ChatGPT comes out. Google goes Code Red, and you become the head of all Google AI, including the consumer products that you weren't spending as much time on before. And it feels to me like watching that from afar, it mirrors somewhat the larger experience of AI, which is just this incredible change. In the last couple of years, what was gained and what was lost in that change?
Yeah, I think that's exactly right. What you describe is how it felt from the inside, too. For me, as I mentioned earlier, the best use case of AI was to improve human health and accelerate scientific discovery. In fact, I got into AI in the first place because I was interested in all the big questions in the world—the nature of reality, the nature of consciousness, these kinds of things. I felt we needed a tool to help even the best scientists make sense of the amount of data and information out there and find insights in that, and that's happening, which is amazing. Obviously, AlphaFold was our first and so far best expression of that, and I always had many other problems like that on my mind.
It would have been great, given how important AGI is and how transformative a technology it is—maybe the most transformative one in human history—to approach the latter stages of building it, which we're in now, using the scientific method very carefully, precisely, thoughtfully, and rigorously. In my ideal world, the best scientists would be collaborating in a kind of CERN-like effort on making sure each step we understood as we got to the final goal of building AGI. That would make the most sense with a technology like this. And that might take a lot longer, maybe a decade or even two decades longer, but I think that would make sense given the enormity of what we're dealing with.
My other idea was that we don't have to wait until AGI arrives to start getting the benefits of AI. We could use more specialized systems that maybe make use of the general algorithms we're developing for AGI, but are not in themselves general intelligences. They're narrow AIs, like AlphaFold, which does a specific purpose and only that purpose. We could have created many types of AlphaFolds and Isomorphics while we're building AGI in this careful scientific way, and then humanity could benefit from the proceeds of that, like cures for cancer or maybe new energy sources or new materials.
Looking at this from 20, 30 years ago when I started out on all of this, that would have been the ideal way for it to play out in my opinion. Now, it didn't happen like that because technology is unpredictable. In fact, it turns out that things like language were a lot easier than we were all expecting, even those of us who were optimists about the technology. It seems funny to think of it now, but language and concepts and abstractions, things that the current foundation models like Gemini do incredibly well—we thought that maybe there would be one or two or three more breakthroughs needed before we could get there.
But it turned out transformers, which my Google colleagues invented, and some reinforcement learning on top was enough to crack things like language. We were playing around with that, as were the other leading labs. But of course, with ChatGPT—and fair play to OpenAI—they scaled it and put it out there. I think even they say it was a kind of research experiment; they didn't realize it would go so viral, and I think none of us did.
When you're building that technology, you are so close to it that you're very aware of the things it can't do, the flaws it has, and you don't realize that people out there would find use even though it was hallucinating and doing other things that we're all still trying to improve on now. But there's still interesting use cases like summarizing things or brainstorming. The downside of it is that we're in this ferocious commercial pressure race that everyone's locked into currently. And then on top of that, there's geopolitical issues like the US-China race and so on.
There are multiple levels of pressure to move fast. The benefit of that, of course, is you get faster progress. The progress is just at lightning speed these days. So that's good for all the good use cases. The second benefit is that everyone—all the viewers out there—is getting to use the most cutting-edge AI technology, perhaps only three to six months behind what is in the labs. That's kind of mind-blowing. It's also great because it democratizes AI. It gives everyone a feeling for what it's like to interact with cutting-edge AI and what it can and can't do. I think that's good for society—it starts normalizing us to what is going to be an enormous change. It's probably better that we get to sample that in incremental steps rather than it being a shock to the system: no AGI, and then here's AGI one day.
The final thing that's on the benefit side is that you can't really fully understand your systems until they're stress-tested by millions of people. It doesn't matter how good your in-house testing is; millions of smart people trying things out and you seeing what bubbles to the top is really important for building more robust and better systems. So there are positives and negatives about the way it's gone. It's not the way I dreamed about years ago where we would be contemplating this philosophically and carefully considering each next step. We're not in that world, and although I'm a scientist first and foremost, I'm also a pragmatic engineer. We have to deal with the world as we find it and make the best of that. We try to do that by advancing the frontier but also trying to be as responsible as we can as we deploy these very powerful technologies like Gemini and AlphaFold.
There's another story happening at the same time as this, and I want to get back to your concerns and how you weigh those concerns and the costs. In order to understand that, I think we need to tell a story about AI being very creative, unexpectedly creative. And that story begins... let me find my Jenga block... that story begins here. So, let's go back to March 10, 2016. There's a very famous Go player that sits down to play against a system that you designed.
At this point, computers have beaten humans at all kinds of games, but Go is really interesting because there are more potential moves in Go than atoms in the universe. They're going back and forth, and then your system makes a move that is so surprising because it is incredibly unlikely that a human would figure out a move like that. Move 37. And you see Lee Sedol sitting there—he's just got this shock on his face, he's got his head in his hands. I think people like yourself would find that very different from the systems that we've talked about so far. There's a category where you're giving the system a huge amount of data and asking it to make new predictions. But then there's a category where you're not giving it data—you're giving it rules, like with math or physics or games like Go—and it has this incredible opportunity for creativity. Where were you when that moment happened, and what future did you see ahead?
Yeah, it was an incredible moment. It's almost exactly 10 years ago now, which feels like a century ago. But I think in many ways it was the dawn of the modern AI era because until that point there were many AI programs that could beat world champions at games, things like chess, but they were expert systems. A team of smart programmers with a team of smart chess grandmasters tried to distill the grandmasters' knowledge into a set of rules and heuristics, and then the programmers would build a brute-force system that would use a lot of compute, like IBM did with Deep Blue to beat Garry Kasparov.
They would encapsulate the rules they were given by the chess experts and then the system would sort of dumbly execute those rules and do millions and millions of searches. For me, that was not satisfactory when I saw that in the '90s. I was doing my undergrad at the time, and I didn't feel like that was proper AI. Deep Blue is world champion level at chess, but it can't do anything else. Not only can't it do language or robotics, it can't even play a strictly simpler game like tic-tac-toe.
Something's obviously not quite right about that definition of intelligence if a system that plays world-champion-level chess can't even learn tic-tac-toe—something any human grandmaster could do trivially. There's something wrong about its generalization capability and the fact that it didn't learn—it was just given the answer. Where did the intelligence reside in a system like Deep Blue? It wasn't in the system; it was in the minds of the chess grandmasters and the programmers. They solved the problem of chess and then implemented the solution.
Go, as you mentioned, is the final frontier for games. It's the most complex game humans have ever invented. In Asia, it occupies that intellectual echelon, but it's a much more intuitive, artistic game. You play patterns that look beautiful and turn out to be really strong, which is why the game has a mystical element to it—the top Go players would say it encapsulates the mysteries of the universe. Ancient Chinese thought about it that way.
And just its raw complexity—10^170 possible positions—means there's no way you can brute force it like we did with chess. Furthermore, because the game's so intuitive, there aren't really rules that you can encapsulate easily. When you talk to a Go master, unlike a chess master, they'll say things like, \"It felt right.\" A chess player will never say that; they'll tell you the calculation. That intuitive feeling is very hard to encapsulate in a system.
So it was the perfect proving ground for these new techniques we were pioneering—deep reinforcement learning. Can you build systems that learn for themselves directly from experience? AlphaGo started by looking at all the human games on the internet and learning the types of moves humans would do, but then we overlaid it with a Monte Carlo tree search that allowed it to discover new branches of the tree of knowledge in Go, starting with what humans knew and then going beyond that.
The amazing thing about that match, which was watched by 200 million people, was that not only did we win 4-1, but in game two specifically, it played Move 37—this creative move on the fifth line of the board. It's a big no-no to do that in Go; a Go master would slap your wrist because it's regarded as a bad move. But not only was it a great move, it ended up winning the game for AlphaGo. 100 moves, 200 moves later, it was in the right place as if it presciently put the stone there. It was the critical move to decide the game.
Obviously, it's changed the way all Go players play. But for me, it was the moment I'd been waiting for—building a learning system that could achieve something no other system could, this Mount Everest of games AI. Not only did it win the match, but it was how it won, with these creative new ideas like Move 37. That for me was the signal that we were ready to turn it to scientific problems like AlphaFold.
To say this back to you, the reason why Move 37 is important is because the implication is that if DeepMind can build a system that can do that, it can also perhaps build a system that can play any game. It can perhaps build systems that can figure out the best solution in quantum computing or in nuclear fusion or in matrix multiplication or chip design. Could you tell me about the cutting edge here? Pick one of these systems: what is the Move 37 of these surprising creative elements?
I think AlphaZero is very interesting to talk about, which was the evolution of AlphaGo. After we won and showed it could come up with new ideas in Go, we then generalized it further to a system called AlphaZero, which I think is going to turn out to be a very important system for today as well. With AlphaGo, we started with all the human games we could find on the internet, and there were a few things specific about Go built into the system, like the symmetry of the board.
We wanted to get rid of all of those assumptions completely and start from scratch, as if the algorithm didn't know anything about what it was trying to do. That's what the "Zero" refers to: removing any human-crafted knowledge, both in the data and in the heuristics. AlphaZero starts as a tabula rasa, almost. It has a neural network, we set up the parameters, but we didn't give it any domain-specific knowledge about Go or any other game.
We tested whether AlphaZero could learn Go from scratch and then beat AlphaGo. It takes 17 evolutions of the program. AlphaZero starts off random; it only has the rules of the game. It plays randomly and is terrible, but it creates its own dataset by playing 100,000 games against itself. It can see which moves won or lost. Even though it's playing more or less randomly to begin with, there will be some moves that are slightly better than other moves.
Now we train a new version of itself—Version 2—on that new data. Version 2 is slightly better than Version 1. It's not random anymore, but it's not great; it's playing "okay" moves. Then those "okay" moves end up being better, a Version 3 gets trained, a Version 4... each time, the new system gets played against the old system to see if it's significantly better. It turns out that in Go and chess, around 16 or 17 generations of that is enough to go from random to better than the world champion.
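That generational loop can be sketched end to end on a toy game. Everything concrete here is an assumption for illustration, not DeepMind's code: the game is a tiny Nim variant (take 1–3 stones, last stone wins), the "network" is just a table of move probabilities, and "training" is win-rate counting. But the shape is the one he describes: start random, self-play, retrain, and gate each new version against the old one.

```python
import random

PILE, MAX_TAKE = 5, 3  # Nim: start with 5 stones, take 1-3, last stone wins

def uniform_policy():
    # Tabula rasa: from every pile size, every legal take is equally likely.
    return {p: {t: 1.0 / min(p, MAX_TAKE) for t in range(1, min(p, MAX_TAKE) + 1)}
            for p in range(1, PILE + 1)}

def sample_move(policy, pile):
    takes = list(policy[pile])
    return random.choices(takes, weights=[policy[pile][t] for t in takes])[0]

def play_game(pol_a, pol_b):
    # Returns (winner, history); the player who takes the last stone wins.
    pile, player, history = PILE, 0, []
    while True:
        take = sample_move((pol_a, pol_b)[player], pile)
        history.append((pile, take, player))
        pile -= take
        if pile == 0:
            return player, history
        player = 1 - player

def train(policy, n_games=2000):
    # Self-play, then re-weight each move by how often the player making it won.
    wins = {p: {t: 1.0 for t in policy[p]} for p in policy}   # Laplace-style
    plays = {p: {t: 2.0 for t in policy[p]} for p in policy}  # prior of 0.5
    for _ in range(n_games):
        winner, history = play_game(policy, policy)
        for pile, take, player in history:
            plays[pile][take] += 1
            wins[pile][take] += (player == winner)
    new = {}
    for p in policy:
        rates = {t: wins[p][t] / plays[p][t] for t in policy[p]}
        total = sum(rates.values())
        new[p] = {t: r / total for t, r in rates.items()}
    return new

def win_rate(pol_new, pol_old, n=400):
    # Head-to-head evaluation, alternating who moves first.
    w = 0
    for i in range(n):
        first_is_new = i % 2 == 0
        players = (pol_new, pol_old) if first_is_new else (pol_old, pol_new)
        winner, _ = play_game(*players)
        w += (winner == 0) == first_is_new
    return w / n

random.seed(0)
policy = uniform_policy()
for gen in range(1, 9):
    candidate = train(policy)
    if win_rate(candidate, policy) > 0.55:
        policy = candidate  # the gate: promote only a clearly stronger version
```

Swap the probability table for a deep network and the counting for gradient descent, and this is the AlphaZero shape: each generation's data comes entirely from self-play against the current best.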
In the case of chess, I once watched it happen live—it starts in the morning random; by lunchtime I could still just about compete with it myself; by teatime it's better than all grandmasters; and by dinnertime it's better than the world champion. You've just seen the entire evolution from scratch. And it's playing interesting new chess that even computers like Stockfish, with their expert and brute-force methods, haven't discovered.
AlphaZero was the full generalization of the AlphaGo ideas. And interestingly, I think we need these types of ideas back now with our foundation models like Gemini, which you can think of as generalized models of language and the world around us. We still need this ability to search and think and reason on top of those models. Sometimes we call those "world models," and it still hasn't fully been cracked yet how to do that: bringing back some of these AlphaGo ideas, but now applying them to the whole world, to science, material design, chip design, quantum computers... so many projects.
This is the dream: I love every branch of science, and I get to indulge myself in all these different areas because AI is such a general tool that it can make a huge difference. For example, designing new materials: if we want a material with a special type of property, can we go beyond what is currently known in materials science? I think "Alpha-like" processes could be very useful there.
And the equivalent of a Move 37 would be something like AlphaTensor finding a new algorithm that makes matrix multiplication faster?
Exactly. You can apply it in algorithmic space, which is quite exciting because then the algorithm itself gets faster. If you make matrix multiplication—the basis of all neural networks—just 5% faster, that's a huge cost-saving on the tens of billions being spent on training. These are good examples of ideas, and I think we're still early. Things like the design of chips on a die—making it as efficient as possible is an NP-hard problem, like the Traveling Salesman problem. AlphaChip and programs like that are really good, better in some cases than human chip designers. I think we're just scratching the surface of what's going to be possible in the next few years by combining today's general systems with these types of ideas from AlphaGo and AlphaZero. These two categories—the story that starts with AlphaFold and the story that starts with AlphaGo—these are the kinds of AI that make me feel really optimistic.
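The kind of speedup he's pointing at comes from algorithms that trade multiplications for additions. A classic, pre-AlphaTensor example is Strassen's scheme, which multiplies 2×2 matrices (or blocks) with 7 multiplications instead of 8; AlphaTensor searched automatically for decompositions like this in larger cases. A minimal sketch for comparison:

```python
def naive_2x2(A, B):
    # Standard 2x2 matrix multiply: 8 multiplications.
    (a, b), (c, d) = A
    (e, f), (g, h) = B
    return [[a * e + b * g, a * f + b * h],
            [c * e + d * g, c * f + d * h]]

def strassen_2x2(A, B):
    # Strassen's scheme: 7 multiplications, at the cost of more additions.
    (a, b), (c, d) = A
    (e, f), (g, h) = B
    m1 = (a + d) * (e + h)
    m2 = (c + d) * e
    m3 = a * (f - h)
    m4 = d * (g - e)
    m5 = (a + b) * h
    m6 = (c - a) * (e + f)
    m7 = (b - d) * (g + h)
    return [[m1 + m4 - m5 + m7, m3 + m5],
            [m2 + m4,           m1 - m2 + m3 + m6]]
```

Applied recursively to large matrices split into blocks, the one saved multiplication per level compounds, which is why even small per-step savings matter at the scale of neural-network training.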
I also think that being really optimistic involves fully thinking through the ways in which something can go wrong and what we can do to prevent that. I want to insert one other in here—this one. This is a real-time war game [StarCraft II]. In the videos where this system [AlphaStar] is absolutely crushing humans, you can see the engineers cheering. But of course, as someone who didn't build the system, I'm thinking to myself, "What if that's real?" We're speaking at a time when the debate about militaries and governments using AI is a huge topic. I want this conversation to be useful for 10 years, so I don't want to talk about specific companies or terms of service. Bigger picture, governments are going to use AI. If you could wave your magic wand, what would you hope that they use it for?
Look, I think governments should be using AI, and we want to support all democratically elected governments. I would love to see them use it for things like improving public health and education. All of these things need to be rethought; the efficiency gains and the amount of good we can do for citizens could be incredible. I think some countries like Singapore and UAE are leaning into these types of use cases. I would love to see it being used for energy, like optimizing energy grids. We did that with our data centers and saved 30% of the energy used for the cooling systems. There's enormous societal gain from applying AI at scale to these types of areas.
Of course, the geopolitics of the world is very complicated right now, and these are dual-purpose technologies. I worry about a couple of things that can go wrong with AI. Big picture, there's two things to worry about. One is bad actors—whether individuals or nation states—repurposing these technologies we're trying to build for good for harmful ends, whether inadvertently or intentionally.
The second branch of things I worry about is the AI itself going rogue or going off the rails as systems get more powerful. We're entering the "agentic era" now, with systems capable of completing entire tasks on their own. We want those because they'll be very useful assistants, but they'll be increasingly capable and autonomous. How do we make sure the guardrails are in place, that they do exactly what they've been told to do, that the goals have been specified clearly enough, and that there's no way of them circumventing that? That's an incredibly hard technical challenge as these systems eventually get smarter and more capable.
I tend to worry about those... you could call them medium-term now, even though three or four years is not really medium-term. Those are the things I think people are perhaps not paying enough attention to at the moment, and they will be the biggest issues we're going to have to contend with if we're going to get through the AGI moment in a way that's beneficial for humanity.
One of the biggest questions I came in with was: how do I weight the concerns that we're all going to have over the next 30 years? What are the things people are worrying too much about, and what are the things they are not worrying enough about?
I think the two things I just mentioned are the things the average person is not worrying enough about. There are other things we need to worry about too, like deepfakes and misinformation. We work on a system called SynthID, which uses AI to digitally watermark any generated image. All of our technologies have this watermarking built in so we can detect and flag to the user or a government that these are fake. I would advocate that all companies working on generative AI build in something like that.
But that still pales as a small issue compared to some of these bigger issues around AGI itself becoming very capable. How do we make sure guardrails are in place? We need a lot more research and effort into that from everyone. I would love to see international cooperation among the leading labs around these safety issues, including with the AI Safety Institutes and academia, to help work out how we navigate that next step, because it's unprecedented to create technology like that.
If we play this out, what's the limit here? What are the things that you think AI cannot do that humans can do? You've called this the central question of your life.
Yes, it is. It's very related to some of my all-time heroes, like Alan Turing. He described Turing machines, the theoretical constructs that all modern computers are instances of, which are able to compute anything that's computable as an algorithm. I think the systems we're building are approximate Turing machines, and potentially the brain is an approximate Turing machine. Some friends of mine, like Roger Penrose, believe there might be some quantum effect in the brain, but so far neuroscience hasn't found any. It looks like most of what's going on in the brain is classical computation, so it's not clear what the limit would be in terms of what an AI system could mimic.
I think that's an empirical question. I don't think consciousness is very well defined, but we all intuit what it is. This journey of building an intelligent artifact will provide a controlled study comparison to the human mind, and we'll see what the differences are and what's unique about the mind. I'm very open-minded about that; there could be unique connections between humans that will never be replicated. But I think things like long-term planning, reasoning, and maybe some forms of creativity—eventually AI systems will be able to do those.
I want to be honest about what's happening in my mind right now: I am doing exactly the thing that humans have done throughout history. I am trying to find the reason why we are special. We have to be at the center of the universe—oh wait, we're not. We must be the ones that are emotionally attuned—oh wait, elephants have funerals. We must be the ones that can be creative—oh wait, Gemini can do that. Do you find yourself doing that as well?
I think we are special. There are a lot of deep mysteries about how the universe works, including in our minds and in physics. I decided from a very young age to do AI because I was obsessed with the big questions. Physics was my favorite subject at school because that's the subject you're supposed to study if you're interested in big questions. But I realized as a young teenager that although we discovered a lot, there's so much we don't know. We don't know what time is—this is insane to me. We don't understand quantum effects properly, or gravity, or consciousness.
Most people distract themselves with TV shows and games, but these deep mysteries play on my mind all the time. I'm quite open-minded about what the answers might be regarding the nature of reality. Ultimately, I want to use AI as a tool to help us understand the nature of reality. I'm a true scientist in that sense; I don't have any preconceived notion of what the answer should be. I just want to know the answer.
Me too. One way to describe what you're trying to do is to create AGI—artificial general intelligence—that would be good at it all. I know you're a fan of sci-fi; I am, too. Could you play out for me the plot of the sci-fi movie in your head where you succeed?
I read too much sci-fi when I was a kid. One of my favorite series was the *Culture* series by Iain M. Banks. It paints a really interesting post-AGI world. I think even in 50 years some of this could happen where we've built AGI safely and it's helpful for society. We've used it to crack what I call "root node problems" in science—AlphaFold was one of those, unlocking a whole branch of new research. Other things like fusion, or room-temperature superconductors at atmospheric pressure combined with optimal batteries... there will be a solution to the energy problem.
Free, renewable, clean energy would unlock us to really travel the stars. Elon does amazing work with SpaceX, but the main cost is still the fuel—the energy cost. If that's zero because we've cracked fusion and can make infinite rocket fuel out of seawater, then that really unlocks space. We can mine asteroids and get a lot more resources. All of these things, the purview of science fiction, become very plausible in the next 50 years. Dyson spheres around the sun... Mercury is conveniently in the right place, made with the right material. This should hopefully lead to maximum human flourishing—curing all these terrible diseases so we live longer, healthier lives, and traveling to the stars, bringing consciousness to the rest of the galaxy. That would be an amazing outcome.
I believe you when you're saying these things.
That's what I'm trying to do, at least.
This is my last question. If I were a fly on the wall at my own funeral, after they said she loved her husband and family, I would hope they would say she spent her life trying to help people see optimistic futures so that they can be part of making them happen. My last question for you is: what do you hope that they say about you?
I would hope they would say that my life was of benefit and service to humanity. That's what I'm trying to do. That would be the best thing.
Thank you so much for your time.
Thank you. Really appreciate it.
Really fun. Awesome. If you want to play Jenga anytime—you seem pretty good at this version of Jenga. I can't believe how many projects we've done.
Silly, crazy. When I saw the bricks... they've all got our projects on them. Did you memorize where everything was? Of course.
So, the game is you pull it out... and we were playing this. It's unfair to play with you, but you have to say what that project was and you don't get the point if you get it wrong. For example, this is Material Science.
Yeah. It's a little bit unfair on you. I mean, I would hope I would win this game, although you're probably way better at Jenga than me. Let's see... let's do this one.
There you go.
Okay, AlphaCode.
Yeah, that one's clearer—Codeforces.
Yeah.
This is genetics, but the 2% that codes for proteins.
Yes, we have to do this now. I've got time; I can push back my meeting.
Wait, I have one more question. AlphaEvolve?
AlphaEvolve is coding... it can be used for coding, programming. It's combining genetic algorithms with Gemini. This is one attempt at doing AlphaGo-style stuff beyond what is currently known.
So I wouldn't get the point for that one.
No, half a point.
Okay, one more question for you while I have you: What did I not ask you that you think is important for people to know?
I think we covered a lot. GraphCast—this is weather prediction. Oh yeah, we didn't cover that, or solving Navier-Stokes... I completely forgot about that whole set of things. One interesting thing we didn't talk much about is simulations, or Genie—the role of simulations. DQN, of course, started it all off with the Atari stuff. Simulations can help you understand areas of science, or social sciences like economics, that are very hard or expensive to run experiments in. I've always loved simulation. Oh yeah, ISO... there you go.
We're both very competitive, I think, so this is going to be quite serious. In Jenga, if you touch it, do you have to move it?
We are playing a looser version. Also, because we were doing a creative thing where you're allowed to push them together. You can use two hands, also.
Okay, you're not allowed to normally do that, right? I'll just take this one. I'm going to cheat with AlphaCode.
One of the questions I think people will have for you is if they're watching this and they are very optimistic—Gemini, everybody—how would you advise them to participate in the future?
When I do talks at universities and schools, I say they've got to just go with the flow of the direction. I would immerse myself in every tool available and just become almost like superpowered. Even at the frontier labs, so much work goes into just making the next versions of these frontier models and adjacent models like VEO and Gemini that we can only explore a fraction of the applications you could make with it. That gap's getting bigger and bigger—the overhang of capabilities on the latest models. The opportunity space is getting huge for people who are really expert at using those tools and applying them to some new domain. A kid these days could probably start a multi-billion dollar business using these tools in some new way that no one had thought about. I think OpenClaw is a good example of that.
Yeah. Maybe we should call it a draw because I don't think either of us could bear to lose.
It's your move. We can end on your move.
I'll try... In 2016, you had a sticky note on your board that said, "Solve protein folding," with a smiley face. What's on the proverbial sticky note now?
Oh my gosh, I've got a pile of about a hundred sticky notes on my desk. AlphaChip... it would be a list of about 30 things that need to be done by this evening, so I better probably get to them.
I'm going to keep going until you stop.
Okay, I'll do one more move. But now we're kind of cheating—we're using the pieces that are already... I'm going to go ambitious in our last move.
If I get this one, I get another question.
Yeah, okay, that seems fair.
God, how is that going to balance? Surely not... no... yes! All right, thank you so much.
That was awesome, thanks.
That was a great idea to have that.