Effectively, you can think of it as 50% of our effort is on scaling and 50% on innovation. My bet is you're going to need both to get to AGI. I've always felt that if we build AGI, and then use that as a simulation of the mind and compare it to the real mind, we will then see what the differences are and potentially what's special and remaining about the human mind, right?
Maybe that's creativity, maybe it's emotions, maybe it's dreaming, maybe it's consciousness. There are a lot of hypotheses out there about what may or may not be computable. And this comes back to the Turing machine question: what is the limit of a Turing machine?
So there's nothing that cannot be done within the computational framework?
Well, I'd put it this way: nobody's found anything in the universe that's non-computable—
So far.
So far.
Welcome to Google DeepMind: The Podcast with me, Professor Hannah Fry. It has been an extraordinary year for AI. We have seen the center of gravity shift from large language models to agentic AI. We've seen AI accelerate drug discovery and multimodal models integrated into robotics and driverless cars. Now, these are all topics that we've explored in detail on this podcast.
But for the final episode of this year, we wanted to take a broader view, something beyond the headlines and product launches to consider a much bigger question: Where is all this heading really? What are the scientific and technological questions that will define the next phase? And someone who spends quite a lot of their time thinking about that is Demis, CEO and co-founder of Google DeepMind. Welcome back to the podcast, Demis.
Great to be back.
Quite a lot's happened in the last year.
Yes.
What's been the biggest shift, do you think?
Oh wow, so much has happened, as you said. It feels like we've packed 10 years into one year. Certainly for us the progress of the models—we've just released Gemini 3, which we're really happy with. The multimodal capabilities—all of those things have advanced really well. And then probably the thing over the summer that I'm very excited about is the advances in world models. I'm sure we're going to talk about that.
Yeah, absolutely. We will get onto all of that stuff in a bit more detail in a moment. I remember the very first time that I interviewed you for this podcast and you were talking about the "root node" problems—about this idea that you can use AI to kind of unlock these downstream benefits. And you've made pretty good on your promise, I have to say.
Do you want to give us an update on where we are with those? What are the things that are just around the corner and the things that you've sort of solved or near solved?
Yeah. Well, of course, obviously the big proof point was AlphaFold and sort of crazy to think we're coming up to like the 5-year anniversary of AlphaFold 2 being announced to the world. So that was the proof, I guess, that it was possible to do these root node type of problems.
And we're exploring all the other ones now. I think material science—I'd love to do a room temperature superconductor and better batteries, these kinds of things. I think that's on the cards. Better materials of all sorts. We're also working on fusion—
Because there's a new partnership that's been announced for fusion.
Yeah, we've just announced a partnership with Commonwealth Fusion Systems. We already were collaborating with them, but it's a much deeper one now. I think they are probably the best startup working on at least traditional Tokamak reactors. So they're probably closest to having something viable and we want to help accelerate that—helping them contain the plasma in the magnets and maybe even some material design there as well.
So that's exciting. And then we're also collaborating with our quantum colleagues, who are doing amazing work on the Quantum AI team at Google. We're helping them with error-correction codes using our machine learning, and then maybe one day they'll help us.
That's perfect. Exactly. The fusion one in particular—the difference to the world that would be unlocked by that is gigantic.
Yeah. Fusion's always been the holy grail. Of course I think solar is very promising too, right? Effectively using the fusion reactor in the sky. But I think if we could have modular fusion reactors, this promise of almost unlimited renewable clean energy would obviously transform everything. And that's the holy grail. Of course, that's one of the ways we could help with climate—
Does make a lot of our existing problems sort of disappear if we can—
Definitely. It opens up many... this is why we think of it as a root node. Of course it helps directly with energy and pollution and so on and helps with the climate crisis. But also, if energy really was renewable and clean and super cheap or almost free, then many other things would become viable.
Like water access, because we could have desalination plants pretty much everywhere. Even making rocket fuel—it's just there's lots of seawater that contains hydrogen and oxygen. That's rocket fuel, but it just takes a lot of energy to split it out. But if energy is cheap and renewable and clean, then why not do that? You could have that producing 24/7.
You're also seeing a lot of change in the AI that is applying itself to mathematics, right? Winning medals in the International Math Olympiad, and yet at the same time, these models can make quite basic mistakes in high school math. Why is there that paradox?
Yeah, I think it's fascinating—one of the most fascinating things, and probably one of the key things that needs to be fixed; it's part of why we're not at AGI yet. As you said, we've had a lot of success getting gold medals at the International Math Olympiad. You look at those questions and they're super hard—questions that only the top students in the world can do.
And on the other hand, if you pose a question in a certain way—we've all seen this experimenting with chatbots in our daily lives—it can make some fairly trivial mistakes on logic problems. They can't really play decent games of chess yet, which is surprising.
So there's something missing still from these systems in terms of their consistency. And that's one of the things you would expect from a general intelligence—from an AGI system: that it would be consistent across the board. Sometimes people call it "jagged intelligence."
So they're really good at certain things, maybe even at PhD level, but then at other things they're not even at high school level. So the performance of these systems is still very uneven. They're very impressive in certain dimensions, but still pretty basic in others, and we've got to close those gaps.
And there are technical reasons why. Depending on the situation, it could even be the way the input is perceived and tokenized. So it sometimes gets counting the letters in a word wrong because it may not be seeing each individual letter. There are different reasons for some of these things, and each one can be fixed, and then you can see what's left.
But I think consistency is key. I think another thing is reasoning and thinking. So we have thinking systems now that spend more time at inference, and their answers are better for it. But it's not super consistent yet in terms of: is it using that thinking time in a useful way—double-checking, using tools to verify what it's outputting? I think we're on the way, but maybe we're only 50% of the way there.
I also wonder about that story of AlphaGo and then AlphaZero where you sort of took away all of the human experience and found that the model improved. Is there a scientific or a maths version of that in the models that you're creating?
I think what we're trying to build today is more like AlphaGo. You know, effectively these large language models, these foundation models, they're starting with all of human knowledge—what we put on the internet, which is pretty much everything these days—and compressing that into some useful artifact which they can look up and generalize from.
But I do think we're still in the early days of having this search or thinking on top. Like AlphaGo, the system has to use that model to direct its search towards useful reasoning traces and useful planning ideas, and then come up with the best solution to whatever the problem is at that point in time.
So I don't feel like we're constrained at the moment by the limit of human knowledge on the internet. I think the main issue at the moment is that we don't know how to use those systems in a fully reliable way yet, the way we did with AlphaGo. But of course that was a lot easier because it was just a game.
I think once you have an AlphaGo-like system, you could go back just like we did with the Alpha series and do an AlphaZero where it starts discovering knowledge for itself. I think that would be the next step, but that's obviously harder. So I think it's good to try and create the first step first with some kind of AlphaGo-like system and then we can think about an AlphaZero-like system.
But that is also one of the things missing from today's systems: the ability to learn online, to continually learn. We train these systems, we post-train them, and then they're out in the world, but they don't continue to learn out in the world like we would. I think that's another critical missing piece that will be needed before AGI.
In terms of all of those missing pieces, I know that there's this big race at the moment to release commercial products, but I also know that Google DeepMind's roots really lie in that idea of scientific research. I found a quote from you where you recently said: "If I had my way, we would have left AI in the lab for longer and done more things like AlphaFold, maybe cured cancer or something like that." Do you think that we lost something by not taking that slower route?
I think we lost and gained something. I feel like that would have been the more pure scientific approach. At least that was my original plan, say 15-20 years ago when almost no one was working on AI. We were just about to start DeepMind. People thought it was a crazy thing to work on, but we believed in it.
And the idea was that as we made progress, we would continue to incrementally build towards AGI—be very careful about what each step was and the safety aspects of it, analyze what the system was doing, and so on. But in the meantime, you wouldn't have to wait until AGI arrived before it was useful. You could branch off that technology and use it in really beneficial ways for society, namely advancing science and medicine.
Exactly what we did with AlphaFold, which is not a foundation model itself—not a general model—but it uses the same techniques, like transformers, and then blends it with more specific things to that domain. So I imagined a whole bunch of those things getting done which would be released to the world just like we did with AlphaFold, and indeed do things like cure cancer.
All whilst we were working on the AGI track in the lab. Now, it's turned out that chatbots were possible at scale and people find them useful, and they've morphed into these foundation models that can do more than chat and text—obviously including Gemini. They can do images and video and all sorts of things, and that's also been very successful commercially and as a product.
And I love that too. Like I've always dreamed of having the ultimate assistant that would help you in everyday life, make it more productive, maybe even protect your brain space a bit from distractions so that you can focus and be in flow. Because today with social media it's just noise, and I think AI that works for you could help us with that.
So I think that's good, but it has created this pretty crazy race condition where there's many commercial organizations and even nation-states all rushing to improve and overtake each other. And that makes it hard to do sort of rigorous science at the same time. We try to do both and I think we're getting that balance right.
On the other hand, there are lots of pros of the way it's happened which is of course there's a lot more resources coming into the area. So that's definitely accelerated progress. And also I think the general public are interestingly only a couple of months behind the absolute frontier in terms of what they can use. So everyone gets the chance to sort of feel for themselves what AI is going to be like. I think that's a good thing, and governments are understanding this better.
The thing that's strange is that this time last year I think there was a lot of talk about scaling eventually hitting a wall—about us running out of data. And yet we're recording now, Gemini 3 has just been released, and it's leading on this whole range of different benchmarks. How has that been possible? Wasn't there supposed to be a problem with scaling hitting a wall?
I think a lot of people thought that, especially as other companies have had slower progress, should we say. But we've never really seen any wall as such. What I would say is maybe there are diminishing returns, and when I say that, people think only in binary: "Oh, so there are no returns?" Like it's zero or one, either exponential or asymptotic. No—there's a lot of room between those two regimes, and I think we're in between them.
So it's not like you're going to double the performance on all the benchmarks every time you release a new iteration. Maybe that's what was happening in the very early days, three or four years ago. But you are getting significant improvements like we've seen with Gemini 3 that are well worth the investment. I haven't seen any slowdown on that.
There are issues like: are we running out of available data? But there are ways to get around that—synthetic data, and these systems are good enough they can start generating their own data, especially in certain domains like coding and math where you can verify the answer. In some sense you could produce unlimited data.
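To make the verifiable-data point concrete, here is a minimal hypothetical sketch (not any production pipeline; all function names are invented for illustration): in a domain like arithmetic, a program can both generate problems and check candidate answers mechanically, so only verified pairs ever enter the training set.

```python
import random

def generate_problem(rng):
    """Create a random arithmetic problem as a string with a checkable answer."""
    a, b = rng.randint(1, 999), rng.randint(1, 999)
    op = rng.choice(["+", "-", "*"])
    return f"{a} {op} {b}"

def verify(problem, candidate_answer):
    """The key property of math/coding domains: answers can be checked
    mechanically. (eval is safe here because we generated the string.)"""
    return eval(problem) == candidate_answer

def build_dataset(model_answer, n=1000, seed=0):
    """Keep only (problem, answer) pairs that the verifier accepts —
    the verifier, not a human labeler, supplies the training signal."""
    rng = random.Random(seed)
    kept = []
    for _ in range(n):
        problem = generate_problem(rng)
        answer = model_answer(problem)   # stand-in for a model's attempt
        if verify(problem, answer):
            kept.append((problem, answer))
    return kept

# A perfect "model" keeps everything; a systematically wrong one is filtered out.
data = build_dataset(lambda p: eval(p))
```

In this toy setup the data supply is effectively unlimited: you can keep sampling new problems and the verifier filters out bad answers for free.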
So all of these things though are research questions, and I think that's the advantage that we've always had: we've always been research-first. I think we have the broadest and deepest research bench. If you look back at the last decade of advances, whether that's transformers or AlphaZero, they all came out of Google or DeepMind. So I've always said: if more scientific innovations are needed, then I would back us to be the place to do it.
I really like it when the terrain gets harder, because then it's not just world-class engineering you need, but you have to ally that with world-class research and science, which is what we specialize in. On top of that we also have the advantage of world-class infrastructure with our TPUs.
So that combination, I think, allows us to be at the frontier of the innovations as well as the scaling part. You can think of it as effectively 50% of our effort is on scaling and 50% on innovation. My bet is you're going to need both to get to AGI.
One thing that we are still seeing even in Gemini 3, which is an exceptional model, is this idea of hallucinations. I think there was one metric showing it can still give an answer when it really should decline to answer. Could you build a system where Gemini gives a confidence score in the same way that AlphaFold does?
Yeah, I think so, and I think we need that—it's one of the missing things, and we're getting close. The better the models get, the more they know about what they know, if that makes sense, and the more we can rely on them to introspect in some way, or do more thinking, and realize for themselves when they're uncertain.
Then we've got to work out how to train it so that it can output that uncertainty as a reasonable answer. We're getting better at it, but it still sometimes forces itself to answer when it probably shouldn't, and that can lead to a hallucination—I think a lot of the hallucinations are of that type currently. There's a missing piece there that has to be solved. And you're right, we did solve it with AlphaFold, though obviously in a much more limited way.
Because presumably behind the scenes there is some sort of measure of probability of whatever the next token might be?
Yes, there is a probability for the next token—that's how it all works. But that doesn't tell you the overarching piece, which is: how confident are you about this entire fact or this entire statement? And that's why I think we'll need to use the thinking steps and the planning steps to go back over what was just output.
At the moment, it's a little bit like talking to a person who, when they're having a bad day, just literally tells you the first thing that comes to their mind. Most of the time that would be okay. But then sometimes when it's a very difficult thing, you'd want to stop, pause for a moment, and maybe go over what you were about to say and adjust it. These models need to do that better.
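As an illustrative aside on why per-token probabilities alone can't serve as that overarching confidence score (the numbers below are made up, not from any real model): summing per-token log-probabilities penalizes longer answers regardless of whether they are factually right, so statement-level confidence has to be estimated separately.

```python
import math

def sequence_logprob(token_probs):
    """Log-probability of a whole output: the sum of per-token log-probs.
    This is what a language model 'knows' natively about its own output."""
    return sum(math.log(p) for p in token_probs)

# Two hypothetical answers. Every token is individually quite likely...
short_answer = [0.9, 0.9]     # 2 confident tokens
long_answer  = [0.9] * 20     # 20 equally confident tokens

# ...but the longer answer scores far lower simply because it is longer,
# so raw sequence probability is a poor proxy for "is this statement true?"
print(sequence_logprob(short_answer))   # ≈ -0.21
print(sequence_logprob(long_answer))    # ≈ -2.11
```

This is why the answer above points at thinking and planning steps: something has to look back over the whole statement, not just at each token as it is emitted.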
I also really want to talk to you about the simulated worlds and putting agents in them, because we got to talk to your Genie team earlier today. Tell me why you care about simulation. What can a world model do that a language model can't?
Well, look, it's probably my longest-standing passion—world models and simulations—in addition to AI. And of course it's all coming together in our most recent work like Genie. I think language models are able to understand more than we expected, because language is probably richer than we thought—it contains more about the world than maybe even linguists imagined. That's proven now.
But there's still a lot about the spatial dynamics of the world—spatial awareness, the physical context we're in, how things work mechanically—that is hard to describe in words and isn't generally described in corpuses of text. A lot of this is allied to learning from experience. There are a lot of things you can't really describe; you have to just experience them. Sensory information is very hard to put into words—whether that's motor angles or smell, those kinds of senses.
So I think if we want robotics to work, or a universal assistant that maybe comes along with you in your daily life on glasses or on your phone, you're going to need this kind of world understanding. World models are at the core of that. What we mean by a "world model" is a model that understands the cause-and-effect of the mechanics of the world—intuitive physics, how things move, how things behave.
We're seeing a lot of that in our video models. One way to test that kind of understanding: can you generate realistic worlds? Because if you can generate it, then in a sense you must have understood the system. That's why Genie and Veo and these interactive world models are really impressive, but also important steps towards showing we have generalized models. Hopefully at some point we can apply them to robotics and universal assistants.
And then of course, one of my favorite things I'm definitely going to have to do at some point is reapplying it back to games and game simulations to create the ultimate games, which of course was maybe always my subconscious plan.
All this time?
Yeah, all this time. Exactly.
What about science too, though? Could you use it in that domain?
Yes, you could. Again, building models of scientifically complex domains—whether that's materials on an atomic level or in biology, but also physical things like weather. One way to understand those systems is to learn simulations of those systems from the raw data.
Let's say it's about the weather—obviously we have some amazing weather projects going on—and then you have a model that learns those dynamics and can recreate those dynamics more efficiently than doing it by brute force. So I think there's huge potential for simulations and specialized world models for aspects of science and mathematics.
But then also you can drop an agent into that simulated world too, right?
Yes.
Your Genie 3 team had this really lovely quote: "Almost no prerequisite to any major invention was made with that invention in mind." And they were talking about dropping agents into these simulated environments and allowing them to explore with curiosity being their main motivator, right?
Right. And that's another really exciting use of these world models. We have another project called SIMA—we just released SIMA 2. You have an avatar or an agent and you put it down into a virtual world. It can be an actual commercial game, a very complex one like *No Man's Sky*, an open-world space game. You can instruct it because it's got Gemini under the hood—you can just talk to the agent and give it tasks.
But then we thought: wouldn't it be fun if we plug Genie into SIMA and sort of drop a SIMA agent into another AI that was creating the world on the fly? So now the two AIs are kind of interacting in the minds of each other. The SIMA agent is trying to navigate this world, and Genie is just generating the world around whatever the agent is trying to do.
It's kind of amazing to see them both interacting together. And I think this could be the beginning of an interesting training loop where you almost have infinite training examples, because whatever the SIMA agent is trying to learn, Genie can create on the fly. You could imagine a whole world of setting and solving millions of tasks automatically, and they're just getting increasingly more difficult. Those SIMA agents could also be great as game companions and some of the things they learn could be useful for robotics.
Yeah. The end of boring NPCs.
Exactly. It's going to be amazing for these games.
Those worlds that you're creating though, how do you make sure that they really are realistic? How do you ensure that you don't end up with physics that looks plausible but is wrong?
Yeah, that's a great question and can be an issue. It's hallucinations again. Some hallucinations are good because it means you might create something interesting and new. If you're trying to do creative things, a bit of hallucination might be good, but you want it to be intentional—you switch on the creative exploration.
But yes, when you're trying to train a SIMA agent, you don't want Genie hallucinating physics that are wrong. So what we're doing now is we're almost creating a physics benchmark where we can use game engines—which are very accurate with physics—to create lots of fairly simple labs, like rolling little balls down different tracks and seeing how fast they go.
We're really teasing apart on a very basic level: has the model encapsulated Newton's three laws of motion 100% accurately? Right now they're kind of approximations; they look realistic when you casually look at them, but they're not accurate enough yet to rely on for, say, robotics.
So now we've got these really interesting models. And with physics, I think that's probably going to involve generating loads and loads of ground truth—simple videos of pendulums, say—but then very quickly you get to things like three-body problems, which aren't analytically solvable anyway.
But what's amazing already is when you look at video models like Veo and the way they treat reflections and liquids—it's unbelievably accurate to the naked eye already. So the next step is going beyond what a human amateur can perceive and seeing if it would really hold up to a proper physics-grade experiment.
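As a toy illustration of the kind of physics benchmark described above (everything here is hypothetical and invented for illustration, not the actual Genie or Veo evaluation), one can score a model's rollout of a ball released on a frictionless incline against the closed-form Newtonian answer:

```python
import math

def ground_truth_positions(g=9.81, theta_deg=30.0, dt=0.1, steps=20):
    """Analytic positions of a ball released from rest on a frictionless
    incline: s(t) = 0.5 * a * t^2, with a = g * sin(theta)."""
    a = g * math.sin(math.radians(theta_deg))
    return [0.5 * a * (i * dt) ** 2 for i in range(steps)]

def benchmark(predicted, truth, tol=0.05):
    """Score a (hypothetical) world model's rollout against the analytic
    answer: the fraction of timesteps within an absolute tolerance."""
    hits = sum(abs(p - t) <= tol for p, t in zip(predicted, truth))
    return hits / len(truth)

truth = ground_truth_positions()
# A rollout that looks plausible to the eye but is systematically 3% off
# fails more and more timesteps as the error accumulates:
approx = [1.03 * s for s in truth]
score = benchmark(approx, truth)
```

The point of the tight tolerance is exactly the one made above: "looks realistic when you casually look at it" and "encapsulates Newton's laws 100% accurately" are very different bars.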
I know you've been thinking about these simulated worlds for a really long time. I went back to the transcript of our first interview and in it you said that you really like the theory that consciousness was this consequence of evolution—
—that at some point in our evolutionary past there was an advantage to understanding the internal state of another, and then we sort of turned it in on ourselves. Does that make you curious about running an agent in evolution inside of a simulation?
Sure. I'd love to run that experiment at some point. Rerun evolution, rerun almost social dynamics as well. The Santa Fe Institute used to run lots of cool experiments on little grid worlds. I used to love some of these—they were mostly economists trying to run little artificial societies, and they found that all sorts of interesting things got invented, like markets and banks, if you let agents run around for long enough with the right incentive structures.
So I think it would be really cool just to understand the origin of life and the origin of consciousness. I think you're going to need these kinds of tools to really understand where we came from and what these phenomena are. Simulation is one of the most powerful tools to do that, because you can then do it statistically—run the simulation millions of times with slightly different initial conditions and understand the differences in a controlled way. That is very difficult to do in the real world for any of the really interesting questions we want to answer. Accurate simulations will be an unbelievable boon to science.
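The statistical use of simulation described here—rerunning a world many times under perturbed initial conditions and aggregating—can be sketched in a toy way (everything below is hypothetical and illustrative, in the spirit of the Santa Fe grid-world experiments, not any DeepMind system):

```python
import random
import statistics

def run_society(n_agents=100, meet_rate=0.1, steps=200, seed=0):
    """Toy agent world: at each step some random pairs of agents meet and
    one unit of wealth moves between them. Returns the final wealth
    inequality (population std dev). Purely illustrative."""
    rng = random.Random(seed)
    wealth = [10.0] * n_agents
    for _ in range(steps):
        for _ in range(int(n_agents * meet_rate)):
            i, j = rng.randrange(n_agents), rng.randrange(n_agents)
            if wealth[i] > 0:            # can't go below zero
                wealth[i] -= 1.0
                wealth[j] += 1.0
    return statistics.pstdev(wealth)

def experiment(meet_rate, runs=50):
    """Re-run the same world under many seeds and aggregate — the
    controlled, statistical treatment that's impossible with one real world."""
    results = [run_society(meet_rate=meet_rate, seed=s) for s in range(runs)]
    return statistics.mean(results)

# More interaction lets more inequality emerge — a conclusion you can only
# draw reliably by averaging over many reruns, not from a single history.
low, high = experiment(0.05), experiment(0.5)
```

Even in this tiny model, the interesting object is not any single run but the distribution over runs, which is exactly what real-world history never gives you.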
Given what we've discovered about the emergent properties of these models—conceptual understanding that we weren't expecting—do you also have to be quite careful about running this sort of simulation?
I think you would have to be, yes. But that's the other nice thing about simulations: you can run them in pretty safe sandboxes—maybe eventually you want to air-gap them—and you can of course monitor what's happening 24/7.
We may need AI tools to help us monitor the simulations because they'll be so complex. If you imagine loads of AIs running around in a simulation, it will be hard for any human scientist to keep up with. We could probably use other AI systems to help us analyze and flag anything interesting or worrying in those simulations automatically.
I guess we're still talking sort of medium to long term in terms of this stuff. So just going back to the trajectory that we're on at the moment, I also want to talk to you about the impact that AI and AGI are going to have on wider society. Last time we spoke, you said that you thought AI was overhyped in the short term but underhyped in the long term. And I know that this year there's been a lot of chatter about an AI bubble.
Yes.
What happens if there is a bubble and it bursts?
Well, look, I still subscribe to that: it's overhyped in the short term and still underappreciated in the medium to long term in terms of how transformative it's going to be. Yeah, there is a lot of talk right now about AI bubbles. In my view, it's not one binary thing. I think there are parts of the AI ecosystem that are probably in bubbles.
One example would be seed rounds for startups that haven't even got going yet, raising at valuations of tens of billions of dollars right out of the gate. It's sort of interesting: how can that be sustainable? My guess is probably not, at least not in general. So there's that area.
Then there's the big tech valuations—I think there's a lot of real business underlying that. But it remains to be seen. I think for any new transformative and profound technology—of which AI is probably the most profound—you're going to get this overcorrection in a way. When we started DeepMind no one believed in it. No one thought it was possible. People were wondering what AI was for anyway.
Now fast forward 10-15 years and obviously it seems to be the only thing people talk about in business. It's almost an overreaction to the underreaction. I think that's natural. We saw that with the internet, we saw it with mobile, and we're seeing or going to see it again with AI.
I don't worry too much about whether we're in a bubble or not, because from my perspective leading Google DeepMind, and Alphabet as a whole, my job is to make sure we come out of it very strong either way. And I think we are tremendously well-positioned either way. If it continues going like it is now—fantastic. We'll carry on with all of these great things and progress towards AGI.
If there's a retrenchment, fine—then I also think we're in a great position, because we have our own stack with TPUs. We also have all these incredible Google products, and the profits they make, to plug our AI into. We're doing that with Search—totally revolutionized by AI Overviews with Gemini under the hood. We're looking at Workspace, email, YouTube, and Chrome.
A lot of these amazing things are low-hanging fruit to apply Gemini to, as well as the Gemini app which is doing really well, and the idea of a universal assistant. So there are new products and I think they will in the fullness of time be super valuable, but we don't have to rely on that—we can just power up our existing ecosystem. That's what's happened over the last year; we've got that really efficient now.
In terms of the AI that people have access to at the moment, I know you said recently how important it is not to build AI to maximize user engagement, so that we don't repeat the mistakes of social media. But I also wonder whether we are already seeing this in a way—people spending so much time talking to their chatbots that they end up kind of spiraling into self-radicalization.
How do you stop that? How do you build AI that puts users at the center of their own universe, which is sort of the point of this in a lot of ways, but without creating echo chambers of one?
Yeah, it's a very careful balance that I think is one of the most important things that we as an industry have got to get right. I think we've seen what happens with some systems that were overly sycophantic, and then you get these sort of echo chamber reinforcements that are really bad for the person.
I think part of it is what we want to build with Gemini. I'm really pleased with the Gemini 3 persona that we had a great team working on. It's almost like a scientific personality—it's warm, it's helpful, it's light, but it's succinct and to the point, and it will push back, in a friendly way, on things that don't make sense.
Rather than trying to reinforce the idea that the earth's flat just because you said it. I don't think that's good in general for society. But you've got to balance it with what people want, because people want these systems to be supportive and helpful with their ideas and their brainstorming. So you've got to get that balance right.
I think we're sort of developing a science of personality and persona—like how to measure what it's doing and where we want it to be on humor and authenticity. And then you can imagine there's a base personality that it ships with, and then everyone has their own preferences. Do you want it to be more humorous or more succinct? People like different things. So you add that additional personalization layer on it as well.
But there's still the core base personality that everyone gets, right? Which is trying to adhere to the scientific method. We want people to use these for science and for medicine and health issues. So I think it's part of the science of getting these large language models right. I'm quite happy with the direction we're going in currently.
We got to talk to Shane Legg a couple weeks ago about AGI in particular. Across everything that's happening in AI at the moment—the language models, the world models—what's closest to your vision of AGI?
I think it's the combination of... obviously there's Gemini 3, which I think is very capable, but also the Nano Banana Pro system we launched last week, which is an advanced version of our image-creation tool. What's really amazing about it is that it also has Gemini under the hood, so it doesn't just produce images; it understands what's going on semantically in them.
People have been playing with it for a week now, and I've seen so much cool stuff on social media about what people are using it for. For example, you can give it a picture of a complex plane and it can label all the different parts, and even visualize it with the different parts in a sort of exploded view.
So it has some kind of deep understanding of mechanics and what makes up objects and materials. It's sort of getting towards a kind of AGI for imaging—a general-purpose system that can do anything across images. I think that's very exciting. And then the advances in world models like Genie and SIMA. Eventually we've got to converge all of those different projects into one big model, and then that might start becoming a candidate for Proto-AGI.
I know you've been reading quite a lot about the industrial revolution recently. Are there things that we can learn from what happened there to try and mitigate some of the disruption that we can expect when AGI comes?
I think there's a lot we can learn. It's something you study in school, at least in Britain, but on a very superficial level. It was really interesting for me to look into how it all happened. The textile industry—the first computers were really the weaving machines, right? And then the punch cards that drove them became the punch cards of the early mainframes.
For a while it was very successful; Britain became the center of the textile world because these automated systems let it make amazingly high-quality things very cheaply. And then obviously the steam engines and all of those things came in. I think there are a lot of incredible advances that came out of the industrial revolution. Child mortality went down; all of modern medicine and sanitary conditions, the work-life split—all of that was worked out then.
But it also came with a lot of challenges. It took quite a long time—roughly a century—and different parts of the labor force were dislocated. New organizations like unions had to be created in order to rebalance that. It was fascinating to see how the whole of society had to adapt over time.
There were pros and cons, but no one would want to go back to before the industrial revolution if you think about the abundance of food and modern medicine. Maybe, by learning from it, we can figure out ahead of time what those dislocations were and mitigate them earlier or more effectively this time. And we're probably going to have to, because the difference this time is that it's probably going to be 10 times bigger and 10 times faster—more like unfolding over a decade than a century.
One of the things that Shane told us was that the current economic system—where you exchange your labor for resources effectively—it just won't function the same way in a post-AGI society. Do you have a vision of how society should be reconfigured?
Yeah, I'm spending more time thinking about this now, and Shane's leading an effort here on that. I think society in general needs to spend more time thinking about it—economists and social scientists and governments. Just as the working world and the working week changed with the move away from agriculture, I think that level of change is going to happen again.
I would not be surprised if we needed new economic systems to help with that transformation and make sure the benefits are widely distributed. Maybe things like universal basic income are part of the solution, but I think that's just what we can model out now.
I think there might be way better systems—more like direct democracy type systems where you can vote with a certain amount of credits for what you want to see. It happens on a local community level: "Here's a bunch of money. Do you want a playground or an extra classroom?" And then you let the community vote for it.
Maybe you could even measure the outcomes, and the people who consistently vote for the things that end up being better received get proportionally more influence in the next vote. I hear economist friends of mine brainstorming this.
And then there's the philosophical side of it: okay, so jobs will change and we'll have fusion solved, so we have abundant free energy and we're post-scarcity—so what happens to money? Maybe everyone's better off, but then what happens to purpose? A lot of people get their purpose from providing for their families, which is a very noble purpose. So some of these questions blend from economic questions into almost philosophical questions.
Do you worry that people don't seem to be paying attention or moving as quickly as you'd like to see? What would it take for people to sort of recognize that we need international collaboration on this?
I am worried about that. In an ideal world there would have been a lot more collaboration already—international specifically—and a lot more exploration and discussion. I'm pretty surprised there isn't more of that being discussed given that even our timelines are 5 to 10 years, which is not long for institutions to be built to handle this.
One of the worries I have is that the institutions that do exist seem to be very fragmented and not very influential. So it may be that there aren't the right institutions to deal with this currently. And then of course if you add in the geopolitical tensions, it seems like cooperation is harder than ever. Just look at climate change and how hard it is to get any agreement.
We'll see. I think as the stakes get higher and as these systems get more powerful, maybe one of the benefits of them being in products is the everyday person will get to feel the increase in the power and the capability, and that will then reach government and then maybe they'll see sense as we get closer to AGI.
Do you think it will take a moment—an incident—for everyone to sort of sit up and pay attention?
I don't know. I hope not. Most of the main labs are pretty responsible. We try to be as responsible as possible—that's always been at the heart of everything we do. Doesn't mean we'll get everything right, but we try to be as thoughtful and as scientific in our approach as possible.
Also, there's good commercial pressure to be responsible. If you think about agents and you're renting an agent to another company, that company is going to want to know what the limits and guardrails are on those agents in terms of what they might do. I think the more cowboy operations won't get the business because the enterprises won't choose them.
So I think the capitalist system will be useful here to reinforce responsible behavior. But then there will be rogue actors—maybe rogue nations or rogue organizations, or people building on top of open source. Obviously it's very difficult to stop that. Then something may go wrong, and hopefully it's just sort of medium-sized and then that will be a warning shot across the bow to humanity. That might be the moment to advocate for international standards or collaboration at least on the high-level basic standards we would want and agree to. I'm hopeful that will be possible.
In the long term, beyond AGI and towards ASI—artificial superintelligence—do you think that there are some things that humans can do that machines will never be able to manage?
Well, I think that's the big question. I've always felt this: that if we build AGI and then use that as a simulation of the mind and then compare that to the real mind, we will then see what the differences are and potentially what's special and remaining about the human mind, right? Maybe that's creativity, maybe it's emotions, maybe it's dreaming. And then there's consciousness.
There are a lot of hypotheses out there about what may or may not be computable. And this comes back to the Turing machine question of: what is the limit of a Turing machine? That's been the central question of my life ever since I found out about Turing and Turing machines. I fell in love with that. That's my core passion.
And I think everything we've been doing has been pushing the notion of what a Turing machine can do to the limit, including folding proteins, right? And it turns out I'm not sure what the limit is—maybe there isn't one. My quantum computing friends would say there are limits and you need quantum computers to simulate quantum systems, but I'm really not so sure.
I've discussed that with some of the quantum folks, and it may be that we need data from these quantum systems in order to create a classical simulation. And then that comes back to the mind: is it all classical computation or is there something else going on?
Roger Penrose believes there are quantum effects in the brain. If there are, then machines will never have that—at least the classical machines. But if there isn't, then there may not be any limit. Maybe in the universe everything is computationally tractable if you look at it in the right way, and therefore Turing machines might be able to model everything in the universe. If you were to make me guess, I would guess that, and I'm working on that basis until physics shows me otherwise.
So there's nothing that cannot be done within this sort of computational framework?
Well, put it this way: nobody's found anything in the universe that's non-computable so far.
So far.
Right? And I think we've already shown you can go way beyond the usual complexity theorist's P-versus-NP view of what a classical computer could do today—things like protein folding and Go. So I don't think anyone knows what that limit is. And that's really what I'm trying to do: find that limit.
But then in the limit of that idea—that we're sitting here, there's like the warmth of the lights on our face, the whir of the machine in the background, the feel of the desk under our hands—all of that could be replicable by a classical computer?
Yes. Well, my view on this in the end—and it's why I love Kant, one of my two favorite philosophers—is that reality is a construct of the mind. I think that's true. So all of those things you mentioned—they're coming into our sensory apparatus and they feel different, but in the end, it's all information.
And we're information processing systems. I think that's what biology is. That's how I think we'll end up curing all diseases—by thinking about biology as an information processing system. And in my two minutes of spare time I'm working on physics theories about things like information being the most fundamental unit of the universe—not energy, not matter, but information.
And so it may be that these are all interchangeable in the end, and we just sense them in different ways. But as far as we know, everything these amazing sensors of ours pick up is still computable by a Turing machine.
But this is why your simulated world is so important, right?
Yes, exactly. Because that would be one of the ways to get to it. What are the limits of what we can simulate? Because if you can simulate it, then in some sense you've understood it.
I wanted to finish with some personal reflections of what it's like to be at the forefront of this. Does the emotional weight of this ever sort of weigh you down? Does it ever feel quite isolating?
Yes. Look, I don't sleep very much, partly because it's too much work, but also I have trouble sleeping. It's very complex emotions to deal with because it's unbelievably exciting. I'm doing everything I ever dreamed of. We're at the absolute frontier of science in so many ways—applied science as well as machine learning.
That's exhilarating, as all scientists know—that feeling of being at the frontier and discovering something for the first time. And that's happening almost on a monthly basis for us, which is amazing. But then of course Shane and I and others who've been doing this for a long time, we understand the enormity of what's coming better than anybody.
And there's this thing about it still being under-appreciated—what's going to happen on more of a 10-year timescale, including things like what it means to be human. All of these questions are going to come up, and so it's a big responsibility. But we have an amazing team thinking about these things. It's also something I guess I've trained for my whole life. Ever since my early days playing chess and then working on computers and simulations and neuroscience—it's all been for this kind of moment. And it's roughly what I imagined it was going to be. So that's partly how I cope with it—just training.
Are there parts of it that have hit you harder than you expected though?
Yes, for sure. Along the way—even the AlphaGo match, right? Seeing how we managed to crack Go: Go had been this beautiful mystery, and cracking it changed that. So that was interesting and kind of bittersweet.
I think even the more recent things like language and imaging—what does it mean for creativity? I have huge respect and passion for the creative arts and having done game design myself. I talk to film directors and it's an interesting dual moment for them too.
On one hand they've got these amazing tools that speed up prototyping ideas by 10x, but on the other hand is it replacing certain creative skills? So I think there are sort of these trade-offs going on all over the place which I think is inevitable with a technology as powerful and as transformative as AI is—as in the past electricity was and the internet.
The story of humanity is that we are tool-making animals and that's what we love to do. And for some reason we also have a brain that can understand science and do science, which is amazing, but also sort of insatiably curious. I think that's the heart of what it means to be human. I think I've just had that bug from the beginning, and my expression of trying to answer that is to build AI.
When you and the other AI leaders are in a room together, is there a sense of solidarity between you—that this is a group of people who all know the stakes—or does the competition kind of keep you apart from one another?
Well, we all know each other. I get on with pretty much all of them. Some of the others don't get on with each other. And it's hard because we're also in probably the most ferocious capitalist competition there's ever been. Investor friends who were around in the dotcom era say this is like 10x more ferocious and intense than that was.
In many ways, I love that. I live for competition—I've always loved that since my chess days. But stepping back I understand—I hope everyone understands—that there's a much bigger thing at stake than just company successes.
When it comes to the next decade, when you think about it, are there big moments coming up that you're personally most apprehensive about?
I think right now the systems are what I call "passive systems"—you put the energy in as the user and then these systems provide you with some summary or answer. So very much it's human-directed. The next stage is agent-based systems, which I think we're going to start seeing.
In the next couple of years, I think we'll start seeing some really impressive, reliable ones. And I think those will be incredibly useful if you think about them as an assistant, but they'll also be more autonomous. So I think the risks go up as well. I'm quite worried about what those sorts of systems will be able to do in maybe two or three years' time. So we're working on cyber defense in preparation for a world like that, where maybe there are millions of agents roaming around on the internet.
And what about what you're most looking forward to? Is there a day when you'll be able to retire sort of knowing that your work is done, or is there more than a lifetime's worth of work left to do?
Yeah, I always... well, I could definitely do with a sabbatical and I would spend it doing stuff. A week off or even a day would be good. But look, I think my mission has always been to help the world steward AGI safely over the line for all of humanity.
So I think when we get to that point, of course, then there's superintelligence and post-AGI and all the economic and societal stuff we were discussing. Maybe I can help in some way there. But I think that core part of my life mission will be done—just get that over the line. I think it's going to require collaboration like we talked earlier, and I'm quite a collaborative person. So I hope I can help with that from the position that I have.
And then you get to have a holiday?
And then I'll have a well-earned sabbatical.
Yeah, absolutely. Demis, thank you so much. Helpful as always.
Well, that is it for this season of Google DeepMind: The Podcast with me, Professor Hannah Fry. But be sure to subscribe so you will be among the first to hear about our return in 2026. And in the meantime, why not revisit our vast episode library because we have covered so much this year—from driverless cars to robotics, world models to drug discovery. Plenty to keep you occupied. See you soon.