Zanny Minton Beddoes

Welcome everybody, and welcome to those of you joining us on the live stream, to this conversation that I have to say I have been looking forward to for months. I was lucky enough to moderate a conversation between Dario Amodei and Demis Hassabis last year in Paris, which I'm afraid got the most attention for the fact that you two were squashed on a very small love seat while I sat on an enormous sofa, which was probably my screw-up. But I said at that point that this was for me like chairing a conversation between the Beatles and the Rolling Stones. And you have not had a conversation on stage since. So this is the sequel, the bands get together again.

Zanny Minton Beddoes

I'm delighted. You need no introduction. The title of our conversation is "The Day After AGI," which I think is perhaps slightly getting ahead of ourselves because we should probably talk about how quickly and easily we will get there. I want to do a bit of a sort of update on that and then talk about the consequences. So firstly on the timeline: Dario, you last year in Paris said we'll have a model that can do everything a human could do at the level of a Nobel laureate across many fields by 2026 or 2027. We're in 2026. Do you still stand by that timeline?

Dario Amodei

So it's always hard to know exactly when something will happen, but I don't think that's going to turn out to be that far off. The mechanism whereby I imagined it would happen is that we would make models that were good at coding and good at AI research, and we would use that to produce the next generation of model and speed it up to create a loop that would increase the speed of model development.

Dario Amodei

In terms of the models that write code, we are now at the point where I have engineers within Anthropic who say, "I don't write any code anymore. I just let the model write the code. I edit it. I do the things around it." I think, I don't know, we might be 6 to 12 months away from when the model is doing most, maybe all, of what software engineers do end-to-end. And then it's a question of how fast does that loop close. Not every part of that loop is something that can be sped up by AI, right? There's chips, there's the manufacture of chips, there's training time for the model.

Dario Amodei

So I think there's a lot of uncertainty. It's easy to see how this could take a few years. I find it very hard to see how it could take longer than that. But if I had to guess, I would guess that this goes faster than people imagine. And that key element of code and increasingly research going faster than we imagine—that's going to be the key driver. It's really hard to predict again how much that exponential is going to speed us up, but something fast is going to happen.

Zanny Minton Beddoes

So you, Demis, were a little more cautious last year. You said a 50% chance of a system that can exhibit all the cognitive capabilities humans can by the end of the decade. Clearly in coding, as Dario says, it's been remarkable. What is your sense of... do you stand by your prediction and what's changed in the past year?

Demis Hassabis

Yeah, look, I think I'm still on the same kind of timeline. And I think there has been remarkable progress. But I think some areas of engineering work, coding, or you could say mathematics, are a little bit easier to see how they would be automated, partly because the output is verifiable. Some areas of natural science are much harder than that. You won't necessarily know if the chemical compound you've built or this prediction about physics is correct. You may have to test it experimentally, and that will all take longer.

Demis Hassabis

So I also think there are some missing capabilities at the moment in terms of, like, not just solving existing conjectures or existing problems, but coming up with the question in the first place or coming up with the theory or the hypothesis. I think that's much harder, and I think that's the highest level of scientific creativity. It's not clear. I think we will have those systems—I don't think it's impossible—but I think there may be one or two missing ingredients.

Demis Hassabis

It remains to be seen, first of all, whether this self-improvement loop that we're all working on can close without a human in the loop. I think there are also risks to that kind of system, by the way, which we should discuss and I'm sure we will. But it could speed things up if that kind of system does work.

Zanny Minton Beddoes

We'll get to the risks in a minute. But one other change of the past year, I think, has been a kind of change in the pecking order of the race, if you will. This time a year ago, we'd just had the DeepSeek moment and everyone was incredibly excited about what happened there, and there was still a sense, you know, that Google DeepMind was kind of lagging OpenAI. I would say that now it's looking quite different. They've declared "Code Red," right? It's been quite a year. So, talk me through what specifically you've been surprised by and how well you've done this year, and whether you think... and then I'm going to ask you about the lineup.

Demis Hassabis

Well, look, I think I was always very confident we would get back to the top of the leaderboards and the SOTA type of models across the board, because I think we've always had the deepest and broadest research bench. It was about marshalling that all together and getting the intensity and focus and the kind of startup mentality back to the whole organization. It's been a lot of work, but I think you can start seeing the progress that's been made, both in the models with Gemini 3, but also on the product side with the Gemini app getting increasing market share. So I feel like we're making great progress, but there's a ton more work to do. Google DeepMind is kind of like the engine room of Google, and we're getting used to shipping our models more and more quickly into the product surfaces.

Zanny Minton Beddoes

One question for you, Dario, on this aspect of it, because you are in the process of a new round at an extraordinary valuation too. But you are, unlike Demis, let's call it an independent model maker, and there is, I think, an increasing concern that the independent model makers will not be able to keep going long enough to get to where the revenues come in. That concern is voiced very openly about OpenAI, but talk me through how you think about it, and then we'll get to AGI itself.

Dario Amodei

Yeah, I think how we think about that is, as we've built better and better models, there's been a kind of exponential relationship not only between how much compute you put into the model and how cognitively capable it is, but between how cognitively capable it is and how much revenue it's able to generate. So our revenue has grown 10x a year for the last three years: from 0 to 100 million in 2023, 100 million to a billion in 2024, and 1 billion to 10 billion in 2025.

Dario Amodei

And so those revenue numbers—I don't know if that curve will literally continue, it would be crazy if it did—but those numbers are starting to get not too far from the scale of the largest companies in the world. So there's always uncertainty. We're trying to bootstrap this from nothing. It's a crazy thing, but I have confidence that if we're able to produce the best models in the things that we focus on, then I think things will go well.

Dario Amodei

And I will generally say I think it's been a good year for both Google and Anthropic. And I think the thing we have in common is that both are companies—or at least the research parts of the companies—led by researchers who focus on the models, who focus on solving important problems in the world, right? Who have these kinds of hard scientific problems as a North Star. And I think those are the kind of companies that are going to succeed going forward, and I think we share that between us.

Zanny Minton Beddoes

Very much. I'm going to resist the temptation to ask you what will happen to the companies that are not led by researchers, because I know you won't answer it. But let's go on to predictions now. We are supposed to be talking about the day after AGI, but let's talk about closing the loop. What are the odds that you will get models that close the loop and are able to power themselves, if you will? Because that's really the crux of the "winner takes all" threshold argument. Do you still believe that we are likely to see that, or is this going to be much more of a normal technology where followers and catch-up can compete?

Demis Hassabis

Well, look, I definitely don't think it's going to be a normal technology. So there are aspects already, as Dario mentioned, where it's already helping with our coding and some aspects of research. The full closing of the loop, though, I think is an unknown. I think it's possible to do. You may need AGI itself to be able to do that in some domains where there's more messiness around them, where it's not so easy to verify your answer very quickly—there are kind of NP-hard domains. So as soon as you start getting more... and by the way, I also include in AGI physical AI, robotics working, all of these kinds of things. And then you've got hardware in the loop that may limit how fast the self-improvement systems can work. But in coding and mathematics and these kinds of areas, I can definitely see that working. And then the question is more a theoretical one: what is the limit of engineering and maths in solving the natural sciences?

Zanny Minton Beddoes

Dario, I think it was last year that you published "Machines of Loving Grace," which was a very, I would say, upbeat essay about the potential that you were going to see unfold. You were talking about a "country of geniuses in a data center." I'm told that you are working on an update to this, a new essay. So wait for it, guys, it's not out yet but it is coming out. But perhaps you can give us a sneak preview of what, a year later, your big take is going to be.

Dario Amodei

Yes. So my take has not changed. It has always been my view that AI is going to be incredibly powerful. I think Demis and I kind of agree on that; it's just a question of exactly when. And because it's incredibly powerful, it will do all these wonderful things like the ones I talked about in "Machines of Loving Grace." It will help us cure cancer. It may help us to eradicate tropical diseases. It will help us understand the universe. But that there are these immense and grave risks—not that we can't address them, I'm not a doomer—but that we need to think about them and we need to address them.

Dario Amodei

And I wrote "Machines of Loving Grace" first. I'd love to give some sophisticated reason why I wrote that first, but it was just that the positive essay was easier and more fun to write than the negative essay. So I finally spent some time on vacation and I was able to write an essay about the risks. Even when I'm writing about the risks, I try... I'm like an optimistic person, right? So even as I'm writing about these risks, I wrote about it in a way that was like: how do we overcome these risks? How do we have a battle plan to fight them?

Dario Amodei

And the way I framed it was... there's this scene from Carl Sagan's "Contact," the movie version of it, where they kind of discover alien life and there's this international panel that's interviewing people to be humanity's representative to meet the alien. And one of the questions they ask one of the candidates is: if you could ask the aliens any one question, what would it be? And one of the characters says, "I would ask: How did you do it? How did you manage to get through this technological adolescence without destroying yourselves? How did you make it through?"

Dario Amodei

And ever since I saw it—it was like 20 years ago, I think, that I saw that movie—it's kind of stuck with me. And that's the frame that I use, which is that we are knocking on the door of these incredible capabilities, right? The ability to build machines out of sand—I think that was inevitable the instant we started working with fire. But how we handle it is not inevitable. And so I think the next few years we're going to be dealing with: how do we keep these systems under control that are highly autonomous and smarter than any human? How do we make sure that individuals don't misuse them? I have worries about things like bioterrorism. How do we make sure that nation-states don't misuse them? That's why I've been so concerned about the CCP and other authoritarian governments. What are the economic impacts? I've talked about labor displacement a lot. And what haven't we thought of, which in many cases may be the hardest thing to deal with of all?

Dario Amodei

So I'm thinking through how to address those risks. For each of these, it's a mixture of things that we individually need to do as leaders of the companies and that we can do working together. And then there's going to need to be some role for wider societal institutions like the government in addressing all of these. But I just feel this urgency that every day there's all kinds of crazy stuff going on in the outside world, outside AI, right? But my view is this is happening so fast and is such a crisis we should be devoting almost all of our effort to thinking about how to get through this.

Zanny Minton Beddoes

So I can't decide whether I'm more surprised that you (a) take a vacation, (b) when you take a vacation you think about the risks of AI, and (c) that your essay is framed in terms of "are we going to get through the technological adolescence of this technology without destroying ourselves." So my head is slightly spinning, but I can't wait to read it. You mentioned several areas that can guide the rest of our conversation. Let's start with jobs because you have been very outspoken about that and I think you said that half of entry-level white-collar jobs could be gone within the next one to five years.

Zanny Minton Beddoes

But I'm going to turn to you, Demis, because so far we haven't seen any discernible impact on the labor market. Yes, unemployment has ticked up in the US, but all of the economic studies I've looked at suggest that this is overhiring post-pandemic, that it's really not AI-driven. If anything, people are hiring to build out AI capability. Do you think that this will be, as economists have always argued, a case of the "lump of labor" fallacy—that there will be new jobs created? Because so far the evidence seems to suggest that.

Demis Hassabis

Yeah, I think in the near term that is what will happen—the kind of normal evolution when a breakthrough technology arrives. So some jobs will get disrupted, but I think new, even more valuable, perhaps more meaningful jobs will get created. I think we're going to see this year the beginnings of it maybe impacting the junior-level, entry-level type of jobs—internships, this type of thing. And I think there is some evidence of that; I can feel it myself, maybe a slowdown in hiring there.

Demis Hassabis

But I think that can be more than compensated by the fact that there are these amazing creative tools out there, pretty much available for everyone almost for free. If I were to talk to a class of undergrads right now, I would be telling them to get unbelievably proficient with these tools. Even those of us building it—we're so busy building it that it's hard to find time to really explore the "capability overhang" that even today's models and products have, let alone tomorrow's. And I think that can be maybe better than a traditional internship would have been, in terms of leapfrogging yourself to be useful in a profession. So I think that's what I see happening probably in the next five years. Maybe we again slightly differ on timescales there, but what happens after AGI arrives—that's a different question, because I think we would really be in uncharted territory at that point.

Zanny Minton Beddoes

Do you think it's going to take longer than you thought last year, when you said half of all entry-level white-collar jobs?

Dario Amodei

I have about the same view. I agree with you and with Demis that at the time I made the comment there was no impact on the labor market. I wasn't saying there was an impact on the labor market at that moment. Now I think maybe we're starting to see just the little beginnings of it in software, in coding. I even see it within Anthropic where I can look forward to a time where on the more junior end, and then on the more intermediate end, we need fewer and not more people. And we're thinking about how to deal with that within Anthropic in a sensible way.

Dario Amodei

One to five years—as of six months ago, I would stick with that. If you connect this to what I said before, which is we might have AI that's better than humans at everything in maybe one to two years, maybe a little longer than that—those don't seem to line up. The reason is that there's this lag and there's this replacement thing, right? I know the labor market is adaptable, right? Just like 80% of people used to do farming; farming got automated and then they became factory workers and then knowledge workers. So there is some level of adaptability here as well. We should be economically sophisticated about how the labor market works, but my worry is, as this exponential keeps compounding—and I don't think it's going to take that long—somewhere between a year and five years it will overwhelm our ability to adapt. I think I may be saying the same thing Demis is, just factoring out the difference we have about timelines, which I think ultimately comes down to how fast you close the loop on code.

Zanny Minton Beddoes

How much confidence do you have that governments get the scale of this and are beginning to think about what policy responses they need to have?

Demis Hassabis

I don't think there's anywhere near enough work going on about this. I'm constantly surprised, even when I meet economists at places like this, that there aren't more professional economists thinking about what happens—and not just on the way to AGI, but even if we get all the technical things right that Dario was talking about. Job displacement is one question—we're worried about the economics of that—but maybe there are ways to distribute this new productivity, this new wealth, more fairly. I don't know if we have the right institutions to do that, but that's what should happen at that point—we may be in a post-scarcity world.

Demis Hassabis

But then there are the things that keep me up right now: there are even bigger questions at that point to do with meaning and purpose and a lot of the things that we get from our jobs, not just economically. The economics, strangely, may be easier to solve than what happens to the human condition and humanity as a whole. But I'm also optimistic we'll come up with new answers there. We do a lot of things today, from extreme sports to art, that aren't necessarily directly to do with economic gain. So I think we will find meaning, and maybe there'll be even more sophisticated versions of those activities. Plus, I think we'll be exploring the stars, so there'll be all of that to factor in as well in terms of purpose. But I think it's really worth thinking about now—even on my timelines of five to 10 years away, that isn't a lot of time before this comes.

Zanny Minton Beddoes

How big do you think is the risk of a popular backlash against AI that will somehow kind of cause governments to do what from your perspective might be stupid things? Because I'm just thinking back to the era of globalization in the 1990s when there was indeed some displacement of jobs. Governments didn't do enough. The public backlash was such that we've ended up sort of where we are now. Do you think that there is a risk that there will be a growing antipathy towards what you are doing and your companies in the kind of body politic?

Demis Hassabis

I think there's definitely a risk, and I think that's kind of reasonable—there's fear and there are worries about things like jobs and livelihoods. I think there are a couple of things that are going to be very complicated over the next few years, geopolitically but also with the various other factors here. We want to—and we're trying to do this with AlphaFold and our science work and Isomorphic, our spin-out company—solve disease, cure diseases, come up with new energy sources. I think as a society it's clear we'd want that. But maybe the balance of what the industry is doing is not tilted enough towards those types of activities.

Demis Hassabis

I think we should have a lot more examples—I know Dario agrees with me—of AlphaFold-like things that do unequivocal good in the world. And I think it's incumbent on the industry and all of us leading players to show that more, to demonstrate it, not just talk about it. But then it's going to come with these other unintended disruptions. And I think the other issue is the geopolitical competition. There's obviously competition between the companies, but also between the US and China primarily. Some international cooperation or understanding around this—which I think would be good in terms of things like minimum safety standards for deployment, and I think Dario would agree on that as well—is vitally needed.

Demis Hassabis

This technology is going to be cross-border. It's going to affect everyone. It's going to affect all of humanity. "Contact" is one of my favorite films as well, funnily enough—I didn't realize it was yours too, Dario. But I think those kinds of things need to be worked through. And if we can, maybe it would be good to have a slightly slower pace than we're currently predicting—even my timelines—so that we can get this right as a society. But that would require some coordination that is...

Dario Amodei

I prefer your timelines.

Demis Hassabis

Yes, I think I'd happily concede that.

Zanny Minton Beddoes

But Dario, let's turn to this now, because since we last spoke in Paris the geopolitical environment has become, if anything, more complicated—mad, crazy, whatever phrase you want to use. Secondly, the US has a very different approach now towards China: much more "no holds barred, go as fast as we can, but then sell chips to China." So you've got a different attitude in the United States, and you've got a very strange relationship between the United States and Europe right now geopolitically. Against that, I hear you talk about how it would be nice to have a CERN-like organization, which is a million years from where we are in the real world. So, in the real world, have the geopolitical risks increased, and what, if anything, do you think should be done about that, given the administration seems to be doing the opposite of what you were suggesting?

Dario Amodei

Yeah look, we're just trying to do the best we can. We're just one company and we're trying to operate in the environment that exists, no matter how crazy it is. But I think at least my policy recommendations haven't changed: that not selling chips is one of the biggest things we can do to make sure that we have the time to handle this. I said before, you know, I prefer Demis' timeline. I wish we had 5 to 10 years. It's possible he's just right and I'm just wrong, but assume I'm right and it can be done in one to two years. Why can't we slow down to Demis' timeline?

Zanny Minton Beddoes

Well, you could just slow down.

Dario Amodei

Well, no. The reason we can't do that is because we have geopolitical adversaries building the same technology at a similar pace. It's very hard to have an enforceable agreement where they slow down and we slow down. And so if we can just not sell the chips, then this isn't a question of competition between the US and China. This is a question of competition between me and Demis, which I'm very confident that we can work out.

Zanny Minton Beddoes

And what do you make of the logic of the administration which, as I understand it, is we need to sell them chips because we need to bind them into US supply chains?

Dario Amodei

So I think it's a question not just of timescale but of the significance of the technology. Right, if this were telecom or something, then there would be something to all this stuff about proliferating the US stack and wanting to build our chips around the world, to make sure that these random countries build data centers that have NVIDIA chips instead of Huawei chips. But I think of this more as a decision like: are we going to sell nuclear weapons to North Korea because that produces some profit for Boeing? Where we can say, "Okay, yeah, the casings were made by Boeing, so the US is winning." I just hope that analogy makes clear how I see this trade-off—I just don't think it makes sense. And we've done a lot of more aggressive stuff towards China and other players that I think is much less effective than this one measure.

Zanny Minton Beddoes

One more area for me and then I hope we'll have time for a question or two. The other area of potential risk that doomers worry about is a kind of all-powerful malign AI. And I think you've both been somewhat skeptical of the doomer approach, but in the last year we have seen these models showing themselves to be capable of deception, duplicity. Do you think differently about that risk now than you did a year ago? And is there something about the way the models are evolving that we should put a little bit more concern on that?

Dario Amodei

Yeah, since the beginning of Anthropic, we've thought about this risk. Our research at the beginning was very theoretical, right? We pioneered this idea of "mechanistic interpretability," which is looking inside the model and trying to understand its brain, trying to understand why it does what it does, just as neuroscientists—a field we both have a background in—try to understand the brain. And I think as time has gone on, we've increasingly documented the bad behaviors of the models when they emerge, and we are now working on addressing them with mechanistic interpretability.

Dario Amodei

So I think I've always been concerned about these risks. I've talked to Demis many times; I think he has also been concerned about these risks. I think I have definitely been—and I would guess Demis as well, although I'll let him speak for himself—skeptical of "doomerism," which is the idea that we're doomed and there's nothing we can do or this is the most likely outcome. I think this is a risk. This is a risk that if we all work together, we can address. We can learn through science to properly control and direct these creations that we're building. But if we build them poorly, if we're all racing and we go so fast that there are no guardrails, then I think there is a risk of something going wrong.

Zanny Minton Beddoes

So I'm going to give you a chance to answer that in the context of a slightly broader question which is: over the past year have you grown more confident of the upside potential of the technology—science, all of the areas that you have talked about a lot—or are you more worried about the risks that we've been discussing?

Demis Hassabis

I've been working on this for 20-plus years, so we already knew... look, the reason I've spent my whole career on AI is the upside: it should be the ultimate tool for science and for understanding the universe around us, if we build it in the right way. I've been obsessed with that since I was a kid. The risks, too, we've been thinking about since the start—at least the start of DeepMind 15 years ago—and we foresaw that if you got the upsides, it's a dual-purpose technology, so it could be repurposed by, say, bad actors for harmful ends.

Demis Hassabis

So we've needed to think about that all the way through. I'm a big believer in human ingenuity, but the question is having the time and the focus and all the best minds collaborating to solve these problems. I'm sure that if we had that, we would solve the technical risk problem. It may be that we don't have that, and then that will introduce risk, because we'll be fragmented—there'll be different projects and people racing each other, and then it's much harder to make sure the systems we produce will be technically safe. But I feel that's a very tractable problem if we have the time and space.

Zanny Minton Beddoes

I want to make sure there's time for one question, gentlemen. Keep it very short, because we've got literally two minutes.

Philip

Thanks very much. I'm Philip, co-founder of StarCloud building data centers in space. I wanted to ask a slightly philosophical question. The strongest argument for doomerism to me is the Fermi Paradox—the idea that we don't see intelligent life in our galaxy. I was wondering if you guys have any thoughts.

Demis Hassabis

Yeah, I've thought a lot about that. That can't be the reason, because we should see all the AIs that have... So, just so everyone knows, the idea is—well, it's sort of unclear why that would happen, right? If the reason there's a Fermi Paradox is that there are no aliens because they got taken out by their own technology, we should be seeing "paper clips" coming towards us from some part of the galaxy. And apparently we don't. We don't see any structures—Dyson spheres, nothing—whether AI or natural or biological. So to me, there has to be a different answer to the Fermi Paradox. I have my own theories about that, but it's out of scope for the next minute.

Demis Hassabis

But my prediction, my feeling, is that we're past the "Great Filter." It was probably multicellular life, if I had to guess—it was incredibly hard for biology to evolve that. So we're on our own... there isn't the comfort of knowing what's going to happen next. I think it's for us as humanity to write what happens next.

Zanny Minton Beddoes

This could be a great discussion but is out of scope for the next 30 seconds. But what isn't: 15 seconds each, when we meet again—I hope next year, the three of us, which I would love—what will have changed by then?

Dario Amodei

I think the biggest thing to watch is this issue of AI systems building AI systems. How that goes—whether it goes one way or another—will determine whether it's a few more years until we get there, or whether we have wonders and a great emergency in front of us that we have to face.

Zanny Minton Beddoes

AI systems building AI systems.

Demis Hassabis

I agree on that. So we're keeping in close touch about that. But also I think outside of that, I think there are other interesting ideas being researched like world models, continual learning—these are the things I think that will need to be cracked. If self-improvement doesn't sort of deliver the goods on its own, then we'll need these other things to work. And then I think things like robotics may have its sort of breakout moment.

Zanny Minton Beddoes

But maybe on the basis of what you've just said, we should all be hoping that it does take you a little bit longer and indeed everybody else to give us...

Dario Amodei

I would prefer that. I think that would be better for the world.

Zanny Minton Beddoes

Well, you guys could do something about that. Thank you both very much.
