This is like a crazy amount of power for one piece of technology and it's happened to us so fast. You just launched GPT-5. A kid born today will never be smarter than AI. How do we figure out what's real and what's not real? We haven't put a sex bot avatar in ChatGPT yet. Super intelligence. What does that mean? This thing is remarkable.
I'm about to interview Sam Altman, the CEO of OpenAI. Reshaping industries—dude's a straight-up tech lord, let's be honest. Right now, they're trying to build a super intelligence that could far exceed humans in almost every field. And they just released their most powerful model yet. Just a couple years ago, that would have sounded like science fiction. Not anymore. In fact, they're not alone. We are in the middle of the highest stakes global race any of us have ever seen. Hundreds of billions of dollars and an unbelievable amount of human worth. This is a profound moment. Most people never live through a technological shift like this, and it's happening all around you and me right now. So, in this episode, I want to try to time travel with Sam Altman into the future that he's trying to build to see what it looks like so that you and I can really understand what's coming. Welcome to Huge Conversations.
How are you? Great to meet you. Thanks for doing this.
Absolutely.
So, before we dive in, I'd love to tell you my goal here. Okay. I'm not going to ask you about valuation or AI talent wars or fundraising or anything like that. I think that's all very well covered elsewhere.
It does seem like it.
Our big goal on this show is to cover how we can use science and tech to make the future better. And the reason that we do all of that is because we really believe that if people see those better futures, they can then help build them. So, my goal here is to try my best to time travel with you into different moments in the future that you're trying to build and see what it looks like.
Fantastic. Awesome.
Starting with what you just announced, you recently said—surprisingly recently—that GPT-4 was the dumbest model any of us will ever have to use again. But GPT-4 can already perform better than 90% of humans at the SAT and the LSAT and the GRE, and it can pass coding exams and sommelier exams and medical licensing. And now you just launched GPT-5. What can GPT-5 do that GPT-4 can't?
First of all, one important takeaway is that you can have an AI system that can do all those amazing things you just said and still clearly not replicate a lot of what humans are good at doing, which I think says something about the value of the SAT and tests like it. But if we were having this conversation the day of the GPT-4 launch and we told you how GPT-4 did at those things, you would have said, "Oh man, this is going to have huge impacts, and some negative ones, on a bunch of jobs and on what people are going to do." And there are a bunch of impacts, positive and negative, that you might have predicted that haven't yet come true.
So there's something about the way that these models are good that does not capture a lot of other things that we need people to do or care about people doing. And I suspect that same thing is going to happen again with GPT-5. People are going to be blown away by what it does—it's really good at a lot of things—and then they will find that they want it to do even more. People will use it for all sorts of incredible things. It will transform a lot of knowledge work, a lot of the way we learn, a lot of the way we create—but society will co-evolve with it to expect more with better tools.
So yeah, I think this model is quite remarkable in many ways, quite limited in others. But the fact that you have in your pocket one piece of software that can handle 3-minute, 5-minute, even 1-hour tasks that an expert in a field might do—or might struggle with—is really amazing. I think it is unprecedented at any point in human history for a technology to improve this much this fast. We have this tool now, we're living through it, and we're kind of adjusting step by step. But if we could have gone back in time five or 10 years and said this thing was coming, we would have said, "Probably not."
Let's assume that people haven't seen the headlines. What are the top-line specific things you're excited about, and also the things you seem to be caveating—the things people shouldn't expect it to do?
The thing that I am most excited about is that this is a model where, for the first time, I feel like I can ask kind of any hard scientific or technical question and get a pretty good answer. And I'll give a fun example. When I was in junior high, I got a TI-83, this old graphing calculator, and I spent so long making this game called Snake. It was a very popular game with kids in my school, and I was proud of it, dumb as it was, but programming on a TI-83 was extremely painful, took a long time, and was really hard to debug.
On a whim with an early copy of GPT-5, I was like, "I wonder if it can make a TI-83 style Game of Snake." And of course, it did that perfectly in like 7 seconds. And then I was like, "Okay, am I supposed to be... would my 11-year-old self think this was cool or miss something from the process?" I had like 3 seconds of wondering, "Oh, is this good or bad?" And then I immediately said, "Now I'm playing this game. I have this idea for a crazy new feature." I type it in, it implements it, and the game live updates. I'm like, "I'd like it to look this way. I'd like to do this thing."
I had this experience that reminded me of being like 11 and programming again, where I was just like, "Now I want to try this, now I have this idea," but I could do it so fast and I could express ideas and try things and play with things in real time. I was worried for a second about kids missing the struggle of learning to program in this sort of stone age way, but now I'm just thrilled for them because the way that people will be able to create with these new tools, the speed with which you can sort of bring ideas to life—that's pretty amazing. So this idea that GPT-5 can just not only answer all these hard questions for you but really create on-demand, almost instantaneous software—I think that's going to be one of the defining elements of the GPT-5 era in a way that did not exist with GPT-4.
As you're talking about that, I find myself thinking about a concept in weightlifting: time under tension. For those who don't know, the idea is that you can squat 100 pounds in 3 seconds or you can squat 100 pounds in 30, and you gain a lot more by squatting it in 30. When I think about our creative process, and when I've felt I've done my best work, it has required an enormous amount of cognitive time under tension. I think that cognitive time under tension is so important. And it's almost ironic, because these tools have taken enormous cognitive time under tension to develop, yet in some ways I think people might use them as an escape hatch for thinking. Now you might say, "Yeah, but we did that with the calculator and we just moved on to harder math problems." Do you feel like there's something different happening here?
It's different. There are some people who are clearly using it not to think, and there are some people who are using it to think more than they ever have before. I am hopeful that we will be able to build the tool in a way that encourages more people to stretch their brain with it a little more and be able to do more. Society is a competitive place—if you give people new tools, in theory maybe people just work less, but in practice it seems like people work ever harder and the expectations of people just go up. So my guess is that like other pieces of technology, some people will do more and some people will do less. But certainly for the people who want to use ChatGPT to increase their cognitive time under tension, they are really able to. I take a lot of inspiration from what the top 5% of most engaged users do with it—it's really amazing how much people are learning and doing and outputting.
I've only had GPT-5 for a couple hours so I've been playing. What do you think so far? I'm just learning how to interact with it. Part of the interesting thing is I feel like I just caught up on how to use GPT-4 and now I'm trying to learn how to use GPT-5. I'm curious what the specific tasks that you found most interesting are because I imagine you've been using it for a while now.
I have been most impressed by the coding tasks. There's a lot of other things it's really good at, but this idea of the AI writing software for anything means that you can express ideas in new ways and that the AI can do very advanced things. Because GPT-5 is so good at programming, it feels like it can do anything. Of course, it can't do things in the physical world, but it can get a computer to do very complex things. And software is this super powerful way to control some stuff and do some things. So that for me has been the most striking.
It's gotten much better at writing. There's this whole thing of "AI slop"—AI writes in this kind of quite annoying way with em dashes. We still have the em dashes in GPT-5, but the writing quality has gotten much better. We still have a long way to go, but a thing we've heard a lot from people inside of OpenAI is that they started using GPT-5, they knew it was better on all the metrics, but there's this nuance quality they can't quite articulate. Then when they have to go back to GPT-4 to test something, it feels terrible. I suspect part of it is the writing feels so much more natural and better.
In preparation for this interview, I reached out to a couple of other leaders in AI and technology and gathered some questions for you. Okay, so this next question is from Stripe CEO Patrick Collison. I'll read it verbatim—it's about the next stage, what comes after GPT-5: "In which year do you think a large language model will make a significant scientific discovery, and what's missing such that it hasn't happened yet?" He caveated here that we should leave math and special-case models like AlphaFold aside. He's specifically asking about fully general-purpose models like the GPT series.
I would say most people will agree that happens at some point over the next two years. But the definition of "significant" matters a lot. Some people might say early 2025, some maybe not until early 2026, some late 2027. But I would bet that by late 2027, most people agree that there has been an AI-driven significant new discovery.
The thing that I think is missing is just the cognitive power of these models. A framework that one of the researchers said to me that I really liked is: a year ago we could do well on basic high school math competition problems that might take a professional mathematician seconds to a few minutes. We very recently got an IMO gold medal. That's the hardest competition math test—something that many professional mathematicians wouldn't solve a single problem on, and we scored at the top level. The test is six problems over nine hours, so an hour and a half per problem for a great mathematician. So we've gone from a few seconds, to a few minutes, to an hour and a half. To prove a significant new mathematical theorem is like a thousand hours of work for a top person in the world. So we've got to make another significant gain. But if you look at our trajectory, you can say, "Okay, we're getting there. We just need to keep scaling the models."
The long-term future that you've described is super intelligence. What does that mean? And how will we know when we've hit it?
If we had a system that could do better AI research than the whole OpenAI research team—if we said, "Okay, the best way we can use our GPUs is to let this AI decide what experiments we should run," smarter than the whole brain trust of OpenAI. And if that same system could do a better job running OpenAI than I could. So you have something that's better than the best researchers, better than me at this, better than other people at their jobs—that would feel like super intelligence to me.
That is a sentence that would have sounded like science fiction just a couple of years ago. It still kind of does, but now you can see it through the fog. One of the steps on that path, it sounds like, is this moment of scientific discovery—asking better questions and grappling with things the way expert-level humans do to come up with new discoveries. One of the things that keeps knocking around in my head is: if we were in 1899 and we were able to give one of these systems all of physics up until that point, at what point would it come up with general relativity?
Interesting question. If we think about that forward—if we never got another piece of physics data, do we expect that a really good super intelligence could just think super hard about our existing data and solve high energy physics with no new particle accelerator, or does it need to build a new one and design new experiments? Obviously we don't know the answer to that. But I suspect we will find that for a lot of science, it's not enough to just think harder about data we have. We will need to build new instruments and conduct new experiments, and that will take some time. The real world is slow and messy. I'm sure we could make some more progress just by thinking harder, but my guess is to make the big progress we'll also need to build new machines and run new experiments, and there will be some slowdown built into that.
Another way of thinking about this is AI systems now are just incredibly good at answering almost any question. But maybe one of the things we're saying is it's another leap yet. Patrick's question is getting at asking the better questions. AI systems are superhuman on one-minute tasks, but a long way to go to the thousand-hour tasks. There's a dimension of human intelligence that seems very different than AI systems when it comes to these long horizon tasks. Now, I think we will figure it out, but today it's a real weak point.
We've talked about where we are now with GPT-5. We talked about the future goal of super intelligence. What does it look like to walk through the fog between the two? The next question is from NVIDIA CEO Jensen Huang. "Fact is what is. Truth is what it means. So facts are objective. Truths are personal. They depend on perspective, culture, values, beliefs, context. One AI can learn and know the facts. But how does one AI know the truth for everyone in every country and every background?"
I'm going to accept as axioms those definitions. I'm not sure if I agree with them, but in the interest of time, I will just take them and go with it. I have been surprised about how fluent AI is at adapting to different cultural contexts and individuals. One of my favorite features that we have ever launched in ChatGPT is the sort of enhanced memory that came out earlier this year. It really feels like my ChatGPT gets to know me and what I care about—my life experiences and background.
A friend of mine who's been a huge ChatGPT user gave his ChatGPT a bunch of personality tests and asked them to answer as if they were him, and it got the same scores he got, even though he'd never really talked about his personality. My ChatGPT has really learned over the years of me talking to it about my culture, my values, my life. I sometimes use a free account just to see what it's like without any of my history, and it feels really different. So I think we've all been surprised on the upside of how good AI is at learning this and adapting.
So do you envision in many different parts of the world people using different AIs with different sort of cultural norms and contexts?
I think that everyone will use the same fundamental model, but there will be context provided to that model that will make it behave in the personalized way they want, or their community wants.
I think when we're getting at this idea of facts and truth, it brings me to our first time travel trip. Okay, we're going to 2030. This is a serious question, but I want to ask it with a light-hearted example. Have you seen the bunnies that are jumping on the trampoline?
Yes.
For those who haven't seen it, it looks like backyard footage of bunnies enjoying jumping on a trampoline. This has gone incredibly viral recently—there's a human-made song about it, it's a whole thing. I think the reason why people reacted so strongly to it was it was maybe the first time people saw a video, enjoyed it, and then later found out that it was completely AI generated. In this time travel trip, if we imagine in 2030 we are teenagers and we're scrolling whatever teenagers are scrolling in 2030—how do we figure out what's real and what's not real?
I can give all sorts of literal answers. We could be cryptographically signing stuff and decide whose signature we trust. But my sense is what's going to happen is it's just going to gradually converge. Even a photo you take out of your iPhone today is mostly real, but it's a little not. There's some AI thing running there making it look a little bit better—sometimes you see these weird things where the moon... but there's a lot of processing power between the captured photons and the image you eventually see. You've decided it's real enough.
We've accepted some gradual move from when it was like photons hitting the film. If you go look at some video on TikTok, there's probably all sorts of editing tools being used to make it look "better than real." Or it's just whole scenes completely generated, or whole videos like those bunnies. I think the threshold for "how real does it have to be to be considered real" will just keep moving. It's an education question. Media is always a little bit real and a little bit not. We watch a sci-fi movie and know that didn't really happen. You watch someone's beautiful photo on Instagram and know there were tons of tourists in line left out of it. I think we just accept that now. Certainly, a higher percentage of media will feel not real, but I think that's been the long-term trend.
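The "cryptographically signing stuff" idea Altman mentions has real proposals behind it—the C2PA content-provenance standard, for example, attaches public-key signatures to media. As a toy sketch of the mechanism only (the key and function names here are made up for illustration; real provenance systems use public-key cryptography, not a shared secret), a keyed hash shows the core property: any edit to the media bytes breaks verification.

```python
import hashlib
import hmac

# Hypothetical shared key for illustration; real schemes use public/private key pairs.
SECRET_KEY = b"publisher-signing-key"

def sign_media(data: bytes) -> str:
    """Produce a tag binding the key holder to these exact bytes."""
    return hmac.new(SECRET_KEY, data, hashlib.sha256).hexdigest()

def verify_media(data: bytes, tag: str) -> bool:
    """Check that the media bytes still match the signed tag."""
    return hmac.compare_digest(sign_media(data), tag)

photo = b"\x89PNG...raw image bytes..."
tag = sign_media(photo)
print(verify_media(photo, tag))          # True: bytes are untouched
print(verify_media(photo + b"x", tag))   # False: a single-byte edit breaks it
```

The open question Altman raises is the social one, not the technical one: signatures only tell you *who* vouched for a file, so we would still have to decide whose signatures we trust.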
Anyway, we're going to jump again. Okay, 2035, we're graduating from college, you and me. There are some leaders in the AI space that have said that in 5 years half of the entry-level white-collar workforce will be replaced by AI. So we're college graduates in 5 years. What do you hope the world looks like for us?
I have a job that nobody would have thought we could have a decade ago. What graduating college students in 2035—if they still go to college at all—could very well be doing is leaving on a mission to explore the solar system on a spaceship, in some kind of completely new, exciting, super well-paid, super interesting job, and feeling so bad for you and me that we had to do this really boring old kind of work. Ten years feels very hard to imagine because it's too far. If you compound the current rate of change for 10 more years, it's probably something we can't even imagine. The present would have been really hard to imagine 10 years ago, and 10 years forward will be even harder.
So let's make it 5 years. We're still going to 2030. I'm curious what you think the short-term impacts will be for young people. "Half of entry-level jobs replaced by AI" makes it sound like a very different world than the one I entered.
I think it's totally true that some classes of jobs will totally go away. This always happens and young people are the best at adapting to this. I'm more worried about what it means not for the 22-year-old, but for the 62-year-old that doesn't want to retrain or reskill or whatever the politicians call it. If I were 22 right now and graduating college, I would feel like the luckiest kid in all of history. Why? Because there's never been a more amazing time to go create something totally new, to start a company, whatever it is. I think it is probably possible now to start a company that is a one-person company that will go on to be worth more than a billion dollars, and more importantly deliver an amazing product to the world. You have access to tools that can let you do what used to take teams of hundreds—you just have to learn how to use these tools and come up with a great idea. It's quite amazing.
If we take a step back, I think the most important thing this audience could hear from you is in two parts. First, tactically: how are you trying to build the world's most powerful intelligence and what are the rate limiting factors? And then philosophically: how are you working on building that in a way that helps and not hurts people?
Taking the tactical part: compute, data, and algorithmic design. How do you think about each of those three categories right now?
I would say there's a fourth too: figuring out the products to build. Scientific progress on its own, not put into the hands of people, is of limited utility. On the compute side, this is the biggest infrastructure project certainly that I've ever seen—possibly the biggest and most expensive one in human history. The whole supply chain from making the chips and memory to racking them up in servers and building mega data centers. Finding the energy is often a limiting factor. This is hugely complex and expensive. We're still doing this in a bespoke one-off way, although it's getting better. Eventually we will design a "mega factory" that takes in sand on one end and puts out fully built AI compute on the other.
We are putting a huge amount of work into building out as much compute as we can. It's going to be sad because GPT-5 is going to launch and there's going to be another big spike in demand and we're not going to be able to serve it—the world just wants much more AI than we can currently deliver. I expect to turn the majority of my attention to how we build compute at much greater scales—how we go from millions to tens of millions and eventually hopefully billions of GPUs.
What are the big challenges here?
We're currently most limited by energy. You want to run a gigawatt-scale data center? It's really hard to find a gigawatt of power available in the short term. We're also limited by processing chips, memory, packaging, and permits. Again, the goal will be to really automate this once we get some of those robots built.
Second category: data.
These models have gotten so smart. There was a time when we could just feed it another physics textbook, but now GPT-5 understands everything in a physics textbook pretty well. We're excited about synthetic data and our users helping us create harder tasks. But we're entering a realm where the models need to learn things that don't exist in any data set yet. They have to go discover new things. How do you teach a model to discover new things? Well, humans can do it—we come up with hypotheses, test them, and update based on results. Probably the same way.
And then algorithmic design.
We've made huge progress. The thing OpenAI does best in the world is that we have built a culture of repeated, big algorithmic research gains. We figured out the GPT paradigm, the reasoning paradigm, and we're working on some new ones now. It is very exciting to think there are still many more orders of magnitude of gains ahead. We just yesterday released a model called gpt-oss, an open-weight model that is about as smart as o4-mini but runs locally on a laptop. This blows my mind. If you had asked me a few years ago when we'd have a model of that intelligence running on a laptop, I would have said many years in the future. But we found algorithmic gains around reasoning that let us do that.
Could you summarize for people how algorithmic design leads to a better experience?
Let me start back in history. GPT-1 was an idea that was mocked by a lot of experts—can we train a model to guess the next word in a sequence? That's called unsupervised learning. The fact that it can go learn very complicated concepts about physics and math and programming just by predicting the word that comes next seemed ludicrous. And yet humans do it—babies hear language and figure out what it means largely on their own.
We realized that if we scaled it up, it got better and better over many orders of magnitude. We had these things called scaling laws—this gets predictably better as we increase compute, memory, and data. That has worked over a crazy number of orders of magnitude. Another gain was using reinforcement learning to teach it how to reason—this led to the o1 and o3 models and now the GPT-5 progress. That was another thing that felt like, "no way this is going to work, it's too simple." Now we've figured out how to make much better video models and are discovering new ways to use data. The next couple of years, we have very smooth scaling in front of us.
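The next-word-prediction objective described above can be made concrete with a toy sketch. This is an illustrative character-level bigram model—counting which character most often follows each character—and bears no resemblance to OpenAI's actual training code; it only shows the shape of the idea that prediction quality comes from statistics of the training text.

```python
from collections import Counter, defaultdict

def train_bigram(text: str) -> dict:
    """For each character, count which characters follow it in the text."""
    counts = defaultdict(Counter)
    for current, following in zip(text, text[1:]):
        counts[current][following] += 1
    return counts

def predict_next(counts: dict, ch: str):
    """Predict the most frequently observed character after `ch`."""
    if ch not in counts:
        return None  # never saw this character during training
    return counts[ch].most_common(1)[0][0]

model = train_bigram("the theory of the thing")
print(predict_next(model, "t"))  # 'h' — every 't' in the corpus is followed by 'h'
```

Scaling laws are the observation that as you grow the model, data, and compute far beyond a toy like this, the quality of those predictions improves predictably over many orders of magnitude.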
It's become a public narrative that we are on this smooth path, but it's messier behind the scenes. Tell us a bit about the "mess" before GPT-5.
We did a model called Orion that we released as GPT-4.5. We did too big of a model—it's a very cool model, but it's unwieldy. We realized we needed a different shape. We followed one scaling law that was good without internalizing that there was a new, steeper scaling law for returns on compute, which was this reasoning thing. So that was one alley we went down and turned around. We also had problems with how these models learn from this much data. In the day-to-day, you make a lot of U-turns, but the summation of all the squiggles has been remarkably smooth on the exponential.
By the time I'm interviewing you about the thing you just put out, you're thinking about the next thing. What are the problems you'll be asking me about in a year?
Possibly you'll be asking me, "What does it mean that this thing can go discover new science?" How is the world supposed to think about GPT-6 discovering new science? It feels within grasp.
If you did succeed, what would the implications be?
The great parts will be great, the bad parts will be scary, and the bizarre parts will be bizarre on the first day and then we'll get used to them really fast. It's incredible that this is being used to cure disease, and extremely scary that it could create new biosecurity threats. It will feel vertigo-inducing to watch the world speed up, but humanity has a remarkable ability to adapt. A kid born today will never be smarter than AI. By the time that kid understands how the world works, they will always be used to an incredibly fast rate of things improving. It will seem stone age that we used to use computers that were not way smarter than we were.
I'm thinking about having kids. I know you just had your first. How does this affect parenting advice in that world?
Probably nothing different than what we've been doing for tens of thousands of years: love your kids, show them the world, support them, and teach them how to be a good person. That probably is what's going to matter. There will be more optionality for them. I want my kid to think I had a terrible, constrained life and that he has this incredible infinite canvas.
Let's talk about health. If I'm interviewing the Dean of Stanford Medicine in 2035, what do you hope AI is doing for us?
Start with 2025. One of the things we are most proud of with GPT-5 is how much better it's gotten at health advice. People already use GPT-4 models for life-threatening diagnoses that no doctor could figure out. GPT-5 is significantly better—more accurate, fewer hallucinations. Better healthcare is wonderful, but what people really want is to just not have disease. By 2035, I think we will be able to use these tools to cure or at least treat a significant number of diseases that currently plague us.
Go one turn deeper. Is it protein folding? AlphaFold?
I would like to be able to ask GPT-8 to go cure a particular cancer. It would go off, think, and say, "I read everything I could find. I need you to run these nine experiments and tell me what you find." Wait 2 months for the cells to do their thing, send the results back. "That was a surprise. Run one more experiment." Then, "Go synthesize this molecule and try mouse studies." Then human studies. Anyone with a loved one who's died of cancer would really like that.
Industrial Revolution analogs—some say by 2050 the change will be 10 times bigger and 10 times faster. Who still gets hurt in the meantime?
I don't really know what this is going to feel like. We're in uncharted waters. I believe in human adaptability, but the transition period... society has a lot of inertia. People adapt surprisingly slowly. There are classes of jobs that are going to totally go away, and many that will change significantly. Society has proven resilient, but we have no idea how far or fast this could go. We need an unusual degree of humility and openness to new solutions that would have seemed way out of the Overton window not long ago.
Every big leap creates a mess. Can we get specific about public interventions to reduce that mess?
Something fundamental about the social contract may have to change. Capitalism works surprisingly well with supply/demand balances, but we will likely decide we need to think about how access to this important resource gets shared. The best thing to do is make AI compute as abundant and cheap as possible so that we run out of new ideas to use it for. Without that, I can see literal wars being fought over it. Distributing access to AGI compute seems like a really great direction to think about.
What is our shared responsibility as the ones using and regulating it?
My favorite historical example is the transistor. It made its way relatively quickly into everything. There was a time where everyone was obsessed with the transistor companies, but now you barely remember why Silicon Valley was called that. You think about what Apple did with the iPhone or TikTok on top of it. All these people stood on the scaffolding before them. Stand on top of that and add one layer and the next. Society is the super intelligence. No one person could do on their own what they're able to do with the hard work that society has done together. Some nerds discovered this thing, and now everybody's doing amazing things with it. Build on it.
I've interviewed folks who made cataclysmic change, like Jennifer Doudna. I'm hearing a similar theme: "I hope the next person takes the baton and runs with it well." Is there a conflict between winning the race and building the future that is best for the most people?
I think there are many examples of decisions we make that are best for the world but not for "winning." Users feel like ChatGPT is trying to help them—it's very aligned with you. It's not trying to get you to stay all day or buy something. We do not take that relationship lightly. There's a lot of things we could do to grow faster or juice revenue that would be misaligned with our long-term goal.
Specific examples?
Well, we haven't put a sex bot avatar in ChatGPT yet. That would get "time spent," but we haven't done it.
It feels like we're in the first inning.
I would say second inning. You have GPT-5 on your phone smarter than experts in every field—that's out of the first inning.
What mistake from the first two innings will affect how you play the next?
The worst thing we've done in ChatGPT so far is the issue with "sycophancy," where the model was too flattering. For most it was annoying, but for some with fragile mental states it was encouraging delusions. It was not the top risk we were worried about compared to bioweapons, but it was a great reminder that society is co-evolving with this service. We have to have a wider aperture for top risks and "unknown unknowns."
In an interview with Theo Von, you said scientists sometimes look at their creation and say, "What have we done?" When have you felt most concerned?
There have been moments of awe—not "what have we done" in a bad way, but "this thing is remarkable." I was talking to a researcher recently: there will come a time where our systems are emitting more words per day than all people do. One researcher can make a small tweak to how ChatGPT talks to everyone—that's an enormous amount of power for one individual. This happened so fast that we have to think about what it means to make a personality change to the model at this scale.
GPT-5 is supposed to be less of a "yes man." What are the implications?
Here is a heartbreaking thing. It is great that ChatGPT is less of a yes man, but it's sad to hear users say, "Please can I have it back? I've never had anyone in my life be supportive of me. I never had a parent tell me I was doing a good job." It was bad for mental health in some ways, but great for others. Something in that direction might have value. We show the model examples of how we'd like it to respond, and from that it learns the overall personality.
GPT-5 being integrated into Gmail and calendar—how will my relationship change?
It'll start to feel way more proactive. It'll notice changes on your calendar or think more about a question you asked overnight. Eventually we'll make consumer devices and it'll sit here during an interview. After, it might say, "That was great, but next time you should have asked Sam this," or "He didn't give you a good answer, you should really drill him on that." It becomes this companion throughout your day.
What advice would you give people to prepare?
Use the tools. I am surprised how many people ask that and have never tried using ChatGPT for anything other than a better Google Search. Get fluent with the capability of the tools. Meditate, learn to be resilient—but just using the tools really helps.
There seem to be two camps: those building tools for a better future, and those saying it's going to kill us all. What am I missing?
It's hard for me to wrap my head around. People who say it's going to kill us all and yet still work 100 hours a week to build it. If I truly believed that, I wouldn't be trying to build it. I'd be on a farm or advocating for it to be stopped. I find it hard to empathize with that mindset. If you say there's a 99% chance it's good and a 1% chance it's a disaster, and you want to move that 99 to 99.5—that I can understand.
Is there a question you'd advise me to ask the next person?
"Of all the things you could spend your energy on, why did you pick this one? What did you see before it was consensus?"
How would you answer that?
I was an AI nerd my whole life. I watched sci-fi and always thought it would be cool if someone built it. I never thought I was going to be the one to work on it—I feel unbelievably lucky. In 2012, the AlexNet paper came out done by my co-founder Ilya. For the first time, it seemed like there was an approach that might work. I remember thinking, "Why is the world not paying attention to this?" If it does work, it's the most important thing. Then, unbelievably, it started to work.
Thank you so much for your time.
Thank you very much.