I think it's extremely clear that we are going to have AGI within the next couple years in a way that is still going to be jagged, but that the floor of task will just be almost for any intellectual task of how you use your computer. The AI will be able to do that. The scariest moment at OpenAI was after we launched ChatGPT and I remember being at the holiday party and just feeling this vibe of we won and I have never felt that. I was like, "No, that we are the underdog and we always have been."
From the moment we launched ChatGPT, I remember talking with my team having this exact conversation where they said, "How much compute should we buy?" I said, "All of it." I said, "No, no, no. Really, how much compute should we buy?" I said, "No matter how much we try to build, I know we're not going to be able to keep up with the demand."
OpenAI co-founder and president Greg Brockman joins us to talk about AI's most promising opportunities, how OpenAI plans to capitalize on them, and what the Super App is all about. And Greg is with us here in studio today. Greg, great to see you.
Thank you for having me.
Well, we're speaking at a time where OpenAI is shutting down video generation and focusing its energies on a super app which is going to combine business and coding use cases. And I think from the outside those of us watching this are—including myself—OpenAI is winning in consumer and now it's shifting its resources. What is happening?
Well, the way I would think about this is that we have been in a world where we're developing this technology, deep learning, to really see can it have the positive impact that we have always pictured. Can it be used to build applications that help people, that help them in their lives? And we've separately had an arm that's saying, "Let's try to deploy this technology," whether that's to help sustain the business, to start getting some practice with getting real-world impact—those kinds of things for the time when this technology comes to fruition, that it becomes the everything that we've imagined that we started this company to try to have.
And I think that we're at a moment now where we've really seen this technology—it's going to work. And that we're moving out of testing on benchmarks and sort of these almost cerebral demonstrations of capability to it being the case that for us to develop it further, we need to see it in the real world and get feedback from how people are using it in knowledge work in various applications. And so the way I'd think about it is that this is a bigger strategic shift because of the phase of the technology. It's not so much that we're saying we're moving from consumer to B2B; it's really what we're saying is that what are the most important applications that we can focus on because we can't focus on everything, right? But what are the things that we can bring to life that will synergize together as we build them and that will deliver meaningful impact and help elevate everyone.
And when we look at the list, so there's consumer—you can think of it as many things—but there's a personal assistant, right? Something that knows you, that's aligned with your goals, that's going to help you achieve whatever it is that you want in your life. There's also creative expression and entertainment and many other applications. On the business side, maybe you can—if you zoom out—it looks more like one thing of just you have a hard task, can AI go do it? Does it have all the context to do all these things? And for us, it's very clear that the stack rank includes two things at the top: one is the personal assistant, the other is the AI that can go and solve hard problems for you.
And when we look at the compute we have, we are not even going to have enough compute to fund those two things. And then once we start adding in many other applications, many other things that AI is going to be very useful for and is going to help people with, we just can't possibly get to all of them. And so I think that this is a recognition of the maturation of the technology and the incredible impact it's going to have very quickly, and our need to prioritize and to pick the set of applications that we want to shine and to really bring to the world.
And when I've heard you talk about OpenAI's various bets, one of the ways that you described it is that OpenAI can be a version of Disney—or like Disney—where you have this core compelling advantage at the center and then you farm it out in different ways. So Disney has Mickey Mouse, and then it can do the movies and the theme park and Disney Plus. And for OpenAI, it's the model, and you can do video generation and be this assistant and then help with enterprise and work. So, is it no longer possible then to have that sort of central advantage and then be able to farm it out in all sorts of ways? Like, have you decided, have you come to this realization that like it's time to pick or choose?
Well, I think that in some ways that story is even more true than it's been. But the thing that's important to realize is technologically that the Sora models—which are incredible models, by the way—are a different branch of the tech tree than the core reasoning GPT series. They're just built in a very different way and to some extent we're really saying that pursuing both branches is very hard for us to do for these applications.
Now we are continuing the Sora research program in the context of robotics, which I think is very clearly going to be a transformative application which is still a little bit in the research phase; robotics is not really yet mature and deployed in the way that we're going to see this real takeoff of this technology in knowledge work over the next year. And so it's a recognition of, for this moment, we really need to put the primary focus on developing the GPT series. And that doesn't just mean text. It doesn't just mean cerebral things—for example, bidirectional communication, having a great speech-to-speech interface. That is something that also is going to make this technology very usable and very useful, but it's not a different branch of the tech tree.
It's all kind of one model and we just sort of tweak that in slightly different ways kind of like you describe. And so I think there's something about if you branch too far and you have two different artifacts, that is very hard to sustain in a world where there is limited compute. And the reason there's limited compute is because there's so much demand. There's so much people want to do with every single model that we create.
Okay. So talk a little bit then about why your bet is not on this—it seems like—world model version where the video understands where things go. It's obviously useful for robotics. Why is your bet on the GPT reasoning model tree as opposed to this area where you had been seeing real progress with Sora? To see the progress of video generation 1, 2, 3 was enormous, so why is your bet where it is?
So the problem in this field is too much opportunity, right? It's the thing that we observed very early on in OpenAI is that everything we could imagine works. Now, there's different levels of friction associated with it, different amounts of engineering effort, different compute requirements—all those things—but every single different idea, as long as it's kind of mathematically sound, you can start getting some pretty good results. And I think that shows you the power of the underlying technology of deep learning: the ability to really take any sort of problem and to get to the meat of it, to have an AI that really understands the underlying rules that generated the data.
So it's not about the data itself; it's about understanding the underlying process and being able to apply it to new contexts. So you can do that in world models, you can do that in scientific discovery, you can do that in coding. And I think that where we are as we think about the rollout of this technology is, again, that there's been this debate of "How far will the text models go? How far can text intelligence go? Can you have a real conception of how the world operates?" And I think that we have definitively answered that question of: it is going to go to AGI. Like, we see line of sight.
And at this point we have line of sight to these much better models that are coming this year. And the amount of pain within OpenAI that we've had to decide how to allocate compute—that goes up, not down, over time. And so I think that maybe the core of it is that it's about sequencing and timing, and that in this moment the kinds of applications that we've always dreamed of are starting to come into reach. Like, for example, solving unsolved physics problems, right? We had this result recently where a physicist had been working on a problem for some time. He gave it to our model. 12 hours later we have a solution.
And he said this is the first time he's seen a model where he felt like it was thinking—that it felt like this is a problem that maybe humanity would never solve and our AI solved it. But you see something like that—you have to double down. You have to triple down because we can really unlock all of this potential for humanity. And so I think for me it's not about relative importance of these things. It's more about what is OpenAI's mission of delivering AGI to the world, our vision of how it can benefit everyone, and the fact that we have a tech tree that we see how to just push it, how to do the engineering, do the further science and research to then have that come to fruition.
Okay. Okay, so I do want to come back to the next line of models that you're anticipating, but I want to press you on this for a moment. I was speaking with Demis Hassabis from Google DeepMind earlier this year. And interestingly, he said that the thing that feels closest to AGI for him was Nano Banana, the image generator that they have. And the reason is because for an image generator or a video generator to create the images and the videos that it makes, it does have to understand the interaction between objects and have at least some conception of how the world works. So is this a potential—it's a big bet—but does OpenAI potentially miss something by doubling down on the other tree if that's the case?
So two answers. One is absolutely. Right? In this field you do have to make choices; you have to make a bet. And that's where OpenAI started—we really said, "What is the path to AGI that we believe in?" and really focused hard on that. The sum of random vectors is zero, but if you align your vectors, then you can go in a direction. But the second point is that image generation is something that has been very popular within ChatGPT and that's something we're continuing to invest in, continuing to prioritize. And the reason we're able to do that is because it's not on the world model/diffusion model tech branch; it's based on the GPT architecture.
And so there, even though it's a different data distribution, the actual core technology at the core stack—it's all one thing. And that is the pretty wild thing about what AGI is: that sometimes these very different looking applications—between speech-to-speech, image generation, text (and text is, by the way, itself many-faceted of like science and coding and personal wellness information)—all of that you can do in one technological envelope. And so a lot of what I'm looking at, and what we as a company are looking at from a technological perspective, is how to have as much unification of our efforts because we really see this technology as being something that's going to uplift and power the whole economy. The whole economy is a massive thing, and so we can't possibly do all of it, but we can do our part—that's the "general" part in Artificial General Intelligence.
That's the G.
It really is.
Speaking of unifying things, what is this Super App going to be? So, the way I think about the Super App is: it's going to bring together coding, browser, and ChatGPT.
That's right. So, what we want is to build an endpoint application for you that really lets you experience the power of AGI—the generality. If you think about what Chat is today, I think Chat is really going to become your personal assistant, your personal AGI—an AI that's looking out for you, that knows a lot about you, that's aligned with your goals, that's trustworthy, that kind of represents you in this digital world. You can think of it as—right now it's been a tool that we built for software engineers, but it's becoming Codex for everyone—that anyone who wants to can use Codex to get the computer to go do the thing that they want.
And it's not just about the actual software anymore. It's really about almost the use of computer, whether it's to set up—like I use it to set settings on my laptop. I forget how to set up the hot corners; I ask Codex to do it. It just does it, right? That's what computers were always supposed to be: contort to the human rather than me contort to them. And so imagine one application that anything you want your computer to do, you can ask it. And so there's computer use browsing built in for an AI to be able to use a web browser, and for you to be able to oversee what the AI is doing; that all of your conversations, regardless of application—whether it's for Chat or whether it's for code, whether it's for general knowledge work—that's all unified in one way that the AI has memory, knows about you.
So that is what we are building. But it's really an iceberg because that's the tip; what to me is much more important is the technological unification. And we talked about it a little bit in the case of the underlying models. But the thing that's really changed over the past couple years has been that it's no longer just about the model. It's about the harness. It's about: "How does the model get context? How is it connected to the world? What actions can it take? As you get new context, how does the loop of interacting with the model work?" All of that was something that we had multiple implementations of that were slightly different, and we're converging it.
We're going to have one version of that and almost end up with this AI layer that can be pointed at specific applications in a very thin way. So you can build a little plugin, a little skill, a little UI if you really want something that's great for finance, if you want something that's great for legal—but you generally won't have to because this one Super App will be very broad.
This app is for business use cases, personal use cases?
So both. And that is really the core: that just like a computer, like your laptop—is it for personal, is it for business?
Both.
Both. And it's for you. It's your personal machine that gives you an interface to this digital world, and that's what we want to build.
So just talk a little bit about, from a non-business standpoint: I'm using the Super App in my personal life. What am I using it for? How does my life change?
So I would think of it as: personal life, just the way that you use ChatGPT, right? How do you use ChatGPT right now? And people use it for such a diversity of really amazing applications. Sometimes that's just asking for, "I'm going to give a speech at a wedding. Can you help me with drafting it? Can you give me some feedback on this idea that I have? I'm working on a small business, can you give me some ideas there?" (which maybe starts the bridge between personal and work). Any of those questions should be things that you can go to the Super App for and it answers.
But if you think about what ChatGPT has been, it's already been evolving. It used to not have any memory, right? It's just the same AI for everyone starting from scratch—it's almost like talking to a stranger. It's way more powerful if it remembers. It remembers the interactions you've had. It's way more powerful if it has access to context, right? That if it's hooked up to your email and to your calendar and really knows your preferences and has this almost deeper set of past experiences with you that it's able to leverage to achieve your goals.
You look at things like Pulse—it's a feature in ChatGPT right now where every day it surfaces for you things that you might be interested in based on what ChatGPT knows about you. So I'd say that in the personal capacity, that the Super App will be doing all of that and will be doing it in a much deeper and richer way.
When do you plan to ship it?
So the way to think about it is we're taking incremental steps to get there. Over the next couple months, we should have shipped the complete vision of what we're talking about here. But it's going to come in pieces. And the place that we're starting is with, for example, the Codex app today—which is really two things in one: it's a general agent harness that can use tools, and it's also an agent that knows how to write software.
That general agent harness can be used for so many different things. You hook it up to spreadsheets, you hook it up to Word documents; it's able to help you with knowledge work. And so we're going to make the Codex app just so much more usable for general knowledge work because we've already seen within OpenAI all this organic adoption of people using it for that. So that'll be the first step, and there are many to come.
I was speaking with one of your colleagues yesterday taking a look at Codex and he mentioned that someone using Codex had instructed Codex to help him with video editing. It built a plugin for Adobe Premiere, started separating it into chapters, and started the edit.
That's what we're looking at. I love hearing that. That's exactly the kinds of things that we want this system to be useful for. And it's been really interesting seeing like the Codex app itself was originally built for software engineers, and the current usability of it for non-software engineers is quite low because there's a bunch of little things where when you set things up you run into some error that a developer knows what it means, knows how to fix it—it's just kind of what we're used to.
But if you're not a developer, you're like, "What is this?" This is not something that I've encountered before. And despite that, we are seeing people start to use this who have never programmed before to be able to build websites, to be able to do exactly the kinds of things you said: to be able to automate different interactions with different pieces of software to be able to get lots of leverage. Someone on our communications team uses it—it's hooked up to Slack and to their email—they're able to go through a bunch of feedback and be able to synthesize it very well. So these kinds of tasks—people who are very motivated can jump through the hoops and then get great return from it.
So to some extent we did the super hard part: an AI that is really smart, capable, can accomplish your task. Now we have to do the much easier part in some sense: make it broadly useful and to remove these barriers to entry.
And just looking at the competitive landscape—I mean, Anthropic, they have the Claude app. You can use Claude the chatbot, Claude Co-work, Claude Code—so they have a version of a Super App of their own. I'm curious what you think Anthropic saw that got them to this position earlier, and what do you think your chances are of catching up there?
Well, I think that if you rewind 12, 18 months, we have always been focused on coding as a domain. We always had the best numbers on different programming competitions—these very cerebral things. But the thing that we didn't invest in as much was that last mile of usability, of really trying to think about, "Okay, this AI is so smart. It can solve all these great programming competitions, but it's never seen someone's real-world codebase, which is messy and not quite as pristine as the world that it has experienced."
And I think that is something that we were behind on. But about maybe mid last year is when we got very serious about that and we had a team very focused on: "What are all the gaps? What are all the kinds of messiness of the real world we haven't encountered? How do we get training data? How do we build training environments that let the AI experience what it's like to do software engineering, be interrupted in weird ways—all those things?" And I'd say at this point we are caught up. When people go head-to-head for us versus competitors, people tend to prefer us. We're diving in front end; we're going to fix that.
But this is the general motion that we've been taking: to say that usability of thinking about the product end-to-end—not just a model and then build a separate thing, right?—really think of it as one product. When we're doing the research, we're thinking about how it will be used. That has been a motion that we've been changing within OpenAI. And so I think that the way I would look at it is that we have incredible step-up models coming like this whole year. I look at the roadmap; it's truly inspiring what will be possible. And then we've been really focusing now on: "Let's also get the last mile usability."
Since 2022, OpenAI has been like the undisputed leader and obviously now the competition is intense. You just used the phrase "we're caught up." Is there a different vibe within the company whereas like now, instead of the one that's like far ahead on something like ChatGPT, you're in a real fight? You're seeing it come out of some of the reporting on what's happening within the company—the fact that there are no more "side quests" at OpenAI, it's all focus on this. How's the environment or the vibe changed here?
Well, I would say that for me personally, yeah, the scariest moment at OpenAI was after we launched ChatGPT. I remember being at the holiday party and just feeling this vibe of "we won," and I have never felt that. I was like, "No, that we are the underdog and we always have been." The competitors in this space—established companies that have just sort of much more capital, much more human resources, data, the whole thing. Why is OpenAI able to compete at all? And to some extent the answer is only because we never feel complacent, right? We always feel like we are the challenger, and it for me has been a very healthy thing to see us start to see that in the marketplace, to see other competitors emerge and do a good job.
And that is—in my mind, you can never fixate on your competitors. If you focus on where they are, then you'll be where they are and they'll already have moved. I think that that's what's been happening in the other direction, right? A lot of people have been focused on exactly where we are, and we get to move. And I think that it almost gives us this alignment, this unification of the company. And I kind of described how we almost thought of research and deployment as separate things, and now we really want to integrate them. Like, that to me is such a wonderful thing.
And so I'd say that the world that we're in is one where I've never felt like we were—you're never as good as they say you are, you're never as bad as they say you are. I think it's just been very steady. And the core of the model production—that is something where I feel extremely confident in our roadmap, the research investments we've made. And I think on the product side, we have such great energy that's all coming together to deliver this to the world.
You foreshadowed a couple times already that you have some good models on the way. What is Spud? The information said that you finished pre-training Spud and Sam Altman, the CEO at OpenAI, has told the staff that they should expect to have a very strong model in a few weeks. This was a few weeks ago, and the team believes it can really accelerate the economy and things are moving faster than many of us expected. So what's a "good model"?
I think that it's really not about any one model. The way that our development process works is: you have pre-training, so you produce a new base model that then is the foundation that we build further improvements on top of. And that is always a huge effort across many people in the company. And that's where I've been spending most of my efforts over the past 18 months: really focused on our GPU infrastructure, on supporting the teams that do all of the training frameworks to scale up at these big runs.
But then there's a reinforcement learning process: you take this AI that has learned lots of things about the world and it applies that knowledge, and then we do a post-training process where you really say, "Okay, now you know how to solve problems. You practice it in all these different contexts and then here's kind of the last mile of behavior and usability." So I think of Spud as a new base, as a new pre-train, and we have had—I'd say it's like we have maybe two years' worth of research that is coming to fruition in this model. It's going to be very exciting.
And I think that the way that the world will experience it is just improved capabilities. For me, it's never about any one release because as soon as we have this one release, it'll be an early version of what we have coming; we'll do much more of each of these steps of the improvement process. And so I think that where we're going is almost just we have this engine of progress that just moves faster and faster, and that Spud is just one step along the way.
So what do you think it'll be able to do that today's models can't?
I think it's going to be able to solve both much harder problems and I think it will be much more nuanced. It'll understand instructions better. It'll understand the context much better. There's this thing called "big model smell" that people talk about where it's just like—there's something about when these models are just much smarter, much more capable, that they bend to you much more and you feel it. When you ask a question and the AI doesn't quite get it, it's always so disappointing, right? We have to like explain and you're just like, "You really should be able to figure this out."
And so I would just think of it as, in some ways, qualitatively there will be—but quantitatively lots of shifts. Qualitatively there will just be new things where you would be frustrated before, you never use an AI for it, and now you just use it without thinking very much. And I think that that is what we're going to see across the board. I'm super excited to see how it raises the ceiling. We've already seen these physics applications, things like that. And I think we will be able to just solve way more open-ended problems, way longer time horizons. And then also very excited to see how it raises the floor where just for anything you want to do, it's just so much more useful for you.
It can be kind of tough for everyday users to really feel the change. Like there was talk about a lot of buildup before GPT-5 came out, and then it came out and the initial reaction was somewhat disappointment among the public, but then I think people realize that for certain tasks it was really good. With these next series of models, do you expect that it'll really be felt sort of in the trenches in certain occupations, or do you think it will be a broadly tangible improvement for everyone?
I think that it will be a similar story where, when you release it, there will be people who will try it and be like, "This is night and day different than anything I've seen." And then there will be some applications where we weren't necessarily intelligence-bottlenecked, and so if you have a model that's more intelligent, maybe you won't feel it right there. But I think over time that you will feel it because the fundamental thing that shifts is how much do you rely on the system.
If you think about the way we all interact with AI, we have some mental model for what we think it can do, and that mental model shifts fairly slowly, right? As you get more experience, it does something magical for you and you're like, "Oh, wow. It can do that. I never imagined that." And we see this, for example, in applications like access to health information. I have a friend who used ChatGPT to understand different treatments for his cancer. He was told by doctors that he was terminal, that there was nothing they could do for him. He used ChatGPT to research a bunch of different ideas and he was able to get treatment that way.
And that's something where you need to have some level of belief that the AI is going to be helpful in that application for you to really put in the effort to get something out of the machine. And I think what we're going to see is that for any application like that, it's going to become so much more evident to everyone that the AI can help you. And so I think it's a little bit of the technology getting better, but it's also our understanding of the technology shifting and catching up to that.
And you'll be relying on it more inside OpenAI. You have an automated AI researcher in the works. It's supposed to come out this fall. What is that?
So the direction of travel right now—we are in this early phase of takeoff of this technology. What does takeoff mean? Takeoff is as the AI gets better and better on this exponential, and in part because we can use the AI to make the AI better, so our development process speeds up. But I also think when I think of takeoff it's also about real-world impact. In some ways, every technology is an S-curve, or if you zoom out, a sum of S-curves that end up being an exponential.
And I think that's what we're encountering right now. So the technology development is moving with increasing speed and it's this engine that's picking up momentum. But it's also in the world: there's all of these tailwinds because there's chip developers that are getting more resourcing into their programs. There's this economy of people who are building on top of it, trying to figure out how it fits into every different application. And all of that energy is just accumulating more and more into this takeoff phase of the AI becoming just a kind of sideshow to being the main driver of economic growth.
And I think that is something that—it's not just about what we're doing in these walls. It's about how the whole world, the whole economy comes together in order to push forward this technology and its usefulness together.
And the researcher will then... what will it do exactly?
Well, so the researcher will be a moment where the AI—which we're building that right now—it's taking a larger percentage of tasks that we should be able to let it run autonomously. And I think there's a lot of thought that goes into what that means, and that it doesn't necessarily mean that we just let it off on its own and then come back later and see if it does something good. I think that we are going to be very involved in managing it. Just like right now if you have a junior researcher: if you leave them on their own too long, they're probably going to go down a path that's not very useful.
But if you have a senior researcher or someone who has a vision, they don't even necessarily need to know the mechanical skills. They will be able to provide feedback, review the plots that the intern's producing, and provide direction in terms of the vision of what is it that I want you to accomplish. And so I think of this as a system that we're going to build that will massively accelerate our ability to produce models, to make new research breakthroughs happen, to be able to make these models more useful and usable in the real world, and to do that at increasing speed.
So sorry, what's it going to do? Are you going to say, "Go find AGI," and it will just try to...
I think the way I think of it is something like that, to first order. Okay. And at a practical level, I think I would view it as taking the full end-to-end of what one of our research scientists does and be able to do that in silicon.
Another way to think about takeoff is that progress in AI goes from incremental to gathering momentum and then sort of this unstoppable march to an intelligence that's smarter than humans. Do you worry that just as there's possibilities for things to go right on that front, there's also possibilities for that process to go wrong?
I think that's absolutely yes. I think that the way to get the benefits of this technology is also to really think about the risks. And if you look at how we've approached technology development from a technical perspective, we invest a lot in safety and security. A good example of this is prompt injections, right? If you're going to have an AI that is very smart, very capable, hooked up to lots of tools, you want to make sure that it can't be subverted by someone giving it a weird instruction. And that's something that we've invested in quite a lot and I think have really incredible results, have an incredible team working on.
And it's interesting to think about some of these problems where you can make analogies to humans. Like humans are also susceptible to phishing attacks, to being deceived in different ways, to not really understanding the full context of what they're working on. And we bring those analogies into our development process and think about this whenever we release a model, develop a model: "How do we ensure that it's going to be aligned with people and be able to be helpful?" And that is something that we care quite a lot about.
I think that there are bigger questions about the world, the economy, how does everything change, how does everyone benefit from this technology—that are not purely technical, not purely something that OpenAI on our own will be able to solve. But yes, I think quite a lot about not just pushing forward the technology, but also really about how do we ensure that we have the positive impact that is its potential.
The worry though is that this is a race, and what's being done within these walls at OpenAI headquarters is also being copied by many of the open source players which have much fewer boundaries and barriers and protection on the safety side of things. And I think you said this once: that it takes people getting a lot of things right to be creative, and sort of one person with bad intent to be destructive. And that's sort of where the concern lies for me, at least: is just when this is... it's clearly a race. It's going fast. Many of your counterparts have said if everybody agrees to stop it, we'll stop it. And it doesn't seem like it's going to slow at all. So is the reward worth the risk?
I think the reward is worth the risk, but I think that is too coarse-grained of an answer in some sense. The way that I think about it is that we've asked from the beginning of OpenAI: "How... what does a great future look like? How can this technology really be something that uplifts everyone?" And you can think of there almost being two different angles. One is the centralization view of saying that the way to make this technology safe is that you have only one actor building it. And so then you don't have any pressures, right? You can really think about getting it right and then figure out how to roll it out to everyone when it's ready—those kinds of things.
That's a pretty tough pill in some ways. And I think that there's a lot of properties that you can instead think about approaching differently, which we refer to as resilience: to think of it as this open system where there's lots of players who are developing the technology. But it's not just about the technology; it's about building societal infrastructure that helps this technology really go well.
If you think about how electricity has developed, that's something where lots of people produce it, it has dangers and risks, but we also build our safety infrastructure in a diversity of different ways: around safety standards for electricity, around different ways of harnessing it, about how you scale it—there's regulations when you're at these massive scales that lots of people are able to use in a democratized fashion. There's inspectors—there's a whole system that's been built around the needs of that technology, the proclivities of that specific technology.
And I think that one thing that we have really seen with AI is that it is something where we need this broad conversation. We need lots of people to be aware if the technology is going to come and change everything for everyone. People need to participate in that. It can't be something that's done off in secret by just one sort of centralized group. And so this has been to me a very core question to how this technology should play out, and something we really believe in is this resilience ecosystem that should emerge around the development of this technology.
So you said we're in takeoff, in the middle of a takeoff process, and we—I guess all of humanity—are experiencing this. Nvidia's CEO Jensen Huang said recently that he believes AGI has been achieved. Do you agree?
I think that AGI has a different definition to many people, and I think that there are many people who would say that what we have right now is AGI. I think you can debate it. But I think that maybe the thing that's interesting is that the technology we have right now is very jagged. Like it is absolutely superhuman at many tasks. When it comes to writing code, those kinds of things, the AI can just do it, right? And it really removes a lot of the friction to creating things. But there's some very basic tasks that a human can do that our AI still struggle with.
And so it's almost to say that where do you draw the cut line? It's a little bit more of a vibe and a feeling than it is science at the moment. And so I think for myself, we're definitely going through that moment. If you were to show me five years ago the systems we have today, I'd go, "Oh yeah, that's what we're talking about." But it's just different. It's so different from anything we ever pictured. And so I think we need to adjust our mental models appropriately.
So you're not there yet.
I think that I'd say I'm like 70, 80% there. So I think we're quite close. I think it's extremely clear that we are going to have AGI within the next couple years in a way that is still going to be jagged, but that the floor of task will just be almost for any intellectual task of how you use your computer, the AI will be able to do that. And I think that, yeah, right now I have to give a little bit of an uncertain answer because there's some... it's almost like an uncertainty principle kind of thing that you can debate it. For my own personal definition, I think we're almost there and with maybe a little bit more we will absolutely be.
Okay. Well, we've got to go to a break, but as long as we're on the way to the break, I want to let folks watching at home know that you and I are going to be talking again June 18th here in San Francisco at SFJazz. So, I will put some information if you want to come join that conversation in the show notes, and I do hope you sign up. All right, we'll be back right after this.
And we're back here on Big Technology Podcast with OpenAI co-founder and president Greg Brockman. Greg, let me just ask you: what happened in December 2025? Because it seems like it was an inflection point where all this idea of letting the machine code for hours uninterrupted went from theory to a moment where everyone said, "I think I can trust this to keep going for a while." So what exactly happened?
So, new model releases really went from the AI being able to do like 20% of your tasks to like 80%. And that was this massive shift because it went from being kind of a, "Yeah, it's a nice thing to do," to: you absolutely need to retool your workflow around these AIs. And for myself, I've very much had this moment where I have a test prompt that I've been using for years of "build a website for me." I'd built this website back when I was learning to code—took me months. Used to be over the course of '25 that, you know, it'd take like four hours, bunch of different prompts to get it right. In December: one shot. Just ask the AI one time and it produced it and did a great job.
So how did those models make the leap?
Well, a lot of it is about the better base models. That one thing about OpenAI is that we've been working on improving our pre-training technology for quite some time. And that in that moment, we got to see a little taste of what is going to be coming for the rest of this year. But it's also really about not any one thing. It's about: we're constantly pushing on every single axis of innovation. And the thing that's very interesting about these models is in some ways you get these leaps, and in some ways it's all continuous, right? It didn't go from 0% to 80%; it went from 20% to 80%.
And so in some ways it just got better. And I think that we've seen this improvement continue with every single point release that we've had, like between 5.2 and 5.3. One of my engineers I work with very closely went from—he couldn't get it to do the like low-level hardcore systems engineering he does to it absolutely being creative. He gives it a design doc; it implements it, adds metrics, observability, runs the profiler, improves it to the point that it's the exact thing that he was hoping to produce. And so I think that the way to think about it is it's almost a sort of "slowly, slowly, all at once." But it is all indicated by what's kind of working right now; certainly within a year, sometimes much sooner, it is going to be incredibly reliable.
And it surprised you because I heard you talking on an interview not long ago about how Codex—this autonomous coder—was just for software developers. And earlier this conversation you said everyone can use this stuff.
Yes.
What led to the fact that you sort of changed your perspective on that?
Well, I think I'd been focusing on Codex—it's got the "code" in it, right?—as really being for coders and thinking about people within OpenAI because many of us are software engineers building for ourselves. It's very natural to think that way. But as this technology has been progressing, we've started to realize that the underlying technology we produced is mostly not about code at all. It's mostly about solving problems. It's mostly about being able to manage context and harnesses and think about how an AI should integrate and do work.
And that's something that becomes—both, even for code, suddenly anyone can have access because you can manage something that's going to go do work, right? If you have a vision, you have something you want to accomplish, you can describe your intent, the AI can execute, can get that done. But then it also starts to be: why am I just focused on coding? Like, there's so much just very mechanical skill associated with Excel spreadsheets, with presentations. And if the AI has the context, it has the raw intelligence now to be able to do these things at a great level. So if we can just make it more accessible, it suddenly goes from "Codex is for coders" to "Codex is for everyone."
And soon after this moment where we saw all this improvement, there was another somewhat phenomenon in Silicon Valley which was OpenClaw, right? Which is—and maybe it's the broader tech community—where people started to trust it in ways that you suggested: giving an AI bot access to their desktop, or getting a Mac Mini and giving it access to like their mail and calendar there and their files and then just kind of letting it go run their life. And then OpenAI brought the founder of OpenClaw in-house. So you talked a little bit more about the AI as something that will help run your life for you in a way. Is that the vision by bringing the OpenClaw team in-house?
Well, I'd say that the core thing about this technology is that figuring out how it's useful, how people want to use it, what is the vision for agents, how is it going to slot in people's lives—that is a hard problem. And that one thing I've seen across many generations of this technology is the people who really lean in, who have a lot of curiosity, who have a lot of vision—that's a real skill and that's an emerging, very valuable skill in this new economy that is emerging.
And Peter, who is the OpenClaw founder, is I think someone who's got incredible vision, incredible creativity. And so to some extent it's about the specific technology, but to some extent it's not at all. It's really about the "how": how do we take these capabilities and figure out how those slot in people's lives? And so I think as a technologist it's very exciting, but as someone who is focused on bringing utility to people, that's something that we are doubling down on and investing quite a lot.
You had a pretty interesting quote about this recently talking about getting these autonomous AI agents to work on your behalf. You said you become, when you do it, you become this CEO of a fleet of hundreds of thousands of agents that are completing your objectives, your goals, your vision, and you're not in the weeds on exactly how different things are solved. And in some ways, this new way of work can make you feel like you're losing your pulse on the problem. Is that good?
I think that there's a mixed bag. And so I think that what we need to do is acknowledge the strengths of what these tools can deliver and mitigate the weaknesses. So giving people leverage, agency, making it so that if you have a vision, something you want to accomplish, that you can have a fleet of agents that will go do it for you. But if you think about how the world works, that at the end of the day, there's an accountable party, right? If you're trying to build a website and your agent messes it up and your user is affected, it's not really the agent's fault—it's your fault. And so, you need to care.
And I think that for people to use these tools, you need to realize that human agency, human accountability—that's a core part of the system. How the human uses the AI—that's something that is deeply fundamental. And so I think the important thing is that as a user of these agents—and we do this within OpenAI—you cannot abdicate responsibility. You cannot just say, "Ah, the AI is just going to do stuff."
Of course, but you said "feel you're losing your pulse on the problem itself." That's different than an accountability layer to it.
Well, to me they are linked together because the point is that if you're a CEO and you're too far from the details—if you're running this company, you're running this team and that you've lost your finger on the pulse—that is something that's not going to lead to great results. And so the point that I was trying to make there is not that it's a desirable thing for humans to not have to know about what's going on.
There's some details that—because you can trust—like if you are working with a team like a general contractor to build a house, there's a bunch of details there that you probably don't need to worry about because you can trust that they'll be taken care of. But at the end of the day, if there are details that are wrong, you should care about it. You should be aware. And so this is, I think, an important nuance: you cannot just blindly say, "I'm okay with losing my finger on the pulse." We need to lean in and say, "I need to keep it there to really understand the strengths and weaknesses." And as you disengage from some of these details, these lower-level mechanical things, you should do it because you have built trust with a system that it will do a good job.
One last question about the models. You've talked a little bit about the evolutions that the models have gone through: pre-training and fine-tuning, reinforcement learning that gets it more equipped to solve problems step by step and go out on the internet and do things. And now we're in this moment where the models have learned through that process to use tools. And correct me if I'm wrong on this one: what is next in that progression?
Well, I think that the world that we're in is one of this increasing capability and depth of what the machine can do. And some of this is about—we've got this tool use, but now we also need to build really great tools. You think about something like "computer use": an AI that can use a desktop, then it is really able to do anything that you can do. But we also have to build a little bit for the machine to think about: how does credentialing work in the enterprise? How do audit trails and observability work?
So there's a lot of technology to build to catch up with what the core model capability is. I think the overall direction of travel includes things like a really great speech interface, so you can just talk to your computer naturally—just as natural as this conversation—and it understands you. It does what you need. It has good advice. It's able to surface that: "I've been working on this thing. I have a problem." You wake up in the morning and it says, "Here's your daily report of how much progress your agents made overnight."
Maybe it's running a business for you, which I think is going to be a huge application of this technology. The democratization of entrepreneurship is absolutely coming. It'll say, "Here's these problems. There's this customer that's upset; they want to talk to a real human. You should go talk to them." All of that's going to happen. And then I think that the raising of the ceiling of ambition, of challenges humanity can solve, that is also a next step for this technology and we're seeing the leading edges of it.
The thing that I am just very excited to see is almost—if you remember AlphaGo move 37, right? This move that no human ever would have come up with—was creative.
Creative.
Creative. And it changed humanity's understanding of the game. That is going to happen in every single domain. It will happen in science, in math, in physics, in chemistry. It's going to happen in material science. It's going to happen in biology. It's going to happen in healthcare, drug discovery. But it may also even happen in literature, in poetry, in a bunch of other fields. They're going to unlock human creative understanding and ideation in ways we can't imagine right now.
Why do you think that hasn't happened yet, given how strong you say the models are?
Well, I think that there is an overhang of what the models are capable of and how people are using them. So the application... it's almost our understanding of what is in these models. That's something that I think is still emerging. So I think that even with no further progress, there's still a massive shift that will happen. The economy being powered by compute and AI is still going to happen.
But I think there's also something where what we've gotten very good at is training models on tasks that could be measured. And so what we started with was math problems, programming problems, where you have a perfect verifier. And a lot of what the progress has been in bringing us to more open-ended problems has been expanding the space of what can be created. And the AI itself can really help with that. If the AI is smart and understands things, you give it a rubric for how well a task goes.
And of course, for things like creative writing—like, "Is this a good poem?"—that's a much harder thing to grade.
That's a much harder thing to grade. And so we've had less ability to teach the AI and for it to experience and try things out. But all of that is changing and something that we have line of sight for.
It's interesting reflecting on that—Peter Thiel has mentioned, pretty sure that's what he said, that if you're a math person, you're probably in deeper trouble in terms of these models coming for what you do than if you're a words person. And you were a member of math club back in the day. Are you not concerned about that?
Well, I think that it's much easier to see what we lose than what we gain, right? Because we have a deep understanding of: "I used to do things this way. I used to do this math competition." Now the AI can do the math competition. But it was never really about the math competition. That's not really the thing that drives humanity. And if you think about the way that we do work right now of there's a box, something typed behind a box—we weren't doing that 100 years ago. That's not natural. That's not this digital world that we all got kind of sucked into.
That's not really what being human's about. Being a human is about being here, being present, connecting with other humans. And I think that what we're going to see is that AI is going to free up so much time to increase human connection, to build more bonds across people. And that's something I'm extremely excited about.
Okay. And then as you shift—well, as you shift, really—to these more agentic use cases, there's been discussion about whether the bigger training runs really need to happen, and especially if you get the model good enough then you could sort of let it go out in the world and then you can effectively get much of the uplift in areas that aren't the pre-training, which is what these big data centers were needed for before. So, you worked on—you work on scaling here, lead that process. What do you think about that argument?
Well, I think it misses something very important for how the technology development goes because it is absolutely the case that every single step of the model production pipeline multiplies, and so you want to improve all of them. And the thing that we see is: we improve the pre-training, it makes all the other steps much easier. And it makes sense because a model is able to learn faster. It's a model that is—because it already is more capable to start—when it's trying out different ideas and learning from its own mistakes, that process just is faster. It needs to make fewer mistakes.
And so I think that the big shift has been from thinking of it as just: you're just training this cerebral system on its own and you just make it bigger and bigger—to: it's also about trying things out. It's also about understanding how people are using it in the real world and connecting that back into your training. But it doesn't remove the value and the importance of continuing that research. And the thing that I think has also shifted is: we used to really just focus on the raw pre-training capability but not think as much about the inference ability.
And that's been a big change over the past 24 months, to realize that it's a balance between: you can have this model that has all those great properties in the base, but then you really need it to be able to be inferencible because you need the reinforcement learning, you need to serve it to the world. And that means that you don't necessarily go as big as you possibly could, because you also really think about: "There's going to be all this downstream use," and you really want the thing that has the best intelligence times that cost, and to optimize those two things together.
Do you still need the Nvidia GPU if things move mostly to inference?
We absolutely do. Yes.
Why?
Well, because there's multiple reasons. But one is that even as the balance of how much inference versus training changes, that you cannot get massive scale training through any other way besides this concentration of compute on one problem. And so I think that the thing that I think will happen is: there's some amount of the deployment footprint goes up quite a lot, but that sometimes there will be—you have a particular massive pre-training run and you really want to concentrate a bunch of compute in there. I also think that the NVIDIA team is just incredible and does really amazing work, and so, yeah, we partner very closely with them.
Isn't there going to be a time where people just say, "We've pre-trained enough, the models are smart enough"?
I think that's a little bit like once humanity has solved all problems in front of us, then maybe we can say that. But I think that the ceiling of what we want to accomplish... I think that there's just so much ambition that maybe we've over the past 50 years or so just sort of backed off from. You think about even problems that seem very clear, like: "Can we have healthcare for everyone that is not just preventative, not just targeting when people have a problem, but really think about the lifestyle and how to really help people early and detect potential diseases before they happen?"
Like, that's a problem that I think we can achieve through more intelligent models. And there's probably some level where you can totally solve that problem, and then you say, "Well, do I need a model that's two times smarter?" But there's other problems that are going to demand that.
Let's talk about the math about building these data centers. It raised $110 billion earlier this year. What's the math behind that? Does that money go right into data centers? How do you think about how you're going to return that money to investors? Talk about those calculations.
Yeah. So, I think it's as simple as: the massive expense we see in front of us is compute. But you can think of compute not as a cost center, but as a revenue center. Think of it a little bit like hiring salespeople, right? How many salespeople do you want to hire? As long as you can sell your product, as long as you have a scalable way to sell that product, then the more salespeople you have, the more revenue you will make. And I think the world that we're in is: we have continually found we cannot build compute fast enough to keep up with demand.
And I see this very concretely, right? Right now, we have to make very painful decisions about what we're launching, about where the compute goes, and I think we're going to experience this more broadly within the economy as we shift to this AI-powered economy. The question will be: what problems are going to get that massive compute? How do you scale so everyone can have a personal agent running for them? How can everyone be using systems like Codex? Like, there just isn't enough compute in the world to be able to do that. And so we're trying to get ahead of that problem, but it is a new category, right?
So you're doing it with real confidence—sums of money the world has never seen put towards a project like this. When you're building a new category, how do you do it with certainty that it's going to work out?
Well, I think there's several components that go into it. So, the first is: there is historical precedent at this point. From the moment we launched ChatGPT, I remember talking with my team having this exact conversation where they said, "How much compute should we buy?" I said, "All of it." They said, "No, no, no. Really, how much compute should we buy?" I said, "No matter how much we try to build, I know we're not going to be able to keep up with the demand." And that has been true. And that has been true every year since then.
And the challenge is that these compute purchases—you have to lock them in 18 months, sometimes 24 months, sometimes longer in advance of them being delivered, which means you really need to project forward. And I think that the world that we're moving towards is one where, to date, most of our revenue has come from consumer subscriptions and that will always be very important. There's other revenue streams we have emerging as well. But the opportunity that clearly is emerging now is knowledge work.
And we're seeing this very concretely across every single enterprise realizing this technology really works and, to be competitive, they need to adopt it. And you can see this organic energy of all these software engineers using it. And then we're starting to see the percolation of people using it for various knowledge work inside of the enterprises. And the willingness to pay and the revenue growth that you're seeing in this industry is very clear, right? It's very clearly happening right now. And you just project that forward.
And we look—like, one thing we get to see that maybe the world doesn't is the line of sight to how these models will improve. And all of this together says that the economy—which is a massive thing, right? The economy is just so large it's almost incomprehensible—all of the growth, like, the highest-order bit on how this economy grows from here will be about AI, how well you can leverage AI and the computational power you have available to power it.
You said consumer subscriptions are your biggest source of revenue right now. Is the projection that will flip and that business will be the biggest source?
Well, I think that it is very clear how quickly the enterprise—it's not just enterprise because I think enterprise is also changing what it means—so really people using it for productive knowledge work, for those kinds of things. And I think that as we think about pricing, one thing—if you look at how Codex works right now—is: if you have a ChatGPT consumer subscription, you can use Codex. And so I think it's not going to be as well-defined as, "There's this category, that category." I think it will really be about: you as a user are going to have, just again like your laptop, this portal to the digital world. And that is what the revenue fundamentally will come from.
Dario [Amodei] said—I think about you—there are some players who are yoloing, who pull the wrist dial too far and I'm very concerned. I think he's referencing your infrastructure bets there. What do you think about that?
No, I just disagree. I think we've been very thoughtful and very much seeing what is coming. And I think that we will see even this year how everyone who is participating is going to be compute-strapped. And I think we have been the most forward in realizing that this is coming and building in anticipation of how this technology is playing out. And I think that what we have seen is that for other players, they kind of realized that probably late last year and started scrambling to see what compute is available, and there really wasn't any.
And so I think that even as people—it's very easy to make statements like that, but I think that everyone has kind of realized that this technology, it's working, it's here, it's real. Software engineering is just the first example of it, and that we are fundamentally limited by the computational power available.
And he said that also that if he's off by with his prediction by a little bit then his... the company could potentially go bankrupt. Is that the same case for you?
I think that, look, I think that there's more degrees of offramp here. If you start to worry about the downside case—which I think is a very reasonable question, right?—but to some extent, what I think the bet is on isn't about any one company. It's really about the sector. It's really about: do you believe this technology can be produced and can deliver this massive amount of value that we see coming?
And again, I'll point to proof points: that software engineering... just the degree to which, if you're not a software engineer you haven't tried Codex, the degree to which it is different... it's just hard to describe. And I think that people will experience it very quickly. Six months ago, I think that for us, we saw this internally but there were fewer proof points out there. Now there's proof points out there. Six months from now, I think that everyone will feel it. And I think that we will all feel the pain of: "There's an awesome model and there's just no availability because there's not enough compute."
Yeah. But as we were looking at our predictions for 2026 on this show, we had a conversation towards the end of the year last year where Ron John Roy, who was on with us, was like, "2026 is going to be the year where everybody uses agents." And I said, "Yeah, well, I'll believe that when I see it." And I'm using the agents. So, here we are. Here we go. What do you use it for?
I use it to build tools internally for the people who I work with to sort of get on the same page about when videos are coming and what the thumbnails need to look like. And I'm also integrating things from YouTube and so we can then rank how the videos are doing based off of thumbnail—like a custom-built piece of software that I never would have paid for. And that's one of the things that I think is interesting about this moment, I guess, is that software... it scales, it's used by the masses, but when you use it, therefore there's going to be so many things that are not made for you. And maybe what this does is it allows us to interact with software in a way that's much more natural.
I think that is the key. And again, I just think a lot about the fact that the way we've built computers has really pulled us into this digital world. You think about how much time you just spend scrolling through your phone.
Yep.
The amount of time that you spend clicking different buttons and trying to like connect this thing to that thing—why? Why do you have to do that? And instead, the AI being about bringing the machine closer to you, personalizing to you, understanding what you're trying to accomplish. And that we have all this pop culture of just computers you can talk to and that they go and do stuff for you—and it's starting to become real. It's starting to become the thing that you can do. And I think that the amazingness of that is something where you just have to try it to really understand. So I definitely think it's a very special moment we're in.
Yeah. Then I want to know: why is AI so unpopular with the public? YouGov, for instance, says three times as many Americans expect the effects of AI on society to be negative as they expect it to be positive. What do you think the reasoning is behind that? And are you concerned about AI's brand?
Well, I think that there is something that we need to show the country of: why AI is good for them, not just for the broad economy, for growing the GDP and things like that, but how does it help them in their lives? And I think there are many very concrete stories that I hear every day.
For example, there's a family where their child was having some headache, some medical issues, was denied an MRI, and they researched the symptoms with ChatGPT and realized that they could make an argument to insurance to get the MRI. They did that. Turns out he had a brain tumor. They were able to save his life because they used ChatGPT to get access to the right information. And that's just one story. There are so many more just like that of people who have been deeply, profoundly... their lives have been improved or saved through their use of this technology and through partnering with the technology in a real way.
And so that is a story I don't think gets out there. I think that this is happening in so many people's lives but somehow the story is not yet told. And one thing I notice is that there's certainly a lot of pop culture from the '90s, from the historical context that we have, that's very negative on AI, that worries about what could go wrong. But when people use AI, they find utility in it, they find value in it. And so I think that I am definitely very concerned about us not having successfully helped people understand why this technology wave is something that will improve their lives, that will help improve human connection.
And that is something that's a big focus in my mind. And if you think about the opportunity here and why AI is so important, I think this will be the source of economic and national security going forward. I think it's going to be about national competitiveness, and that there are other countries like China where AI pulls in the exact opposite direction. And so, yes, I think it's very important that we acknowledge that and we really understand how to get the benefits for everyone.
But we also are in a time that's like politically unstable. There's concerns about work. People are—every time I speak with someone about AI, they're like, "How long do I have left to work in my job?" And then when I think about the data centers, the polling is even worse than AI in general. This is from Pew: far more people say data centers are mostly bad than good for the environment, home energy costs, and quality of life of those nearby. So we are at this moment where good jobs are tough to come by and people see these data centers come into their communities and they say, "Not good for the environment, home energy, and quality of life." Are they wrong?
Well, I think there's definitely a lot of misinformation about data centers. A good example is water usage. If you look at our Abilene facility, which is one of, if not the biggest supercomputers in the world, the amount of water it uses is the same as a household over the course of a year, right? So, it's really negligible water usage. And yet, there's a lot of misinformation that these data centers consume a lot.
And so, similar on power: we have a commitment that we are going to pay our own way to not drive up energy prices for people. That's something that, as an industry, now people are making these commitments because it is very important that we improve local communities. And when we build data centers, we really try to go into those local communities, understand what's happening on the ground, how we can help. There's tax revenue that is associated with these data centers, and I think that there's jobs that they create. There's a lot of benefits that come from them. And so I think that's one thing where it is about how we show up, and that's a responsibility that we take very seriously.
Okay. But also, like, if their power costs are not going to go up, you have to bring in power, which means potentially more pollution. Is that not a concern?
Well, I think that there's much more nuance in terms of not driving up energy costs. If you look at how the grid works today, there's a lot of just stranded power—power that is there that is not being utilized and that you need to upgrade the transmission systems. And again, that's something where putting that on us rather than putting that on the ratepayers is very important, right? There's lots of places where they have clean power that is being underutilized and just being kind of thrown away.
And so there's a lot of benefit that comes from having real reasons for the grid—which is aging and obsolete in many places—to upgrade. And that's something that has real benefits to the community. Like, we've seen for example in North Dakota that people's rates have gone down because a data center has shown up and has helped with improving the utilities for everyone.
All right, one last question on the politics. You gave $25 million to MAGA Inc., which is a PAC that's pro-Trump. You spoke with Wired about it and you said, "Anything I can do to support this technology benefiting everyone is a thing I will do." And, you know, if that makes you a one-issue voter or one-issue political supporter... I'll share the one thing I always wonder when it comes to just this, the one-issue camp: ultimately, doesn't a stronger country make your goals much more feasible, even if a candidate isn't fully in support of what you're doing? Shouldn't a stronger country, no matter what, be the North Star of any political activity? And if that's the case, then is that part of the donation?
So the way I look at this: my wife and I made that donation; we've donated to bipartisan super PACs as well. I think this technology is one where it's coming quickly, within the next couple years, really going to transform everything, going to be the underpinning of the economy. And it's not popular, and we really want to support politicians who really lean into this technology, really engage with it. And so I think that, certainly, this technology is about uplifting us as a country. I am a one-issue donor—this is something where I feel like I have a unique contribution to make—but it's really about just expressing support for this technology is something that we should be leaning into as a country.
What would you tell someone who's scared of AI? If you have a moment here where you can speak directly to them—they might think, "It's going to take my job. It's going to pollute my community. It will change the world too fast." What's your message to them?
Number one thing is: try the tools. Because to really understand what it can do for you is something that only by experiencing the AI that exists now will that really hit home. And we see so much opportunity and potential and empowerment coming from this technology today. You talked a little bit about what you can build now: people who have never built a website before can build a website. If you want to start a small business and you're thinking about all the backend processing and how to manage it, all those things—the AI can help you with that right now.
And so I think that in your life, thinking about how it can help you with your health, how it can help your loved ones, how it can help you make money, how it can help you save money—these are all going to be on the table. And I think it is much easier to see what's going to change than it is to see what you're going to gain. But I think that it's worth giving it a fair shot of really trying to understand both sides of the equation.
That's the one thing that doesn't get talked about in the polling data, by the way. It's the people that have seen it used but haven't tried it themselves, or the people that have never tried AI, are much more negative. And then you get to the power users, or even people that use it casually, and they're generally pretty positive about the technology.
Yeah. For myself, we've been thinking about this technology for a long time. What I see playing out in front of us is more amazing, more beneficial, and going to really have a much more just positive impact than we ever imagined.
So, last one for you. How would you advise someone to prepare themselves for the future? And it has to be more than just getting the tools. I have friends who come to me and they say, "I don't know what's going to happen with my job or the world, and I just need to know what to do with this."
I do think that the number one thing is about understanding the technology. One thing we've seen is: the people who get the most out of the technology, you have to approach it with a curiosity, trying to really try it in your workflows, really be able to get over that initial hump of: "You have a blank box. What do I do with a blank box?" To really develop this sense of agency, this sense of: "I can be the manager. I can set the direction. I can delegate. I can provide oversight."
And to really develop that skill, because that is something that's going to be fundamental. We're building this technology for humans to help humans, foster more human connection, for humans to be able to spend more time doing what they want. And so the question is: "Well, what do you want?" And really trying to crystallize that and trying to realize that with the help of this technology is going to be the most important thing.
Greg, thanks so much for coming on the show.
Thank you for having me.
All right, thank you everybody for listening and watching, and we'll see you next time on Big Technology podcast.