100% of my code is written by Claude Code. I have not edited a single line by hand since November. Every day I ship 10, 20, 30 PRs. So at the moment, I have like five agents running while we're recording this.
Yeah. Do you miss writing code?
I have never enjoyed coding as much as I do today because I don't have to deal with all the minutiae. Productivity per engineer has increased 200%.
There's always this question: should I learn to code? In a year or two, it's not going to matter. Coding is largely solved. I imagine a world where everyone is able to program. Anyone can just build software anytime. What's the next big shift in how software is written?
Claude is starting to come up with ideas. It's looking through feedback, it's looking at bug reports, it's looking at telemetry for bug fixes and things to ship—a little more like a co-worker or something like that.
A lot of people listening to this are product managers, and they're probably sweating. I think by the end of the year, everyone's going to be a product manager and everyone codes. The title "software engineer" is going to start to go away. It's just going to be replaced by "builder," and it's going to be painful for a lot of people.
Today my guest is Boris Cherny, Head of Claude Code at Anthropic. It is hard to describe the impact that Claude Code has had on the world. Around the time this episode comes out will be the one-year anniversary of Claude Code. And in that short time, it has completely transformed the job of a software engineer, and it is now starting to transform the jobs of many other functions in tech, which we talk about.
Claude Code itself is also a massive driver of Anthropic's overall growth over the past year. They just raised a round at over $350 billion. And as Boris mentions, the growth of Claude Code itself is still accelerating. Just in the past month, their daily active users have doubled.
Boris is also just a really interesting, thoughtful, deep-thinking human. And during this conversation, we discover we were born in the same city in Ukraine. That is so funny—I had no idea. A huge thank you to Ben Mann, Jenny Wen, and Mike Krieger for suggesting topics for this conversation. Don't forget to check out lennysprodpass.com for an incredible set of deals available exclusively to Lenny's newsletter subscribers. Let's get into it after a short word from our wonderful sponsors.
Today's episode is brought to you by DX, the developer intelligence platform designed by leading researchers. To thrive in the AI era, organizations need to adapt quickly. But many organizational leaders struggle to answer pressing questions like: Which tools are working? How are they being used? What's driving value? DX provides the data and insights that leaders need to navigate this shift. With DX, companies like Dropbox, Booking.com, Adyen, and Intercom get a deep understanding of how AI is providing value to their developers and what impact AI is having on engineering productivity. To learn more, visit DX's website at getdx.com/lenny.
Applications break in all kinds of ways: crashes, slowdowns, regressions, and the stuff that you only see once real users show up. Sentry catches it all. See what happened, where, and why, down to the commit that introduced the error, the developer who shipped it, and the exact line of code—all in one connected view.
I've definitely tried the "five tabs and a Slack thread" approach to debugging. This is better. Sentry shows you how the request moved, what ran, what slowed down, and what users saw. Seer, Sentry's AI debugging agent, takes it from there. It uses all of that Sentry context to tell you the root cause, suggest a fix, and even open a PR for you. It also reviews your PR and flags any breaking changes with fixes ready to go. Try Sentry and Seer for free at sentry.io/lenny and use code "Lenny" for $100 in Sentry credits.
Boris, thank you so much for being here and welcome to the podcast.
Yeah, thanks for having me on.
I want to start with a spicy question. About six months ago—I don't know if people even remember this—you left Anthropic. You joined Cursor, and then two weeks later, you went back to Anthropic. What happened there? I don't think I've ever heard the actual story.
It's the fastest job change that I've ever had. I joined Cursor because I'm a big fan of the product and, honestly, I met the team and I was just really impressed. They're an awesome team. I still think they're awesome, and they're just building really cool stuff. They saw where AI coding was going, I think, before a lot of people did. So the idea of building a good product was just very exciting for me.
I think as soon as I got there, what I started to realize is what I really missed about Anthropic was the mission. And that's what originally drove me to Anthropic also. Before I joined Anthropic, I was working in big tech, and then at some point I wanted to work at a lab to just help shape the future of this crazy thing that we're building in some way.
The thing that drew me to Anthropic was the mission. It's all about safety. When you talk to people at Anthropic—just find someone in the hallway—if you ask them why they're here, the answer is always going to be "safety." This kind of mission-drivenness just really resonated with me. I just know personally it's something I need in order to be happy, and I found that whatever the work might be—no matter how exciting, even if it's building a really cool product—it's just not really a substitute for that. So for me, it was pretty obvious that I was missing that pretty quick.
Okay. So let me follow the thread of just coming back to Anthropic and the work you've done there. This podcast is going to come out around the year anniversary of launching Claude Code. So I'm going to spend a little time just reflecting on the impact that you've had.
There's this report that recently came out that I'm sure you saw by SemiAnalysis that showed that 4% of all GitHub commits are authored by Claude Code now. And they predicted it'll be a fifth of all code commits on GitHub by the end of the year. The way they put it is, "while we blinked, AI consumed all software development."
The day that we're recording this, Spotify just put out this headline that their best developers haven't written a line of code since December thanks to AI. More and more of the most advanced senior engineers, including you, are sharing the fact that you don't write code anymore—that it's all AI-generated. And many aren't even looking at code anymore; that's how far we've gotten, in large part thanks to this little project that you started and that your team has scaled over the past year. I'm curious just to hear your reflections on this past year and the impact that your work has had.
These numbers are just totally crazy, right? 4% of all commits in the world is just way more than I imagined. And like you said, it still feels like the starting point. These are also just public commits. So we think if you look at private repositories, it's quite a bit higher than that.
I think the craziest thing for me isn't even the number that we're at right now, but the pace at which we're growing. If you look at Claude Code's growth rate across any metric, it's continuing to accelerate. It's not just going up; it's going up faster and faster.
When I first started Claude Code, it was just supposed to be a little hack. We broadly knew at Anthropic that we wanted to ship some kind of coding product. For a long time, we were building the models in a way that fit our mental model of how we build safe AI—where the model starts by being really good at coding, then it gets really good at tool use, then it gets really good at computer use. This is roughly the trajectory.
We've been working on this for a long time. The team I started on was called the Anthropic Labs team. Mike Krieger and Ben Mann kicked this team off for "round two." The team built some pretty cool stuff: we built Claude Code, we built MCP (Model Context Protocol), we built the desktop app. You can see the seeds of this idea: it's coding, then it's tool use, then it's computer use.
The reason this matters for Anthropic is because of safety. AI is getting more and more powerful, more and more capable. What's happened in the last year is that, at least for engineers, the AI doesn't just write the code. It's not just a conversation partner, but it uses tools. It acts in the world. I think now with Claude, we're starting to see the transition for non-technical folks also.
For a lot of people that use conversational AI, this might be the first time they're using something that acts. It can use your Gmail, it can use your Slack, it can do all these things for you and it's quite good at it. And it's only going to get better from here.
For Anthropic, for a long time there was this feeling that we wanted to build something, but it wasn't obvious what. When I joined, I spent one month hacking and built a bunch of weird prototypes—most of them didn't ship, weren't even close to shipping. It was just understanding the boundaries of what the model can do. Then I spent a month doing post-training to understand the research side of it.
For me as an engineer, I find that to do good work, you really have to understand the layer under the layer at which you work. With traditional engineering, if you're working on product, you want to understand the infrastructure, the runtime, the virtual machine, the language—the system that you're building on. If you're working in AI, you just really have to understand the model to some degree to do good work.
So I took a little detour to do that and then I came back and started prototyping what eventually became Claude Code. The very first version of it—I have a video recording of this because I posted it—was called Claude CLI back then. I showed off how it used a few tools, and the shocking thing for me was that I gave it a bash tool and it was able to use that to write code to tell me what music I'm listening to.
This is the craziest thing, right? Because I didn't instruct the model to say, "use this tool for this." The model was given this tool and it figured out how to use it to answer a question that I wasn't even sure if it could answer: "What music am I listening to?"
I started prototyping this a little bit more. I made a post about it and announced it internally, and it got two likes. That was the extent of the reaction at the time. I think people internally, when they think of coding tools, they think of an IDE and sophisticated environments. No one thought that this thing could be terminal-based; that's sort of a weird way to design it. That wasn't really the intention, but from the start, I built it in a terminal because for the first couple months it was just me, so it was just the easiest way to build.
For me, this is a pretty important product lesson: you want to under-resource things a little bit at the start. Then we started thinking about what other form factors we should build, and we decided to stick with the terminal for a while. The biggest reason was the model is improving so quickly. We felt that there wasn't really another form factor that could keep up with it.
For the last year, Claude Code has been all I think about. Late at night, I'd be thinking: "Okay, the model is continuing to improve. What do we do? How can we possibly keep up?" The terminal was honestly just the only idea that I had. And it ended up catching on after I released it pretty quickly. It became a hit at Anthropic, and the daily active users just went vertical.
Before I launched it, Ben Mann nudged me to make a DAU chart. I was like, "It's early, should we really do it right now?" and he was like, "Yeah." And the chart went vertical immediately. Then in February, we released it externally. Something that people don't really remember is Claude Code was not initially a hit when we released it. It got a bunch of users, there were early adopters, but it took many months for everyone to really understand what this thing is. It's just so different.
Part of the reason Claude Code works is this idea of latent demand—where we bring the tool to where people are and it makes existing workflows a little bit easier. But because it's in a terminal, it's a little surprising, a little alien. You had to be open-minded and learn to use it. Now, of course, Claude Code is available in the iOS and Android Claude apps, the desktop app, the website, as IDE extensions, and in Slack and GitHub. In all these places where engineers are, it's a little more familiar, but that wasn't the starting point.
At the beginning, it was a surprise that this thing was even useful. As the team grew and the product grew, it started to become more and more useful to people around the world—from small startups to the biggest FAANG companies. They started giving feedback, and reflecting back, it's been such a humbling experience because we just keep learning from our users. The most exciting thing is that none of us really know what we're doing; we're just trying to figure it out along with everyone else, and the single best signal for that is feedback from users.
It's incredible how fast something can change today. You launched this a year ago, and it wasn't the first time people could use AI to code, but in a year, the entire profession of software engineering has dramatically changed. There were all these predictions—"code is going to be written 100% by AI"—and everyone was like, "No, that's crazy." Now it's like...
Of course it's happening, exactly as they said. Things move and change so fast now.
Back at Code with Claude in May—our first developer conference as Anthropic—I did a short talk. In the Q&A, people asked for my predictions for the end of the year. My prediction in May of 2025 was that by the end of the year, you might not need an IDE to code anymore, and we're going to start to see engineers move away from them. I remember the room audibly gasped. It was such a crazy prediction.
At Anthropic, we think in exponentials; this is deep in our DNA. Three of our co-founders were the first three authors on the Scaling Laws paper. If you look at the exponential growth of the percent of code written by Claude at that point, if you just trace the line, it's pretty obvious we were going to hit 100% by the end of the year, even if it doesn't match intuition at all. All I did was trace the line. In November, that happened for me personally, and that's been the case since. We're starting to see that for a lot of different customers too.
I thought it was really interesting what you shared about the journey—this idea of just playing around and seeing what happens. This comes up a lot, where someone was just playing around and a thing happened. It feels like a central ingredient to a lot of the biggest innovations in AI: people sitting around trying stuff and pushing the models further than most other people.
That's the thing about innovation—you can't force it. There's no roadmap for innovation. You just have to give people space. You have to give them psychological safety—that it's okay to fail, it's okay if 80% of the ideas are bad. You also have to hold them accountable: if the idea is bad, you cut your losses and move on instead of investing more.
In the early days of Claude Code, I had no idea this thing would be useful at all. Even in February when we released it, it was writing maybe 20% of my code. In May, maybe 30%; I was still using Cursor for most of my code. It only hit 100% in November. It took a while. But from the earliest days, it felt like I was onto something. I was spending every night, every weekend hacking on this. Luckily my wife was very supportive. Sometimes you find a thread and you just have to pull on it.
So at this point, 100% of your code is written by Claude Code. Is that the current state of your coding?
Yeah, 100% of my code is written by Claude Code. I am a fairly prolific coder; this has been the case even when I worked back at Instagram. I was one of the top few most productive engineers, and that's still the case here at Anthropic.
Wow. Even as Head of the team.
Yeah. I still do a lot of coding. Every day I ship 10, 20, 30 PRs.
Every day?
Every day.
Good God.
100% written by Claude Code. I have not edited a single line by hand since November. I do look at the code—I don't think we're at the point yet where you can be totally hands-off, especially when a lot of people are going to run the program. You have to make sure it's correct, safe, and so on. We also have Claude doing automatic code review for everything. At Anthropic, Claude reviews 100% of pull requests. There's still a layer of human review after it, but you still want those checkpoints. You want a human looking at the code, unless it's pure prototype code that isn't going to run anywhere.
What's the next frontier? At this point, 100% of your code is being written by AI. This is clearly where everyone is going. That felt like a crazy milestone; now it's just like, "Of course this is the world now." What's the next big shift that either your team is already operating in or you think we'll head towards?
I think Claude is starting to come up with ideas. Claude is looking through feedback, bug reports, and telemetry, and it's starting to come up with ideas for bug fixes and things to ship. It's becoming more like a co-worker.
Second, we're starting to branch out of coding. At this point, coding is largely solved—at least for the kind of programming that I do. So now we're starting to think about what's beyond this. There are a lot of things adjacent to coding that are coming. But also general tasks: I use Claude every day now to do all sorts of things that are not related to coding. For example, I had to pay a parking ticket the other day; I just had Claude do it. All of my project management for the team—Claude does all of it: syncing spreadsheets, messaging people on Slack, email, all that stuff.
I think the frontier is something like this. I don't think it's coding anymore because coding is pretty much solved. Over the next few months, it's going to become increasingly solved for every kind of codebase and tech stack.
This idea of helping you come up with what to work on is so interesting. A lot of people listening to this are product managers and they're probably sweating. How do you use Claude for this? Do you just talk to it? Is there anything clever you've come up with?
Honestly, the simplest thing is to open Claude Code or the Claude app and point it at a Slack thread. For us, we have a channel that's all the internal feedback about Claude Code. Since we first released it, it's been this firehose of feedback, and it's the best. In the early days, anytime someone sent feedback, I would go in and fix every single thing as fast as I possibly could—within a minute or five minutes.
This fast feedback cycle encourages people to give more feedback. It makes them feel heard. Usually, you give feedback and it goes into a black hole; if you make people feel heard, they want to contribute. Now I do the same thing, but Claude does a lot of the work. I point it at the channel and it says, "Okay, here's a few things I can do. I just put up a couple PRs. Want to take a look?" and I'm like, "Yeah."
Have you noticed that it's getting much better at this? Because this is the holy grail. Building is "solved," and code review became the next bottleneck—all these PRs, who's going to review them all? The next big question is: humans are necessary for figuring out what to build and what to prioritize. You're saying Claude Code is starting to help you there. Has it gotten a lot better with, say, Opus 4.6?
Yeah, it's improved a lot. Some of it is training specific to coding—obviously the best coding model in the world—but also a lot of training outside of coding translates well too. There is this transfer where you teach the model to do X and it gets better at Y. The gains have been insane. At Anthropic over the last year, we've probably 4x'd the engineering team, but productivity per engineer has increased 200% in terms of pull requests.
This number is crazy for anyone that works on dev productivity. In a previous life, I was at Meta, and one of my responsibilities was code quality for the company—Facebook, Instagram, WhatsApp, etc. A lot of that was about productivity; if you make the code higher quality, engineers are more productive. In a year with hundreds of engineers working on it, you would see a gain of maybe a few percentage points. Nowadays, seeing gains of hundreds of percentage points is absolutely insane.
What's also insane is how normalized this has become. We hear these numbers—"Of course AI is doing this"—but it's unprecedented. The amount of change happening to software development and building products is so easy to get used to, but it's important to recognize this is crazy.
I have to remind myself once in a while. There's a downside on a personal level: the model changes so often that I sometimes get stuck in an old way of thinking. I find that new people on the team, even new grads, do stuff in a more AGI-forward way than I do.
For example, a couple months ago there was a memory leak in Claude Code—memory usage was going up and eventually it would crash. Traditionally, you take a heap snapshot, put it into a special debugger, and use special tools to see what's happening. I was doing this, looking through traces. An engineer who was newer on the team just had Claude Code do it: "Hey Claude, it seems like there's a leak. Can you figure it out?"
Claude Code did exactly the same thing I was doing. It took the heap snapshot, wrote a little tool for itself so it could analyze it—it was sort of a just-in-time program—found the issue, and put up a PR faster than I could. For those of us who have been using the model for a long time, you have to transport yourself to the current moment and not get stuck in an old model. It's not Sonnet 3.5 anymore; the new models are completely different.
I hear you have specific principles codified for your team. I believe one of them is: "What's better than doing something? Having Claude do it." It feels like that's exactly what you described with the memory leak. You almost forgot that principle of seeing if Claude can solve it for you.
There's also an interesting thing that happens when you underfund everything a little bit, because then people are forced to "Claudify." We see this where we put just one engineer on a project, and the way they ship quickly—because of that intrinsic motivation to do a good job—is by using Claude to automate a lot of the work.
So one principle is underfunding things a little bit. Another is encouraging people to go faster: if you can do something today, you should just do it today. Early on, that was our only advantage—speed. That's the only way we could ship a product that would compete in this very crowded market. If you want to go faster, a really good way to do that is to have Claude do more stuff.
This idea of underfunding is so interesting. Generally, there's a feeling like AI is going to allow you to have fewer employees. You're saying you will actually do better if you underfund—that you'll get more out of the AI tooling if you have fewer people working on something.
If you hire great engineers, they'll figure out how to do it if you empower them. My advice to CTOs generally is: don't try to optimize or cost-cut at the beginning. Start by giving engineers as many tokens as possible. At Anthropic, everyone can use a lot of tokens. This is starting to come up as a perk at some companies: "Join us and get unlimited tokens."
It makes people free to try ideas that would have been too crazy. If an idea works, then you can scale and optimize—figure out if you can use Haiku or Sonnet instead of Opus. But at the beginning, you want to throw a lot of tokens at it and give engineers the freedom to do that.
The advice here is: be loose with your tokens and the cost of using these models. People might say, "Of course, he works at Anthropic, he wants us to use tokens," but you're saying the most innovative ideas come from someone taking it to the max and seeing what's possible.
At small scale, you're not going to get a giant bill. If an individual engineer is experimenting, the token cost is relatively low compared to their salary. Once you build something awesome and it scales up, then you optimize. But don't do that too early.
Have you seen companies where their token cost is higher than their salary?
At Anthropic, we're starting to see some engineers spending hundreds of thousands a month in tokens. We're starting to see similar things at other companies too.
Going back to coding, do you miss writing code? Is this something you're sad about?
It's funny—when I learned engineering, it was very practical. I learned it so I could build stuff. I was self-taught; I studied economics in school, but I taught myself engineering early on. I was programming in middle school, and from the beginning, it was practical. I learned to code so I could cheat on a math test. We had these graphing calculators—a TI-83 Plus—and I programmed the answers into it.
The next year, the math was too hard; I couldn't just program the answers because I didn't know the questions. So I had to write a little solver for algebra questions. Then I figured out you could get a cable and give the program to the rest of the class so everyone got A's. We all got caught and the teacher told us to knock it off. But from the beginning, programming was a way to build a thing, not an end in itself.
At some point, I fell into the rabbit hole of the "beauty of programming." I wrote a book about TypeScript and started what was at the time the world's biggest TypeScript meetup. I got deep into functional programming and type systems—there's a certain buzz you get when you solve a complicated math problem or balance the types.
Coding is very much a tool for me. That said, not everyone feels this way. One engineer on the team, Lena, still writes C++ on the weekends by hand because she just really enjoys it. There's always space to enjoy the art if you want.
Do you worry about your skills atrophying as an engineer?
I think it's just the way it happens. Programming is on a continuum. Way back, it was punch cards, then switches, then hardware, then pen and paper. Programming has always changed. You still want to understand the layer under the layer because it helps you be a better engineer—for the next year or so. But pretty soon, it won't really matter; it'll be like the assembly code running under the programmer.
As a programmer, you're always learning new frameworks and languages, so it doesn't feel that new. But for some people, there will be a sense of loss or nostalgia. Elon was saying: why isn't the AI just writing straight to binary? What's the point of all this programming abstraction in the end?
Yeah, it's a good question. It totally can do that if you wanted to.
What I'm hearing is, regarding the question "Should I learn to code?", your take is that in a year or two, you won't really need to. You still have to understand the layer under it today, but soon it won't matter. I was thinking about the right historical analog for this. The thing that's come closest for me is the printing press.
In the mid-1400s, literacy was sub-1%. Scribes did all the writing and reading, employed by lords and kings who often weren't literate themselves. Then the printing press came along. In the 50 years after it was built, more printed material was created than in the thousand years before. The cost went down 100x. Literacy took a while—it takes an education system and free time—but over 200 years, it went up to 70% globally.
There's an interview with a scribe from the 1400s asking how they felt about the printing press. They were excited because they didn't like copying books; they liked drawing the art and doing the bookbinding, and they were glad their time was freed up. As an engineer, I feel a parallel: I don't have to do the tedious work of coding anymore. Messing with Git and all these tools wasn't the fun part. The fun part is figuring out what to build, talking to users, and thinking about these big systems.
What's amazing is that the tool you're building allows anybody to do this. People with no technical experience can do exactly what you're describing. I've been doing random little projects, and anytime I get stuck, I just say, "help me figure this out." I remember spending so much time on libraries and dependencies early in my career; now it's just step-by-step instructions.
Exactly. I was talking to an engineer earlier who was writing a service in Go. It had been a month, and the service was working well. I asked how he felt writing it, and he said, "I still don't really know Go." We're going to see more of this: if you know it works correctly and efficiently, you don't have to know all the details.
Clearly, the life of a software engineer has changed dramatically. What is the next role that will be most impacted—product managers, designers, or even roles outside of tech?
It's going to be the roles adjacent to engineering—PMs, design, data science. It's going to expand to pretty much any work you can do on a computer. The Claude product is the first way to get at this, bringing agentic AI to people who haven't really used it before.
A year ago, no one really knew what an agent was. Now, it's just the way we do our work. "Agent" has a specific technical meaning: an LLM that is able to use tools. It doesn't just talk; it can act—use your Google Docs, send email, run commands on your computer. Any job where you use computer tools will be next.
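That definition—an LLM that doesn't just talk but acts through tools—can be made concrete with a small sketch. This is purely illustrative: the "model" below is a scripted stub (`fake_model`), not Anthropic's API, and all the names are made up. The point is the loop's shape: the model either requests a tool or answers, and tool output is fed back into the transcript until it's done.

```python
# Minimal sketch of the "agent = LLM + tools" loop described above.
# `fake_model` is a stand-in for a real LLM; every name here is illustrative.

import subprocess

def run_bash(command: str) -> str:
    """Tool: run a shell command and return its trimmed stdout."""
    result = subprocess.run(command, shell=True, capture_output=True, text=True)
    return result.stdout.strip()

TOOLS = {"bash": run_bash}

def fake_model(transcript):
    # Stand-in for an LLM: on the first turn it requests the bash tool,
    # and once it sees tool output, it answers with that output.
    if not any(m["role"] == "tool" for m in transcript):
        return {"type": "tool_call", "tool": "bash", "input": "echo hello"}
    return {"type": "answer", "text": transcript[-1]["content"]}

def agent_loop(question: str, model=fake_model, max_turns: int = 5) -> str:
    """Alternate between model decisions and tool executions until the
    model produces a final answer (or we give up after max_turns)."""
    transcript = [{"role": "user", "content": question}]
    for _ in range(max_turns):
        action = model(transcript)
        if action["type"] == "answer":
            return action["text"]
        output = TOOLS[action["tool"]](action["input"])
        transcript.append({"role": "tool", "content": output})
    raise RuntimeError("agent did not finish within max_turns")

print(agent_loop("What does echo say?"))
```

Swapping `fake_model` for a real model call is what turns this from a toy into the pattern behind the "give it a bash tool and it figures out what music you're listening to" story earlier in the conversation.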
This is something we have to figure out as a society. It feels very urgent to do this work at Anthropic because we take it seriously. We have economists, policy folks, and social impact folks talking about this so we can figure out what to do as a society—it shouldn't just be up to us.
The big question is job loss. There's Jevons paradox—as we can do more, we hire more. What have you experienced so far? Are you hiring more than if you didn't have AI?
Our team is hiring! Check out the Anthropic jobs page. Personally, this has made me enjoy my work more. I've never enjoyed coding as much as I do today because I don't have to deal with the minutiae. We hear from a lot of customers that Claude Code makes coding delightful again.
Again, I reach for the printing press analogy. Technology that was locked away to a small set of people became accessible to everyone. It was inherently democratizing. Without it, the Renaissance could never have happened, because knowledge needed to spread through written records.
I imagine a world a few years in the future where everyone is able to program. What does that unlock? Anyone can just build software anytime. In the 1400s, no one could have predicted what the printing press would enable—our microphones, the internet, etc. But in the meantime, it's going to be very disruptive and painful for a lot of people.
For folks that want to succeed in this turmoil, any advice?
First, experiment with the tools, get to know them, don't be scared of them. Dive in and stay on the bleeding edge. Second, try to be a generalist. In school, people study CS and learn to code but don't learn much else. Some of the most effective engineers and PMs I work with cross over disciplines.
On the Claude Code team, everyone codes—the PM, the engineering manager, the designer, the finance guy, the data scientist. The strongest engineers are hybrid product and infrastructure engineers, or product engineers with great design sense. The people rewarded most over the next few years won't just be AI native; they'll be curious generalists who can think about the broader problem rather than just the engineering part.
Do you find these three separate disciplines—engineering, design, product management—still useful?
In the short term, they'll persist, but we're seeing a 50% overlap. For example, I code more, whereas Kat, our PM, does more coordination, planning, and forecasting.
Stakeholder alignment.
Exactly. I think by the end of the year, these will get even murkier. In some places, the title "software engineer" is going to go away and be replaced by "builder," or everyone will be a product manager and everyone codes.
Every founder and hiring manager I speak with feels the same pressure: hire the best people as fast as possible. But recruiting is time-consuming. That's why teams like ElevenLabs, Brex, Replit, Deel, and 5,000 others use MetaView, the AI company giving high-performance teams a real unfair advantage in hiring. They give you AI agents that find candidates, take interview notes automatically, and identify the best candidates in your pipeline. MetaView customers close roles 30% faster. Try it for free at metaview.ai/lenny.
You talked about how you're enjoying coding more. I did an informal survey on Twitter where I asked engineers, PMs, and designers if they were enjoying their job more or less since adopting AI tools. For engineers and PMs, 70% said more and 10% said less. Designers, interestingly, only 55% said more and 20% said less.
That's super interesting. I'd love to talk to those people.
We're doing a follow-up poll that we'll link in the show notes. The designers didn't share a lot of detail on why it's less fun, so I'm curious what's going on there.
At Anthropic, everyone is fairly technical; we screen for that. Our designers largely code. For them, they've enjoyed it because instead of bugging engineers, they can just go in and code. Even designers who didn't code before have started to do it and unblock themselves. But I bet it's not uniform.
If you're listening to this, leave a comment if you're finding your job less fun.
We do see that people use different tools. Our designers use the Claude desktop app more to do their coding—there's a code tab right next to the chat, and it's the exact same Claude Code agent. You can run as many Claude sessions in parallel as you want; we call this "multi-Clauding."
You don't want to make people go out of their way to learn a new thing; if you can make whatever they're already doing easier, that's a better product. This is the principle of latent demand—the single most important principle in product.
Explain what this principle is and what happens when you unlock it.
Latent demand is the idea that if you build a product that can be hacked or "misused" by people for something it wasn't designed for, it helps you learn where to take the product next. An example is Facebook Marketplace. Fiona, the founding manager for that team, talks about this. Marketplace started based on the observation in 2016 that 40% of posts in Facebook groups were buying and selling stuff. People were abusing groups for commerce. If you build a better product for that, they're going to like it.
Facebook Dating started in a similar place. 60% of profile views were from people who weren't friends and were of the opposite gender. People were "creeping" on each other—so maybe if you built a product for that, it might work.
This is also where the Claude app for work came from. We saw that for six months, a lot of people using Claude Code were not using it to code. Someone was using it to grow tomato plants; another was analyzing their genome; someone was recovering wedding photos from a corrupted hard drive.
People were jumping through hoops to use a terminal to do these things. In May of last year, I walked into the office and our data scientist, Brendan, had a terminal up. I was shocked—he had figured out how to download Node.js and Claude Code and was doing SQL analysis in a terminal. When you see people "abusing" the product like that, it's a strong indicator you should build a special-purpose product for them.
There's a second dimension to latent demand: look at what the model is trying to do and make that easier. When we started Claude Code, a lot of people were putting the model in a box—"you're going to do this one component of my application." For Claude Code, we inverted that. The product is the model. We want to put minimal scaffolding around it and give it the tools so it can decide which tools to run and in what order. In research, we call this being "on distribution."
You talked about the team building that Claude "work" product in 10 days. That's insane.
Claude Code was not immediately a hit. It had inflection points—Opus 4.0 was one, then November. The growth just keeps getting steeper. But for the first few months, people didn't know what it was for. The "work" product was immediately a hit. Credit goes to Felix, Sam, Jenny, and the team. Someone was just like, "What if we take Claude Code and put it in the desktop app?" Over 10 days, they used Claude Code to build it.
We ship an entire virtual machine with it, and Claude Code wrote all of that code. We launched it early, even though it was rough. This is how we learn: we have to release things early so we can understand what people want and that shapes the future product.
That point is so interesting. It's hard to even know what the AI is capable of until people use it.
At Anthropic, as a safety lab, the other dimension is safety. One layer is alignment and mechanistic interpretability—understanding what's happening in the neurons. The second layer is evals—a laboratory setting where the model is in a petri dish. The third layer is seeing how the model behaves in the wild.
We released Claude Code early because we wanted to study safety in the wild. We used it internally for four or five months before release because it was the first big coding agent, and we weren't sure if it was safe. For the work product, it's the same: it looks good on alignment and evals, but we have to make sure it's safe in the real world. That's why we call it a research preview.
What I'm hearing is these three layers: observing the model's brain, evals, and releasing early. I haven't heard a ton about that first piece. You can peek inside the model's brain and see how it's thinking?
You should have Chris Olah on the podcast; he's the expert. He pioneered mechanistic interpretability. The idea is that your brain is a bunch of connected neurons; it turns out model neurons behave similarly. We've learned how concepts are encoded—many concepts packed into the same neurons, a phenomenon called superposition—and how the model does planning. There's strong evidence it's doing something deeper than just predicting the next token. Anthropic exists to make sure this goes well for the world, so we publish this research freely to inspire other labs to do it safely.
We call this the "race to the top" internally. For Claude Code, we released an open-source sandbox so the agent can't access everything on your system. It works with any agent, not just Claude Code.
I've noticed an anxiety people feel when their agents aren't working—a sense that an agent has a question I need to answer, or I'm losing productivity. Do you feel that?
I always have a bunch of agents running. The first thing I did when I woke up was check the Claude iOS app to see what an agent had done. It's so easy now. I don't feel the anxiety as much because I'm not locked into a terminal anymore—a third of my code is terminal, a third desktop app, and a third iOS app.
I love that you still describe it as "coding"—describing what you want, not writing actual code.
I wonder what people who used punch cards would have said if you showed them modern software. My grandpa was one of the first programmers in the Soviet Union—I was born in Ukraine. He programmed using punch cards. My mom remembers him bringing stacks of punch cards home and her drawing on them with crayons. He never saw the transition to software. There was probably an older generation that didn't take software seriously and said, "it's not really coding." But this field has always been changing.
I was born in Ukraine also.
Oh, which city?
I'm from Odessa.
Oh, me too!
What? Yeah, that's crazy. Wow. Incredible. What a different life it would have been if our families hadn't left.
I feel so lucky every day that I got to grow up here.
My family, anytime there's a toast at a meal, they toast "to America."
We do the same toast, but it's still vodka!
One tip you shared: give your team as many tokens as they want. What other advice do you have for folks building AI products?
Don't try to box the model in. People try to make it behave a very particular way with strict workflows—"you must do step one then step two." You almost always get better results if you give the model tools, a goal, and let it figure it out. Don't try to over-curate it or give it a bunch of context up front. Give it a tool so it can get the context it needs.
A second principle is the "Bitter Lesson," from Rich Sutton's blog post. The idea is that the more general model will always outperform the more specific model. Always bet on the more general model—don't try to fine-tune tiny models for everything. Scaffolding might improve performance by 10-20%, but those gains get wiped out by the next model.
Final principle: build for the model six months from now, not for the model of today. Early on, Claude Code wrote very little code because I didn't trust it—Sonnet 3.5 wasn't great at coding yet. But the bet was that at some point, the model gets good enough. We saw that inflection with Opus 4.0 in May. For startups, build for the model six months out so when it comes out, your product "clicks" and starts to work.
One way it's going to get better is tool use and computer use. Another is running for long periods of time. A year ago, Sonnet 3.5 could run for 30 seconds before going off the rails. Nowadays, Opus 4.6 will run for 20-30 minutes unattended. I have Claudes running for hours or days at a time. This will become normal.
Any pro tips for someone using Claude Code for the first time?
There's no one right way; you have to find your own path. But first: use the most capable model—currently Opus 4.6. I have "maximum effort" enabled always. It's often cheaper to use the most intelligent model because it takes fewer tokens to do the task and requires less handholding.
Second: use plan mode. I start 80% of my tasks in plan mode. All we do is inject one sentence: "Please don't write any code yet." In the terminal, it's shift-tab twice. Once the plan looks good, let it execute. It'll get it right the first time almost every time with Opus 4.6. Third: play around with different interfaces—terminal, desktop app, web, mobile, Slack. Find the thing that feels right to you.
What's your take on Codex?
I haven't really used it, but competition is good. For our team, we're just focused on solving user problems. We don't spend a lot of time looking at competing products. I love talking to users and acting on feedback.
Ben Mann had a question for you: what's your plan post-AGI?
Before I joined Anthropic, I was living in rural Japan. I was the only engineer and English speaker in town. I'd bike past rice paddies to the farmers market. We got to know our neighbors by trading pickles and miso. I got decently good at making miso. Miso teaches you to think on long time scales—white miso takes three months, red miso takes 2-4 years. You have to be patient. Post-AGI, I'd probably be making miso.
Ben asked me to ask you about miso! Boris, this was incredible. I feel like we're brothers from Ukraine now. Anything else you want to leave listeners with?
Starting at coding, then tool use, then computer use—this has been our belief as a company for a long time. Claude Code becoming a multi-billion dollar business is a total surprise in some ways, but totally unsurprising in others. Most of the world still does not use AI, so it feels like this is 1% done.
Claude Code alone is making $2 billion in revenue, and Anthropic is making $15 billion?
It's crazy. The reason it keeps improving is the users. Everyone keeps giving feedback, and that's the single most important thing.
We've reached our lightning round! First question: what are two or three books you find yourself recommending most?
*Functional Programming in Scala*. It's the best technical book I've ever read. Even if you don't use Scala, there's an elegance to functional programming and thinking in types. Second, *Accelerando* by Charles Stross. It captures the essence of this moment—the pace gets faster and faster until it ends with collective lobster consciousness orbiting Jupiter. Third, *The Wandering Earth* by Cixin Liu. His short stories have a very different perspective than Western sci-fi.
Sci-fi prepared me for where things are going. Living in rural Japan, thinking on those long time scales while reading sci-fi, I felt I had to contribute to it going well—that's why I ended up at Anthropic.
Have you read *A Fire Upon the Deep*?
Vernor Vinge, right? It's great. I like *A Deepness in the Sky* also.
Do you have a favorite recent movie or TV show?
I don't really watch TV or movies; I just don't have time.
I'm going to bring up another Cixin Liu, but the *Three-Body Problem* series on Netflix I really loved. I thought that was a great rendition of the book series.
So, the common pattern across AI leaders is no time to watch TV or movies, which I completely understand. Is there a favorite product you've recently discovered that you really love?
I'm going to shill a little bit and just say Claude because this is legitimately the one product that's been pretty life-changing for me. I have it running all the time, and the Chrome integration in particular is just really excellent. It paid a traffic fine for me; it canceled a couple subscriptions for me. The amount of tedious work it gets out of the way is awesome.
I don't know if it counts as a product, but I'll also mention another podcast that I really love—obviously besides Lenny's.
Obviously.
Yeah, it's the *Acquired* podcast by Ben and David. It's just super awesome. I feel like the way they get into business history and bring it alive is really good. I would start with the Nintendo episode if you haven't listened to it.
Great tip. With Claude, just so people understand if they haven't tried this: you type something you want to get done, and it can launch Chrome and just do things for you. I saw that someone who went on paternity leave from Anthropic had it fill out medical forms for him—those really annoying PDFs—where it just loads up the browser, logs in, fills them out, and submits them.
Yeah, exactly. And it just kind of works. We tried this experiment like a year ago and it didn't really work because the model wasn't ready, but now it just works. And it's amazing. I think a lot of people just don't really understand what this is because they haven't used an agent before. It feels very similar to me to Claude Code a year ago, but it's growing much faster than Claude Code did in the early days. It's starting to break through a bit.
And there's also this Chrome extension that you mentioned that you can just use standalone. It sits in Chrome and you can just talk to Claude looking at your screen, and have it summarize what you're looking at or do stuff.
Exactly. For people starting to use Claude, the thing I recommend is: download the Claude desktop app, go to the "work" tab—it's right next to the code tab. Start by having it use a tool: clean up your desktop, summarize your email, or respond to the top three emails.
Second: connect tools. If you say "look at my top emails and then send Slack messages" or "put them in a spreadsheet"—for example, I use it for all my project management. We have a single spreadsheet for the whole team with a row per engineer. Every week everyone fills out a status, and every Monday Claude just goes through and messages every engineer on Slack who hasn't filled out their status, so I don't have to do this anymore. This is just one prompt; it'll do everything.
Third: run a bunch of Claudes in parallel. You can have as many tasks running as you want. I'll start one task for project management, then something else, and I'll just go get a coffee while it runs.
There's a post I'll link to that shares a bunch of ways people use what was previously Claude Code and now you can do through the "work" tab. Once you see these examples, I think that's what people need to hear: "Oh wow, I didn't know I could do that."
Yeah, some of this was also inspired by you, Lenny! You had this post about 50 non-technical use cases for Claude, and one of our PMs used that to evaluate the product before we released it. At the point where Claude was able to do 48 out of the 50, they were like, "Okay, it's pretty good."
Wow, I did not know that. That is awesome. I've become an eval!
Yeah. How does that feel?
Amazing. I feel like I'm valuable to the future of AI. Two more questions. Do you have a favorite life motto that you often come back to?
Use common sense. I think a lot of the failures that I see in work environments are people just failing to use common sense. They follow a process without thinking about it, or they're working on a product that's not a good idea and they're just following the momentum. The best results I see are people thinking from first principles and developing their own common sense. If something smells weird, then it's probably not a good idea. This is the single piece of advice I give to co-workers more than anything.
I feel like that alone could be its own podcast conversation. Final question: you've become more active on Twitter. I'm curious why and what your experience has been.
For a long time, I used Threads exclusively because I helped build Threads back in the day, and I like the design—it's a very clean product. I started using Twitter because I was bored. My wife and I were traveling in Europe in December—nomading around Copenhagen and a few other places. For me, it was a "coding vacation"—coding all day is my favorite kind of vacation.
At some point, I got bored and ran out of ideas for a few hours. I opened Twitter, saw people tweeting about Claude Code, and started responding. I thought I should look for bugs and feedback people have. I think they were surprised by the pace at which we're able to address feedback nowadays. For me, it's normal: if someone has a bug, I can fix it within a few minutes because I just "Claude it," and while it works, I go answer the next thing.
The experience on Twitter has been pretty great—engaging with people, hearing about bugs and features. I saw complaints to Nikita Bier the other day about threads breaking.
Yeah, there was a bug. I hope it's fixed now.
Boris, I could chat with you for hours. I'll let you go. Thank you so much for doing this—you're wonderful. Where can folks find you online?
Yeah, find me on Threads or on Twitter. That's the easiest place. Please tag me on stuff—send bugs, feature requests, what's missing, what we can do to make the products better. I love hearing it.
Amazing. Boris, thank you so much for being here.
Cool. Thanks, Lenny.
Bye, everyone. Thank you so much for listening. If you found this valuable, you can subscribe to the show on Apple Podcasts, Spotify, or your favorite podcast app. Please consider giving us a rating or leaving a review. You can find all past episodes at lennyspodcast.com. See you in the next episode.