100% of my code is written by Claude Code. I have not edited a single line by hand since November. Every day I ship 10, 20, 30 pull requests. So at the moment I have like five agents running while we're recording this.
Yeah. Yeah. Do you miss writing code?
I have never enjoyed coding as much as I do today because I don't have to deal with all the minutiae. Productivity per engineer has increased 200%.
There's always this question, should I learn to code? In a year or two, it's not going to matter. Coding is largely solved. I imagine a world where everyone is able to program. Anyone can just build software anytime. What's the next big shift to how software is written?
Claude is starting to come up with ideas. It's looking through feedback. It's looking at bug reports. It's looking at telemetry for bug fixes and things to ship a little more like a co-worker or something like that.
A lot of people listening to this are product managers and they're probably sweating. I think by the end of the year, everyone's going to be a product manager and everyone codes. The title software engineer is going to start to go away. It's just going to be replaced by builder and it's going to be painful for a lot of people.
Today my guest is Boris Cherny, head of Claude Code at Anthropic. It is hard to describe the impact that Claude Code has had on the world. Around the time this episode comes out will be the one-year anniversary of Claude Code. And in that short time, it has completely transformed the job of a software engineer and it is now starting to transform the jobs of many other functions in tech which we talk about.
Claude Code itself is also a massive driver of Anthropic's overall growth over the past year. They just raised a round at over $350 billion. And as Boris mentions, the growth of Claude Code itself is still accelerating. Just in the past month, their daily active users has doubled. Boris is also just a really interesting, thoughtful, deep-thinking human. And during this conversation, we discover we were born in the same city in Ukraine. That is so funny. I had no idea.
A huge thank you to Ben Mann, Jenny Wen, and Mike Krieger for suggesting topics for this conversation. Don't forget to check out lennisprodpass.com for an incredible set of deals available exclusively to Lenny's newsletter subscribers. Let's get into it after a short word from our wonderful sponsors.
*[Sponsor break for DX, Sentry]* Boris, thank you so much for being here and welcome to the podcast.
Yeah, thanks for having me on.
I want to start with a spicy question. About six months ago—I don't know if people even remember this—you left Anthropic. You joined Cursor, and then two weeks later you went back to Anthropic. What happened there? I don't think I've ever heard the actual story.

It's the fastest job change that I've ever had.
I joined Cursor because I'm a big fan of the product and honestly I met the team and I was just really impressed. They're an awesome team. I still think they're awesome and they're just building really cool stuff and they saw where AI coding was going before a lot of people did. So the idea of building good product was just very exciting for me.
I think as soon as I got there, what I started to realize is that what I really missed about Anthropic was the mission. That's what originally drew me to Ant too, because before I joined Anthropic, I was working in big tech, and at some point I wanted to work at a lab to help shape the future of this crazy thing that we're building in some way. The thing that drew me to Anthropic was the mission. It's all about safety. When you talk to people at Anthropic, if you ask them why they're here, the answer is always going to be safety. That kind of mission-drivenness just really resonated with me. I know personally it's something I need in order to be happy. I found that whatever the work might be, no matter how exciting—even if it's building a really cool product—it's just not really a substitute for that. So for me it was pretty obvious, pretty quickly, that I was missing that.
Okay. So let me follow the thread of just coming back to Anthropic and the work you've done there. This podcast is going to come out around the year anniversary of launching Claude Code. So I'm going to spend a little time just reflecting on the impact that you've had. There's this report by SemiAnalysis that showed that 4% of all GitHub commits are authored by Claude Code now. And they predicted it'll be a fifth of all code commits on GitHub by the end of the year. The way they put it is "while we blinked, AI consumed all software development."
The day that we're recording this, Spotify just put out this headline that their best developers haven't written a line of code since December thanks to AI. More and more of the most advanced senior engineers, including you, are sharing the fact that you don't write code anymore—that it's all AI generated. And many aren't even looking at code anymore is how far we've gotten, in large part thanks to this little project that you started and that your team has scaled over the past year. I'm curious just to hear your reflections on this past year and the impact that your work has had.
These numbers are just totally crazy, right? Like 4% of all commits in the world is just way more than I imagined and like you said, it still feels like the starting point. These are also just public commits. So we think if you look at private repositories, it's quite a bit higher than that. And I think the craziest thing for me isn't even the number that we're at right now, but the pace at which we're growing because if you look at Claude Code's growth rate across any metric, it's continuing to accelerate. It's not just going up, it's going up faster and faster.
When I first started Claude Code, it was just supposed to be a little hack. We broadly knew at Anthropic that we wanted to ship some kind of coding product. For Anthropic, for a long time we were building the models in this way that kind of fit our mental model of how we build safe AI—where the model starts by being really good at coding, then it gets really good at tool use, then it gets really good at computer use. Roughly this is like the trajectory and we've been working on this for a long time.
When you look at the team that I started on, it was called the Anthropic Labs team. Mike Krieger and Ben Mann just kicked this team off again for round two. The team built some pretty cool stuff: we built Claude Code, we built MCP, we built the desktop app. You can see the seeds of this idea: it's coding, then tool use, then computer use. The reason this matters for Anthropic is safety. The thing that's happened in the last year is that for at least for engineers, the AI doesn't just write the code. It's not just a conversation partner, but it uses tools. It acts in the world.
And I think now with co-work, we're starting to see the transition for non-technical folks also. For a lot of people that use conversational AI, this might be the first time that they're using the thing that acts. It can use your Gmail, it can use your Slack, it can do all these things for you and it's quite good at it—and it's only going to get better from here.
For a long time there was this feeling that we wanted to build something but it wasn't obvious what. When I joined Ant I spent one month hacking and built a bunch of weird prototypes. Most of them didn't ship and weren't even close to shipping—it was just understanding the boundaries of what the model can do. Then I spent a month doing post-training to understand the research side of it. I find that to do good work you really have to understand the layer under the layer at which you work. With traditional engineering work, you want to understand the infrastructure, the virtual machine, the language. If you're working in AI, you just really have to understand the model to some degree to do good work.
So, I took a little detour to do that and then I came back and just started prototyping what eventually became Claude Code. The very first version of it—I have a video recording of this demo I posted—it was called ClaudeCLI back then. I showed off how it used a few tools and the shocking thing for me was that I gave it a bash tool and it was able to use that to write code to tell me what music I'm listening to when I asked it. This is the craziest thing, because I didn't instruct the model to use this tool for this. The model was given this tool and it figured out how to use it to answer this question.
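The loop Boris is describing—hand the model one generic bash tool and let it decide when to invoke it—can be sketched roughly like this. To keep the example self-contained and runnable, the model call is stubbed out: `fake_model` and its hard-coded decision stand in for a real Anthropic API call, and none of this reflects Claude Code's actual implementation.

```python
import subprocess

# One generic tool: run an arbitrary shell command and return its output.
# The agent is never told *when* to use it -- the model decides that itself.
def run_bash(command):
    result = subprocess.run(
        command, shell=True, capture_output=True, text=True, timeout=30
    )
    return result.stdout + result.stderr

# Stubbed "model": in a real agent this would be a chat-completion call that
# returns either a tool invocation or a final answer. Here one plausible
# decision is hard-coded so the sketch runs without an API key.
def fake_model(question, tool_result):
    if tool_result is None:
        # The model chooses, on its own, to answer via the bash tool.
        return {"type": "tool_use", "command": "echo 'Bohemian Rhapsody'"}
    return {"type": "answer", "text": f"You're listening to: {tool_result.strip()}"}

def agent_loop(question):
    tool_result = None
    while True:
        step = fake_model(question, tool_result)
        if step["type"] == "tool_use":
            tool_result = run_bash(step["command"])
        else:
            return step["text"]

print(agent_loop("What music am I listening to?"))
```

The point of the sketch is the division of labor: the scaffolding only executes commands and feeds back output; deciding *that* a shell command can answer the question is left entirely to the model.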
I started prototyping this a little more. I made a post about it and announced it internally and it got two likes. That was the extent of the reaction because people internally, when you think of coding tools, you think of an IDE—sophisticated environments. No one thought that this thing could be terminal-based. I built it in a terminal because for the first couple months it was just me, so it was the easiest way to build. For me, this is a pretty important product lesson: you want to under-resource things a little bit at the start.
Then we started thinking about what other form factors we should build, and we decided to stick with the terminal for a while because the model was improving so quickly. We felt that there wasn't really another form factor that could keep up with it. Honestly, this was just me struggling with what we should build. Late at night, this is just something I was thinking about: the model is continuing to improve, what do we do? The terminal was honestly just the only idea that I had. And yeah, it ended up catching on pretty quickly after I released it. It became a hit at Anthropic; Ben Mann nudged me to make a DAU chart, and daily active users went vertical almost immediately.
In February, we released it externally. Something that people don't really remember is Claude Code was not initially a hit. It got a bunch of users, there were early adopters, but it took many months for everyone to really understand what this thing is. Part of the reason Claude Code works is this idea of latent demand—bringing the tool to where people are and making existing workflows easier. Of course now Claude Code is available in the iOS and Android apps, the desktop app, on the website, and as IDE extensions in Slack and GitHub. All these places where engineers are, it's a little more familiar, but that wasn't the starting point.
At the beginning it was kind of a surprise that this thing was even useful. As the product grew, people around the world from small startups to the biggest FAANG companies started using it. Just reflecting back, it's been such a humbling experience because we just keep learning from our users. None of us really know what we're doing; we're just trying to figure it out along with everyone else and the single best signal for that is feedback from users. I've been surprised so many times.
It's incredible how fast something can change in today's world. You launched this a year ago and it wasn't the first time people could use AI to code, but in a year the entire profession of software engineering has dramatically changed. All these predictions that code would be 100% written by AI—everyone's like "no, that's crazy." Now it's happening exactly as they said. Things move so fast and change so fast now.
Yeah, it's really fast. Back at Code with Claude in May, that was our first developer conference. I did a short talk and in the Q&A people were asking what are your predictions for the end of the year. My prediction back in May of 2025 was by the end of the year you might not need an IDE to code anymore. I remember the room audibly gasped. It was such a crazy prediction but at Anthropic this is just the way we think about things—exponentials. Three of our co-founders were the first three authors on the scaling laws paper. If you look at the exponential of the percent of code written by Claude at that point and just trace the line, it's pretty obvious we're going to cross 100% by the end of the year even if it does not match intuition at all. So all I did was trace the line and yeah, in November that happened for me personally and that's been the case since.
I thought what you just shared there about the journey was really interesting—this idea of just playing around and seeing what happens. This comes up with Anthropic a lot, just like Peter was playing around and then a thing happened. It feels like that's a central ingredient in a lot of the biggest innovations in AI: people just sitting around trying stuff and pushing the models further than most other people.
You can't force innovation. There's no roadmap for innovation. You just have to give people space and psychological safety—that it's okay to fail, it's okay if 80% of the ideas are bad. You also have to hold them accountable a bit: if the idea is bad, you cut your losses and move on. In the early days of Claude Code, I had no idea that this thing would be useful at all. In February when we released it, it was writing maybe 20% of my code. In May, it was maybe 30%. I was still using Cursor for most of my code. It only crossed 100% in November. But even from the earliest day, it just felt like I was onto something. I was just spending every night and weekend hacking on this. Sometimes you find a thread and you just have to pull on it.
So at this point, 100% of your code is written by Claude Code. Is that kind of the current state of your coding?
Yeah. 100% of my code is written by Claude Code. I am a fairly prolific coder and this has been the case since back at Instagram—I was one of the top few most productive engineers. That's still the case here at Anthropic.
Wow. Even as head of the team.
Yeah. I still do a lot of coding. Every day I ship like 10, 20, 30 pull requests.
Every day. Good god.
100% written by Claude Code. I have not edited a single line by hand since November. I do look at the code. I don't think we're at the point yet where you can be totally hands-off, especially when a lot of people are running the program. You have to make sure that it's correct and safe. We also have Claude doing automatic code review for everything. Here at Anthropic, Claude reviews 100% of pull requests. There's still a layer of human review after it, but you still want a human looking at the code unless it's pure prototype code.
What's kind of the next frontier? At this point 100% of your code is being written by AI. This is clearly where everyone is going. What's kind of the next big shift to how software is written that either your team's already operating in or you think will head towards?
I think something that's happening right now is Claude is starting to come up with ideas. It's looking through feedback, bug reports, telemetry, and it's starting to come up with ideas for bug fixes and things to ship. It's starting to get a little more like a co-worker. Second, we're starting to branch out of coding a little bit. Coding is largely solved. Now we're thinking about what's next. There's a lot of things that are adjacent to coding. I use co-work every day to do all sorts of things that are not related to coding. I had to pay a parking ticket the other day—I just had co-work do it. All of my project management for the team, co-work does all of it: syncing stuff between spreadsheets, messaging people on Slack and email. The frontier is something like this. Coding is pretty much solved and over the next few months across the industry it's going to become increasingly solved for every kind of codebase and tech stack.
This idea of helping you come up with what to work on is so interesting. Product managers are probably sweating. How do you use Claude for this? Do you just talk to it?
Honestly, the simplest thing is opening Claude Code or co-work and pointing it at a Slack thread. We have this channel that's all the internal feedback about Claude Code. Since we first released it, it's just been this firehose of feedback. In the early days, I would fix every single thing as fast as I possibly could—within minutes. This fast feedback cycle encourages people to give more feedback because it makes them feel heard. Usually feedback goes into a black hole. If you make people feel heard, they want to contribute. So now I do the same thing, but Claude does a lot of the work. I point it at the channel and it says, "Okay, here's a few things that I can do. I just put up a couple PRs. Want to take a look?" And I'm like, "Yeah."
Have you noticed that it's getting much better at this? Because building is solved, code review is the next bottleneck, and prioritization is the next open question. Has it gotten a lot better with, say, Opus 4.6?
Yeah, it's improved a lot. Some of it is training specific to coding—it's the best coding model in the world—but training outside of coding translates well too. There is this transfer where you teach the model to do X and it gets better at Y. The gains have just been insane. Since we introduced Claude Code, we probably 4x'd the engineering team, but productivity per engineer has increased 200% in terms of pull requests. This number is just crazy. Back at Meta, one of my responsibilities was code quality for Facebook, Instagram, and WhatsApp. A lot of that was about productivity. In a year with hundreds of engineers you would see a gain of a few percentage points. Nowadays, seeing gains of hundreds of percentage points is absolutely insane. What's also insane is how normalized this has all become. It's so easy to get used to it, but it's important to recognize this is crazy.
There's sort of a downside because the model changes so often that I sometimes get stuck in an old way of thinking. I find that new people on the team or new grads do stuff in a more AGI-forward way than I do. A couple months ago there was a memory leak in Claude Code. Traditionally you take a heap snapshot, put it into a debugger, and figure out what's going on. I was looking through traces and the engineer that was newer on the team just had Claude Code do it. Claude Code did exactly the same thing I was doing—took the snapshot, wrote a little tool for itself to analyze it—and it found the issue and put up a pull request faster than I could. For those of us using the model for a long time, you have to transport yourself to the current moment and not get stuck back in an old model.
I hear you have these very specific principles for your team. I believe one of them is "What's better than doing something? Having Claude do it." It feels like you almost forgot that principle with the memory leak.
There's this interesting thing that happens when you under-fund everything a little bit because then people are forced to "Claudify." Sometimes we just put one engineer on a project and the way they ship quickly is because they want to ship quickly—that intrinsic motivation. If you have Claude, you can use that to automate a lot of work. So one principle is under-funding. Another is encouraging people to go faster. If you can do something today, you should just do it today. Early on, our only advantage was speed to compete in this crowded market. A really good way to go faster is to just have Claude do more stuff.
This idea of under-funding is so interesting. You're saying you will do better and get more out of the AI tooling if you have fewer people working on something.
If you hire great engineers, they'll figure out how to do it. My advice to CTOs generally is don't try to cost-cut at the beginning. Just give engineers as many tokens as possible. We're starting to see this come up as a perk at some companies: "unlimited tokens." It makes people free to try crazy ideas. If an idea works, then you can optimize and cost-cut—maybe use Haiku instead of Opus—but at the beginning you just want to throw a lot of tokens at it. Be loose with your token cost.
At small scale you're not going to get a giant bill. If it's an individual engineer, the token cost is relatively low relative to their salary. At Anthropic, we're starting to see some engineers that are spending hundreds of thousands a month in tokens. So we're starting to see this a bit.
Do you miss writing code? Is this something you're sad about?
For me, learning engineering was very practical. I learned so I could build stuff. I taught myself engineering early on—I learned to code so that I could cheat on a math test in middle school using a TI-83 Plus. I programmed the answers in, and then when the tests got too hard, I had to write a solver. I figured out you could get a cable and give the program to the rest of the class. We all got A's, and then we all got caught. The teacher told us to knock it off. Programming is a way to build a thing; it's not the end in itself.
At some point I fell into the rabbit hole of the beauty of programming. I wrote a book about TypeScript and started the world's biggest TypeScript meetup. There is a beauty to functional programming and type systems—a buzz you get when you solve a complicated math problem. But it's really not the end of it. For me coding is very much a tool. Not everyone feels this way. Lena on the team still writes C++ on the weekends by hand because she enjoys it. Everyone is different. There's always space to enjoy the art if you want.
Do you worry about your skills atrophying?
It's just the way that it happens. Software is relatively new—writing programs on virtual machines has only been around since the 1960s. Before that it was punch cards, switches, or pen and paper. Programming has always changed. Pretty soon, understanding the layer under the layer just won't really matter. It'll be like assembly code running underneath what the programmer writes. As a programmer, learning new things doesn't feel that new because there are always new frameworks. But some people will feel a sense of loss or nostalgia. Elon was saying, "Why isn't the AI just writing binary?" Yeah, it totally could do that if you wanted it to.
So in a year or two, you don't really need to learn to code. What's the right historical analog for this shift?
The thing that's come closest for me is the printing press. In the mid-1400s, literacy was sub 1%—it was scribes doing the writing for lords who weren't literate themselves. The Gutenberg printing press came along and in 50 years there was more printed material created than in the thousand years before. The cost went down 100x. Literacy took a while to catch up because education is hard, but over 200 years it went up to 70%. There was an interview with a scribe in the 1400s who was excited because he didn't like copying books; he liked drawing the art and doing the book binding. As an engineer I feel a parallel: I don't have to do the tedious work of coding anymore. The fun part is figuring out what to build, talking to users, collaborating.
And your tool allows anyone to do this. I was an engineer for 10 years and remember spending so much time on libraries and dependencies. Now it's just "help me figure this out."
Exactly. I was talking to an engineer who's been writing a service in Go for a month. It's working well, but he said, "I still don't really know Go." We're going to see more of this. If you know it works correctly and efficiently, you don't have to know all the details.
What is the next role that will be most impacted by AI?
Roles adjacent to engineering: product managers, design, data science. It will expand to pretty much any kind of work you can do on a computer. A year ago, no one really knew what an agent was. Nowadays it's just the way we do our work. Non-technical folks use conversational AI, but no one has really used an agent before. This word "agent" gets thrown around and misused, but it has a specific meaning: an AI that's able to use tools. It can act, use Google Docs, send email, run commands. Any job where you use computer tools is next. This is something we have to figure out as a society. At Anthropic we have economists and policy folks talking about this, because it shouldn't be up to us.
The big question is jobs. Jevons paradox suggests that as we're able to do more, we end up hiring more. What have you experienced so far?
We're hiring for the Claude Code team. Personally, all this stuff has made me enjoy my work more. I've never enjoyed coding as much as I do today because I don't have to deal with the minutiae. The printing press was democratizing—the Renaissance couldn't have happened without knowledge spreading through written records. It's about what this enables next. We couldn't be talking today if the printing press hadn't been invented. I imagine a world where everyone is able to program. It's going to be very disruptive and painful for a lot of people, and we have to figure this out together.
Any advice for folks wanting to succeed in this turmoil?
Experiment with the tools, don't be scared of them, be on the bleeding edge. Try to be a generalist. In school, people study CS and don't learn much else. On the Claude Code team, everyone codes—the PM, the EM, the designer, the finance guy, the data scientist. Everyone. The strongest engineers are hybrids: product and infra, or product and design, or engineers with a sense of business. Curious generalists who cross multiple disciplines will be rewarded.
Do you find those three separate disciplines—engineering, design, product management—still useful?
In the short term they'll persist, but there's a 50% overlap. Our PM does more coordination and planning, while I code more. By the end of the year the title "software engineer" might start to go away and be replaced by "builder." Everyone will be a product manager and everyone codes.
*[Sponsor break for MetaView]* I did a poll on Twitter: are you enjoying your job more or less since adopting AI? 70% of engineers and PMs said "more." Only 55% of designers said "more," and 20% said "less."
That's super interesting. I'd love to talk to those people. Everyone at Anthropic is fairly technical—we screen for that. Our designers largely code. They enjoy it because instead of bugging engineers they can just go in and code. They can unblock themselves. But I bet the experience isn't uniform.
Designers use the Claude desktop app a lot more to code. There's a code tab right next to co-work—same agent as Claude Code. You can run as many Claude sessions in parallel as you want; we call it "multi-Clauding." It's more native for non-engineers. Bringing the product to where the people are is the single most important principle in product: latent demand.
Talk about that principle.
Latent demand is the idea that if you build a product in a way that can be hacked or misused by people, it helps you learn where to take the product next. Facebook Marketplace started when they observed that 40% of posts in groups were buying and selling. No one designed the product for this, but people figured it out. Facebook Dating started when they saw that 60% of profile views came from people who weren't friends and were of the opposite gender. Co-work came from seeing that for six months, people using Claude Code were not using it to code—growing tomatoes, analyzing genomes, recovering corrupted hard drives. Our data scientist Brendan was doing SQL analysis in a terminal. When you see people "abusing" the product to do something useful, build a product for that.
There's a modern framing: look at what the model is trying to do and make that easier. Usually people put the model in a box as one component of a bigger system. For Claude Code, we said the product IS the model. We put minimal scaffolding around it so it can decide which tools to run and in what order. In research we call this "being on distribution."
You mentioned co-work was built in 10 days, and it was quickly used by millions.
Claude Code was not immediately a hit. It inflected with Opus 4 and again in November. Co-work was immediately a hit. We saw people using Claude Code for non-technical things and someone said, "what if we just take Claude Code and put it in the desktop app?" Over 10 days they used Claude Code to build it. We ship an entire virtual machine with it. We launched it early, still rough around the edges, but that's how we learn.
As a safety lab, the other dimension is safety. Lowest level is alignment and mechanistic interpretability—understanding what's happening in the neurons. We can monitor if a neuron related to "deception" is activating. Second layer is evals—studying the model in a "petri dish." Third layer is seeing how it behaves in the wild. We used Claude Code internally for 5 months before releasing it because we weren't sure if it was safe—it was the first big agent released. We release co-work as a "research preview" to make sure it's safe in the real world.
You should have Chris Olah on. He invented "mechanistic interpretability"—studying model neurons similarly to how we study animal neurons. We've learned how the model maps concepts and how it does planning. There's superposition—a single neuron might correspond to a dozen concepts. We open source a lot of this work to inspire other labs to do it safely. We released an open source sandbox for Claude Code so others could use it with any agent. We call this the "race to the top."
People working with agents feel anxiety when their agents aren't working—a sense of losing productivity. Do you feel that?
I have like five agents running at any moment. I wake up and check them on the iOS app. It's so easy now. I don't feel locked into a terminal anymore. A third of my code is in the terminal, a third in the desktop app, and a third in the iOS app. I did not think that would be the way I code in 2026. Coding now is describing what you want, not writing actual code.
My grandpa was one of the first programmers in the Soviet Union. He programmed using punch cards. He would bring stacks home and my mom would draw on them with crayons. He never saw the software transition. An older generation of programmers didn't take software seriously—they'd say "it's not really coding."

I was born in Ukraine also.
I'm from Odessa.
Me too!
What?! That's crazy. Incredible. I came in '95.
We left in '88. I feel so lucky every day to grow up here. My family anytime there's a toast they're just like "to America."
We do the same toast, but it's still vodka.
Any other advice for folks building AI products?
Don't try to box the model in. People try to make it behave in a very particular way with strict workflows. You get better results if you just give the model tools and a goal and let it figure it out. Give it a tool so it can get the context it needs. "Ask not what the model can do for you."
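One way to picture the "tools and a goal" advice versus boxing the model in: instead of a fixed pipeline that calls the model at predetermined steps, the scaffolding exposes a registry of capabilities and lets the model choose among them. This is an illustrative sketch only—the tool names are made up, and `SCRIPTED_STEPS` stands in for the model's turn-by-turn choices so the example runs without an API key.

```python
# Tool registry: the agent exposes capabilities, not a fixed sequence of
# steps. In a real agent, each tool would do actual work; these are stubs.
TOOLS = {
    "read_file": lambda path: f"<contents of {path}>",
    "search": lambda query: f"<results for {query!r}>",
}

# Stubbed planner: a real agent would ask the model which tool (if any) to
# call next, given the goal and the transcript so far. Two steps are
# scripted here so the sketch is self-contained.
SCRIPTED_STEPS = [
    ("search", "open bug reports"),
    ("read_file", "bugs.md"),
]

def run_agent(goal):
    """Give the model a goal and the tool registry; it picks the tools."""
    transcript = [f"goal: {goal}"]
    for tool_name, arg in SCRIPTED_STEPS:  # model-chosen in a real loop
        transcript.append(TOOLS[tool_name](arg))
    return transcript

for entry in run_agent("triage this week's bug reports"):
    print(entry)
```

The design choice is that the order of operations lives in the model, not in the code—the opposite of a rigid workflow where the code decides the sequence and the model only fills in blanks at each step.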
Second is "The Bitter Lesson" by Rich Sutton: the more general model will always outperform the more specific model. Always bet on the more general model. Scaffolding might improve performance by 10-20%, but those gains get wiped out by the next model. It's better to just wait.
We bet on building for the model six months from now. Early versions wrote so little of my code because I didn't trust Sonnet 3.5. The bet was that at some point the model gets good enough. That happened with Opus 4. Build for the model 6 months out. Product-market fit won't be good for the first 6 months, but you'll hit the ground running when the new model drops. It's going to get better at running for long periods of time unattended. Opus 4.6 will run 20-30 minutes unattended. They can run for days.
Pro tips for someone using Claude Code?
There's no one right way. Use the most capable model (Opus 4.6) with "maximum effort" enabled. Less intelligent models take more tokens in the end to do the same task. Second, use plan mode. I start 80% of tasks in plan mode. It injects a sentence: "please don't write any code yet." Once the plan looks good, I auto-accept edits. It'll one-shot it. Third, play around with different interfaces—mobile, desktop, Slack. Find what feels right.
What's your take on Codex?
I haven't really used it. It looked a lot like Claude Code, which was flattering. Competition is good. We're just focused on solving problems for users.
Plan post-AGI?
Before Anthropic I lived in rural Japan—total opposite of SF. I got decently good at making miso. White miso takes 3 months, red miso takes 2-4 years. It teaches you to think on long time scales. Post-AGI I'd probably be making miso.
Is there anything you want to double down on?
Starting at coding, then tool use, then computer use—that's the way we think about safety and model development. Claude Code becoming a huge business is a surprise in some ways, but totally unsurprising in others. It feels like we're only 1% done.
Claude Code alone is making $2 billion in revenue. Anthropic is making $15 billion. It's early.
The only reason it keeps improving is that everyone is using it and giving feedback. Talking to users is the single most important thing.
Lightning round. Books?
1. *Functional Programming in Scala*. The best technical book I've ever read. 2. *Accelerando* by Charles Stross. It captures the essence of this fast-paced moment and the approach to the singularity. 3. *The Wandering Earth* by Liu Cixin. Beautifully written Chinese sci-fi.
Movie or TV show?
I don't really have time to watch TV. I did love the *Three Body Problem* series on Netflix.
Favorite product?
Co-work. The Chrome integration is excellent—it paid a traffic fine and canceled subscriptions for me. Also the *Acquired* podcast.
Great tip. Someone at Anthropic had co-work fill out medical PDF forms for him.
It just works now. It's growing much faster than Claude Code did. I recommend starting by having it use a tool (clean up your desktop, summarize email) and then connecting tools—say, messaging everyone on Slack who hasn't filled out a status update.
Life motto?
"Use common sense." Failures come from following process without thinking. If something smells weird, it's probably not a good idea.
Why have you been more active on Twitter lately?
I was bored on a coding vacation in Europe in December. I intro'd myself and asked for bugs. People were surprised by the pace at which we fixed things.
Boris, thank you so much for being here. Where can folks find you?
Find me on Threads or Twitter. Tag me on stuff—bugs, feature requests. I love hearing it.
Amazing. Boris, thank you. *[Podcast outro]*