  • Anthropic’s Growth Strategy Explained: $1B to $19B in 14 Months, Automating Experiments With Claude, and Why Old Playbooks Are Dead (Lenny’s Podcast Recap)

    TLDW (Too Long, Didn’t Watch)

    Amol Avasare, Head of Growth at Anthropic, sat down with Lenny Rachitsky to explain how Anthropic grew from $1 billion to over $19 billion in annual recurring revenue in just 14 months. He breaks down their internal tool called CASH (Claude Accelerates Sustainable Hypergrowth) that automates growth experimentation, why 50 to 70 percent of traditional growth playbooks are now obsolete, why the PM-to-engineer ratio may need to flip, and how Anthropic’s early bet on AI coding created a research flywheel that competitors are only now starting to copy. He also shares how he cold emailed his way into the job, why activation is the single hardest problem in AI products, and how he uses Cowork to detect team misalignment across Slack channels automatically.

    Key Takeaways

    1. Anthropic’s growth trajectory is historically unprecedented. Revenue went from $1 billion at the start of 2025 to over $19 billion ARR by February 2026. That 19x growth in 14 months dwarfs companies like Atlassian, Snowflake, and Palantir, which took 15 to 20 years to reach $4.5 to $6 billion ARR. The number Amol quoted was already outdated by the time the episode aired.

    2. Anthropic is automating growth experimentation with an internal tool called CASH. CASH stands for Claude Accelerates Sustainable Hypergrowth. The growth platform team uses Claude to identify opportunities, build experiments (mostly copy changes and minor UI tweaks so far), test them against quality and brand standards, and analyze results. Amol describes the current win rate as roughly equivalent to a junior PM with two to three years of experience, but notes it was not possible at all before Opus 4.5 and has improved significantly with Opus 4.6. Human review is still in the loop but decreasing week over week.

    3. Activation is the single highest-leverage growth problem in AI. The core challenge is capability overhang: models are improving so fast that users do not know what they can do. By the time you have tested and optimized onboarding for one model’s capabilities, the next model has already shipped with entirely new features that make your learnings obsolete. Anthropic addresses this by adding intentional friction in onboarding to understand who users are and funnel them to the right products and features.

    4. Anthropic indexes 70/30 toward big bets, the opposite of most growth teams. Traditional growth teams spend 60 to 70 percent of effort on small to medium optimizations. Anthropic flips that ratio because they believe the product value delivered two years from now will be 100x to 1,000x what it is today. In that exponential environment, micro-optimizations capture a negligible percentage of future value. Large strategic bets are where the leverage lives.

    5. The PM-to-engineer ratio may need to flip. Engineers are getting 2 to 3x more productive with tools like Claude Code, effectively turning a team of 5 engineers into the equivalent of 15 to 20. But PMs and designers have not seen the same multiplier. The result is that product management and design are “absolutely squeezed.” Anthropic is responding by hiring more PMs and deputizing product-minded engineers to act as mini-PMs on projects under two weeks. The counterintuitive insight: companies may need more PMs, not fewer, as AI accelerates engineering output.

    6. Cold emailing still works if you do it right. Amol got his job by cold emailing Mike Krieger, Anthropic’s Chief Product Officer (and co-founder of Instagram), at a time when no growth role was even listed. Key tactics: use a high-converting subject line you have tested over time, find personal email addresses instead of competing in crowded LinkedIn inboxes, keep the message extremely short, and follow up relentlessly until someone explicitly asks you to stop.

    7. PRDs are largely obsolete at Anthropic. Amol estimates that 60 to 80 percent of what his team ships does not have a formal PRD. For small projects, coordination happens entirely in Slack. For larger initiatives, he will sometimes throw his thoughts into Cowork five minutes before a kickoff meeting to generate a rough document. His default philosophy: if you can skip the doc and jump straight to prototyping or action, do it.

    8. The AI coding bet created a research flywheel. Anthropic’s deep focus on coding was not just a commercial play. A document written by co-founder Ben Mann in 2021, just months after the company was founded, laid out the case for focusing on AI coding because better coding models would accelerate their own researchers, which would produce better models, which would produce better coding tools, in a compounding loop. This is something competitors are only now starting to recognize and copy.

    9. Cowork is being used to detect organizational misalignment. Amol runs a weekly scheduled task in Cowork that uses the Slack MCP to scan conversations across the company and surface areas of potential misalignment. He describes cases where this caught teams about to do overlapping work or spin their wheels on conflicting priorities. He also uses Cowork to simulate coaching sessions with his manager, Ami Vora, by asking Claude to analyze her public writing and internal Slack activity and then deliver feedback from her perspective.

    10. Anthropic’s culture is its most defensible moat. Amol describes a culture where every single person is fully engaged, nobody is checked out, and there is radical transparency through “notebook channels” on Slack where anyone, including leadership, shares their thinking publicly. Employees openly challenge Dario Amodei in these channels after all-hands meetings. These notebook channels also serve a practical purpose: they become training data that helps Claude understand how different teams think and operate.

    Detailed Summary

    The Cold Email That Started It All

    Amol Avasare was not recruited through a job listing, a referral, or a sourcing pipeline. He cold emailed Mike Krieger, Anthropic’s CPO and the co-founder of Instagram, with a short pitch: he loved the product, thought Anthropic badly needed a growth team, and wanted to talk. At the time, Anthropic had no growth roles posted. They were just beginning to think about it internally, and the timing was perfect.

    Amol’s approach to cold email is methodical. He has a subject line formula he has refined over years of founder outreach that produces abnormally high open rates (he declined to share the exact copy). He targets personal email addresses rather than work inboxes or LinkedIn, where competition for attention is fierce. The message itself is brutally short: who he is, why he would be a fit, and a request to chat. His follow-up philosophy is to keep reaching out until someone tells him to stop. Krieger responded on the first attempt.

    What $1B to $19B in 14 Months Actually Feels Like

    From the inside, Anthropic’s growth does not feel like a victory lap. Amol describes it as the hardest job he has ever had, harder than being a founder and harder than investment banking. About 70 percent of his time goes to what the team calls “success disasters,” which are problems created by things going extremely well. All the charts are green and up and to the right, but the underlying infrastructure, processes, and systems are constantly breaking under the strain of hypergrowth.

    The revenue trajectory tells the story: $0 to $100 million in 2023, $100 million to $1 billion in 2024, $1 billion to roughly $10 billion in 2025, and already $19 billion ARR by the end of February 2026. Amol notes that at the end of 2024, Dario Amodei was pushing for growth targets that the team thought were impossible. Those targets were hit and exceeded. The internal culture has adapted accordingly. Linear charts are considered uncool. Everything is presented on a log-linear scale.
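
    The arithmetic behind those curves is worth making explicit. The sketch below is a quick illustration in Python, not something from the episode: it computes the compound monthly growth rate implied by going from $1 billion to $19 billion in 14 months, and shows why a constant-percentage curve like that plots as a straight line on the log-linear charts the team prefers.

        import math

        start_arr, end_arr, months = 1e9, 19e9, 14

        # Compound monthly growth implied by 1B -> 19B ARR over 14 months.
        monthly_growth = (end_arr / start_arr) ** (1 / months) - 1   # ~23% per month
        doubling_time = math.log(2) / math.log(1 + monthly_growth)   # ~3.3 months

        # On a log-linear chart, constant-percentage growth is a straight line:
        # log10(ARR_t) = log10(ARR_0) + t * log10(1 + monthly_growth).
        for t in range(0, months + 1, 2):
            arr = start_arr * (1 + monthly_growth) ** t
            print(f"month {t:2d}: ${arr / 1e9:5.1f}B   log10(ARR) = {math.log10(arr):.2f}")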

    Why Activation Is the Hardest Problem in AI

    The central growth challenge for AI products is not acquisition. It is activation: getting users to understand what the product can actually do for them. Amol frames this as a capability overhang problem. Models are improving so rapidly that even internal teams struggle to keep up with what is newly possible. If Anthropic employees have to carve out dedicated time to explore a new model’s capabilities, the average user is even further behind.

    The danger is that someone signs up for Claude, asks it about the weather, and walks away thinking that is all it does. The product development cycle for onboarding is also under strain: by the time you have run tests, gathered learnings, and shipped an optimized activation flow for one model generation, the next model has shipped with capabilities that make your work irrelevant.

    Anthropic’s approach borrows from Amol’s experience at Mercury and MasterClass. They add deliberate friction to the signup flow, asking users questions about who they are and what they want to accomplish. This allows them to route users to the right products and features. The data also feeds downstream into lifecycle marketing and ad targeting. Amol has seen this pattern work consistently across every company he has worked at: the right friction, applied at the right time, outperforms frictionless flows that dump users into a blank canvas with no guidance.

    The CASH System: Automating Growth Experimentation

    Anthropic’s growth platform team, led by Alexey Komissarouk (who teaches growth engineering at Reforge), has built an internal system called CASH. The name stands for Claude Accelerates Sustainable Hypergrowth.

    CASH operates on a four-stage loop. First, Claude identifies growth opportunities by analyzing trends, metrics, and past experiment results. Second, Claude builds the actual feature or change. Third, Claude tests the output against quality and brand standards. Fourth, Claude analyzes the results and gathers learnings after the experiment ships.
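
    To make the loop concrete, here is a minimal, hypothetical sketch of what one CASH-style cycle could look like. It is an illustration built on assumptions, not Anthropic’s implementation: call_model, ship, and human_approve are stand-ins for a model API, a deploy step, and the human review gate described in the episode.

        # Hypothetical sketch of a CASH-style cycle; not Anthropic's actual code.
        # `call_model`, `ship`, and `human_approve` are caller-supplied stand-ins.

        def run_cash_cycle(call_model, ship, human_approve, metrics, past_results):
            # 1. Identify: ask the model for opportunities given metrics and history.
            ideas = call_model(
                f"Given metrics {metrics} and past experiments {past_results}, "
                "list growth experiment ideas, one per line."
            ).splitlines()

            learnings = []
            for idea in ideas:
                # 2. Build: generate the concrete change (copy or a minor UI tweak).
                change = call_model(f"Draft the copy or UI change for: {idea}")

                # 3. Test: check the draft against quality and brand standards.
                review = call_model(f"Does this meet brand and quality standards? Answer yes or no. {change}")
                if "yes" not in review.lower():
                    continue

                # Human approval stays in the loop before anything ships.
                if human_approve(idea, change):
                    ship(change)
                    # 4. Analyze: collect results and learnings once the experiment has run.
                    learnings.append(call_model(f"Summarize results and learnings for: {idea}"))

            return learnings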

    Currently, CASH handles mostly copy changes and minor UI tweaks. The win rate is comparable to a junior PM with two to three years of experience. A senior PM would still do better. But the trajectory matters: this was not possible at all before Opus 4.5 launched, and results have improved meaningfully with Opus 4.6. Human approval is still required before shipping, but the amount of human time spent reviewing is decreasing week over week.

    The part that Claude still cannot handle well is cross-functional stakeholder management. Getting six people in a room to align on a decision remains a fundamentally human problem. As Amol’s head of design joked: “We will have AGI and it will still be impossible to get six people in a room to get aligned.”

    Why the PM-to-Engineer Ratio Might Flip

    This is one of the most counterintuitive insights from the conversation. The conventional assumption is that AI will reduce the need for PMs. Amol argues the opposite: companies may need more PMs, at least in the near term.

    The math is straightforward. Tools like Claude Code are making engineers 2 to 3x more productive. A team of 5 engineers now produces the output equivalent of 15 to 20 engineers in the pre-AI era. PMs and designers have seen productivity gains, but not at the same multiplier. The result is a bottleneck: one PM managing the equivalent output of 15 to 20 engineers worth of work, while also handling cross-functional coordination, stakeholder alignment, and strategic direction.

    Anthropic’s response is twofold. First, they are actively hiring more PMs. Second, they have formalized a system where product-minded engineers act as mini-PMs on any project that is two engineering weeks or less. The engineer handles everything: talking to legal, talking to security, managing stakeholders. The PM only steps in if things go badly off track.

    For larger projects, the PM remains squarely accountable. But the key insight is about leverage: if you are one PM managing 20 engineers, the highest-value use of your time is not shipping the 21st feature yourself. It is getting 5 percent better at guiding the team on what the right opportunities are and upleveling every engineer’s product thinking.

    The Coding Flywheel That Changed Everything

    Anthropic’s deep bet on coding was not obvious at the time. A document from co-founder Ben Mann, dated 2021, laid out the strategic logic just months after the company was founded. The argument was that investing heavily in AI coding would create a compounding flywheel: better coding models would help Anthropic’s own researchers write code more effectively, which would accelerate model development, which would produce even better coding tools.

    This early focus gave Anthropic a structural advantage that competitors are only now trying to replicate. It also explains why the company went so deep on B2B and enterprise use cases rather than chasing consumer attention. The commercial opportunity of coding was large on its own, but the internal research acceleration made it doubly strategic.

    Amol notes that this focus was partly born from constraint. Anthropic was the smallest, least well-funded player in the space for years. They did not have Meta’s distribution or Google’s cash flow or OpenAI’s first-mover advantage. That constraint forced extreme focus, which is a principle Amol applies broadly. He calls it “freedom through constraints.”

    How Amol Uses AI to Manage His Day

    Amol’s personal AI usage is extensive and worth documenting for anyone looking to see how a power user at the frontier actually operates.

    Every morning, a scheduled Cowork task reviews 20 to 25 charts across Anthropic’s products and sends him a summary of what needs attention. The false positive and false negative rates are improving week over week, giving him increasing confidence in delegating this monitoring.

    He uses Cowork to handle administrative tasks he hates: booking meeting rooms, first-pass email triage, filing expense reports in Brex and reimbursements in Benepass. None of this requires his attention anymore.

    For management, he runs weekly Cowork tasks that review what his direct reports have done, cross-reference their work against team OKRs and meeting transcripts, and surface feedback he should give them. He also runs a parallel task for himself, asking Claude to impersonate his manager Ami Vora based on her public writing and internal Slack activity, and deliver feedback from her perspective.

    Perhaps most powerfully, he runs a weekly misalignment detection task that scans Slack conversations across the company and surfaces areas where teams may be working at cross purposes. He describes cases where this caught potentially expensive coordination failures before they compounded.
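
    The shape of such a scan is easy to imagine even without seeing the real task. The sketch below is a guess at the workflow, not Anthropic’s actual Cowork configuration; fetch_recent_messages stands in for whatever the Slack MCP integration exposes.

        # Rough, hypothetical sketch of a weekly misalignment scan; the real Cowork
        # task is not public. `fetch_recent_messages` stands in for the Slack MCP.

        def detect_misalignment(call_model, fetch_recent_messages, channels):
            # Summarize what each team appears to be working on this week.
            summaries = {}
            for channel in channels:
                messages = fetch_recent_messages(channel, days=7)
                summaries[channel] = call_model(
                    f"Summarize the priorities and in-flight work in #{channel}:\n"
                    + "\n".join(messages)
                )

            # A single cross-team pass flags overlapping work or conflicting priorities.
            combined = "\n\n".join(f"#{name}: {summary}" for name, summary in summaries.items())
            return call_model(
                "Compare these team summaries and list any places where teams appear to be "
                "duplicating work or pursuing conflicting priorities:\n" + combined
            )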

    Notebook Channels and the Culture Moat

    Anthropic uses “notebook channels” on Slack, which function like internal Twitter feeds where employees share their thinking, priorities, and provocative ideas publicly. Everyone has one, from researchers to growth PMs to Dario Amodei himself. Employees openly disagree with leadership in these channels, and that is encouraged.

    These channels serve a dual purpose. First, they help scale cultural values and operating principles as the company grows rapidly. When Amol posts about “the importance of being comfortable leaving money on the table,” every new engineer on the growth team absorbs that principle. Second, and perhaps more importantly for the long term, these channels become structured context that Claude can reference. The HR team has even documented which internal documents Claude should reference for specific topics. Amol sees this as something every company will eventually need to do: share thinking in a structured way so that the AI agents running throughout the organization have the context they need.

    AI Safety as Commercial Strategy

    Anthropic is structured as a Public Benefit Corporation (PBC), not a standard Delaware C-Corp. This legally allows the company to optimize for public benefit rather than being bound solely to maximize shareholder value.

    Amol says the company has repeatedly taken significant commercial hits for safety reasons, including delaying product launches when safety risks were identified. He also makes a striking claim: what Anthropic says publicly about AI risk is actually a softer version of what they believe internally. The internal view on the potential downsides of powerful AI is more aggressive than the public messaging.

    From a growth perspective, Amol frames safety as a long-term competitive advantage. Growth teams at most companies try to squeeze every last dollar. Anthropic’s growth team is “very comfortable forgoing metric impact” to protect brand, quality, and safety. He argues this is how all the best products operate, and that as the stakes of AI get higher, Anthropic’s credible commitment to safety will become a moat.

    Advice for Thriving in the AI Era

    Amol’s advice for product managers and growth practitioners boils down to four points. First, stay on top of the tools. Try every new model release. Something that did not work three months ago may work now, and you will not know unless you go back and test it. Second, go deep on your unique spike rather than trying to be well-rounded. The PM who can also design is a unicorn. The engineer who thinks like a PM is a unicorn. Find your interdisciplinary edge and double down. Third, be radically adaptable. Amol estimates that 50 to 70 percent of how you operated in the past is now irrelevant. Clinging to old playbooks creates friction. Fourth, think in exponentials, not linear projections. If you are looking at the AI landscape through a linear lens, you will consistently underestimate how quickly things are moving.

    Thoughts

    This interview is one of the most information-dense conversations about growth strategy in AI that has been published so far. A few things stand out.

    The CASH system is the most concrete example yet of a company using AI to automate its own growth loop. The fact that it currently performs at a junior PM level is almost beside the point. What matters is the trajectory: it went from impossible to functional in a few months. If models continue improving at their current pace, this system will be operating at a senior PM level within a year. Every growth team at every AI company should be building their own version of this right now.

    The PM ratio insight is genuinely surprising and underreported. The default assumption in the tech industry is that AI will reduce headcount across all functions. Amol is making the case that in the near term, the opposite is true for PMs. Engineering output is exploding, and someone needs to direct all that output toward the right problems. That is a fundamentally human, organizational, political job that AI is not close to automating.

    The coding flywheel story is also worth highlighting because it shows the power of strategic focus in a world of unlimited possibilities. Anthropic had a generalist technology that could do almost anything, and they deliberately narrowed their focus to one vertical. That decision, made in 2021 before anyone knew what the market would look like, is arguably the single most important strategic bet in the company’s history.

    Finally, the notebook channels concept deserves more attention. The idea that employees should share their thinking in structured, searchable formats is not just a culture tool. It is an infrastructure investment for an AI-native future where agents need organizational context to be effective. Companies that build this habit early will have a significant advantage when agent-driven workflows become the norm.

    The uncomfortable subtext of this entire conversation is that Anthropic’s growth team, as talented as they clearly are, is riding a wave created almost entirely by the research team. Several YouTube commenters pointed this out, and Amol himself acknowledges it directly. The models are the product. The growth team’s job is to make sure users discover and adopt what the models can do. That is not a small job, especially at this scale, but it is a fundamentally different job than driving growth at a product that does not sell itself.

  • Boris Cherny Says Coding Is “Solved” — Head of Claude Code Reveals What Comes Next for Software Engineers

    Boris Cherny, creator and head of Claude Code at Anthropic, sat down with Lenny Rachitsky on Lenny’s Podcast to drop one of the most consequential interviews in recent tech history. With Claude Code now responsible for 4% of all public GitHub commits — and growing faster every day — Cherny laid out a vision where traditional coding is a solved problem and the real frontier has shifted to idea generation, agentic AI, and a new role he calls the “Builder.”


    TLDW (Too Long; Didn’t Watch)

    Boris Cherny, the head of Claude Code at Anthropic, hasn’t manually written a single line of code since November 2025 — and he ships 10 to 30 pull requests every day. Claude Code now accounts for 4% of all public GitHub commits and is projected to reach 20% by end of 2026. Cherny believes coding as we know it is “solved” and that the future belongs to generalist “Builders” who blend product thinking, design sense, and AI orchestration. He advocates for underfunding teams, giving engineers unlimited tokens, building products for the model six months from now (not today), and following the “bitter lesson” of betting on the most general model. The Cowork product — Anthropic’s agentic tool for non-technical tasks — was built in just 10 days using Claude Code itself. Cherny also revealed three layers of AI safety at Anthropic: mechanistic interpretability, evals, and real-world monitoring.


    Key Takeaways

    1. Claude Code’s Growth Is Staggering

    Claude Code now authors approximately 4% of all public GitHub commits, and Anthropic believes the real number is significantly higher when private repositories are included. Daily active users doubled in the month before this interview, and the growth curve isn’t just rising — it’s accelerating. SemiAnalysis predicted Claude Code will reach 20% of all GitHub commits by end of 2026. Claude Code alone is generating roughly $2 billion in revenue, with Anthropic overall at approximately $15 billion.

    2. 100% AI-Written Code Is the New Normal

    Cherny hasn’t manually edited a single line of code since November 2025. He ships 10 to 30 pull requests per day, making him one of the most prolific engineers at Anthropic — all through Claude Code. He still reviews code and maintains human checkpoints, but the actual writing of code is entirely handled by AI. Claude also reviews 100% of pull requests at Anthropic before human review.

    3. Coding Is “Solved” — The Frontier Has Shifted

    In Cherny’s view, coding — at least the kind of programming most engineers do — is a solved problem. The new frontier is idea generation. Claude is already analyzing bug reports and telemetry data to propose its own fixes and suggest what to build next. The shift is from “tool” to “co-worker.” Cherny expects this to become increasingly true across every codebase and tech stack over the coming months.

    4. The Rise of the “Builder” Role

    Traditional role boundaries between engineer, product manager, and designer are dissolving. On the Claude Code team, everyone codes — the PM, the engineering manager, the designer, the finance person, the data scientist. Cherny predicts the title “Software Engineer” will start disappearing by end of 2026, replaced by something like “Builder” — a generalist who blends design sense, business logic, technical orchestration, and user empathy.

    5. Underfunding Teams Is a Feature, Not a Bug

    Cherny advocates deliberately underfunding teams as a strategy. When you assign one engineer to a project instead of five, they’re forced to leverage Claude Code to automate everything possible. This isn’t about cost-cutting — it’s about forcing innovation through constraint. The results at Anthropic have been dramatic: while the engineering team grew roughly 4x, productivity per engineer increased 200% in terms of pull requests shipped.

    6. Give Engineers Unlimited Tokens

    Rather than hiring more headcount, Cherny’s advice to CTOs is to give engineers as many tokens as possible. Let them experiment with the most capable models without worrying about cost. The most innovative ideas come from people pushing AI to its limits. Some Anthropic engineers are spending hundreds of thousands of dollars per month in tokens. Optimize costs later — only after you’ve found the idea that works.

    7. Build for the Model Six Months From Now

    One of Cherny’s most actionable insights: don’t build for today’s model capabilities — build for where the model will be in six months. Early versions of Claude Code only wrote about 20% of Cherny’s code. But the team bet on exponential improvement, and when Opus 4 and Sonnet 4 arrived, product-market fit clicked instantly. This means your product might feel rough at first, but when the next model generation drops, you’ll be perfectly positioned.

    8. The Bitter Lesson Applied to Product

    Cherny references Rich Sutton’s famous “Bitter Lesson” blog post as a core principle for the Claude Code team: the more general model will always outperform the more specific one. In practice, this means avoiding rigid workflows and orchestration scaffolding around AI models. Don’t box the model in. Give it tools, give it a goal, and let it figure out the path. Scaffolding might improve performance 10-20%, but those gains get wiped out with the next model generation.

    9. Latent Demand — The Most Important Product Principle

    Cherny calls latent demand “the single most important principle in product.” The idea: watch how people misuse or hack your product for purposes you didn’t design it for. That’s where your next product lives. Facebook Marketplace came from 40% of Facebook Group posts being buy-and-sell. Cowork came from non-engineers using Claude Code’s terminal for things like growing tomato plants, analyzing genomes, and recovering wedding photos from corrupted hard drives. There’s also a new dimension: watching what the model is trying to do and building tools to make that easier.

    10. Cowork Was Built in 10 Days

    Anthropic’s Cowork product — their agentic tool for non-technical tasks — was implemented by a small team in just 10 days, using Claude Code to build its own virtual machine and security scaffolding. Cowork was immediately a bigger hit than Claude Code was at launch. It can pay parking tickets, cancel subscriptions, manage project spreadsheets, message team members on Slack, respond to emails, and handle forms — and it’s growing faster than Claude Code did in its early days.

    11. Three Layers of AI Safety at Anthropic

    Cherny outlined three layers of safety: (1) Mechanistic interpretability — monitoring neurons inside the model to understand what it’s doing and detect things like deception at the neural level. (2) Evals — lab testing where the model is placed in synthetic situations to check alignment. (3) Real-world monitoring — releasing products as research previews to study unpredictable agent behavior in the wild. Claude Code was used internally for 4-5 months before public release specifically for safety study.

    12. Why Boris Left Anthropic for Cursor (and Came Back After Two Weeks)

    Cherny briefly left Anthropic to join Cursor, drawn by their focus on product quality. But within two weeks, he realized what he was missing: Anthropic’s safety mission. He described it as a psychological need — without mission-driven work, even building a great product wasn’t a substitute. He returned to Anthropic and the rest is history.

    13. Manual Coding Skills Will Become Irrelevant in 1-2 Years

    Cherny compared manual coding to assembly language — it’ll still exist beneath the surface, and understanding the fundamentals helps for now, but within a year or two it won’t matter for most engineers. He likened it to the printing press transition: a skill once limited to scribes became universal literacy over time. The volume of code created will explode while the cost drops dramatically.

    14. Pro Tips for Using Claude Code Effectively

    Cherny shared three specific tips: (1) Use the most capable model — currently Opus 4.6 with maximum effort enabled. Cheaper models often cost more tokens in the end because they require more correction and handholding. (2) Use Plan Mode — hit Shift+Tab twice in the terminal to enter plan mode, which tells the model not to write code yet. Go back and forth on the plan, then auto-accept edits once it looks good. Opus 4.6 will one-shot it correctly almost every time. (3) Explore different interfaces — Claude Code runs on terminal, desktop app, iOS, Android, web, Slack, GitHub, and IDE extensions. The same agent runs everywhere. Find what works for you.


    Detailed Summary

    The Origin Story of Claude Code

    Claude Code began as a one-person hack. When Cherny joined Anthropic, he spent a month building weird prototypes that mostly never shipped, then spent another month doing post-training to understand the research side. He believes deeply that to build great products on AI, you have to understand “the layer under the layer” — meaning the model itself.

    The first version was terminal-based and called “Claude CLI.” When he demoed it internally, it got two likes. Nobody thought a coding tool could be terminal-based. But the terminal form factor was chosen partly out of necessity (he was a solo developer) and partly because it was the only interface that could keep up with how fast the underlying model was improving.

    The breakthrough moment during prototyping: Cherny gave the model a bash tool and asked it what music he was listening to. The model figured out — without any specific instructions — how to use the bash tool to answer that question. That moment of emergent tool use convinced him he was onto something.

    The Growth Trajectory

    Claude Code was released externally in February 2025 and was not immediately a hit. It took months for people to understand what it was. The terminal interface was alien to many. But internally at Anthropic, daily active users went vertical almost immediately.

    There were multiple inflection points. The first major one was the release of Opus 4, which was Anthropic’s first ASL-3 class model. That’s when Claude Code’s growth went truly exponential. Another inflection came in November 2025 when Cherny personally crossed the 100% AI-written code threshold. The growth has continued to accelerate — it’s not just going up, it’s going up faster and faster.

    The Spotify headline from the week of recording — “Spotify says its best developers haven’t written a line of code since December, thanks to AI” — underscored how mainstream the shift has become.

    Thinking in Exponentials

    Cherny emphasized that thinking in exponentials is deep in Anthropic’s DNA — three of their co-founders were the first three authors on the scaling laws paper. At Code with Claude (Anthropic’s developer conference) in May 2025, Cherny predicted that by year’s end, engineers might not need an IDE to code anymore. The room audibly gasped. But all he did was “trace the line” of the exponential curve of AI-written code.
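
    “Tracing the line” is just extrapolating a constant growth rate forward. As a back-of-the-envelope illustration (the doubling time below is an assumption chosen to be roughly consistent with 4 percent of commits today and the ~20 percent projection for end of 2026, not a figure from the episode):

        # If Claude Code's share of public GitHub commits doubles roughly every
        # five months, "tracing the line" from ~4% looks like this over a year.
        share_today = 0.04
        doubling_months = 5.0   # assumed, consistent with ~20% by end of 2026

        for month in range(0, 13, 3):
            projected = share_today * 2 ** (month / doubling_months)
            print(f"month {month:2d}: {projected:.1%} of public commits")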

    The Printing Press Analogy

    Cherny’s preferred historical analog for what’s happening is the printing press. In mid-1400s Europe, literacy was below 1%. A tiny class of scribes did all the reading and writing, employed by lords and kings who often couldn’t read themselves. After Gutenberg, more printed material was created in 50 years than in the previous thousand. Costs dropped 100x. Literacy rose to 70% globally over two centuries.

    Cherny sees coding undergoing the same transition: a skill locked away in a tiny class of “scribes” (software engineers) is becoming accessible to everyone. What that unlocks is as unpredictable as the Renaissance was to someone in the 1400s. He also shared a remarkable historical detail — an interview with a scribe from the 1400s who was actually excited about the printing press because it freed them from copying books to focus on the artistic parts: illustration and bookbinding. Cherny felt a direct parallel to his own experience of being freed from coding tedium to focus on the creative and strategic parts of building.

    What AI Transforms Next

    Cherny believes roles adjacent to engineering — product management, design, data science — will be transformed next. The key technology enabling this is true agentic AI: not chatbots, but AI that can actually use tools and act in the world. Cowork is the first step in bringing this to non-technical users.

    He was candid that this transition will be “very disruptive and painful for a lot of people” and that it’s a conversation society needs to have. Anthropic has hired economists, policy experts, and social impact specialists to help think through these implications.

    The Latent Demand Framework in Depth

    Cherny credited Fiona Fung, the founding manager of Facebook Marketplace, for popularizing the concept of latent demand. The examples are compelling: someone using Claude Code to grow tomato plants, another analyzing their genome, another recovering wedding photos from a corrupted hard drive, a data scientist who figured out how to install Node.js and use a terminal to run SQL analysis through Claude Code.

    But Cherny added a new dimension specific to AI products: latent demand from the model itself. Rather than boxing the model into a predetermined workflow, observe what the model is trying to do and build to support that. At Anthropic they call this being “on distribution.” Give the model tools and goals, then let it figure out the path. The product is the model — everything else is minimal scaffolding.
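
    The contrast with workflow scaffolding is easiest to see in code. Below is a deliberately simple, hypothetical agent loop, not Claude Code’s internals: the model is given a goal and a set of tools, chooses its own next action each turn, and signals when it is done, instead of being marched through a fixed pipeline.

        # Hypothetical minimal agent loop; not Claude Code's actual internals.
        # `tools` maps a tool name to a callable that takes an argument string.

        def agent_loop(call_model, tools, goal, max_steps=20):
            history = [f"Goal: {goal}. Tools available: {', '.join(tools)}."]
            for _ in range(max_steps):
                # The model decides the next step itself; there is no fixed workflow.
                action = call_model(
                    "\n".join(history)
                    + "\nReply with either 'TOOL <name>: <args>' or 'DONE: <answer>'."
                )
                if action.startswith("DONE:"):
                    return action[len("DONE:"):].strip()
                name, _, args = action.removeprefix("TOOL").partition(":")
                result = tools.get(name.strip(), lambda a: "unknown tool")(args.strip())
                history.append(f"{action}\nResult: {result}")
            return None  # gave up after max_steps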

    Safety as a Core Differentiator

    The interview made clear that safety isn’t just a talking point at Anthropic — it’s why everyone is there, including Cherny. He described the work of Chris Olah on mechanistic interpretability: studying model neurons at a granular level to understand how concepts are encoded, how planning works, and how to detect things like deception. A single neuron might correspond to a dozen concepts through a phenomenon called superposition.

    Anthropic’s “race to the top” philosophy means open-sourcing safety tools even when they work for competing products. They released an open-source sandbox for running AI agents securely that works with any agent, not just Claude Code.

    The Memory Leak Story

    One of the most memorable anecdotes: Cherny was debugging a memory leak the traditional way — taking heap snapshots, using debuggers, analyzing traces. A newer engineer on the team simply told Claude Code: “Hey Claude, it seems like there’s a leak. Can you figure it out?” Claude Code took the heap snapshot, wrote itself a custom analysis tool on the fly, found the issue, and submitted a pull request — all faster than Cherny could do it manually. Even veterans of AI-assisted coding get stuck in old habits.

    Personal Background and Post-AGI Plans

    In a touching segment, Cherny and Rachitsky discovered they’re both from Odessa, Ukraine. Cherny’s grandfather was one of the first programmers in the Soviet Union, working with punch cards. Before joining Anthropic, Cherny lived in rural Japan where he learned to make miso — a process that takes months to years and taught him to think on long timescales. His post-AGI plan? Go back to making miso.

    His book recommendations: Functional Programming in Scala (the best technical book he’s ever read), Accelerando by Charles Stross (captures the essence of this moment better than anything), and The Wandering Earth by Liu Cixin (Chinese sci-fi short stories from the Three Body Problem author).


    Thoughts and Analysis

    This interview is one of the most important conversations about the future of software engineering to come out in 2026. Here are some things worth sitting with:

    The “solved” framing is provocative but precise. Cherny isn’t saying software engineering is solved — he’s saying the act of translating intent into working code is solved. The thinking, architecting, deciding-what-to-build, and ensuring-it’s-correct parts are very much unsolved. This distinction matters enormously and most of the pushback in the YouTube comments misses it.

    The underfunding principle is genuinely counterintuitive. Most organizations respond to AI tools by trying to maintain headcount and “augment” existing workflows. Cherny’s approach is the opposite: reduce headcount on a project, give people unlimited AI tokens, and watch them figure out how to ship ten times faster. This is a fundamentally different organizational philosophy and one that most companies will resist until their competitors prove it works.

    The “build for six months from now” advice is dangerous and brilliant. Dangerous because your product will underperform for months and investors will get nervous. Brilliant because when the next model drops, you’ll have the only product that takes full advantage of it. This is how Claude Code went from writing 20% of Cherny’s code to 100% — the product was ready when the model caught up.

    The latent demand framework deserves serious study. The traditional version (watching users hack your product) is well-known from the Facebook era. The AI-native version (watching what the model is trying to do) is genuinely new. “The product is the model” is a deceptively simple statement that most AI product builders are still getting wrong by over-engineering workflows and scaffolding.

    The Cowork trajectory matters more than Claude Code. Claude Code transforms engineers. Cowork transforms everyone else. If Cowork delivers on even half of what Cherny describes — paying tickets, managing project spreadsheets, responding to emails, canceling subscriptions — then the total addressable market dwarfs coding tools. The fact that it was built in 10 days and was an immediate hit suggests Anthropic has found product-market fit for agentic AI beyond engineering.

    The safety discussion felt genuine. Cherny’s explanation of mechanistic interpretability — actually being able to monitor model neurons and detect deception — is one of the clearest public explanations of Anthropic’s safety approach. The fact that the safety mission is what brought him back from Cursor (where he lasted only two weeks) speaks to the culture. Whether you think safety is a genuine concern or a competitive moat, it’s clearly a core part of how Anthropic attracts and retains talent.

    The elephant in the room: this is the head of Claude Code at Anthropic telling you to use more tokens. Multiple YouTube commenters pointed this out, and they’re right to flag it. But the underlying logic holds: if a less capable model requires more correction rounds and more tokens to achieve the same result, then the “cheaper” model isn’t actually cheaper. That’s a testable claim, and most engineers using these tools regularly will tell you it checks out.

    Whether you agree with the “coding is solved” framing or not, the data is hard to argue with. Four percent of all GitHub commits. Two hundred percent productivity gains per engineer. A product that was built in 10 days and scaled to millions of users. These aren’t predictions — they’re measurements. And the curve is still accelerating.


    This article is based on Boris Cherny’s appearance on Lenny’s Podcast, published February 19, 2026. Boris Cherny can be found on X/Twitter and at borischerny.com.